Prostate volumetric-modulated arc therapy: dosimetry and radiobiological model variation between the single-arc and double-arc technique

This study investigates the dosimetry and radiobiological model variation when a second photon arc is added to prostate volumetric-modulated arc therapy (VMAT) using the single-arc technique. Dosimetry and radiobiological model comparisons between the single-arc and double-arc prostate VMAT plans were performed on five patients with prostate volumes ranging from 29 to 68.1 cm3. The prescription dose was 78 Gy/39 fractions and the photon beam energy was 6 MV. Dose-volume histograms, mean and maximum doses of targets (planning and clinical target volume) and normal tissues (rectum, bladder and femoral heads), dose-volume criteria in the treatment plan (D99% of PTV; D30%, D50%, V17Gy and V35Gy of rectum and bladder; D5% of femoral heads), and dose profiles along the vertical and horizontal axes crossing the isocenter were determined for the single-arc and double-arc VMAT techniques. For comparison, the monitor units (MU) based on the RapidArc delivery method, prostate tumor control probability (TCP), and rectal normal tissue complication probability (NTCP) based on the Lyman-Burman-Kutcher algorithm were calculated. It was found that, though the double-arc technique required almost twice the treatment time of the single-arc, the double-arc plan provided better rectal and bladder dose-volume criteria by shifting the delivered dose in the patient from the anterior-posterior direction to the lateral direction. As the femoral heads are less radiosensitive than the rectum and bladder, the double-arc technique resulted in a prostate VMAT plan with better prostate coverage and rectal dose-volume criteria compared to the single-arc. The prostate TCP of the double-arc plan was slightly increased (0.16%) compared to the single-arc. Therefore, when the rectal dose-volume criteria are very difficult to achieve in a single-arc prostate VMAT plan, it is worthwhile to consider the double-arc technique. PACS numbers: 87.55.D-, 87.55.dk, 87.55.K-, 87.55.Qr

Compared to step-and-shoot intensity-modulated radiation therapy (IMRT), prostate VMAT has been reported to achieve equivalent or even better target coverage and normal tissue (rectum, bladder and femoral heads) sparing. (7)(8)(9)(10)(11) However, unlike step-and-shoot IMRT, prostate VMAT involves the interplay of more dose delivery parameters, such as dynamic multileaf collimator movement, dose rate, and gantry speed, within a single or multiple photon arcs in the treatment. (12)(13)(14)(15) This complex delivery technique therefore requires more dedicated machine and patient quality assurance procedures, MU calculation algorithms, and dosimetric evaluation (e.g., when the patient's size is reduced due to weight loss) in the treatment. (16)(17)(18)(19) Although single-arc prostate VMAT has target coverage and dose homogeneity comparable to step-and-shoot IMRT, treatment planning dose-volume criteria were sometimes difficult to achieve because of the complex geometry between the prostate and the mobile, irregularly shaped rectum. (7,11,20) To further reduce the rectal dose per the planning dose-volume criteria (e.g., D30%, D50%, V17Gy and V35Gy), the double-arc technique can be employed to improve the target coverage and rectal sparing. The dosimetry of prostate VMAT using the single-arc and double-arc techniques has been studied by several groups. In a retrospective planning study, Guckenberger et al. (7) compared the dose-volume criteria among prostate step-and-shoot IMRT, single-arc, and multiple-arc VMAT.
They concluded that the multiple-arc prostate VMAT had a better dosimetric result than the single-arc at the cost of increased delivery time, MU, and spread of low doses. Wolff et al. (11) further compared the homogeneity and conformity indices between the single-arc and multiple-arc (one 360° rotation plus a 200° second rotation) prostate VMAT. They found that both indices were higher for the multiple-arc technique, with a relatively longer delivery time (3.7 min) compared to the single-arc (1.8 min). In the dosimetry comparison performed by Sze et al., (20) the authors reported that, though the single-arc technique was more efficient regarding the delivery time and MU, it resulted in a higher rectal dose compared to the double-arc. They concluded that for a busy treatment unit, the single-arc technique could be an acceptable option provided that all planning dose-volume criteria were fulfilled. In this study, apart from the dosimetry (dose-volume criteria, mean and maximum dose) and MU comparison between the single-arc and double-arc techniques, the reason for applying a second arc in the double-arc technique was investigated, based on changes of dose distribution in different directions (left, right, anterior, and posterior). Moreover, prostate tumor control probability (TCP) and rectal normal tissue complication probability (NTCP) were calculated using the Lyman-Burman-Kutcher radiobiological model. (21)(22)(23) The aim of this study is to investigate the dosimetry and radiobiological parameter variation between the single-arc and double-arc prostate VMAT. Results in this study should help medical physicists understand the rationale for using more than one arc in the double-arc prostate VMAT plan.

A. Patient data
Computed tomography (CT) image datasets of five patients with localized prostate cancer were selected at the Grand River Hospital in this retrospective planning study. All CT simulations were carried out with patients in the supine position and with a full bladder. The prostate volumes were in the range of 29 to 68.1 cm3. The planning target volume (PTV), clinical target volume (CTV), rectum, bladder, and femoral heads of all patients were contoured by the same person. The gross target volume was equal to the CTV, and the PTV was created by expanding the CTV by 1 cm in all directions, except 0.7 cm posteriorly. Details about the target and critical organ (rectum, bladder, and femoral heads) volumes can be found in Table 1.

B. Treatment planning
Single-arc and double-arc prostate VMAT plans were created with the Eclipse treatment planning system (version 8.5, Varian Medical Systems, Palo Alto, CA) using the Progressive Resolution Optimizer (PRO) in the RapidArc optimization (Varian Medical Systems). The treatment planning system was commissioned for a Varian 21 EX linear accelerator (Varian Medical Systems) with a 120-leaf Millennium multileaf collimator (MLC) and a 6 MV photon beam. The dose constraints to critical organs, plan objectives, and optimization parameters of the prostate VMAT plan can be found in our previous work. (24) Dose calculations were performed using the Anisotropic Analytical Algorithm (ver. 8.9). (25) Prostate VMAT plans were first created using the double-arc technique for all patients. Then, the number of photon arcs was reduced to one to generate the single-arc plans for comparison. The calculated MU for the single-arc and double-arc plans can be found in Table 1.
The average delivery times of the single-arc and double-arc prostate VMAT were 2.0 and 3.9 minutes, respectively, though the average MU of the double-arc plan was only about 20% higher (Table 1) than that of the single-arc. Average dose-volume histograms (DVHs), and mean and maximum doses of targets (PTV and CTV) and critical organs (rectum, bladder, and femoral heads) were determined. Moreover, mean dose-volume criteria including the D99% of PTV; the D30%, D50%, V17Gy, and V35Gy of rectum and bladder; and the D5% of femoral heads were calculated for both techniques.

C. TCP and NTCP calculation
The prostate TCP was calculated as follows:

TCP(D) = exp(p + qD) / [1 + exp(p + qD)]  (1)

where D is dose, and p and q are related to D50 and γ50 (the normalized slope at the point of 50% control probability), according to Okunieff et al., (26) who summarized clinical data for a variety of tumors that can be related to the slope and the dose to control 50% of tumors. Using Eq. (1), the control probability for a tumorlet with fractional volume vi receiving dose Di, TCP(vi, Di), can be inferred from the TCP for the whole volume by:

TCP(vi, Di) = [TCP(1, Di)]^vi  (2)

where (vi, Di) refers to the differential DVH converted from the cumulative DVH. Rectal NTCP was calculated using the Lyman-Burman-Kutcher algorithm with the following equations: (21)(22)(23)

NTCP = (1/√(2π)) ∫ from −∞ to t of exp(−x²/2) dx  (3)

and

t = [D − TD50(v)] / [m · TD50(v)]  (4)

where v = V/Vref and TD50(v) = TD50(1) v^(−n), as suggested by Burman et al. (22) TD50 = 80 Gy, n = 0.12, and m = 0.15 were used to calculate the rectal NTCP in this study. Both TCP and NTCP were determined using an in-house TCP/NTCP software running on a MATLAB platform (The MathWorks, Natick, MA). (27)
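The TCP and NTCP values reported below were computed with the in-house MATLAB software cited above. (27) The following Python fragment is only a minimal sketch of the same pipeline, Eqs. (1)-(4), and is not the authors' code: the logistic parameterization of Eq. (1) through D50 and γ50, the gEUD-style reduction of the inhomogeneous DVH before applying Eqs. (3)-(4), and all dose bins, volumes, and the D50/γ50 values are illustrative assumptions; only TD50 = 80 Gy, n = 0.12, and m = 0.15 are taken from the text.

```python
import numpy as np
from scipy.stats import norm

# Illustrative differential DVHs (fractional volume v_i receiving dose D_i in Gy).
# These bins are placeholders, not patient data from this study.
ptv_dose, ptv_vol = np.array([74.0, 76.0, 78.0, 80.0]), np.array([0.05, 0.25, 0.50, 0.20])
rectum_dose, rectum_vol = np.array([20.0, 40.0, 60.0, 76.0]), np.array([0.40, 0.30, 0.20, 0.10])

def tcp_logistic(dose, vol, d50, gamma50):
    """Prostate TCP from a differential DVH, Eqs. (1)-(2).

    Eq. (1): whole-volume logistic dose response, with p and q expressed through
    D50 and gamma50. Eq. (2): each tumorlet contributes TCP(1, D_i) ** v_i, and
    the organ TCP is taken as the product over all tumorlets.
    """
    q = 4.0 * gamma50 / d50          # normalized slope -> logistic slope
    p = -q * d50                     # forces TCP(D50) = 0.5
    tcp_bin = np.exp(p + q * dose) / (1.0 + np.exp(p + q * dose))
    return float(np.prod(tcp_bin ** vol))

def ntcp_lkb(dose, vol, td50_1=80.0, n=0.12, m=0.15):
    """Rectal NTCP with the Lyman-Burman-Kutcher model, Eqs. (3)-(4).

    The inhomogeneous DVH is reduced to a single effective uniform dose using the
    power-law volume dependence TD50(v) = TD50(1) * v**(-n), written here in its
    equivalent gEUD form, and then entered into the probit curve of Eq. (3).
    """
    geud = float(np.sum(vol * dose ** (1.0 / n)) ** n)
    t = (geud - td50_1) / (m * td50_1)
    return float(norm.cdf(t))        # Eq. (3): Gaussian integral up to t

# Example calls with placeholder radiobiological parameters (not from the paper).
print(tcp_logistic(ptv_dose, ptv_vol, d50=67.5, gamma50=2.2))
print(ntcp_lkb(rectum_dose, rectum_vol))
```

In practice the differential DVHs would be exported per patient from the treatment planning system rather than hard-coded as above.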
III. RESULTS
Average cumulative DVHs of the PTV, rectum, bladder, and left and right femoral heads, planned using the single-arc and double-arc techniques, are shown in Figs. 1(a) to 1(e). The D99% of PTV; the D30%, D50%, V17Gy, and V35Gy of rectum and bladder; and the D5% of the left and right femoral heads of the patients can be found in Table 2, which also shows the average mean and maximum doses of targets (PTV and CTV) and critical organs using the two techniques.

Table 2. Mean dose-volume criteria, and average mean and maximum doses of the PTV, CTV, and critical organs for the single-arc and double-arc prostate VMAT plans. The standard deviations are shown inside the brackets. V17Gy and V35Gy are percentage volumes receiving at least 17 Gy and 35 Gy, respectively. D5%, D30%, D50%, and D99% are doses given to 5%, 30%, 50%, and 99% of volumes, respectively.

Figure 1(a) shows the average DVH of the PTV for all patients planned using the single-arc and double-arc techniques. The dose range in Fig. 1(a) starts from 70 Gy instead of zero to focus on the drop-off region of the curve. It is seen in the figure that the DVH curves of the double-arc plans had a sharper drop-off than those of the single-arc for all patients. This result agrees with that found by other groups, which showed that the double-arc technique can improve the dose conformity in the target volume. (11,20) Figures 1(b) and 1(c) show average DVHs of the rectum and bladder, respectively. It can be seen that percentage volumes receiving given doses (e.g., V17Gy and V35Gy) were always lower in the double-arc plan than in the single-arc. This shows that the double-arc technique resulted in better rectal and bladder dose-volume criteria than the single-arc. However, for the average DVHs of the left and right femoral heads in Figs. 1(d) and 1(e), it is found that the femoral head sparing became worse when the double-arc technique was used compared to the single-arc. Based on the results in Figs. 1(a) to 1(e), the double-arc technique is found to improve the dose conformity and coverage of the prostate, and the rectal and bladder dose-volume criteria. However, the cost is worse sparing of the left and right femoral heads.

B. Dose-volume criteria, maximum and mean dose
Mean dose-volume criteria, and maximum and mean dose, are important parameters in treatment plan evaluation. Table 2 shows the mean dose-volume criteria of the PTV, rectum, bladder, and femoral heads calculated by the treatment planning system. In this study, the dose-volume evaluation criteria for the prostate VMAT plan are: D99% of PTV ≥ 74.1 Gy, D30% of rectum and bladder ≤ 70 Gy, D50% of rectum and bladder ≤ 53 Gy, and D5% of femoral heads ≤ 53 Gy. For the PTV, it is seen in Table 2 that the mean D99% of all patients (72.5 Gy) is less than 74.1 Gy based on the single-arc technique. The double-arc technique, having a higher mean D99% of 74.6 Gy, on the other hand, satisfied the evaluation criteria. This shows that when the double-arc technique was replaced by the single-arc, the D99% of the PTV worsened. For the mean D30% and D50% of the rectum and bladder, both the single-arc and double-arc techniques satisfied the corresponding dose-volume criteria. However, the double-arc technique had lower D30% and D50% of the rectum (on average 18% and 29% lower) than the single-arc. The mean D30% and D50% of the bladder were also found to be lower (on average 22% and 13% lower) when using the double-arc technique compared to the single-arc. For the left and right femoral heads, the double-arc technique had mean D5% values higher than the single-arc (on average 34% and 39% higher), but neither technique exceeded the dose-volume evaluation criterion of 53 Gy. For the percentage rectal and bladder volumes receiving at least a given dose, lower V17Gy and V35Gy of the rectum and bladder were found when using the double-arc technique compared to the single-arc. In Table 2, it is seen that the double-arc technique can effectively decrease the dose-volume evaluation criteria for the rectum and bladder. However, the cost is increased doses in the left and right femoral heads. For the average mean and maximum doses of targets and critical organs (Table 2), when using the double-arc technique, the mean doses of the rectum and bladder decreased while those of the left and right femoral heads increased. As can be seen in Table 2, the double-arc technique increased the mean doses of the PTV and CTV insignificantly. For the maximum doses of targets and critical organs, no obvious trend of dose variation was found when using the double-arc technique. This shows that the maximum doses of targets and normal tissues are not sensitive to the number of photon arcs in prostate VMAT.

C. Dose profiles
To investigate how the double-arc technique affects the dose distribution, and hence produces the variations in dose-volume criteria and in average mean and maximum dose relative to the single-arc, dose profiles along the vertical and horizontal axes crossing the isocenter were plotted, as shown in Fig. 2. It is seen in Figs. 2(a) and 2(b) that doses in the left and right directions were lower when the single-arc technique was used instead of the double-arc. In contrast, doses in the anterior (Fig. 2(c)) and posterior (Fig. 2(d)) directions for the double-arc technique were lower than those of the single-arc.
From the dose distributions of all patients along the vertical and horizontal axes, it can be seen that the addition of a second photon arc shifted the delivered dose from the anterior-posterior direction to the lateral direction. This resulted in lower dose-volume criteria (e.g., D30%, D50%, V17Gy, and V35Gy) of the rectum and bladder in the anterior-posterior direction, but higher dose-volume criteria (D5%) of the left and right femoral heads in the left-right direction (Table 2). Since the increase of dose at the femoral heads is within the normal tissue tolerance, the application of the double-arc technique simply lowers the rectal and bladder dose-volume criteria at the cost of increasing the femoral head dose-volume criteria within the tolerance limit, so as to achieve the desired PTV coverage.

D. Prostate TCP and rectal NTCP
The prostate TCP for the whole treatment (78 Gy/39 fractions) is plotted against the prostate volume in Fig. 3(a). It is seen in the figure that the prostate TCP for the double-arc technique is slightly (0.16%) higher than that of the single-arc. For the NTCP of critical organs, since the bladder and femoral head NTCP are generally about 1 × 10^2 and 1 × 10^5 times smaller than the rectal NTCP, only the rectal NTCP is considered in this study. (28,29) It is found in Fig. 3(b) that the rectal NTCP for the double-arc technique was higher than that of the single-arc by about 17.5% on average. The reason is that in prostate VMAT, there is a high-dose region where the rectum overlaps the PTV, which has a higher mean and maximum dose. (27) Since the rectal NTCP is sensitive to the high-dose region where the PTV and rectum overlap, and the double-arc technique delivers a higher mean dose in this overlap region than the single-arc, the rectal NTCP for the double-arc technique is therefore higher than that of the single-arc. Nevertheless, such an increased NTCP is still within the acceptable range when compared to prostate IMRT. (28) In addition, it can be seen in Fig. 3 that there is no dependence of the prostate TCP and rectal NTCP on the prostate volume for either technique. To achieve lower rectal dose-volume criteria (D30%, D50%, V17Gy, and V35Gy) in the treatment plan, the double-arc technique is still worth considering, in spite of the higher rectal NTCP compared to the single-arc.

V. CONCLUSIONS
Prostate VMAT plans have been analyzed for five patients using the single-arc and double-arc techniques. It is found in the VMAT plans that the double-arc technique can lower the dose-volume criteria of the rectum and bladder (e.g., D30%, D50%, V17Gy, and V35Gy) but increase the rectal NTCP. The increased rectal NTCP with the double-arc technique is due to the increase of dose at the high-dose overlapping region (PTV and rectum), to which the NTCP calculation is sensitive. As the degree of increase of the rectal NTCP is tolerable, it is concluded that the double-arc technique can effectively decrease the rectal and bladder dose-volume criteria in a prostate VMAT plan, and is especially useful when the criteria are critical or difficult to achieve in planning. The increase in the femoral head dose, as the cost of improvements in the rectal and bladder dose-volume criteria, is found acceptable in this study.
Collaborative Irrationality, Akrasia, and Groupthink: Social Disruptions of Emotion Regulation

The present paper proposes an integrative account of social forms of practical irrationality and corresponding disruptions of individual and group-level emotion regulation (ER). I will especially focus on disruptions in ER by means of collaborative agential and doxastic akrasia. I begin by distinguishing mutual, communal and collaborative forms of akrasia. Such a taxonomy seems all the more needed as, rather surprisingly, in the face of huge philosophical interest in analysing the possibility, structure, and mechanisms of individual practical irrationality, there are, with very little exception, no comparable accounts of social and collaborative cases. However, I believe that, if it is true that individual akrasia is, in the long run, harmful for those who entertain it, this is even more so in social contexts. I will illustrate this point by drawing on various small group settings, and explore a number of socio-psychological mechanisms underlying collaborative irrationality, in particular groupthink. Specifically, I suggest that in collaborative cases there is what I call a spiraling of practical irrationality at play. I will argue that this is typically correlated with, and indeed partly due to, biases in individual members' affect control and eventually in that of the group with whom the members identify.

INTRODUCTION
People not only have emotions, they also regulate them. In regulating emotions, we select and adjust the situations of affective import, or modulate our attention or behavioral responses (Gross, 1998). It is widely agreed that the way an emotion is experienced closely reflects the way in which it is regulated (Frijda, 1986; Krueger, 2016). But emotion regulation (ER) does not occur in a social void. Rather, it is deeply embedded in and modulated by social interaction, social identity, or group membership. Indeed, the cognitive aspects of interactional and sociocultural influences in emotional co-regulation have received considerable attention (e.g., Eisenberg et al., 1998; Eisenberg and Spinrad, 2004; Mesquita and Albert, 2007; Hofer and Eisenberg, 2008; von Scheve, 2012; De Leersnyder et al., 2013). However, the broad range of potential disruptions and, in particular, the collaborative forms these disruptions often assume, has been very much sidelined and little understood. The present paper aims to fill this gap by focusing on social forms of practical irrationality. Specifically, I will concentrate on two potential disruptions in ER, namely social forms of agential akrasia (AA) and doxastic akrasia (DA). On a first approximation, AA consists in acting against one's own better judgment or against some relevant set of values, norms or reasons, or in performing an action that runs counter to one's intention. Doxastic akrasia, a variant of self-deception, occurs if one believes something against one's own better reasons or epistemic standards, or 'in the teeth of evidence.' Against this background, the paper pursues two objectives: (1) First, I will argue that specific collaborative forms of AA and DA are possible, and propose a novel model to analyze them. This seems to be a crucial task.
After all, in the face of great philosophical interest in analysing the possibility, structure and mechanisms of individual practical irrationality, with little exception concerning self-deception (Harré, 1988; Ruddick, 1988; Tenbrunsel and Messick, 2004; Deweese-Boyd, 2010) and even less concerning akrasia (Pettit, 2003b), there are surprisingly no comparable accounts of social and collaborative irrationality. However, I not only contend that these are common phenomena; moreover, I believe that, if it is true that individual practical irrationality is in the long run harmful for those who entertain it, this is even more so in collaborative cases (Goleman, 1989). (2) Secondly, I shall argue that collaborative engagements often play a contributing or even constitutive role in entering or maintaining practical irrationality. I argue that this is often correlated with ER-biases and to a large extent indeed due to the disruptive role of collaborative irrationality on ER. Specifically, I will suggest that it is largely due to socially biased, motivated misidentification of one's own affects, which eventually biases one's affect control and also the ER-mechanisms of one's group. Finally, I shall show how, in a feedback loop that I call the collaborative spiraling of irrationality, this ultimately reinforces the irrational tendencies of the respective parties. The paper is organized as follows: I begin by fleshing out the concepts of AA and DA and propose three requirements that agents capable of such irrationalities must fulfill (see Section "Self-deception and Doxastic Akrasia"): the intentionality, the minimal rationality, and the overall rational integrity requirement. Next, I outline the key mechanisms and (social) disruptions of ER. In particular, I suggest that DA inhibits central features of successful ER: subjects' clarity about the type and the evaluative or cognitive content of a given emotion, or even their basic awareness of having a certain type of emotion (see "Emotion Regulation and Its Social Biases"). I then explore practical irrationality in various social contexts and its correlation with ER-disruptions. I distinguish mutual, communal and collaborative forms of social irrationality (SI), and explore especially the collaborative cases. I will mainly draw on the case of a clinical smoking therapy group and demonstrate how 'groupthink' (Janis, 1982) modulates even such allegedly purely physiological arousal patterns as those induced by nicotine.1 In this section, I return to the issue of dysregulation and focus on the corruption of group-level ER. Here, I will also discuss some further socio-psychological mechanisms which account for the emergence of SI in deliberative groups, notably group polarization, choice shift and the pooling of unshared information (see "The Collaborative Spiraling of Irrationality"). Finally, I provide a conceptual explanation of collaborative irrationality in terms of group identification. Drawing on the overall rational integrity requirement, I claim that what happens is a failure to integrate the first-person singular and first-person plural rational points of view, while maintaining group identification (see "Explaining Collaborative Irrationality"). I conclude by pointing to some directions for future research (see "Conclusion and Future Directions").

SELF-DECEPTION AND DOXASTIC AKRASIA
There has been much debate as to whether synchronous forms of practical irrationality are possible at all.
The question is whether one can synchronously hold contradictory beliefs as to what would be best to do. The issue poses itself with particular force when it comes to the role of emotions. The issue is neither that an agent, under the influence of the emotion at the time of the practical deliberation, changes her view about what it would be best to do; nor do we necessarily have to accept the view that emotions can be so powerful as to directly change the behavior of an agent at the very instance of the respective action, or to refer to 'irresistible desires', a notion that some have rightly rejected (Watson, 1977; Elster, 2010). Rather, the issue is that emotions influence or motivate an action that is, at the time of its execution, contrary to the agent's beliefs about what, all things considered, is best to do. Thus, some have suggested that emotions will only "cloud or bias" the cognitive processes (information-gathering, etc.) upon which the agent's practical deliberation is based, or influence the agent's rational choice. The subsequent irrational action is then due to a (temporary) 'preference reversal' (Elster, 2006). As we will see, I agree with those accounts that argue that reference to a 'partition of the mind' in the irrational agent, as prominently suggested by Davidson (1982), is in such cases "little more than hand waving" (Elster, 2010, p. 270). However, I believe that by the same token mere reference to preference reversal over time, or diachronic accounts of practical irrationality, are dangerously close to reducing practical irrationality to a 'change of mind' and will not do either.2 Thus, we need an account that captures the tension arising from holding synchronic contradictory beliefs, desires or reasons for action. Such an account is provided by affective modulations and emotion-regulative disruptions. The tension comes out most clearly in what Mele (1987) has aptly labeled "last-ditch" cases of practical irrationality. In the following, I will rely on last-ditch cases and assume that they are psychologically possible and real.

1 For such an account, see Collins' (2004) congenial sociological analysis of the broader socio-normative context of so-called "tobacco rituals and anti-rituals" and the way in which they are scaffolded by material and bodily culture (e.g., smoking lounges, advertisement culture, gestures, etc.), and co-constitute the regulation of smoking pleasure and displeasure, as well as the very experience of tobacco enjoyment, and eventually the very psycho-physiological effects of addiction.
2 I cannot argue here against analyses of akrasia in terms of a change of mind or change of volition; see McIntyre (2006). However, I come back to the partitioning of mind accounts below.

Here, then, is what last-ditch AA and DA amount to: Assume a subject S who is a non-pathological rational believer under normal circumstances (S doesn't have severe Alzheimer's, is not a drug addict, is not hallucinating, unconscious, etc.) and who attends more or less strictly, but given normal epistemic standards, to logical and rational consistency. For now, such an admittedly liberal, rough-and-ready characterization of rationality will suffice; I will come back to that below, however.
(AA) An action A is strictly akratic iff:
(1) S is an intentional agent such that S will intentionally A at t only if S judges that A-ing at t is all-things-considered better than B-ing at t, and S believes that she can either A or B at t (or is given the alternative possibility of not A-ing at t);
(2a) S holds a belief at t to the effect that, all things considered, she has sufficient reason for not A-ing at t (or for doing B, where B is incompatible with doing A);
(2b) Based on the evaluation of A in attaining S's goal at t, S decisively judges that it is best not to A at t;
(3) S (intentionally) A-s at t.

A number of technicalities set aside, and however differently one might then wish to explain how rational agents are led from (1) to (3), or how AA is possible at all, this fairly mirrors the standard picture of what synchronic AA would amount to (Davidson, 1970; Bratman, 1979; Pears, 1984; Mele, 1987; Walker, 1989). Consider now the similar case of DA, sometimes also called "incontinent belief" (Mele, 2001). Here, we have a motivated case of believing something against one's own better reasons, assuming, again, a reasonably rational, non-pathological subject.

(DA) S is subject to doxastic akrasia, or holds or retains an akratic belief, iff:
(1) S believes that p and q are incompatible;
(2) S has a reason R to believe that p;
(3) S acknowledges that R is a stronger reason than an alternative set of R*s, which warrant q (where R and the R*s are warranted by all the evidence available to S relevant to p and q, respectively);
(4) S believes that q.

Note that (DA) shares almost all features with another phenomenon of practical irrationality, namely self-deception, except for the fact that both p and q may be true propositions, whereas according to the standard view, in self-deception, q will be false, and S knows or at least takes q to be false (Heil, 1984; Mele, 1987). But, as this is the only relevant difference between DA and self-deception proper, most of what I shall discuss below will hold for both DA and self-deception. Given these definitions, let me now come back to the issue of an alleged partition of mind, which according to some explains what happens in AA and DA. It is crucial to get this point right to appreciate the very force of the tension that subjects are confronted with when engaging in irrational beliefs and action. And more importantly for our present purposes, it is a good starting point for discussing cases of AA and DA where we have interpersonal and collaborative forms of irrationality. Note that it seems to make no sense to speak of inconsistency, let alone irrationality, if what we have is just a conflict between sets of reasons or beliefs partitioned or distributed across two or more agents or believers. In marked contrast, in usual cases, the subject having a true or sufficiently warranted belief and the subject of self-deception, akrasia or DA are essentially identical: it is an individual subject. However, many have argued that the best or only possible explanation of how agents can arrive at the conclusion of AA or DA, without leading to irresolvable paradoxes, is to assume a multiplicity of rational centers within individuals.
The conflicts or inconsistencies are then construed with reference to different aspects in terms of hierarchies of values, epistemic imbalances, non-alignment between motivational strength and evaluative judgments, or a gap between conative and rational poles or cognitive subdivisions (Wiggins, 1978/1979; Davidson, 1982, 1986; Pears, 1984, 1985; Rorty, 1985; Mele, 1987). The details need not concern us here; what is important is that practical irrationality is construed here as a fragmentation or partitioning of "rational homunculi" (De Sousa, 1976) within individuals. Whether or not one subscribes to such homunculi views or rather argues for anti-partitioning accounts (Bach, 1981; Ruddick, 1988; Talbott, 1995; Barnes, 1997; Johnston, 1988), the issue does not hinge on whether one conceives of practical irrationality as a conflict, for example, between emotional and doxastic contents or as involving outright contradiction between incompatible judgments (Döring, 2008, 2009). Though I clearly favor the first view, in both cases, accounts that refer to a partitioning of rational and/or affective faculties of agents cannot do justice to the sense in which practical irrationality involves a certain tension or precisely a conflict within one and the same agent. To be sure, the concept of the identity of agents must be construed differently in cases in which we have, on the one hand, ordinary individual agents considered by themselves, and those where we have an interpersonal or a collaborative context with a plurality of agents, on the other. Thus, even if one does not accept anti-homuncular and anti-partitioning arguments as conclusive, the question is still how we can capture the conflict in practical irrationality when we start with a plurality or collective of individuals. After all, in such cases, we would normally simply speak of some conflict of interest or social conflict, which is an all-too ordinary phenomenon. In order to address this question, I shall propose three requirements that agents capable of practical irrationality must fulfill, to wit, requirements that both individual agents as well as a collective of agents, deliberating and acting upon an integrated or unified point of view, can fulfill. (1) The first is a standard requirement of agency, accepted by almost all philosophers of mind and action today, concerning practical and theoretical intentionality. It amounts to claiming that an (individual or collective) agent capable of any form of practical irrationality (akrasia, DA, or self-deception) must be an intentional agent. The general idea is fairly straightforward. A subject S is an intentional agent, i.e., S is the bearer of intentional properties. S can over time and under various practical and epistemic circumstances form, hold, and robustly entertain intentional attitudes and beliefs with propositional or some otherwise specified intentional content, practical intentions, and/or desires or other motivational, so-called 'pro-attitudes.' Moreover, it is these intentional properties that figure in folk-psychological explanations of S's behavior. Call this the Intentionality Requirement. (2) The second requirement builds directly on this ability of agents to entertain intentional states, but infuses them with certain inferential norms or a minimal form of rationality. Call this the Minimal Rationality Requirement.
According to this, S will not only have intentional states, but will typically hold relatively consistent, or at least non-contradictory, beliefs and aesthetic, moral, etc., attitudes, rank her preferences, and attend to them and their transitivity (e.g., if S prefers A to B and B to C, then S will also prefer A to C). S will be sensitive to available means and options for attaining her goals, form intentions on the basis of such options, preferences, beliefs and desires, and in normal circumstances reason and act upon those. Though such minimal rationality is necessary in order to exhibit practical irrationality, it is not sufficient. This has to do with a distinction between two ways of being sensitive to the normativity of reasons for action and belief-formation.3 What we need is a requirement that captures the sense in which agents must be sensitive not simply to available means, preference rankings, etc., but more robustly to an overall coherence as full-fledged rational agents or persons. (3) I want to argue that we need a more robust requirement, which both individual and collaborating agents (and indeed collective or group agents4) must, and can, fulfill. I shall label this the Overall Rational Integrity Requirement. The central concept at stake, the concept of a rational unified point of view (RPV), was introduced by the social ontologist Rovane (1998), who uses it to characterize the personal unity of individual and group agents (cf. Korsgaard, 1989; Pettit, 2003a; see more in Szanto, 2014). Though the notion has some structural similarities with the first-person perspective of subjective experience, importantly, it captures the idea of having a first-person (singular or plural) perspective without reference to any subjective phenomenology associated with it (the 'what-it-is-like,' if you will, of having that first-person perspective, or of being that person). What then is an RPV? It is a unified set of reasons, in the light of which S assesses her given beliefs, preferences and intentions, and which, in the course of practical deliberation and theoretical reasoning, yields conclusions as to what, all things considered, S ought to believe or do. In terms of integrity and autonomy, an agent is an autonomous intentional agent if the agent acknowledges and deliberately endorses the normative practical and theoretical conclusions provided by her own RPV and, if necessary, modifies her beliefs, preferences or intentions accordingly. This will entail not only a structural or instrumental rationality but also a form of reflective rationality. It will entail that agents are aware of and deliberatively reflect upon the reasons and motivations they have, and act not only in accordance with but also by virtue of their normative force, or else modify them. Thus, having an RPV is dependent on agents having minimal rationality [hence on (2)], but it furthermore provides the normative force to act in accordance with that structural or instrumental rationality. When it comes to collaborative contexts and groups, the RPV serves both for members and non-members as the basis for the normative and epistemological evaluation of the coherence of shared attitudes or goals. Here, sensitivity to the norms and rational standards of an RPV can be construed in terms of group members' rational dispositions. These will consist, in particular, in minimizing inconsistencies between the perspectives of the members in view of the pursuit of some shared goal.
And it will entail such group-deliberative processes as aiming at majoritarian views, minimizing disagreements or trying to solve them consensually, and, importantly for present purposes, doing so without falling prey to the socio-psychological biases I discuss below (see "Groupthink as a Case of Collaborative Akrasia," "Further Socio-Psychological Mechanisms Underlying Collaborative SI"). In more complex, especially institutional, forms of groups there will be some normatively binding rational meta-standards for integrating the relevant attitudes. These will include non-contradictory voting procedures and aggregation functions, mechanisms ensuring consistency with other dispositional attitudes, values or sub-goals of the group, predetermined levels of expertise, or even (non-authoritarian) hierarchies in order to rationally evaluate the beliefs in view of group-level goals. In the following sections, I shall suggest that practical irrationality is typically enhanced in collaborative contexts, and that this is correlated with and indeed partly due to disruptions in ER. So first we need an understanding of what exactly ER is and how its mechanisms may be disrupted, particularly in social settings.

EMOTION REGULATION AND ITS SOCIAL BIASES
Psychological work on individual ER, or self-regulation, has abounded ever since the work of Thompson (1994) and Gross (1998, 2002, 2013).5 ER involves the ways in which individuals monitor, modulate or change their emotional elicitation, experience or expression. Less technically, it refers to "influencing which emotions one has, when one has them, and how one experiences and expresses these emotions" (Gross, 1998, p. 271). According to whether ER concerns modulating the primarily situative or dispositional preconditions of emotion elicitation, on the one hand, or the actual emotional episodes or their behavioral or expressive effects, on the other, it is common to distinguish between antecedent-focused and response-focused ER processes. In particular, ER involves one of the following five processes, or some combination thereof: (i) situation selection; (ii) situation modification; (iii) change of attentional deployment; (iv) cognitive change or reappraisal; and finally (v), on the response-focused side, response modulation. The idea can be brought out by way of the following example. Consider the regular commuter, James, a choleric developmental psychologist, who decides after years of frustration with traffic jams to take the not too crowded train to the office instead. James thus selects a situation in which his anger and frustration are less likely to be elicited (viz., i). One evening on his way back from work, James enjoys his glass of wine and a book in the dining car when a mother with her crying baby chooses to sit right next to him. James feels his anger rising; in order to regulate his emotional upheaval, he might continue to tailor the situation, for example by changing his seat or starting to talk to somebody on the phone with his earphones plugged in (ii). But there are surely more subtle ways of modulating his emotions: For one, James might try to focus his attention on specific perceptual or cognitive features of the situation to alter its emotional impact. He might try to distract himself from the auditory input by looking out the window, or by concentrating more on his book (iii).
Another possibility of ER is to tell himself, or try to make himself think, that the book was boring anyway and that it's actually more interesting to observe how mothers interact with young babies outside his university laboratory. Hence, by means of cognitive reappraisal, he might select among alternative meanings attached to the situation (annoying noise vs. interesting 'field study') in order to alter its emotional significance (iv). Finally, if all these mechanisms fail, James would still have a last-ditch move and could modulate the expressive or affective responses and action tendencies his anger would elicit. He might try to influence his behavioral responses, once initiated, by some props (drinking the whole bottle of wine, or plugging in his earphones and turning the music up to full volume) or by expressive behavior (smiling or keeping a neutral expression in order to eventually calm himself down) (v). However complex the process, the relevant point for present purposes is that all these dimensions and aspects are robustly shaped by sociocultural factors. Moreover, all these mechanisms and dimensions are often disrupted precisely by these factors. Ample empirical research from developmental, social and cross-cultural psychology supports the thesis that interpersonal or small group settings as well as professional and broader sociocultural contexts robustly shape, modulate or even dictate emotional (self-)regulation (Hochschild, 1983; Kitayama et al., 2004; Parkinson et al., 2005; Mesquita and Albert, 2007; Hofer and Eisenberg, 2008; Mauss et al., 2008; Poder, 2008; Trommsdorff and Rothbaum, 2008; Kappas, 2011; Parkinson and Manstead, 2015). Thus, at the very level of emotion elicitation, affective experiences are often congruent with cultural norms and deeply shaped by interpersonal co-regulation as well as structural and sociocultural affordances (e.g., shame- or honor-based value systems, ethnic-pride or caste frameworks) (De Leersnyder et al., 2013). Moreover, emotional reciprocity and reactions to one another also shape groups' overall "regulatory styles" (Levenson et al., 2014; cf. Maitner et al., 2006). Finally, there is evidence that in large-group contexts with negative emotional exposure (e.g., mass suffering), there are specific negative biases affecting individual ER (e.g., insensitivity, "collapse of compassion") (Cameron and Payne, 2011).6 But even more intriguingly, negative modulations, biases or disruptions in both individual and group-level ER processes are correlated with irrational tendencies in collaborative deliberations and actions, or so I shall argue. Thus, just like most other socio-psychological processes, emotional co-regulation has not only a bright but also a dark side. Very often it misfires precisely in social, collaborative, intra- and intergroup contexts. And certain collaborative, intra- and intergroup engagements also play a contributing or even constitutive biasing role in entering or maintaining individual as well as group-level practical irrationality. Combining these insights, the guiding hypothesis I begin to explore here is that the negative impact of collaborative contexts on practical rationality is partly due to their specifically disruptive role on ER. But what exactly is it in ER processes that social forms of irrationality disrupt? We have seen that ER involves a number of cognitive mechanisms, including situation selection and modification, change of attentional deployment, cognitive change and response modulation.
Now, when it comes to assessing dysfunctional ER, in an integrative study on the so-called "Difficulties in Emotion Regulation Scale," lack of emotional clarity and, even more seriously, lack of emotional awareness have been suggested to be crucial components (Gratz and Roemer, 2004).7 Moreover, it has been reported that the ability to consciously perceive and correctly identify one's own conative and affective states is key in affect training aimed at better regulating certain negative emotions (e.g., aggressiveness) (Berking and Schwarz, 2014, p. 531ff.). Finally, some have suggested that (self-)deception is a much-used tactic to regulate emotions (Hrubes et al., 2004). Thus, in deceiving others or oneself, one may modify a situation by manipulating the emotions of oneself or others (situation selection). One may also change one's cognitive appraisal by means of (self-)deception (reappraisal): for example, one may convince others or oneself that one's performance was not so bad after all, or that one's poor performance was not really one's fault. Or one may redirect one's focus of attention away from particular situations or negative beliefs (attentional deployment): "ideas or beliefs that trigger negative affect may be shifted out of awareness whereas more favorable thoughts or ideas are shifted into awareness" (ibid.: 237-238); or, by means of self-deception, one may try to stop occurring thoughts about negative situations (e.g., situations evoking guilt feelings), and hence eliminate the guilt feelings altogether (see also Whisner, 1989). Building on, further developing or reversing these findings, I want to suggest that certain forms of practical irrationality, and especially DA, inhibit precisely these two crucial features of successful ER, notably clarity about the type, the evaluative or cognitive content, or the more fine-grained qualitative aspects of a given emotion. Moreover, they may even hinder the subject from having a basic awareness of having a certain type of emotion at all.8 Below, I will argue that these inhibitions and biases are typically facilitated in collaborative contexts. For now, consider a case illustrating how emotions come into play in individual practical irrationality and how practical irrationality eventually disrupts ER.9

6 Interestingly, however, there is hardly any work on how co-regulation may 'scale up' so as to include not only dyads and face-to-face groups, or one's sociocultural context, but also ER-processes of large communities or nations, e.g., in the wake of large-scale traumatic events such as the Katrina or Fukushima disasters, the present European 'refugee crisis,' or 9/11 (cf. Levenson et al., 2014, p. 279).
7 To be sure, some have recently argued that the picture looks somewhat different when it comes to addiction, where not all types of self-knowledge about one's own addiction (e.g., first-, third-personal, critical, impersonal self-knowledge) are equally apt to improve self-control; see Levy (2014, 2016), Holton (2016), and Morgan and O'Brien (2016); for empirically well-informed research on addiction and self-deception, which is congenial to my argument, see, however, Pickard (2016). For further useful philosophical accounts of addiction and irrationality, however different they may be, see Wallace (1999b), Schlimme (2010), and Uusitalo et al. (2013). For a review of the literature on emotion regulation in drug abuse, see Kober (2014).
Mele (2003, pp. 169-170) gives the example of a jealous husband, Bob, who fears that his wife Ann is unfaithful. Bob's fear may be constitutive of his desire that she is innocent and hence plays a role in his self-deceptive failure to properly assess evidence to the contrary. Not only his fear of Ann's guilt or his desire that she is innocent but also his initial affection may weaken his motivation to assess such evidence (Forgas, 1995). Furthermore, not only might Bob's emotions increase the probability of a careless assessment of information, but they may even present some information proving Ann's innocence (e.g., Ann is more affectionate to him lately than ever before) more vividly and saliently than it might, upon reflection or to an impartial observer, in fact be. What are the effects on Bob's emotion regulation? His self-deceptive behavior will not only result in informational biases regarding the actual facts of the matter but will ultimately lead him to inappropriately assess the evaluative content of his emotional state ('Well, I have no right to be jealous; everybody is unfaithful nowadays.'), to not fully recognize its phenomenal content and impact ('After all, I'm not really jealous.'), or even to misidentify the very emotion he is experiencing, which he may re-interpret, for example, in terms of pride ('My wife is the most attractive woman; everybody always wants to date her.'). In either case, and there may of course be combinations thereof, he will not successfully regulate his jealousy, and will behave, for example, increasingly irritated, nervous or depressive when his wife is not at home.10 But how do such irrationally motivated ER-biases spell out in collaboration with others, and how do they eventually enhance practical irrationality to a point where we are left with what I call a 'spiraling' of such? This is the question I wish to address in the following section.

8 I certainly cannot enter here into the complex discussion of whether one can have emotions that one is unaware of having, or the issue of non-felt emotions. Suffice it to say that I agree with Roberts that one can indeed have both emotions that one does not feel and emotions that one is unaware of having (Roberts, 2003, pp. 60-69, 318-323). In any case, the point I'm trying to make here is orthogonal to that issue. I want to put forth only the more modest claim that one can be unclear or confused about certain emotional aspects or unaware of the content or type of a given emotion. Hence, my concern here is only with types of emotional error or misrepresentation induced by practical irrationality (see more on this below, and again Roberts, 2003, esp. Chap. 4). For useful discussions of the veridicality, correctness and justification of emotions, see Deonna and Teroni (2012, chapters 1, 4, and 8).
9 I will not argue for the claim here that in some or maybe all cases of practical irrationality emotions play a direct or indirect biasing role; see Mele (2003). Let me just mention that there is a vast body of empirical literature demonstrating that emotions often prime cognitive faculties to gather and assess evidence in a biased way (Nisbett and Ross, 1980; Derryberry, 1988; Forgas, 1990; Kunda, 1990; Dalgleish, 1997; Trope et al., 1997; Tiedens and Linton, 2001; Schwarz and Clore, 2003).
Social, Mutual, and Collaborative Irrationality
Recall the above discussion of partition-of-mind accounts of practical irrationality: I have argued that they are insufficient to accommodate the intuition that practical irrationality is not simply about conflicting reasons, intentions or emotions which are distributed across agents, or across homunculi within agents. Here, I want to argue against the related claim that all there is to SI are forms of irresolvable conflicts of interest. In contrast, I shall show that there is an intriguing variety of cases in which irrationality is modulated, facilitated or even triggered by social contexts and collaborative engagements. Before going into any detail, it should be noted that not only such distinctively social but also the overwhelming majority of strictly speaking individual forms of practical irrationality are socially co-constituted. More often than not, entering and retaining akratic self-deception or performing akratic actions is facilitated by some social facts or reactions, deliberative ignorance, or the witting or unwitting assistance of others (Snyder, 1985; Harré, 1988; Ruddick, 1988; Statman, 1997; Landweer, 2001; Tenbrunsel and Messick, 2004; Deweese-Boyd, 2010). In a Sartrean and Marxist spirit, some have even argued that ideology is a form of social 'illusion' or self-deception, understood "as the ignorance or the possession of false belief about [the] social consciousness one has" (Wood, 1988, p. 352). But leaving aside the issue of the broader social context of practical irrationality that is at play in virtually all cases of irrationality, let me now distinguish three types of the genus of what I shall call social irrationality (SI), namely (i) mutual, (ii) communal and (iii) collaborative SI. Far from being a mere exercise in taxonomy, this is crucial in order to get a firm grip on the exact sense in which sociality is or is not involved in modulating the affective and rational life of individuals. In particular, it shall help us get clearer about what it means to engage, properly speaking, collaboratively in practical irrationality. (i) Consider first mutual SI. Suppose two individuals A and B do not engage in any proper collaborative engagement, let alone share any common intentions or goals, but just share a more or less ephemeral situational context. Mutual SI will arise whenever A's practical irrationality is wittingly or unwittingly assisted or reinforced by B's appropriate reaction, or vice versa. Imagine a patient-doctor interaction in which a terminal-stage cancer diagnosis looms large. In this affectively highly charged situation one or both parties might foreclose or defer an otherwise much more painstaking ER procedure by mutually assisting one another in self-deceptive belief formation about the actual state of affairs. For example, the patient's self-deceptive report of his actual condition will be facilitated or unraveled during the meeting by the doctor's (true or false, honest or dishonest) display of an (overly) optimistic attitude, or their respective self-deceptive strategies will be reinforced by the behavior of the other (cf. Ruddick, 1988; Trivers, 2002). (ii) Communal SI is similar in structure.
The main difference from the mutual case is that the participants are either bound together by a more robust framework of communal, though not necessarily shared, interests, habits or policies, or we have cases in which their respective behavior creates a pattern of 'quasi-collective' behavior, for example due to mechanisms of emotional contagion and mutual reinforcement of biasing affects. To illustrate the first scenario, consider a group of professional cyclists who are friends and are all doping. Here the individual cyclists' akratic or self-deceptive practices are (maybe even unwittingly) assisted by some tacit communal method of concealing certain facts or employing certain habitualized strategies: for example, by everybody's over-optimism ('Nobody is caught for this, come on.') or euphemistic jargon-talk ('It's just a kind of anti-oxidant.'). Notice that there need not be any explicit communal policy or some shared goal that directly motivates SI, as there would be, say, for a doping cycling team ('We have to do this, how else could we ever win?'). All there is are some more or less diffuse communal patterns of behavior or discourse. The important point is that, if the individuals were not engaged in the given communal context, they would have a much harder time not just rationalizing, but simply being clearly aware of, what exactly they are doing (being akratic cheaters), or fully realizing that they deceive themselves about the prospects of being caught. Similar deceptive discourses often facilitate individuals' irrational behavior in corporate professional settings (Ruddick, 1988; Tenbrunsel and Messick, 2004). Consider another communal case, a sudden stock-market meltdown, which is an example discussed by Salmela and Nagatsu (2016) in terms of emotional contagion. Imagine a group of purely egoistically motivated shareholders only concerned with minimizing their individual losses. Now, via emotional contagion (Hatfield et al., 2014), behavioral mimicry or similar socio-dynamic processes motivating the individuals' actions, the individual shareholders' mass-selling of their own stocks creates an affectively charged situation (a spiral of fear, distrust or 'collective hysteria') and results in a quasi-collective behavior, eventually harming all shareholders.11 (iii) So much then for non-collaborative cases. What I now want to dwell upon are collaborative forms of practical irrationality. Their first distinctive feature is that they involve two or more individuals who are bound together by some collaborative enterprise from the start and collaboratively engage in the very formation, performance or maintenance of an akratic belief or action. There are various scenarios to consider here. First, consider social dilemmas. They come in many varieties, and I shall only focus on the problem of the commons. But before doing so, as a good way to enter the problem, consider a structurally similar type of irrationality which, although not a genuine case of practical irrationality, effectively illustrates how choices, preference rankings or actions may be, individually viewed, fully rational, but turn out to be inconsistent when set into a collective context.12 Let there be three subjects with the following three-item preference rankings: S1 prefers A to B to C, S2 prefers B to C to A, and S3 prefers C to A to B. Let each individual be fully consistent and sensitive to the transitivity of their own preferences. However, if we aggregate all the rankings by simple majority vote, we end up with a preference cycle, and hence non-transitivity: two of the three collectively aggregated preferences favor A over B, two favor B over C, and two favor C over A. As Hurley puts the point, "collective choices may be irrational despite individual rationality" (Hurley, 1989, p. 138), though this surely doesn't amount to collaborative practical irrationality of the type we are interested in.
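To make the arithmetic of this aggregation explicit, here is a minimal Python sketch, not taken from the paper, that tallies the pairwise majority votes for the three rankings just described and exhibits the resulting cycle; the function name and data layout are my own illustrative choices.

```python
from itertools import combinations

# The three rankings from the text: S1: A > B > C, S2: B > C > A, S3: C > A > B.
rankings = [["A", "B", "C"], ["B", "C", "A"], ["C", "A", "B"]]

def majority_preferences(rankings):
    """Aggregate individual rankings by pairwise simple majority vote."""
    options = sorted(rankings[0])
    collective = []
    for x, y in combinations(options, 2):
        # Count how many individuals rank x above y (lower index = higher rank).
        x_over_y = sum(r.index(x) < r.index(y) for r in rankings)
        if x_over_y > len(rankings) / 2:
            collective.append((x, y))   # a majority prefers x to y
        elif x_over_y < len(rankings) / 2:
            collective.append((y, x))   # a majority prefers y to x
    return collective

print(majority_preferences(rankings))
# Prints [('A', 'B'), ('C', 'A'), ('B', 'C')]: A beats B, B beats C, and C beats A,
# each by a 2:1 majority, so the aggregate preference is cyclic and non-transitive
# even though every individual ranking is transitive.
```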
However, if we aggregate all the rankings by simple majority vote, we end up with each option being collectively preferred to one of the others, and hence with non-transitivity: we then have two collectively aggregated preferences for A over B, two for B over C, and two for C over A. As Hurley puts the point, "collective choices may be irrational despite individual rationality" (Hurley, 1989, p. 138), though this surely doesn't amount to collaborative practical irrationality of the type we are interested in. Consider now another social choice problem, individuals' akratic action concerning natural resource commons. Imagine a fishery in which, according to agreed-upon procedures, individual fishers must cooperate in order not to over-fish a given sea sector (Ostrom, 1990). Viewed from an individualistic perspective, each fisher has an incentive to maximize her own payoff, to defect in cooperation (e.g., by going out fishing during agreed-upon breaks when others do not), and thereby to harm the cooperative. If most or all engage in such short-sighted behavior, obviously they will ultimately harm themselves, even though they might profit in the short term. But notice that defecting free-riders, considered separately, will not represent a case of collaborative irrationality. In fact, the rationally dominant choice of free-riders is precisely to defect in cooperation. Viewed from the group level, however, suppose that the co-proprietors fail to agree upon general principles for governing the common together, or do not succeed in establishing a robust institutional design for collaborative governance (Dietz et al., 2003). The situation becomes collaboratively irrational if they fail to do so even though they know that, in a collaborative framework, by failing to do so they risk the resource drying up and hence ultimately harm themselves.

11 There is a slightly different but cogent case discussed by David Lewis in Convention (Lewis, 1969, p. 87), in which a self-deceptive agreement on conforming to a convention ultimately destroys the normative force of the convention and hence hinders coordination.

12 Cf. again a similar but distinct case of a so-called discursive dilemma as discussed by List and Pettit (2011; see also Pettit, 2003b). Notice, however, that discursive dilemmas are not irrational in any of the senses discussed here; cf., however, Sugden (2012).

Even though the members adopt the group's perspective, they fail to reason as "team reasoners" and to act in terms of "team preferences" (Sugden, 2000; Bacharach, 2006), to wit, preferences that they individually have precisely as co-proprietors of a common. This will often, but need not necessarily, happen because of mutual negative influences of individuals' akratic behavior. 13 Compare this to another case that involves a properly collaborative activity. Consider a similar scenario to that of the doping cyclists above, but with the relevant difference that now we have a genuinely collaborative framework: let two or more individuals be engaged in some collaborative activity involving a shared goal, collectively accepted beliefs or policies, or some similar robustly group-level dimension. Suppose that, following agreed-upon policies, (all) members of a risk management unit of a bank jointly downplay acknowledged high speculative risks, because they individually or collectively aim to maximize profit. In doing so, each member performs the same type of akratic action, or holds the same type of akratic belief, for similar or even the same reasons and by the same or similar means.
This may involve a division of labor and hence a differentiation of the specific means of irrational behavior. All members being motivationally biased, they jointly deceive themselves. Importantly, as in most collaborative instances of SI, such agency will have more serious negative consequences and result in a negative spiral of lack of self-control and hypocrisy. Akratic actions and beliefs performed in tandem with others may become more easily habitualized and more strongly entrenched than in individual agency, especially when role models lead by negative example. But even if this is not the case, the practical implications of one's own irrational action are usually magnified in collaboration and joint irrational agency (Goleman, 1989; Surbey, 2004). Moreover, individuals' rational and epistemic control will typically be reduced, and individual epistemic responsibility will be weakened or become more diffuse, both to oneself and to others.

Groupthink As a Case of Collaborative Akrasia

Before I move on, in the next sections, to explain what exactly happens in such collaborative SI, let me finally discuss in some more detail what is probably the most intriguing case of collaborative (doxastic) akrasia. I will draw upon a real-life small-group case study, a clinical group of around 30 would-be non-smokers. It was analyzed by the social psychologist Irving Janis as a paradigmatic instance of what he famously coined "groupthink" (Janis, 1982, pp. 7-8).

13 Cf. also Gilbert (2001), who convincingly argues against Sugden (2000) for the stronger claim that "collective preferences" give sufficient normative reasons for the members of a group to act in the light of those preferences or even "obligate" them to do so. That doesn't mean that members could not "rationally deviate" from group preferences. But if their reasons are not based on rational deliberation about what is best to do in light of those collective preferences to which they have initially committed themselves, then the group is "entitled" to "rebuke" the member. And even if they are based on rational deliberation, the members still owe an explanation, and indeed an apology, for why they are not acting in the light of the collective preferences. For more on the implications of Gilbert's theory for my argument, see the Section "Explaining Collaborative Irrationality."

I will follow Janis' main thrust but slightly adapt the description of the scenario for reasons of clarity. Consider then a clinical therapy group of heavy smokers gathering on a regular basis for informal conversation, exchanging views about coping with withdrawal symptoms, motivational advice and clinically supervised medication, in the fashion of an Alcoholics Anonymous group. At one meeting, a member of the group shyly announces that he has succeeded in stopping since the last meeting, an achievement, to wit, that none of the others have attained at that time, or at least have not informed the others about. Now, instead of congratulating him and getting more confident about their own prospects, the other members start slowly, but with increasing expressivity ('Hey, that's great for you, but why are you still coming then?'; 'Not everybody is a hero like you,' etc.), to treat him as an outsider who deviates from the group consensus. In particular, two ferocious members start a heated discussion and voice the claim that smoking is an almost incurable addiction. The debate soon results in a consensus that this is clinically proven.
The 'deviant' member, who had at first taken issue with the emerging consensus, quickly realizes that the others have ganged up against him, and eventually declares:

When I joined [...], I agreed to follow the two main rules required by the clinic - to make a conscientious effort to stop smoking and to attend every meeting. But I have learned from experience in this group that you can only follow one of the rules, you can't follow both. And so, I have decided that I will continue to attend every meeting but I have gone back to smoking two packs a day and I will not make any effort to stop smoking again until after the last session (Janis, 1982, p. 8).

Taken at face value, this very sincere, clear-sighted and consistent avowal is followed by the others "beam[ing] at him and applaud[ing] enthusiastically" (ibid.). But the member refrains from quitting the group, from leaving the actual session, or from reflecting more carefully upon his conflicting desires and emotions (not smoking versus his emotional affiliation to and support by the group, possibly even pride in standing out and succeeding as the only member to stop smoking), in order to modulate his emotions accordingly. The reason is that his deliberative and ER-capacities are overridden by the most salient alternatives and arguments provided by the cohesive social context. Eventually he acts upon these 'corrupted' capacities and sticks to the group and to smoking. But there is another side to the story: (doxastic) akrasia and groupthink do not stop short of exerting their powers top-down, from the group level to the individual level. As a case of properly collaborative SI, there is a two-way modulation of, or interaction in, performing akratic belief formation and action. As the example clearly illustrates, by the very akratic avowal and behavior of the initially deviant member, the members of the group consider themselves reassured in their own akratic belief that it is impossible to quit smoking all of a sudden. Moreover, there is an affectively motivated irrational tendency of the other group members, aggregated on the group level, to exert pressure on individuals to smoke (even more), especially as the final session approaches, 14 so as not to lose the affective value attached to the group sessions, such as mutual dependence and affiliation, or in-group solidarity and bonding. But this clearly contradicts the members' individual goal (to quit smoking) as well as what Tuomela (2007, pp. 32-35; 2013, p. 15) calls the "group ethos" (to help each other to stop smoking, or to do this together). It is important not to misunderstand the notion of group ethos here. To be sure, members do not, strictly speaking, act jointly in pursuing their goal of quitting smoking. In contrast, for instance, to the above-mentioned case of the risk-management unit or the doping cycling team, in this scenario the group does not constitute a group agent proper, and hence there is no shared or common goal either. After all, in an obvious sense, it is not the group that wants to stop smoking. However, given that we are dealing here precisely with a smoking therapy group and not just with randomly meeting individual smokers, the framework within which members pursue their individual goals is indeed a collaborative one. This framework represents precisely the group's ethos. It is constituted by the group's implicit and explicit rules, shared values and norms, such as the rule to help other members to quit smoking by all means, viz. by disregarding purely egoistic motives for (individual) success.
As Tuomela puts it, a group's ethos is something that "functions as kind of underlying presuppositional reason for the participants' actions" (Tuomela, 2007, p. 34). The notion of group ethos, then, specifies the normative background or rational presupposition for collaborating in light of the members' integrated rational point of view. I now suggest calling the reciprocal, top-down and bottom-up, mutually reinforcing irrational influences outlined here the collaborative spiraling of irrationality. Such reciprocal dynamics may happen not only in small groups but just as well in more robust organizational and corporate contexts. There, too, self-deceptive or irrational policies may generate, facilitate or more deeply entrench akratic actions or beliefs in the members. As we have already seen, these policies may include euphemistic or jargon-talk (Tenbrunsel and Messick, 2004) ('That's not a risky investment; it just represents the standard financial challenge of entering the sub-prime real-estate market in emerging countries') or the social ramifications of self-deceptive or overly optimistic attitudes. By way of mutual reinforcement, this will result in an overall negative shift in the 'corporate culture' of reasoning and acting, or in the "corruption" of initially collectively accepted (moral or non-moral) values (Gilbert, 2005; cf. Brief et al., 2001; Darley, 2005). In short, what we see here again is that both parties mutually reinforce akratic tendencies. The group-level irrational tendencies, induced by in-group affiliation, solidarity or social comparison among members, will reinforce individuals' akrasia, while individuals' akrasia, induced by the group in the first place, further fosters group cohesion, which, in turn, reinforces akratic behavior. One mechanism that might explain this negative spiraling is a structural feature of shared intentions. In his influential account, the social ontologist Michael Bratman has explored this in terms of agents' "mutual responsiveness" to the intentions and beliefs of one another when they engage in shared agency. As Bratman explains, this responsiveness

involves [among other features] practical thinking on the part of each that is responsive to the other in ways that track the intended end of the joint activity [...] Since the other's intentions and actions are themselves shaped by her analogous beliefs or expectations, there can be versions of Schelling's (1980) 'familiar spiral of reciprocal expectations' (Bratman, 2014, p. 79).

I want to suggest that it is precisely this spiral that may backfire, as it were, in collaborative SI and cause a negative spiral of reciprocal irrational influences. In the following (see "Further Socio-Psychological Mechanisms Underlying Collaborative SI" and "Explaining Collaborative Irrationality"), I will discuss further underlying socio-psychological and structural dynamics that help explain what exactly causes the collaborative irrational belief and behavioral tendencies. But for the present case, I suggest that the core mechanism at play is what Janis has described as the phenomenon of groupthink. Janis has shown that groupthink reliably occurs in small- and mid-sized, deeply cohesive groups. As paradigmatic settings for groupthink, he mentions such groups as "infantry platoons, air crews, therapy groups, seminars, and self-study or encounter groups of executives receiving leadership training" (Janis, 1982, p. 7). 15
In such groups, "members tend to evolve informal norms to preserve friendly intragroup relations and these become part of a hidden agenda at their meetings" (ibid.). Thus, groupthink is characterized by

a mode of thinking that people engage in when they are deeply involved in a cohesive in-group, when the members' strivings for unanimity override their motivation to realistically appraise alternative courses for action. [...] Groupthink refers to a deterioration of mental efficiency, reality testing, and moral judgment that results from in-group pressures. (Janis, 1982, p. 9)

What exactly are the defects of a group engaging in groupthink? Janis mentions seven defects in "decision-making tasks" (ibid.: 10): (i) limitation of group discussion to a few alternative courses of action, ignoring further alternatives; (ii) lack of surveying the goals and objectives and their implicated values; (iii) failure to re-examine the initially preferred actions regarding non-obvious risks and potential drawbacks; (iv) failure to re-consider courses of action initially deemed unsatisfactory or to consider non-obvious gains or factors that make the chosen alternative appear desirable; (v) lack of any attempt to gain (external) expert opinion about alternatives; (vi) similarly, a selective bias toward available factual or expert information supporting desirable courses of action and ignorance of external critical views against them; (vii) finally, failure to work out contingency plans to cope with various foreseeable setbacks that might endanger the success of the chosen course of action. But what are the structural and situational factors leading groups to such irrational behavior in the first place? Janis suggests a number of "structural faults" of groups fostering groupthink (ibid.: 248-249): they include the above-mentioned group cohesiveness, strong group loyalty and an increasing need for affiliation, especially when facing a crisis situation or being subject to stress; an insulation of cohesive decision-making subgroups from qualified intragroup experts considered as 'outsiders' or deviants until the decision is taken; the lack of previously established organizational constraints and norms to adopt methodological checks and balances or to assess critical information; or an alleged or emerging (false) group norm in favor of a particular action or decision toward which members would feel obliged. Additionally, one of the key, though neither sufficient nor necessary, "situational factors" facilitating groupthink is high stress induced by external factors, such as the "threat of losses to be expected from whatever alternative is chosen and [...] low hope of finding a better solution than the one favored by the leader" (ibid.: 250). This explains how groupthink is directly linked to emotion-regulative processes. Arguably, an immediate threat of losses or high stress affects not only deliberative but, in the first instance, affective processes. On the one hand, despair, threats, the need for affiliation or stress are prevalent targets of ER; on the other hand, they are precisely those types of affective states that tend to interfere heavily with and disrupt successful ER.
Now, with a view to the specific focus of this paper, I want to suggest that groupthink, viewed as an underlying cognitive and affective mechanism facilitating (doxastic) akrasia, arises when members' intragroup affective bonds and socio-emotional affiliation override their motivation, or their very ability, to rationally assess alternative courses for intentions and actions. What is more, strong groupthink may also inhibit access to one's own emotional arousal or even to some bodily affective states, including those induced by addictive physiological processes. Here, then, we have a clear case of what I introduced above (see Self-Deception and Doxastic Akrasia) as the lack of emotional clarity or emotional awareness that deeply affects ER. At the same time, this cognitive-cum-affective corruption undermines the members' capacity to assess alternative ways of ER. Here is how Janis describes this mechanism:

[field experiments indicate] that under certain conditions, increased social contact among group members increases not only the attractiveness of the group but also adherence to norms of self-improvement (for example, giving up smoking). Under other conditions, however, the informal norms that develop may subvert the original purposes for which the group was formed. (Janis, 1982, p. 277)

It should be emphasized that Janis and Hoffman (1970) also discovered the reverse, positive effect of successful non-smoking tendencies due to social affiliation. In a similar small-group setting, they observed members of a group of patients with high-contact partners developing "more unfavorable attitudes toward smoking" and even "fewer withdrawal symptoms of anxiety." They conclude that "the most plausible mediating factor appears to be the increase in interpersonal attraction produced by daily contact, which makes for increased valuation of the clinic group and internalization of the norms conveyed by the consultant leader" (ibid.: 25). However, in contrast to the cases above, this effect concerns, first, not mutual or group-level but, rather, interpersonal influences on ER (i.e., a level of sociality that corresponds to what I have analyzed under the heading of mutual SI). Secondly, it does not negatively affect my main line of argument anyway. Quite the contrary, it adds further ammunition to the claim that even deeply affective symptoms, such as (addiction-related) anxiety, are, for better or worse, socially mediated or co-constituted.

The Corruption of Group-Level Emotion Regulation

But there might be a further concern. One might wonder whether ER-disruptions occur only on the individual level or whether there might be forms of ER that are distinctively group-level phenomena. In other words, are there any group-level mechanisms that play the role of ER and eventually have a feedback effect on the ER-tendencies of individuals, or vice versa? As already indicated, ER processes are not only deeply embedded in sociocultural contexts; for some tightly coupled pairs of individuals (e.g., infant and caretaker, romantic partners) there are also ER-dynamics at play that can only be viewed as a dyadic 'loop' of emotional co-regulation (Krueger, 2016; Krueger and Szanto, 2016). But above and beyond that, might interpersonal ER result in group-level patterns of ER, which may be either disrupted by individual ER-deficiencies or, conversely, have biasing feedback effects on the latter? I want to briefly argue now that this might indeed be so.
In order to understand these complex socio-emotive interrelations, it is helpful to consider again what has been discussed as interpersonal ER (Zaki and Williams, 2013; Parkinson and Manstead, 2015). Interpersonal ER is a process by means of which people shape the emotions of others in their immediate social environment. It has been shown that interpersonal ER positively or negatively impacts intrapersonal ER and vice versa. Making others with whom one interacts feel better heightens one's own mood (and thus might be an indirect way of self-regulation), whereas, say, down-regulating one's own anxiety might lessen the concern friends show for oneself (Niven et al., 2012). But interpersonal ER is also an effective manipulator within small-group settings, such as therapeutic support or encounter groups, viz. in groups in which there is a high likelihood of groupthink. Thoits (1996), for example, demonstrates how in a psychodrama encounter group the group's ER-strategies strongly influence the emotions of a targeted individual. Even more strikingly for present purposes, these group-level ER-strategies have significant feedback effects on the group-level display of emotions and on the solidarity of the group as such. Thoits (1996) describes how the group generates intense negative emotional states in targeted protagonists, ultimately with a view to 'cathartic personal insight.' The group uses, for example, dramatic enactments, teasing, provocations, non-verbal communication tasks and physical-effort techniques, often enhanced with dramatic music, light effects, etc. In the wake of emotional 'crashes' on the part of the individual targets, group members are emotionally affected ("moved collectively" to tears or discomfort) and eventually engage in "group supportive acts." These involve the collaboration of many participants and include, for example, collectively displayed acts of bodily comfort toward a targeted member (e.g., collaborative lifting, rocking or massaging). Such group-supportive acts of providing ER-aids for an individual would in turn have a feedback effect on the other members. This happens by means of direct emotional contagion (e.g., seeing someone crying makes the observers break out in tears themselves) or vicarious participation (imaginatively taking up the perspective of the targeted protagonist), and leads to group-level displays of uplifting affects (e.g., group-wide hugging or energetic dancing). Ultimately, these dynamics produce what Thoits, somewhat hastily to be sure, calls "shared" or "collective emotions," increasing group solidarity. Tellingly, Thoits cites a protagonist stating that "'[It's not one person's scene;] it's our scene. This is not just one-to-one work that is being done here; it's for all of us'" (Thoits, 1996, pp. 104-105). What we clearly see in this example is that, via group-supportive acts, the group displays ER-mechanisms that affect individual members' self-regulation, but also that individuals' self-regulation has a feedback effect on the group-level ER process. The important point is that when ER-processes are disrupted, either on the individual or on the group level, there will also be corresponding ER-feedback on both the individual and the group level. And in terms of groupthink, I want to suggest not only that groupthink might lead to individual ER-deficiencies, but also that individuals' ER-failures might facilitate and reinforce groupthink.
Coming back to our initial example of the smoking therapy group, the group-level ER-disruption might unfold as follows: in the face of the akratic disruption by the deviant member and his irrational behavior, reinforced precisely by the group, the group itself might fail to appropriately regulate its affective dynamics. Due to the affective focus on the deviant member's threatening of in-group coherence, and to the reinforced group alignment and solidarity induced by groupthink, members may be unable to re-deploy their attentional focus away from the in-group/out-group dynamics and appropriately assess the new situation, i.e., 'appropriately' relative to the initial goal of quitting smoking. They will be incapable of cognitively reappraising the deviancy in terms of actual success, to wit, as a success of the group and not just of the member. After all, they might, and indeed according to their rational point of view ought to, view the alleged deviancy as exemplifying the group's emotion-regulatory success with the given individual. And they are unable to view the situation in light of the RPV of the group precisely because this is now corrupted by groupthink, or by increasing in-group affiliation and out-group demarcation of the deviant member. Hence, they will fail to act in light of, or to "promote," their group ethos (Tuomela, 2007, p. 24), according to which the whole raison d'être of the group is to collaborate in helping each other to quit smoking, or to do this together.

Further Socio-Psychological Mechanisms Underlying Collaborative SI

In this section, I want to briefly discuss two further, closely related socio-psychological mechanisms underlying SI discussed in the literature on choice theory: first, group polarization and choice shift, and second, the pooling of (unshared) information. Group polarization refers to the widely observed phenomenon that deliberative groups regularly and predictably shift toward more extreme views than the pre-deliberation median, as indicated by the members' pre-deliberation tendencies (Friedkin, 1999; Sunstein, 2002). Consider a group deliberating on whether or not to allow more social welfare for refugees. Let the discussion participants exhibit a representative sample of political views in a given country, ranging from more to less extreme views, for example from the view that 'refugees should fully pay for the time of their asylum procedure on their own,' to 'no additional social welfare,' or 'just as much social welfare as for citizens,' to the most liberal view, 'more social welfare than for citizens given their precarious state.' As shown by a number of studies, due to their stronger salience in the group discussion (discussion time, more ferocious voices, etc.), the more extreme views will gain more currency. Eventually, each and every group member will leave the discussion with more extreme views than the mean-range view of the members when entering the discussion (the mean being, say, 'no additional social welfare' or 'just as much social welfare as for citizens'). A similar phenomenon is choice shift. If polled anonymously after a group discussion, individual members of deliberative groups will typically tend to hold more extreme positions than the mean initial view (Zuber et al., 1992).
The main factors fostering group polarization and choice shift are: (i) a natural tendency toward social comparison among individuals, i.e., the disposition of members to adjust their positions to dominant or salient positions in order not to stick out too much; (ii) the role of limited, unequally distributed or disproportionate pools of persuasive arguments at the group level, which may push those members who are pre-deliberatively already inclined to the respective views in extreme directions; (iii) inequalities due to interpersonal influences which emerge during group discussions, for example more or less dominant voices; (iv) finally, and very much as in the case of groupthink, in-group/out-group divides and other insulating factors yielding underexposure to views differing from those of already like-minded in-group members (Friedkin, 1999; Sunstein, 2002). Secondly, another cogent phenomenon has often been observed in deliberative groups, namely a certain pooling of (unshared) information (Larson et al., 1994; Stasser et al., 2000). There is a strong tendency, especially in smaller face-to-face discussion groups, to focus on already shared or commonly known and available information, while not taking into account unshared, though maybe highly relevant, information. Again, the pre-deliberation and pre-discussion distribution of decision-relevant information has a strong influence on the content of intragroup and group attitudes, and this significantly affects individual as well as collective reasoning. As should be obvious, these socio-dynamic tendencies resonate well with the irrational biases and tendencies of groupthink. Moreover, when considering the effect that the salience of views and the biased distribution of information have on irrational tendencies, the role of group polarization and the pooling of unshared information in the cognitive and attentional aspects of awareness in ER should also be fairly clear. They will have a key role in corrupting the capacity of group members to change their attentional deployment, and in deficiencies regarding cognitive change or reappraisal. For example, social comparison will push individuals to adjust their views to the affects voiced by dominant members (e.g., irrational fears regarding immigration), and a disproportionate pool of persuasive arguments will also make more extreme affects regarding certain positions appear more salient. Thus, these biases will hinder individuals' emotional clarity, or even lead to a lack of emotional awareness about one's own or group-level emotional preferences. Just like groupthink, these socio-dynamic processes will significantly modulate or inhibit individuals' successful ER.

EXPLAINING COLLABORATIVE IRRATIONALITY

In the previous two sections, I have mainly focused on explaining which socio-psychological mechanisms are responsible for collaborative SI. What is still missing is a conceptual explanation of what exactly happens in such cases. This is the task I wish to pursue in this final section. To begin with, recall the three requirements that agents capable of practical irrationality must fulfill (see "Emotion Regulation and its Social Biases"). I have argued that it is in particular the Overall Rational Integrity Requirement that is crucial for establishing the possibility of collaborative irrational agency.
According to this requirement, agents have a unified rational point of view in the light of which they assess their beliefs, preferences or intentions and which, in the course of practical deliberation, yields conclusions as to what, all things considered, they ought to believe or do. Now, I want to argue that in genuine cases of collaborative SI, just as in individual cases, the irrationality amounts to a motivated non-conformity to this overall rational integrity requirement. Importantly, the non-conformity is not simply irrational; rather, there is a motivated bias, and hence some, if not good, reasons, for the members qua members to act or believe accordingly. The motivation deploys its force via the above-mentioned mechanisms and stems from the familiar affective and cognitive biases. In order to appreciate this point, consider again the subtle but important difference between mutual and collaborative SI. In the first case the irrational tendencies are also reinforced and facilitated by the interaction with others, but the irrationality of the action or belief remains on the side of the individuals alone. The rationality of the agency, values and beliefs they fail to conform to is determined by the respective individual's own RPV. And those respective RPVs might of course conflict; that is, A's RPV need not overlap with B's RPV, and might even be opposed to it. And yet, A might 'rely' on B to assist her in not conforming to her own RPV. This is different in genuinely collaborative cases. The irrationality amounts not simply to individuals, but to individual members, not reasoning or acting in light of their group's rational point of view. That is, they fail as members to integrate their preferences, intentions or beliefs into the group's RPV. Let me explain this in terms of the familiar notion of group identification as it is employed in social identity theories (Hogg and Abrams, 1988). Group identification here refers to a psychological process that involves a cognitive and an affective dimension and by dint of which individuals subscribe to, acknowledge and indeed phenomenologically experience their group's perspective, or RPV, as their own. Thus, what happens in the cases I am interested in is an irrationally motivated failure of the individuals qua group members to integrate their own RPV into the group's RPV, that is, the RPV of the group with which they keep group-identifying and, that means, whose RPV they keep subscribing to, notwithstanding this failure. Arguably, this is irrational because the members have no good, or no affectively unbiased, reason for both identifying with their group's RPV and with their own. But notice that this does not simply mean that the members are sticking to their own preferences, beliefs and intentions, which are in conflict with those dictated by their group's RPV. Rather, they stick, as group-identifying members, to their own RPV, a point of view, however, of which the group's RPV is precisely part and parcel. Their own RPV would dictate either quitting the group, or ceasing to acknowledge the group's overall RPV, or ceasing to identify cognitively and affectively with it. Instead, the individuals keep trying to integrate the group's RPV into their own, an integration that is, however, doomed to fail. But this failure, as I have tried to show, is itself not immediately apparent to the individuals, which explains why they keep trying.
The reason for this is that the individual members' own emotion-regulatory processes, and indeed their awareness of their own desires, intentions and affective states, including their affective attachment to the group, are biased or clouded by the deficiencies of the group's ER, and vice versa. Now, critics may wonder whether such collaborative cases would not simply amount to a certain internal division or partitioning within deliberative groups, such that we end up with conflicting sub-groups or, to put it bluntly, with conflicts between what individuals want and what the group wants. Thus, one might think that we are dealing here with a first-order (individual) and a second-order (collaborative) case of irrationality. If so, we would just re-introduce the partitioning picture on a higher level. But this, as I have argued, would not suffice to capture the sense of irrationality at stake in SI. After all, we would then simply be left with competing lines of reasoning or goals and, at best, a compromise of negotiated reasons and intentions. On an alternative construal, DA would either be based on the model of other-deception, where one individual or subgroup deceives another, or amount to a mere problem of practically motivated (epistemic) miscoordination. The latter possibility might be seen as the result of certain epistemological opacities (maybe due to affective biases), which in turn may result in non-cooperative behavior or in the members' poorly executed coordination regarding a shared goal. But however well they may fit some cases, both these explanations seem unable to account for the collaborative nature of the irrationality we are looking at. Relatedly, it might seem that complete conformity of the individual's overall rational point of view with that of the group is too strong a requirement for group membership as such. Put differently, does any participation in a collaborative endeavor necessarily require integrating the individual's overall RPV into that of the group, on pain of irrationality? This surely seems too big a bullet to bite in most ordinary cases of collaboration. Against these potential objections, I want to emphasize again that collaborative irrationality does not amount to a mere problem of an individual and the group, or of some subgroups, having conflicting reasons or interests. Neither do we have one individual or subgroup deceiving another (in which case, again, we would certainly not have any proper self-deception or practical irrationality), or one subgroup succumbing to the irrational tendencies of another subgroup. Rather, there is a temporary disintegration of collaborative reasoning and deliberation. In other words, what we have here is a motivated, but nonetheless irrational, disruption of group identification and eventually of the overall rational unification of the members. The result will be a temporary disintegration of the rational-cum-practical and often affective integration of the individuals, initially bound together by shared reasons, goals and intentions, or by a shared affective life. Specifically, what happens is a failure to integrate one's own rational point of view into the group's rational point of view. But notice again that the failure does not simply amount to a conflict between two incompatible perspectives. For, again, in genuinely collaborative cases, in which the members identify cognitively, psychologically or phenomenologically with their group, part and parcel of one's own RPV is precisely the group's RPV.
Conversely, if collaboration is supposed to be taken seriously, as in therapy group settings for instance, the group's RPV is supposed to be 'in line' with that of its members, and indeed purports to be a more or less practically and affectively coherent integration of each and every member's RPV. So we don't simply have two or more RPVs (the group's and the members') conflicting with each other, but a practically irrational, disintegrating force working among or across the members and ultimately disrupting the overall unification of the group as a whole. Viewed from the individuals' perspective, what goes wrong in the process is that the relevant aspect of the individuals' RPV is not sufficiently rationally integrated into the group's RPV, to wit, the aspect which, via group identification, is in pursuit of a collaborative (and not just an individual) goal. Surely, not in all cases in which individuals cannot integrate their RPV into the group's do we have individual akrasia or self-deception at play. It might simply be that the individual's norms of rationality are stronger than the ones dictated by the group, or that the group identification of the individual is not strong enough for an irrationality to arise in the first place. In such cases, there is room for "rational deviation" (Gilbert, 2001) from the collective goals and preferences. One will then have rationally valid excuses for not fulfilling the obligations associated with a collective preference, for example for reasons of conscience. This will be a matter of contextual differences, and one will have to look closely into the individual cases. But if the individual's group identification is strong enough, and that means if the group's RPV is part and parcel of, or rationally and relevantly integrated into, the individual's, then the conflict will not be a matter of first- and second-order (ir)rationality conflicting, but will itself be a collaborative conflict of rationality. 'Relevant' integration here, again, means that the individual must integrate into her own RPV those elements of the group's RPV that concern pursuing a collaborative goal in which the individual aims to partake herself (e.g., 'our goal to stop smoking together is my goal'). So this doesn't mean that the individual couldn't at all, or couldn't rationally, deviate from the group's RPV. She may still try to weigh both the psychological and the cognitive evidence for what's best for her to do and what's best for the group (e.g., 'Is it best for me to stop smoking at the risk of tipping the affective in-group balance and dissolving group homogeneity?'; 'Should I not stop smoking and follow the group's rules?', etc.). But there are also cases, and these are the ones I have been discussing, where one may not have rationally valid excuses or reasons (for oneself) not to fulfill the obligations associated with a collective goal. Moreover, because the individual is torn not only by her own akratic tendencies but is also, and more importantly, given her initial group identification, biased by in-group affiliation tendencies or similar dysregulatory mechanisms, she might simply no longer be in a position to adequately assess the available evidence, either rationally (what is all things considered best to do) or psychologically (what is best for me given my group identification and my own goals).
Similarly, on the group level, the disintegration is not simply a partitioning of the group into conflicting subgroups, but rather a dissolution of the normative force of an initially joint commitment to collective reasons, values and practical conclusions, which are provided by its rational point of view. That is, not only do all of the above requirements (1)-(3) still hold, such that the irrationality can arise in the first place; what is more, the members acknowledge and also continue to lay claim to their group's rational point of view, such that its normative force still exerts its influence. Another way to put this is to point to a certain disintegration or drifting apart of individuals' 'personal' intentions, on the one hand, and the group's 'joint' intention, on the other. In terms of Gilbert's (1989, 2009) prominent account of collective intentionality, this may happen due to the fact that, being motivationally biased, none of the participants of a certain collaborative endeavor has a personal commitment to a shared belief or intention, even though they continue to be jointly committed to it. To appreciate this point, consider Gilbert's notion of 'joint commitment.' Gilbert uses this notion as a technical concept to highlight a difference between 'joint' and 'personal' commitments to collective beliefs or actions. In her view, when parties jointly commit themselves to a shared belief or intention, they must see to it "as far as possible to emulate, by virtue of the actions of each, a single body that intends to do the thing in question" (Gilbert, 2009, p. 180). By doing so, they are jointly committed to the intentional action. The central idea is that, in sharp contrast to personal commitments, none of the parties can suspend the normative force of the commitment thus created individually or separately but, rather, only through joint deliberation. But as I have tried to argue, what happens in collaborative SI is precisely the opposite: the parties, due to collaboratively induced ER-deficiencies and the ensuing affective and cognitive biases, do not properly realize that they are in fact still jointly committed to the collaboration, while they only act upon their (akratic) personal intentions.

CONCLUSION AND FUTURE DIRECTIONS

I have argued that, given three requirements, agents are capable of mutual, communal, and in particular collaborative forms of AA and DA. I have provided a conceptual model to analyze these in terms of a failure to integrate individual members' rational points of view into the overall rational point of view of the group with which the members keep group-identifying. Indeed, I hope to have shown that such social forms of irrationality are not only real but are also rather common and prevalent phenomena. Moreover, I have emphasized that in collaborative cases there is a two-way modulation: a bottom-up one, leading from individuals' irrational tendencies to those of the group, and a top-down one, leading from groups' irrational policies to individuals' irrational action and belief-formation. I have explored these reciprocal, reinforcing dynamics in terms of what I call the 'collaborative spiraling of practical irrationality.' Furthermore, I have argued that in some instances collaborative irrationality is due to a salient deficiency in ER, namely to the socially motivated misidentification of one's own affects. I have suggested that this biases one's own affect control and eventually one's group's ER.
Consequently, I have claimed that various social engagements often play a contributing or even constitutive role in entering or maintaining practical irrationality, and that this is, in turn, partly due precisely to their disruptive role with regard to ER. Now, even if the argument goes through, some may wonder whether a circularity lurks at its very dialectical core, especially when considering the latter claims: thus, it might seem that there is a circularity between the claim that some social forms of irrationality have a disruptive role with regard to ER, on the one hand, and the claim that the resulting biases in individual and group-level ER reinforce collaborative irrationality, on the other. Put differently, one may wonder about the direction of the causal influence regarding the emergence and indeed intensification of irrational biases: is it failures in emotional (co-)regulation that lead to or increase collaborative akrasia, or vice versa? Or does the influence go both ways? However, I believe that the 'circularity,' though in fact real, represents a virtuous and not a vicious circle, which, rather than threatening the argument, lends credence to it. For, indeed, there is a certain feedback between disruptions of collaborative and individual rationality, on the one hand, and disruptions of individual and group-level ER, on the other, and this mirrors the spiraling of collaborative irrationality that I have elaborated upon. Typically, in real-life scenarios, once this spiraling is set in motion, it proves almost impossible to stop the feedback loop between disruptions of what one or one's group can emotionally access and regulate, on the one hand, and disruptions of what one or one's group is rationally able to do or to think, on the other. Let me close with a remark on a lacuna and on a potential direction for future research. In this article, I have concentrated on mutual, communal, and especially collaborative forms of practical irrationality. However, I contend that, given our three requirements, there is also room for genuinely collective and organizational forms of SI. The subjects of akrasia and self-deception would then not be collaborating individuals but fully fledged group or corporate agents (Pettit, 2003a; cf. Sugden, 2012). But if one admits such cases, one might also wonder how collective or shared emotions might play a role here and, furthermore, whether there might be not just interpersonal and group-level but genuinely collective forms of ER-biases. Though this is still contentious and depends on a number of further assumptions about the possibility of collective agency and emotions, there is already a large body of literature in philosophy that may pave the way to move ahead in this direction (e.g., Schmid, 2009; List and Pettit, 2011; von Scheve and Salmela, 2014; Szanto, 2015, 2016, 2017; Tollefsen, 2015; León et al., under review). Above and beyond the need to properly analyze these cases in and for themselves, I believe that, in order to get clear about the exact sense in which sociality modulates the affective and rational life of individuals, future research should analyze this whole variety of cases. Only then shall we adequately understand not only the sense in which we help regulate our emotions in tandem with others or together, but also the sense in which ER and rational behavior systematically fail precisely given the presence of others.

AUTHOR CONTRIBUTIONS

The author confirms being the sole contributor of this work and approved it for publication.
FUNDING

Work on this paper was supported by the European Union (EU) Horizon-2020 Marie Skłodowska-Curie Individual Fellowships research project SHARE (655067): Shared Emotions, Group Membership, and Empathy.

ACKNOWLEDGMENTS

Earlier versions of this paper were presented at the University of Manchester, University of Copenhagen, University College Dublin, University College Cork, and Trnava University. I am grateful for the comments that I have received on these occasions, especially to Lillian O'Brien, Philip Pettit, and Maria Baghramian. I am also deeply indebted to Olle Blomberg, Carina Staal, and Michela Summa, who have read and commented on an earlier version of this paper, and three reviewers for their constructive criticism.
19,035.8
2017-01-04T00:00:00.000
[ "Philosophy", "Psychology" ]
Unveiling a new structure behind the Milky Way

Context. The zone of avoidance (ZOA) does not allow clear optical observations of extragalactic sources behind the Milky Way due to the significant extinction of the optical emission of these objects. Observations at NIR wavelengths represent a potential source of astronomical discoveries, supporting the detection of new galaxies and completing the picture of the large-scale structure in this still little-explored area of the sky. Aims. Our aim is to decipher the nature of the overdensity located behind the Milky Way, in tile b204 of the VVV survey. Methods. We studied an area of six arcmin around a galaxy concentration located at l = 354.82° and b = -9.81°. We selected five galaxies, taking into account the source distribution on the sky in order to optimise the requested time for the observations, and we obtained their spectra with the Flamingos 2 long-slit spectrograph at the Gemini South 8.1-meter telescope. To identify and characterise the absorption features we fitted the galaxies' underlying spectra using the starlight code together with the IRTF stellar library. In addition, the spectroscopic findings are reinforced using complementary photometric techniques such as the red sequence and photometric redshift estimation. Results. The mean spectroscopic redshift estimated from the NIR spectra is z = 0.225 ± 0.014. This value is in good agreement with that obtained from the photometric analysis, photo-z = 0.21 ± 0.08, and with the probability distribution function of the galaxies in the studied region. Also, the red-sequence slope is consistent with the one expected for NIR observations of galaxy clusters. Conclusions. The redshifts obtained from both photometric and spectroscopic techniques are in good agreement, confirming the nature of this structure at z = 0.225 ± 0.014 and unveiling a new galaxy cluster, VVVGCl-B J181435-381432, behind the Milky Way bulge.

Introduction

Observing extragalactic sources beyond the Milky Way poses a continuing challenge, since the observations are hampered by Galactic dust absorption. In this area of the sky, called the zone of avoidance (ZOA), the dust and stars therein obstruct optical observations, and the lack of information produced by dust absorption yields an incomplete picture of the existing galaxies, and therefore of the extragalactic structures behind the ZOA. To obtain better information on this region, several galaxy catalogues have been developed at different wavelengths. The optical catalogues of Kraan-Korteweg & Lahav (2000) and Woudt et al. (2004) have allowed the detection of new galaxies at low Galactic latitudes, although the gathering of information is restricted by Galactic dust and stars. Furthermore, near-infrared (NIR), X-ray and H I radio surveys (Roman et al. 1998; Ebeling et al. 2002; Vauglin et al. 2002; Koribalski et al. 2004; Paturel et al. 2005; Skrutskie et al. 2006; Huchra et al. 2012) have detected galaxies and galaxy clusters at low Galactic latitudes. Also, Jarrett et al. (2000) identified and extracted extended sources from the Two Micron All-Sky Survey (2MASS) catalogue, and Macri et al. (2019) presented redshifts for 1041 2MASS Redshift Survey galaxies that previously lacked this information, mostly located within the ZOA. Moreover, several works have attempted to reveal the presence of extragalactic structures, such as groups or clusters, behind the Milky Way. In this line, Nagayama et al.
(2004) performed a deep NIR survey with 19 confirmed galaxies and 38 galaxy candidates in a region of 36 × 36 arcmin² centred on the giant elliptical radio galaxy PKS 1343-601, at the core of an unknown rich cluster located in the Great Attractor region. Also, Skelton et al. (2009) published a deep Ks-band photometric catalogue containing 390 sources (235 galaxies and 155 galaxy candidates) in a region of 45 × 45 arcmin² around the core of the rich nearby Norma cluster (ACO3627). Furthermore, using the H I Parkes All-Sky Survey, Staveley-Smith et al. (2016) observed 883 galaxies, delineating possible clusters and superclusters in the Great Attractor region, at low Galactic latitude. On the other hand, Schröder et al. (2019) published a catalogue with 170 galaxies, from the blind H I survey with the Effelsberg 100-m radio telescope, located in the northern region of the ZOA. These new large-scale extragalactic structures could be part of possible filaments at the edge of the local volume. Also, Kraan-Korteweg et al. (2017) discovered a supercluster of galaxies in the ZOA by performing optical spectroscopic observations of galaxies in the Vela region. These results represent a great advance, but they still leave a large unexplored area.

The NIR public survey VISTA Variables in Vía Láctea (VVV; Minniti et al. 2010; Saito et al. 2012) has proven that, although the main scientific goals of VVV are related to stellar sources (Minniti et al. 2011; Beamín et al. 2013; Ivanov et al. 2013), its exquisite depth (3 magnitudes deeper than 2MASS) and angular resolution make it an excellent tool to find and study extragalactic objects in the ZOA. For example, Amôres et al. (2012) identified 204 new galaxy candidates from the VVV photometry of a 1.636 square degree region near the Galactic plane, increasing by more than an order of magnitude the surface density of known galaxies behind the Milky Way. Further, Baravalle et al. (2018) found 530 new galaxy candidates in two tiles in the region of the Galactic disk, using a combination of SExtractor and PSFEx techniques to detect and characterise these candidates, and Baravalle et al. (2021) presented the VVV NIR galaxy catalogue containing 5563 galaxies beyond the MW disk. Related to extragalactic structures, Coldwell et al. (2014) found the VVV NIR galaxy counterparts of a new cluster of galaxies at redshift z = 0.13 observed in X-rays with SUZAKU (Mori et al. 2013). They detected 15 new candidate galaxy members within the central region of the cluster, up to 350 kpc from the X-ray peak emission, with typical galaxy magnitudes and colours. In addition, Baravalle et al. (2019) confirmed the existence of the first galaxy cluster discovered by the VVV survey beyond the Galactic disk, using spectroscopic data from the Flamingos 2 spectrograph (hereafter F2) at the Gemini South Observatory. More recently, Galdeano et al. (2022) presented an NIR view of Ophiuchus, the second brightest galaxy cluster in the X-ray sky, finding seven times more cluster galaxy candidates than the number of Ophiuchus galaxies reported in previous works. In Galdeano et al.
(2021, hereafter G21) we found an unusual concentration of galaxies by exploring the b204 VVV tile, located at low latitude in the bulge region. In this area of 1.636 square degrees we detected 624 extended sources, of which 607 correspond to new galaxy candidates catalogued for the first time. By exploring the spatial galaxy distribution we found a smaller region, with a radius of 15 arcmin, with a noticeably higher density, representing approximately 12% of the whole tile b204. This region contains 118 visually confirmed galaxies, a density three times higher than that of the remaining tile area. Also, the comparison of the number of galaxies in this area with the values obtained from mock catalogues reinforced the existence of a galaxy overdensity. Even though the results from G21 provide strong evidence of extragalactic structures, confirmation through spectroscopic redshift measurements ensures a better knowledge of the nature of this structure. Based on these studies, in this paper we present the results of Gemini South Observatory spectroscopic NIR observations of galaxies from the overdensity detected in G21. Moreover, we complement the analysis with systematic photometric studies of this region, with the aim of unveiling new extragalactic structures behind the ZOA.

This paper is structured as follows: in §2 we describe the data and the galaxy sample used for the analysis; the spectroscopic observations and data reduction are also presented in this section. In §3 we present our general results, describing in §3.1 the techniques used to obtain the spectroscopic redshifts; the red-sequence analysis is presented in §3.2, and in §3.3 we describe the method used to obtain photometric redshifts. Finally, we discuss and summarise our main conclusions in §4. The cosmology adopted throughout this paper is Ω_m = 0.3, Ω_Λ = 0.7, and H_0 = 100 km s^-1 Mpc^-1.

The data

The results obtained in G21 motivated us to carry out deeper research on the reported overdensity area of the sky. With this aim we selected galaxies on which to perform NIR spectroscopy and decipher the nature of this structure. In order to optimise the requested time for the observations we took into account the source distribution on the sky. We then chose five galaxies for the spectroscopic confirmation, using two F2 slit positions to observe two and three galaxies simultaneously in each position. In addition, the selected galaxies are bright enough to guarantee a suitable S/N, reducing the observation time, and present photometric features, such as morphology and colours, typical of galaxies in dense environments. The coordinates and total magnitudes of the spectroscopic NIR observational targets are detailed in Table 1.

The overdensity of galaxies behind the Milky Way can be noticed in Fig. 1 (top panel), where the density profile, estimated as a function of the angular distance from the geometric centre of this region (l = 354.82° and b = −9.81°), is presented. In this figure a clear excess is observed toward the region closest to the adopted central position. Therefore, with the aim of confirming the existence of an overdense structure consistent with a galaxy cluster, in this paper we restricted the analysis to a smaller region around the central coordinates of the galaxy concentration, considering a radius of six arcmin. This limiting value approximately corresponds to the radius where the excess becomes noticeable. In this area 58 galaxy candidates fulfil the selection criteria described in G21.
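As a rough illustration of how such a radial density profile can be built from a list of galaxy positions, the following Python sketch bins angular separations in annuli around the adopted centre and attaches bootstrap uncertainties. The function name, its arguments and the binning choices are illustrative assumptions for this note, not part of the G21 pipeline.

```python
import numpy as np

def radial_density_profile(l, b, l0, b0, r_max=15.0, n_bins=10, n_boot=1000, rng=None):
    """Surface density (counts per arcmin^2) in annuli around (l0, b0), with
    bootstrap uncertainties in the spirit of Barrow et al. (1984).
    Coordinates are in degrees; radii are handled in arcmin."""
    rng = rng if rng is not None else np.random.default_rng(0)
    # Small-angle separation in arcmin; cos(b0) corrects the longitude offset.
    r = 60.0 * np.hypot((l - l0) * np.cos(np.radians(b0)), b - b0)
    edges = np.linspace(0.0, r_max, n_bins + 1)
    area = np.pi * (edges[1:] ** 2 - edges[:-1] ** 2)   # annulus areas, arcmin^2
    counts, _ = np.histogram(r, bins=edges)
    density = counts / area
    # Bootstrap: resample the galaxies with replacement and redo the binning.
    boot = np.empty((n_boot, n_bins))
    for i in range(n_boot):
        resampled = rng.choice(r, size=r.size, replace=True)
        boot[i], _ = np.histogram(resampled, bins=edges)
    err = boot.std(axis=0) / area
    centres = 0.5 * (edges[1:] + edges[:-1])
    return centres, density, err

# Hypothetical usage with arrays of galaxy coordinates (in degrees):
# centres, dens, err = radial_density_profile(l_gal, b_gal, l0=354.82, b0=-9.81)
```

The small-angle separation and the uniform binning are deliberate simplifications; a realistic treatment would also account for the survey mask and edge effects before comparing with mock catalogues.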
The spatial distribution of the visually confirmed galaxies in the considered area can be observed in Fig. 1 (bottom panel). From the density map it is possible to identify the region with the highest density of extragalactic sources. The five selected galaxies are shown as red squares. In addition, in Fig. 2 we show a VVV false-colour multi-band (Z, J, Ks) image of the six arcmin radius studied area. It is possible to appreciate the five galaxies observed with F2 (red squares). Most of the detected extragalactic sources exhibit extended morphology, typical of galaxies, and present redder colours than the foreground stellar sources, as can be clearly observed for the 58 galaxy candidates zoomed in the right-hand boxes of Fig. 2. This remarkably large number of extragalactic sources, found in a very limited region of the sky, could indicate the presence of a group or cluster of galaxies. For the spectroscopic observations we selected F2, which is a NIR imaging, long-slit and, recently, multi-object spectrograph at the Gemini South 8.1-meter telescope, located on Cerro Pachón, Chile. This instrument offers a wavelength range of 0.9-2.5 µm and a circular field of view of 6.1 arcmin on a 2048 × 2048 pixel HAWAII-2 detector array. F2 has a refractive all-spherical optical system providing 0.18 arcsecond pixels. We observed a total of five galaxies with F2 in April 2019 (Program ID: GS-2019A-Q-123). These sources were selected from the list of 58 candidates according to their spatial distribution and brightness, in order to optimise the requested time for spectroscopic observations. Fig. 1: Top panel: Density profile as a function of the angular distance to the geometric centre of the overdensity region detected in the work of G21. The uncertainties were derived via a bootstrap resampling technique (Barrow et al. 1984). Bottom panel: Density map of visually confirmed galaxies around the central overdense region. Filled contours are colour-coded according to number counts in pixels of 2 arcmin × 2 arcmin. The dashed circle represents the six arcmin radius studied area. Grey dots indicate the positions of the 58 visually detected galaxies (dot sizes are scaled according to Ks magnitude). The open red squares correspond to the galaxies observed with F2. The observations were made using the HK (1.2-2.4 µm) band grism. Taking into account the VISTA telescope scale and the half-light radius calculated with SExtractor for the five galaxy candidates, we estimate the apparent width of these objects at approximately 1.1 arcsec. We therefore selected the 6-pixel long-slit width in order to collect the galaxy signal while avoiding the light of the stars in the field. In this configuration we obtained a spectral resolution close to 1400. The observations were performed with an airmass lower than 1.2 and no cloud cover. The average seeing during the observing run was estimated at 0.9 arcsec. For each target we observed 12 × 110 sec (0.37 hr) of exposure time, and the observational sky conditions were optimal, allowing us to reach S/N ∼ 70 for all galaxies in the spectral region between 2.1 µm and 2.2 µm. With the first slit position we observed three galaxies with Ks magnitudes of 15.18, 14.32 and 13.69, respectively. The angular projected distance between the first and second galaxy is 2.25 arcmin, and between the first and third it is 3.81 arcmin. We located the long slit at a position angle of 122° from E to N.
At the second slit position we observed two group/cluster galaxy candidates with magnitudes Ks = 15.09 and Ks = 14.82, respectively. The distance between the galaxies is 0.42 arcmin. In this case the position angle of the long slit was 97° from E to N (see the green lines in Fig. 2). In order to provide telluric standards at similar airmasses we observed HIP 92688, an A0 V star. The spectra were reduced with the Gemini IRAF package, version 1.14. As part of the basic data reduction steps we created the necessary dark images and the normalised flat field. Then, we reduced the arc and determined the wavelength solution. We reduced and combined the telluric data and applied the wavelength calibration to extract the telluric spectrum. Next, we reduced and combined the science data. These steps were carried out with the nsreduce, nscombine, nsfitcoords, nstransform and nsextract routines. Afterwards, the wavelength calibration was applied to the science data in order to extract the science spectrum, using suitable apertures to reduce the noise and reach the best S/N in each one. Finally, we applied the telluric correction. Results The knowledge obtained from the spectroscopic observations with F2 at Gemini can be reinforced by analysing the photometric information available from the VVV survey. In this section we describe both the spectroscopic and photometric results. Spectroscopic redshift estimation The galaxies inhabiting high-density regions have identifiable characteristics such as absorption lines and red colours. In Fig. 3 we present the NIR spectral features and the detected lines. The RGB images of these galaxies show their bulge-type morphology and red colours, as expected for galaxies within galaxy clusters following the morphology-density relation of Dressler (1980). The observed spectra clearly present absorption features. Using the photometric redshift estimates as a first approach (see § 3.3), these characteristics are located in the NIR spectral region, which is rich in absorption components (see Riffel et al. 2011, 2015, 2019). To correctly identify these features we have cross-correlated the observed spectra with a set of stellar spectra, using the starlight code (Cid Fernandes et al. 2004, 2005; Asari et al. 2007; Cid Fernandes 2018). The procedures we have followed are described in Riffel et al. (2009, 2015,
2022), including, for example, the methods used to handle extinction, emission lines and differences in spectral resolution. Briefly, starlight fits an observed spectrum O_λ with a combination, in different proportions, of N* spectral base elements, solving the equation M_λ = M_λ0 [ Σ_j x_j b_j,λ r_λ ] ⊗ G(v*, σ*), where M_λ is the model spectrum; b_j,λ r_λ is the reddened spectrum of the jth base element normalised at λ_0; r_λ = 10^(−0.4(A_λ − A_λ0)) is the reddening term; M_λ0 is the theoretical flux at the normalisation wavelength; x is the population vector; and G(v*, σ*) is the Gaussian distribution used to model the line-of-sight stellar motions, centred at velocity v* with dispersion σ*. The final fit is carried out by searching for the minimum of χ² = Σ_λ [(O_λ − M_λ) w_λ]², where emission lines and spurious features are masked out by fixing w_λ = 0 (normally, w_λ = 1/e_λ, with e_λ being the uncertainty in F_λ). The quality of the fit is assessed by χ²_Red, which is the χ² given by equation 2 divided by the number of points used in the fit, and by adev = |O_λ − M_λ|/O_λ, which is the percentage mean deviation over all fitted pixels. For a detailed description of starlight see its manual. Since we are interested in identifying the stellar features, to define the set of spectra used by the code to fit the underlying continuum we have followed the approach of Riffel et al. (2015) and used their stars approach, which is a base composed of all 210 dereddened stars (spectral types F-S/C, F being the hottest available) in the IRTF spectral library (Rayner et al. 2009; Cushing et al. 2005). After this procedure, we have been able to recognise absorption and emission lines considering Rayner et al. (2009). The spectral lines detected in the majority of galaxies are FeI, SiI, TiI, and MgI. Besides that, CrI, KI, CaI, and MnI lines have been detected in some cases. - ID 01 - VVVJ181430.28-381332.7: presents TiI (1.259 µm). The measured redshifts of the five observed galaxies are very similar, suggesting that they belong to a common system of galaxies with an estimated mean redshift of z = 0.225 ± 0.014. This finding allows us to speculate about the existence of a galaxy cluster behind the Galactic bulge. In the following section, the photometric information of these galaxies, and of the remaining ones in the studied region, could shed light on this finding. Red sequence Galaxy colours provide strong information about the evolutionary processes they have been subjected to, and are therefore indicative of the environment where they reside. The colour-magnitude diagrams of galaxy clusters, known as the red sequence (Gladders & Yee 2000), contain a well-defined, highly regular population of early-type galaxies, and have also been used by several authors to successfully identify clusters of galaxies (e.g. Gladders & Yee 2000; López-Cruz et al. 2004; Söchting et al. 2006). In order to determine the nature of this overdense region we use the VVV photometric information (as described in G21) of the galaxies within a six arcmin radius of the estimated position of the overdensity to analyse the colour-magnitude relation. To further strengthen the identification of this region as a galaxy cluster we used the model of Stott et al. (2009), calculated from the semi-analytical model of Bower et al. (2006), considering the measured spectroscopic redshift z = 0.225 ± 0.014, obtaining a red-sequence model as shown in the colour-magnitude diagram of Fig.
4. Then, we performed a χ-squared goodness-of-fit test, finding a value of p > 0.05. Therefore, we cannot reject the null hypothesis that this red-sequence model suitably fits our data. Fig. 4 shows that all the galaxies in the six arcmin radius area are found within ±3σ around the red-sequence model. Further, by restricting the selection to ±1σ around the linear model we found 40 galaxies (∼ 69%). This finding suggests that the galaxies from this structure reproduce a well-defined red sequence corresponding to a galaxy cluster at the spectroscopic redshift estimated in Sec. 3.1. This result is consistent with that found by Baravalle et al. (2019) and Galdeano et al. (2022) for other galaxy clusters studied using the VVV survey. Photometric redshift In order to investigate the redshift distribution of the 58 galaxy candidates in the area under study we estimated photometric redshifts by running the software EAZY (Brammer et al. 2008). We consider the default set of parameters, using all templates simultaneously, the v1.0 template error function and the K-extended prior. To calculate the limiting redshift we consider that the VVV photometry is three magnitudes deeper than 2MASS (Minniti et al. 2010). Therefore, taking into account the limiting redshift of 2MASS extended sources, around z = 0.2, and the Ks magnitude limit Ks = 15 (Bilicki et al. 2014), we can calculate an absolute magnitude limit M_Ks ∼ −25 for 2MASS extended sources. If we take the limiting magnitude of our candidates, Ks = 17, we can observe a galaxy with absolute magnitude M_Ks ∼ −25.5 up to z ∼ 0.45; therefore we allowed photometric redshift solutions in the range 0 < z < 0.45 with a step of 0.01. We built a photometric input catalogue that contains the photometric fluxes and uncertainties observed in the YJHKs VVV filters, requiring a minimum of three fluxes to perform the fit. We also require peak-prob > 0.9 to prevent unreliable photometric redshift fits. Under these constraints we can estimate photometric redshifts for 34 galaxy candidates, obtaining a mean redshift photoz = 0.21 ± 0.08. In Figure 5 we show the spatial distribution of the galaxies in our sample, colour-coded according to the obtained photometric redshift. From this figure we can appreciate the good agreement among the galaxy photometric redshifts in the area under study. However, there are some galaxies with estimated photometric redshifts differing significantly from the obtained spectroscopic redshift (six with photoz ∼ 0.05 and three with photoz ∼ 0.4). To study contaminant galaxies we plot, in the top panel of Fig. 6, the distribution of photoz as a function of the projected distance to the centre. We include in this plot the individual uncertainties in the photometric redshift estimation, and show the obtained spectroscopic redshift and its associated errors. As can be appreciated, three of the most distant galaxies with photoz ∼ 0.05 can be considered interlopers because they differ from the spectroscopic redshift by more than 3σ. The errors associated with the galaxies with photoz ∼ 0.4 are too large to draw any conclusions. To strengthen our result we carry out an inspection of the probability distribution functions of the galaxies with reliable photometric redshifts. In the bottom panel of Fig.
6 we show the average probability distribution function for these 34 objects, finding that the resulting distribution is well behaved. We also plot the obtained spectroscopic redshift and its associated error (vertical red and grey lines, respectively). From the figure it can also be appreciated that the peak of the average probability distribution function is slightly shifted toward higher redshifts with respect to the measured spectroscopic redshift, although the difference is smaller than 3σ. Discussion and summary Considering the high contamination by dust, gas and stellar objects in the Milky Way central region, the lack of information about the extragalactic objects located behind this zone is well known. To overcome this constraint, we analysed NIR images with the aim of reducing the contamination and highlighting the light coming from external galaxies. In this work we present an analysis of an extragalactic overdense region located in tile b204 of the VVV survey. In this study we restricted the analysis to an area of six arcmin around a galaxy concentration located at l = 354.82° and b = −9.81°. In this area 58 galaxy candidates can be observed fulfilling the selection criteria described in G21. From this sample we carefully selected five galaxies for NIR spectroscopy, taking into account their magnitudes and spatial distribution in order to optimise the requested time for spectroscopic observations. In this way five spectra were taken with the Flamingos 2 long-slit spectrograph at the Gemini South 8.1-meter telescope. We found that all the spectra display absorption features, in agreement with the characteristics expected from the morphology of the observed galaxies. We also carried out a stellar population synthesis on the mean spectrum using the starlight spectral synthesis code. In this way we found that most galaxies show continua dominated by stellar absorption features. The most frequently detected lines are from Fe I, Si I, and Ti I. The spectroscopic redshift was calculated for every single galaxy, considering all the lines identified in each spectrum, finding a mean redshift z = 0.225 ± 0.014. Taking into account this result, the six arcmin radius used for the analysis performed in this paper corresponds to ≈ 1 h⁻¹ Mpc, which represents a typical galaxy cluster radius at this redshift. In order to reinforce our result, complementary techniques such as the red sequence and photometric redshifts were applied. In this way, we analysed the NIR colour-magnitude diagram, considering the Ks magnitude and the J−Ks colours. Then, the red-sequence model was constructed following Stott et al. (2009) for a galaxy cluster at the measured spectroscopic redshift z = 0.225 ± 0.014, finding 58 and 40 galaxies (∼100% and ∼69%) within ±3σ and ±1σ around the red-sequence linear model, respectively. Furthermore, we estimated the photometric redshifts of the galaxy candidates by running the software EAZY. We applied constraints requiring a minimum of three fluxes and peak-prob > 0.9 to ensure reliable fits. In this sense the final sample has 34 galaxy candidates with reliable photometric redshift estimations, with a mean value photoz = 0.21 ± 0.08. We also made an inspection of the probability distribution functions of the galaxies with reliable photometric redshifts, finding that the resulting average distribution is well behaved, with a mean and dispersion in good agreement with the average photoz.
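As a compact illustration of the red-sequence membership counts quoted above, the sketch below assumes a linear model, colour = slope × Ks + intercept, with an adopted scatter σ; the slope, intercept and σ would come from the Stott et al. (2009) model at z ≈ 0.225 and are not reproduced here, so all parameter values are assumptions.

```python
import numpy as np

def red_sequence_members(ks, j_ks, slope, intercept, sigma):
    """Count galaxies within +/-1 sigma and +/-3 sigma of a linear red-sequence model.

    ks, j_ks         : arrays of Ks magnitudes and J-Ks colours.
    slope, intercept : parameters of the model colour = slope * Ks + intercept.
    sigma            : adopted scatter around the red sequence.
    """
    residual = np.abs(np.asarray(j_ks) - (slope * np.asarray(ks) + intercept))
    within_1sigma = int(np.sum(residual <= 1.0 * sigma))
    within_3sigma = int(np.sum(residual <= 3.0 * sigma))
    return within_1sigma, within_3sigma
```

Applied to the 58 candidates, such a count is what yields the ∼69% (±1σ) and ∼100% (±3σ) fractions reported above.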
Finally, considering the spectroscopic redshift measurements of the five galaxy members, we performed an estimation of the line-of-sight velocity dispersion of the cluster candidate using the gapper estimator described by Beers et al. (1990). Thus, we obtained a velocity dispersion of σ ≈ 400 km s⁻¹. In addition, we calculated the virial radius and the virial mass of the galaxy cluster candidate following Merchán & Zandivarez (2005), finding R_vir ≈ 1.19 h⁻¹ Mpc and M_vir ≈ 4.43 × 10¹³ M_⊙. Although a larger number of spectroscopic observations would allow more accurate values of the cluster parameters to be obtained, these results are consistent with those expected for rich galaxy groups (Domínguez et al. 2002; Merchán & Zandivarez 2005) or a galaxy cluster with a virial mass appropriate to the redshift range z = 0.1−0.4 (Wiesner et al. 2015). The agreement of the redshifts obtained from the three different methods and the estimated cluster parameters allow us to confirm the nature of this structure as a galaxy cluster at z = 0.225 ± 0.014, named VVVGCl-B J181435-381432, unveiling a new extragalactic system that was hidden behind the Milky Way bulge. Fig. 2: False-colour Z (blue), J (green), and Ks (red) image of a region corresponding to the galaxy group/cluster candidate. The red dashed circle delimits the six arcmin radius central area, the green lines indicate the two long-slit positions and the red squares show the five galaxies observed with F2. In the right panels we zoom in on the 58 galaxy candidates within the studied area. The length of each box side is 20 arcsec. The NIR detected lines for every individual galaxy, shown in Fig. 3 (left panels), are summarised as follows: Fig. 3: Left: Final reduced and redshift-corrected NIR spectra. Right: RGB false-colour images of the observed galaxies as in Fig. 2. The length of each box side is 20 arcsec. The green lines indicate the slit position. Fig. 4: Colour-magnitude diagram J − Ks versus Ks. The points are colour-coded according to the projected distance to the centre of the overdensity zone. The red-sequence model is shown as a red line, the dark grey lines represent ±1σ around the model and the light grey lines represent ±3σ. The black squares represent the five galaxies observed with F2. Fig. 5: Sky distribution of the 58 extended sources in the overdensity zone, colour-coded according to the obtained photometric redshift photoz; grey dots represent objects with unreliable estimates and the open red squares are the galaxies observed with F2. The dashed circle represents the six arcmin area under study. Fig. 6: Top: Distribution of photoz as a function of the projected distance to the centre. The error bars correspond to 1σ individual uncertainties in the photometric redshift estimates. F2-observed galaxies are marked as red squares. Bottom: Average probability distribution function of the 34 galaxies with reliable photometric redshifts in our sample. The line corresponds to the mean values and the shaded region to 1σ from the mean. The lines show the obtained spectroscopic redshift (grey) and its associated errors (red).
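For readers who want to reproduce the velocity-dispersion estimate quoted in the summary above, the following sketch implements the gapper estimator of Beers et al. (1990); the conversion of redshifts to rest-frame velocities and the variable names are assumptions, and the virial radius and mass would follow the separate Merchán & Zandivarez (2005) prescriptions, which are not shown here.

```python
import numpy as np

C_KMS = 299792.458  # speed of light in km/s

def gapper_sigma(redshifts, z_cluster):
    """Line-of-sight velocity dispersion via the gapper estimator (Beers et al. 1990)."""
    z = np.sort(np.asarray(redshifts, dtype=float))
    # Rest-frame line-of-sight velocities relative to the adopted cluster redshift.
    v = C_KMS * (z - z_cluster) / (1.0 + z_cluster)
    n = v.size
    gaps = np.diff(v)                                      # g_i = v_(i+1) - v_i
    weights = np.arange(1, n) * np.arange(n - 1, 0, -1)    # w_i = i * (n - i)
    return np.sqrt(np.pi) / (n * (n - 1)) * np.sum(weights * gaps)
```

With the five measured member redshifts and z_cluster = 0.225, this estimator should reproduce the order of magnitude quoted above (σ ≈ 400 km s⁻¹).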
6,430.2
2022-10-28T00:00:00.000
[ "Physics" ]
Dual Minkowski Loss for Face Verification of Convolutional Network Despite face recognition and verification have achieved great success in recent years, these achievements are experimental results on fixed data sets. Implementing these outstanding technologies in the field of undeveloped data sets presents serious challenges. We adopt three state-of-the-art pre-trained models on an entire new dataset University Test System Database (UTSD), however the results are far from satisfactory. Therefore, two methods are adopted to solve this problem. The first way is data augmentation including horizontal flipping, cropping and RGB channels transform, which can solve the imbalance of label pairs. The second way is the combination of Manhattan Distance and Euclidean Distance, we call it Dual Minkowski Loss (DML). Through the implementation of photo augmentation and innovative method on UTSD, the accuracy of face verification has been significantly improved, achieving the best 99.3%. Introduction Nowadays, a large variety of photos, videos and text scripts were applied to deep learning. The large scale of datasets contributes a lot to the improvement of recognition accuracy, such as Labeled Faces in the Wild (LFW), YouTube Faces (YTF), CASIA-WebFace, and CAS-PEAL et al [6]. Thanks to the valuable datasets that models and algorithms have been greatly developed and promoted. The recent state-of-the-art face recognition models such as DeepFace have achieved an accuracy of 97.35% on LFW dataset, the later published technique FaceNet refreshed the latest record and pushed the precision to the highest 99.63%. So far, FaceNet is considered as a baseline for face verification and recognition [1,2]. Due to the existence of an intermediate bottleneck layer, the operation speed and accuracy of the convolution neural network have been greatly affected. By abandoning the bottleneck layer and choosing optimized embedding, FaceNet has an obvious advantage in image processing. Therefore, FaceNet model is transferred to our new database (UTSD) for learning, three pre-trained models are adopted as initial input. Comparing the performances on the novel database, several improved measurements are adopted for the effect of the model and accuracy of prediction. Face Alignment and Label Generation Photos in UTSD are 250*250 pixels, in order to put all the images into Inception Network for training, we change the whole database of pictures cutting into 160*160 shape through programs [3,4]. Then photos of the same person are placed in a fixed folder and each of them is renamed. Once the photos have been processed, the following task become straight-forward. For the same identities in a folder, pictures are randomly formed into a set as positive sample tag pairs. As for the negative sample tag pairs, we do not assemble all of the distinct photos. Because if all the possible negative tag pairs are generated, the result in triplets is more inclined to get satisfied. Taking this into consideration, only the first and second pictures from different folders are selected as negative ones. As shown in Fig. 1, the output figures are Euclidean space distance between two pairs of faces,which from either the same person or two different ones. The compared photos are sampled respectively from identity card and certificate of identification. The figure of 0.0 represents the two pictures are of the same person, and a figure of 5.0 are completely distinct. 
You can see that a threshold of 0.98 would be sufficient to distinguish the pairs of persons. Pre-trained Network Architecture Although AlexNet has shown that sufficiently large and deep convolutional networks trained by standard backpropagation can achieve excellent recognition accuracy, millions of parameters have to be trained and many hours must be consumed in the process. A more important issue is the bottleneck layer, which is hard to resolve and reduces the performance of the whole network [10]. To avoid extra time consumption and to start from a good level of performance, we adopt 3 pre-trained dataset models for training. The selected network architectures were trained on the CASIA-WebFace dataset, which contains more than 10,000 persons and 500,000 photos; VGGFace2, which has 9131 identities and 3.31 million images; and MS-Celeb-1M, which consists of 1 million famous people with 100 photos of each. Each of the models was trained for a great deal of time and all of them have achieved extraordinary accuracy, with the whole datasets thoroughly trained in the respective networks. Therefore, choosing these 3 pre-trained dataset models as the initial input of the CNN is a good choice for efficient training. Database and data augmentation 1280 different identities of male and female students aged 17 to 21 are stored in the University Test System Database (UTSD), which contains nearly 3980 sample photos including identity cards, certificates of identification and instant photos. For the sake of comparing the effects on face verification, data augmentation methods are adopted to enlarge the database. The first form of data augmentation consists of generating image translations and horizontal flipping; four corners are cropped on the original and flipped images. The second form of data augmentation consists of altering the intensities of the RGB channels in the training photos. We perform PCA on the set of RGB pixel values throughout the whole training set, and in order not to affect the accuracy of face verification we only slightly change the RGB channels. These two procedures expand the number of photos by a factor of 12-15 while the pixel size is maintained. Model Design The models we chose are treated as black boxes; the main work lies in the end-to-end learning of the whole system. To achieve face verification we employ a novel method: the Triplet Loss Function is modified according to the performance on real data. We calculate the distance between two pictures using a combination of Manhattan Distance and Euclidean Distance rather than Euclidean Distance only. The main idea of face verification is to minimize the distance within the same category (anchor and positive) and maximize the distance between distinct categories (anchor and negative). In order to achieve fast convergence, hard positives should be carefully selected. The Dual Minkowski Loss In practice, the Triplet Loss is found to still fall some distance short of the highest achievable accuracy. As a result we improve the method through the combination of Manhattan Distance and Euclidean Distance, which we call the Dual Minkowski Loss (DML). In DML we want to meet three conditions, defined over the set of generated possible triplets (positive-anchor tag pairs and negative-anchor tag pairs), whose cardinality is N. The first condition is the classic Euclidean Distance expression from [1]; the second is the corresponding Manhattan Distance expression.
Combining the first and second conditions, weighted by α1 and α2, is the method we found to be more effective in improving the experimental results. The loss function is represented in equation (2), where only one combined condition is needed. Here α1 and α2 are adjustment weights; they can take distinct values according to the effect observed during the iteration process, and they satisfy the constraint that their sum equals 1. The loss function is thus a weight-controlled combination of the Manhattan Distance and the Euclidean Distance. Training In the process of training the convolutional networks, we use basic Stochastic Gradient Descent with standard backpropagation [5,7,8]. The learning rate is set to 0.06 at the beginning and gradually decreases as the number of iterations increases. The model is trained on a CPU running for nearly a week, and the loss drops sharply at around 120 hours of training. The margin is set to 0.2, with a second parameter set to 0.08. We fine-tune the parameters and get good performance when the two weights are around 0.375 and 0.624, respectively. Experiments We evaluate our method, using the Dual Minkowski Loss Function on the three pre-trained models, against FaceNet with the Triplet Loss Function. As shown in Table 1, the latter method trained on the CASIA-WebFace, VGGFace2 and MS-Celeb-1M datasets performs reasonably well, with accuracies around 0.875, 0.893 and 0.885. Surprisingly, the validation rates are quite unsatisfactory. We analyze the distribution and characteristics of the data in UTSD, trying to find the reasons affecting the validation rate. The out-of-balance distribution of data in the labeled-pairs dataset may be one of the reasons accounting for this phenomenon. Two effective methods are applied in practice to solve this problem. The first solution is photo augmentation: because of the limitations of data acquisition, there are no more than 5 photos of the same identity, including identity card, certificate of identification and instant photos. The generated negative labels are obviously more numerous than the positive ones, giving rise to an imbalance in the data distribution. Therefore, we employ horizontal flipping, cropping and RGB channel alteration. The other contributing solution is the Dual Minkowski Loss Function. As shown in Table 1, the accuracy of the three models shows excellent results. Not only does the accuracy improve, but the validation rates are also outstanding.
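Because the published equations are not fully legible in this copy, the following is only a plausible sketch of a Dual Minkowski triplet loss: a weighted combination of squared Euclidean and Manhattan margins with a hinge applied per triplet. The exact placement of the weights, the use of squared versus unsquared Euclidean distance, and the reduction over the N triplets are assumptions; the weight values are the fine-tuned ones reported above.

```python
import numpy as np

def dual_minkowski_triplet_loss(anchor, positive, negative,
                                alpha1=0.375, alpha2=0.624, margin=0.2):
    """Weighted combination of Euclidean (L2) and Manhattan (L1) triplet losses.

    anchor, positive, negative : (N, d) arrays of embedding vectors.
    alpha1 + alpha2 is intended to be 1, following the constraint in the text.
    """
    d2_pos = np.sum((anchor - positive) ** 2, axis=1)   # squared Euclidean, anchor-positive
    d2_neg = np.sum((anchor - negative) ** 2, axis=1)   # squared Euclidean, anchor-negative
    d1_pos = np.sum(np.abs(anchor - positive), axis=1)  # Manhattan, anchor-positive
    d1_neg = np.sum(np.abs(anchor - negative), axis=1)  # Manhattan, anchor-negative

    per_triplet = (alpha1 * (d2_pos - d2_neg)
                   + alpha2 * (d1_pos - d1_neg)
                   + margin)
    return np.mean(np.maximum(per_triplet, 0.0))        # hinge averaged over the N triplets
```

In such a formulation the Euclidean term dominates when embeddings differ along few dimensions, while the Manhattan term adds robustness to many small per-dimension differences, which is one way to motivate the weighted combination described in the text.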
2,042.6
2018-01-01T00:00:00.000
[ "Computer Science" ]
Stimulus Variability Affects the Amplitude of the Auditory Steady-State Response In this study we investigate whether stimulus variability affects the auditory steady-state response (ASSR). We present cosinusoidal AM pulses as stimuli where we are able to manipulate waveform shape independently of the fixed repetition rate of 4 Hz. We either present sounds in which the waveform shape, the pulse-width, is fixed throughout the presentation or where it varies pseudo-randomly. Importantly, the average spectra of all the fixed-width AM stimuli are equal to the spectra of the mixed-width AM. Our null hypothesis is that the average ASSR to the fixed-width AM will not be significantly different from the ASSR to the mixed-width AM. In a region of interest beamformer analysis of MEG data, we compare the 4 Hz component of the ASSR to the mixed-width AM with the 4 Hz component of the ASSR to the pooled fixed-width AM. We find that at the group level, there is a significantly greater response to the variable mixed-width AM at the medial boundary of the Middle and Superior Temporal Gyri. Hence, we find that adding variability into AM stimuli increases the amplitude of the ASSR. This observation is important, as it provides evidence that analysis of the modulation waveform shape is an integral part of AM processing. Therefore, standard steady-state studies in audition, using sinusoidal AM, may not be sensitive to a key feature of acoustic processing. Introduction The auditory steady-state response (ASSR) is a clinically robust tool [1][2][3], which is used to study the dynamics of cortical following responses to sinusoidally amplitude modulated stimuli, and may be recorded with both EEG [4][5][6] and MEG [7][8][9]. Although the ASSR is known to be highly reliable, the order of stimulus presentation can affect amplitude modulation (AM) detection thresholds. Behavioural studies have shown that preexposure to AM affects AM detection thresholds, with both sinusoidal and non-sinusoidal adapting AM stimuli [10][11][12][13], and also that the degree of adaptation is dependent on the waveform shape [11]. Neurophysiologically, AM adaptation has also been shown to affect neural firing rates in the auditory cortex of marmoset monkeys [14]. Time-reversing asymmetric triangular AM, to generate 'ramped' and 'damped' AM, results in stimuli that have different behavioural detection thresholds but identical modulation spectra [15][16]. The discrimination of ramped AM is dependent on the slope of the onset ramp, relative to the modulation cycle [17]; indicating that modulation processing is dependent on waveform shape, rather than the modulation spectrum. A comparable finding was observed by Prendergast et al. [18] using MEG to study the ASSR to different widths of cosinusoidal pulsed AM stimuli, who show that the magnitude of the ASSR is dependent on the waveform shape rather than the modulation spectra, and is selective for the most prevalent waveform shapes in speech [19]. In this MEG study we use raised cosinusoidal pulsed AM stimuli, used by Prendergast et al. [18]. A key property of these stimuli is that they allow manipulation of the modulation waveform shape, independent of the modulation rate. We use these stimuli to explore whether stimulus variability affects the amplitude of the ASSR. 
We use three different pulse widths of cosinusoidal AM, and present them as stimuli which either have a repetitive waveform shape, or a waveform that varies pseudorandomly between pulse widths, to test whether variability in the waveform shape affects the amplitude of the ASSR. Participants and Ethics Statement Data were recorded from 21 participants. All participants had no known hearing disorders. Participants provided written informed consent. The study was approved by the ethics committee of the York Neuroimaging Centre, and was in accordance with the Declaration of Helsinki. One participant was removed from the study due to an anomaly on their MRI scan, and two further participants were removed due to moving too much during data acquisition. The 18 participants (11 female, 7 male) whose data were analysed had a mean age of 22.3 years, with a standard deviation of 3.1 years. Stimuli The stimuli used in this study were specifically chosen to evoke a strong ASSR. We use the three widths of raised cosinusoidal pulsed AM from Prendergast et al. [18] that gave the greatest average responses; these were cosinusoidal AM pulses with pulse half-widths of 16 ms, 24 ms and 32 ms. These pulsed AM stimuli were either presented as repetitions of the same modulation halfwidth (referred to as fixed-width stimuli), or as a stimulus that had a combination of the three modulation half-widths (referred to as mixed-width stimuli), see Figure 1. The design of the study has an internal control, and simply tests whether the ASSR to the mixedwidth AM pulsed stimuli is significantly different to the average ASSR to the three fixed-width AM stimuli. Our null hypothesis is that there will be no significant difference between the ASSR to the mixed-width AM stimuli, and the average ASSR to the three fixed-width AM stimuli. The cosinusoidal pulsed AM modulated a 500 Hz carrier waveform, with a modulation depth of 90%. Each AM stimulus was presented at 4 Hz, and had a duration of 3 s; hence each AM waveform contained 12 cosinusoidal pulses. The fixed-width AM stimuli had 12 repetitions of either the 16 ms, 24 ms or 32 ms modulation half-widths, the mixed-width AM had 4 of each of the 16 ms, 24 ms or 32 ms modulation half-widths, presented in a pseudo random order (see Figure 1). There were 42 repeats of each AM stimuli, plus 42 repeats of a 3 s 500 Hz pure tone, and 42 repeats of 3 s of silence. The six stimulus sets were interleaved and presented in a random order, with an inter-stimulus-interval of 1 s. Stimuli were presented monaurally to the left ear only. The whole experiment took 16 minutes and 47 seconds. Stimuli were presented via Etymotic Research ER3-A insert headphones (Etymotic Research Inc., Illinois) at 75 dB SPL. Acquisition Data were collected using a Magnes 3600 whole-head 248channel magnetometer (4-D Neuroimaging Inc., San Diego). The data were recorded with a sample rate of 678.17 Hz and low-pass filtered at 200 Hz. Prior to acquisition, five facial landmark headcoils and a digital head-shape were recorded using a Polhemus Fastrak Digitization System, which derive the landmark head-coil locations, and the digital head-shape location in relation to the position of the MEG sensors. The landmark head-coil locations were used to measure the head position in the scanner before and after acquisition. The digitised head-shape was used for coregistering the MEG data with the participants structural MRI. 
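To make the stimulus construction concrete, the sketch below generates one fixed-width pulsed-AM waveform. Modelling the pulse as a Hann-shaped bump whose full duration is twice the quoted half-width, and the choice of sample rate, are assumptions rather than the authors' exact synthesis.

```python
import numpy as np

def pulsed_am_stimulus(half_width_ms=24.0, rate_hz=4.0, carrier_hz=500.0,
                       duration_s=3.0, depth=0.9, fs=44100):
    """One fixed-width cosinusoidal pulsed-AM stimulus: a 500 Hz carrier whose
    envelope contains `rate_hz` raised-cosine pulses per second (12 pulses in 3 s)."""
    t = np.arange(int(duration_s * fs)) / fs
    full_width = 2.0 * half_width_ms / 1000.0    # pulse duration in seconds
    period = 1.0 / rate_hz                        # 250 ms between pulse onsets
    phase = np.mod(t, period)                     # time since the last pulse onset
    pulse = np.where(phase < full_width,
                     0.5 * (1.0 - np.cos(2.0 * np.pi * phase / full_width)),
                     0.0)
    envelope = (1.0 - depth) + depth * pulse      # 90% modulation depth
    return envelope * np.sin(2.0 * np.pi * carrier_hz * t)
```

A mixed-width stimulus would simply draw the half-width pseudo-randomly from {16, 24, 32} ms for each of the 12 pulses, leaving the 4 Hz repetition rate and carrier unchanged.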
Coregistration Participants' digitised head-shapes were coregistered with their T1-weighted structural MR scans using an adaptation of the technique described by Kozinska et al. [20]. T1-weighted MR images were acquired with a GE 3.0 T Signa Excite HDx system (General Electric, Milwaukee, USA) using an eight-channel head coil and a 3-D fast spoiled gradient-recalled sequence: TR/TE/flip angle = 8.03 ms/3.07 ms/20°; spatial resolution of 1.13 mm × 1.13 mm × 1.0 mm; in-plane matrix of 256 × 256 with 176 contiguous slices. For each participant, their structural MRI scan was skull-stripped using the BET tool in FSL [21][22]. We then spatially normalized the skull-stripped MRI scans to the Montreal Neurological Institute (MNI) 152 standard 1 mm brain, which is based on the average of 152 individual T1-weighted structural MR images [23]. Spatial normalisation was performed using the diffeomorphic non-linear SyN transform within ANTS [24]. Analysis MEG datasets were manually artefact rejected by visually inspecting trials and excluding from the analysis any trials that contained physiological or non-physiological artefacts. Across the 18 participants, 252 epochs were analysed per subject, and a mean of 15.1 epochs (s. dev. = 7.5 epochs) were rejected. A group analysis was performed in source space using beamformer inverse modelling. A uniform 5 mm grid was generated on the MNI brain, and for each individual this grid was transformed to an irregular grid on their individual T1 structural MRI using the inverse of their non-linear SyN transform. The data were inverse modelled using a vectorized, linearly constrained minimum-variance (LCMV) beamformer [25], modified as referenced in Huang et al. [26] as a Type I beamformer. To measure the 4 Hz ASSR at each location in source space, we averaged across the trials for each stimulus condition, measured the amplitude of the 4 Hz component of the FFT in each of the x, y and z directions, and then summed these to get the total activity at that location. To generate mean and variance estimates for the FFT calculations across all trials, we used jackknife re-sampling [27][28]. To enable us to compare the mean 4 Hz component of the three fixed-width ASSRs with the 4 Hz component of the mixed-width ASSR, we pooled the mean and variance jackknife statistics across the three fixed-width conditions. Pooling of the jackknife mean (eq. 1) and standard deviation (eq. 2) across the three fixed-width conditions was done using formulae in which Jm is the jackknife mean, Js the jackknife standard deviation, Js² the jackknife variance, i indexes the condition (fixed-width 16 ms, fixed-width 24 ms, fixed-width 32 ms), and n is the number of jackknife re-samples for that condition, determined by the number of clean epochs. For the group-level analysis, the pooled mean 4 Hz component for the fixed-width ASSRs was compared with the mean 4 Hz component for the mixed-width ASSR, using a non-parametric permuted unpaired t-test [29]. These group statistics were performed on one region of interest (ROI) in the right hemisphere. Within the defined ROI, maximum statistics on voxel values (single threshold as opposed to cluster size) were used to correct for the family-wise error in individuals [29]. The ROI was based upon the location of the most consistent response to a variety of cosinusoidal pulsed AM, and a sinusoidal AM, in Prendergast et al. [18], which was centred at the MNI coordinate 70, −26, −2.
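As an illustration of the single-voxel measure described above, the sketch below computes the 4 Hz amplitude of the trial-averaged response and leave-one-out jackknife statistics for one source orientation; the data shapes are assumptions, and the subsequent pooling across the three fixed-width conditions (eqs. 1 and 2, which are not legible in this copy) is deliberately not reproduced.

```python
import numpy as np

def assr_4hz_amplitude(epochs, fs, f0=4.0):
    """Amplitude of the f0 component of the trial-averaged response.

    epochs : (n_trials, n_samples) array of source-space time courses for one
             orientation (x, y or z); the three orientations would then be summed.
    """
    evoked = epochs.mean(axis=0)
    spectrum = np.abs(np.fft.rfft(evoked)) / evoked.size
    freqs = np.fft.rfftfreq(evoked.size, d=1.0 / fs)
    return spectrum[np.argmin(np.abs(freqs - f0))]

def jackknife_stats(epochs, fs, f0=4.0):
    """Leave-one-out (jackknife) mean and standard deviation of the f0 amplitude."""
    n = epochs.shape[0]
    leave_one_out = np.array([
        assr_4hz_amplitude(np.delete(epochs, i, axis=0), fs, f0) for i in range(n)
    ])
    jm = leave_one_out.mean()
    js = np.sqrt((n - 1) / n * np.sum((leave_one_out - jm) ** 2))
    return jm, js, n
```

The per-condition Jm, Js and n values returned here are the quantities that eqs. 1 and 2 combine across the three fixed-width conditions; an n-weighted combination would be one plausible form of that pooling, but the exact weighting is not recoverable from this copy.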
This location was used as a seed point to choose a specific ROI from the Harvard-Oxford cortical atlas. The seed MNI co-ordinate was located on the border between the posterior divisions of the Middle and Superior Temporal Gyri (MTG/STG), in the right hemisphere. Hence, an ROI was defined that included the posterior divisions of the both the middle and superior temporal gyri, by selecting the right hemisphere section of areas 10 and 12 in the Harvard Oxford atlas (see Figure 2). To confirm the suitability of this area as an ROI in this study, we perform two analyses. Firstly, using a virtual electrode at the MNI co-ordinate 70, 226, 22, we calculate the average spectra of the ASSR to each of the four AM stimuli. We sum the spectra across the x, y and z directions, and average these across the 18 participants. These four spectra are then normalised by the amplitude of the 4 Hz component in the response to the mixedwidth stimuli. We also plot the 4 Hz component of each of the four ASSRs against the 4 Hz component of the respective stimulus waveforms. The energy in the stimulus waveforms we normalised by the amplitude of the 4 Hz component in the mixed-width stimuli. These initial analyses are principally performed to confirm the presence of a robust 4 Hz response at the MNI co-ordinate 70, 226, 22. The virtual electrodes were generated using a vectorized, linearly constrained minimum-variance (LCMV) beamformer [25,30]. We identified the MNI coordinate 70, 226, 22 in the non-linearly transformed brain in each participant, and then this location was re-warped back using the inverse SyN transform within ANTS, back to the individual's structural MRI. Virtual electrodes were generated from the rewarped, inverse transformed beamforming grid, and were unfiltered. As a secondary confirmation of suitability we also performed group level beamforming analyses following the beamforming methods outlined previously, and compare the mean 4 Hz component in the ASSR to each of the four AM conditions, to the 4 Hz component in the response to the unmodulated 500 Hz pure tone. This secondary analysis is to confirm that a strong 4 Hz response is observable with the spectral amplitude measure we use in our experimental contrast. It also allows us to compare the sources from this amplitude based metric, with the amplitude and phase based T2 metric used by Prendergast et al. [18]. As a final analysis we use the same beamforming methods to contrast the 4 Hz component in ASSR to the mixed-width stimuli, with the 4 Hz component in ASSRs to each of the three fixedwidth stimuli. This allows us to compare the mixed-width responses with the individual fixed-width responses, rather than with the pooled fixed-width responses as is done in the main experimental contrast. Verification of ROI selection Virtual Electrode Analysis. To confirm that we observed a clear 4 Hz following response at the location of the most consistent following response in Prendergast et al. [18], MNI co-ordinate 70, 226, 22, we calculate the grouped average spectra in the responses to each of the four AM stimuli. The spectra are then normalised by the amplitude of the 4 Hz component in the response to the mixed-width AM stimuli, see Figure 3 (left plot). We also plot the normalised 4 Hz components of the four ASSRs against the normalised 4 Hz components of the stimulus waveforms, see Figure 3 (right plot). 
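Returning to the group-level statistics described above, a minimal sketch of an unpaired permutation test with a maximum-statistic (family-wise error) threshold might look as follows; the array shapes, the number of permutations and the use of SciPy's t-statistic are assumptions rather than the authors' implementation.

```python
import numpy as np
from scipy import stats

def permutation_max_t(group_a, group_b, n_perm=5000, seed=0):
    """Unpaired permutation test with a max-statistic threshold across ROI voxels.

    group_a, group_b : (n_subjects, n_voxels) arrays of 4 Hz amplitudes.
    Returns the observed t-map and the p = 0.05 threshold taken from the null
    distribution of the maximum t-value over the ROI.
    """
    rng = np.random.default_rng(seed)
    data = np.vstack([group_a, group_b])
    labels = np.array([0] * len(group_a) + [1] * len(group_b))

    def t_map(lab):
        # Voxel-wise two-sample t-statistics for a given labelling of the rows.
        return stats.ttest_ind(data[lab == 0], data[lab == 1], axis=0).statistic

    observed = t_map(labels)
    max_null = np.array([t_map(rng.permutation(labels)).max() for _ in range(n_perm)])
    threshold = np.quantile(max_null, 0.95)
    return observed, threshold
```

Voxels whose observed t-value exceeds the returned threshold survive the single-threshold (maximum-statistic) correction described in the text.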
In these plots of the average virtual electrode spectra we observe a distinct peak at 4 Hz, indicating that there is a strong 4 Hz ASSR for each condition, which is present across the group of participants. The normalised amplitudes of the 4 Hz components in the group-averaged ASSRs are; fixed-width 16 ms, 0.88; fixed-width 24 ms, 1.02; fixed-width 32 ms, 0.99; mixed-width, 1. The mean normalised amplitude across the fixed-width presentations is 0.96 of the amplitude of the Group Analysis. We compared the 4 Hz component of the ASSR to each of the four AM conditions, with the 4 Hz component of the response to the unmodulated 500 Hz pure tone, using an unpaired non-parametric permuted t-test [29]. Statistical thresholds were determined using maximum statistics on voxel values [29]. These t-maps are shown in Figure 4, and peak locations and max t-values are in Table 1. For each of the four AM conditions, when we contrast the 4 Hz components in the respective ASSRs to the 4 Hz component in the response to the pure tone, we see highly significant peaks of activity within the ROI. The p = 0.05 values range between t = 2.52 to t = 2.75, across the four AM conditions, and the max t-values range between t = 14.70 and t = 17.16 (see Table 1). Therefore, there are clear and statistically significant ASSRs to each of the AM stimuli. The location of the peaks in all four AM conditions; mixed-width, MNI coordinate (70, 232, 22); fixed-width 16 ms, MNI coordinate (70, 226, 212); fixed-width 24 ms, MNI coordinate (70, 226, 28); fixed-width 32 ms, MNI coordinate (70, 226, 28); are in close proximity to the seed location from Prendergast et al. [18], MNI coordinate (70, 226, 2). Note, the location of the mixed-width peak is slightly posterior to the location of the three fixed-width peaks. Analysis of Mixed-width vs Pooled Fixed-width responses Individual z-maps. At the individual level, before we perform the group level analysis, the 4 Hz component from the pooled fixed-width ASSR are contrasted with the 4 Hz component from the mixed-width ASSR, and plotted as z-maps for each participant (see Figure 5). These individual z-maps for each participant show where the 4 Hz component in the mixed-width ASSR is greater than the 4 Hz component for the pooled fixedwidth ASSR (positive z-values, plotted in a hot colour scheme); and where the 4 Hz component in the pooled fixed-width ASSR is greater that the 4 Hz component from the mixed-width ASSR (negative z-values, plotted in a cool colour scheme). The MNI coordinates of the peak locations for when the mixed-width ASSR is greater, max values, and when the pooled fixed-width ASSR is greater, min values, are in Table 2 Group level t-maps. For the group level beamformer analysis we compared the 4 Hz component from the pooled fixed-width ASSR, with the 4 Hz component from the mixedwidth ASSR, across the 18 participants using an unpaired nonparametric permuted t-test [29]. Statistical thresholds were determined using maximum statistics on voxel values [29]. A group level t-map, thresholded at p = 0.05 (t = 3. 19), is plotted in Virtual Electrode Analysis. To confirm that a clear following response was present at the peak of the difference in the group analysis, MNI coordinate 46, 226, 22, we calculate the grouped average spectra in virtual electrodes from the 18 participants, using the same methods that were used to generate the plots in Figure 3. 
In the FFT spectra for each waveform (see Figure 7) there are distinct peaks at 4 Hz, and notably the 4 Hz peak for the mixed-width ASSR is greater than the 4 Hz peak in any of the fixed-width ASSRs. This is consistent with the group-level beamformer contrast. Analysis of Mixed-width vs Individual Fixed-width responses Group level t-maps. To further understand the relationship between the mixed-width responses and each individual fixed-width response, we use the same group-level beamformer contrasts to compare the 4 Hz component in the ASSR to the mixed-width stimuli with the 4 Hz component in each of the fixed-width responses. In these contrasts, plotted in Figure 8, a large area of the ROI showed significantly greater activity for the mixed-width condition compared to the fixed-width 16 ms condition; the p = 0.05 threshold is t = 3.08; the max t-value is t = 4.40, at the MNI coordinate 64, −20, 4; no voxels have negative t-values. Only one voxel showed significantly greater activity for the fixed-width 24 ms contrast; the p = 0.05 threshold is t = 3.13, the max t-value is t = 3.13, at the MNI co-ordinate 68, −26, −22; the min t-value is t = −0.87. There were no significant voxels for the fixed-width 32 ms contrast; the p = 0.05 threshold is t = 3.26; the max t-value is t = 2.26; the min t-value is t = −1.1. Figure 4 caption (continued): Activity is plotted between the respective p = 0.05 threshold (see Table 1) and a max t-value of t = 18.0. For peak locations, refer to Table 1. Anatomical axes are labelled as follows: R, right; L, left; A, anterior; P, posterior; S, superior; I, inferior. doi:10.1371/journal.pone.0034668.g004. Table 1. Max t-values and p = 0.05 threshold t-values, for each MNI peak co-ordinate taken from the contrasts of the 4 Hz components of the ASSRs to each of the four AM conditions (mixed-width; fixed-width 16 ms; fixed-width 24 ms; fixed-width 32 ms) and the 4 Hz component of the response to a 500 Hz pure tone, plotted in Figure 4. Discussion This study tested whether variability in waveform shape affects the amplitude of the ASSR. This was done by presenting three cosinusoidal pulsed amplitude modulations as stimuli which either have a repetitive waveform shape, or have a waveform that varies pseudo-randomly between different widths of cosinusoidal pulsed AM. The principal finding is that when variability is introduced to stimuli that have a fixed modulation rate, the average responses to the same individual AM pulses are altered. A key factor in the design of the paradigm is that the spectra of the variable mixed-width AM stimuli, and the average spectra of the fixed-width AM stimuli, are identical. Hence, the assumption that there is a direct linear relationship between the spectra of the stimuli and the spectra of the responses is flawed. The specific null hypothesis, that there will be no significant difference between the ASSR to the mixed-width AM stimuli and the average ASSR to the three fixed-width AM stimuli, is therefore rejected. The beamforming contrasts in this study were performed within a defined ROI, the selection of which was based on the most consistent locus of activity in a previous study by Prendergast et al. [18]. Before we performed the experimental contrast between the mixed-width and pooled fixed-width responses, we verified the suitability of the ROI. Firstly, at the seed location, MNI 70, −26, −2, we calculated the grouped average spectra in virtual electrodes from the 18 participants. These spectra showed clear peaks at 4 Hz, and when the 4 Hz component in the responses is plotted against the 4 Hz energy in the respective AM stimuli, we see that the relationship is non-linear (see Figure 3).
Notably, at this location, the relative amplitudes of the fixed-width responses are similar to those in the Prendergast et al. [18] study, with the fixed-width 24 ms stimulus giving the greatest response, the fixed-width 32 ms the next largest, and the fixed-width 16 ms the smallest response. In the beamformer contrasts, which are based on the amplitude of the 4 Hz component in the response spectra, we compare the 4 Hz activity in the ASSRs to the four AM conditions with the 4 Hz activity in the response to a pure tone. These contrasts all generate peaks of activity within the ROI that are in close proximity to the seed location. Hence this study, which contrasts the amplitude of the response spectra, generates similar peak loci to those in Prendergast et al. [18], which uses both the amplitude and phase of the spectra with a T2 statistic. We are therefore confident that these analyses both verify the selection of the ROI, and also implicitly confirm that our beamformer methods are appropriate. The main group-level ROI beamformer analysis demonstrates that within the defined ROI there is a significantly greater 4 Hz component to the mixed-width AM than to the pooled fixed-width AM. This greater response to the mixed-width AM stimuli has a locus near the medial boundary of the STG and MTG, with a peak at the MNI coordinate 46, −26, −2. This significant difference at the group level was consistent with the trend observed at the individual level. In the analysis of the individual z-maps, 16 of the 18 participants showed some areas in the ROI that gave a greater response to the mixed-width AM stimuli, and other areas that gave a greater response to the fixed-width AM stimuli. However, across the group, there was selectivity for AM that was presented as mixed-width stimuli. Figure 5 caption (fragment): ...mixed-width ASSR is greater than the fixed-width ASSR. The cool colour scheme shows negative z-values, where the fixed-width ASSR is greater than the mixed-width ASSR. Activity is plotted between the respective p = 0.05 threshold (see Table 1) and a max z-value of z = 18.0. For peak locations within the defined ROI, refer to Table 1. Anatomical axes are labelled as follows: R, right; L, left; A, anterior; P, posterior; S, superior; I, inferior. doi:10.1371/journal.pone.0034668.g005. The average maximum z-value for the positive z-maps was greater than the average minimum z-value for the negative z-maps (Figure 5), and at the group level the largest positive and negative t-values are t = 3.32 and t = −0.2 (Figure 6). Analysis of the grouped average spectra in virtual electrodes generated for each of the mixed-width and fixed-width conditions, at the MNI coordinate 46, −26, −2, confirms that we are observing an ASSR at this peak location rather than spurious non-phase-locked activity (Figure 7, left plot). Moreover, when the 4 Hz component in the responses is plotted against the energy at 4 Hz in the respective AM stimulus waveforms (Figure 7, right plot), we again see a non-linear relationship, and we also observe that the responses to all the fixed-width stimuli are smaller than the response to the mixed-width stimuli. At this location, the mean normalised amplitude across the fixed-width responses is 0.89 of the amplitude in the mixed-width response. Interestingly, although the fixed-width 16 ms response has the smallest average amplitude, it is the only response that is relatively larger than what would be predicted from the energy in the response waveform. Hence, this further demonstrates the non-linearity of the fixed-width responses.
However, the main finding from these figures is that the responses to the variable mixed-width stimuli have a greater average 4 Hz component than the fixed-width stimuli, even when the fixed-width stimuli have an equal or greater amount of 4 Hz energy in the stimulus waveform. To further understand the relationship between the responses to the mixed-width stimuli, and to each of the fixed-width stimuli, we performed a final set of beamformer contrasts on the 4 Hz components in four respective ASSRs. These contrasts show that the responses to the mixed-width stimuli are significantly greater than the responses to the fixed-width 16 ms stimuli, however there is little or no significant difference with respect to the fixed-width 24 ms and fixed-with 32 ms responses. The observation that the significantly smaller response amplitude to the fixed-width 16 ms responses in not mirrored by a significantly larger response to the fixed-width 32 ms responses is further evidence for the non-linear relationship between the modulation spectrum and the response waveform. The principal finding of this study is that adding variability in to the stimulus waveform generates a greater steady-state response than stimuli which have an equivalent mean energy at the stimulus modulation rate, but a waveform shape that is repetitive. We replicate the findings from Prendergast et al. [18], that the relationship between the spectra of the AM stimuli and the ASSR is non-linear when the spectra of the AM stimuli is varied at a fixed modulation rate. However, we also find that this relationship is non-linear when the spectra of the AM stimuli are matched, but one set of stimuli has variability in the waveform shape, and the other does not. A dissociation between the modulation spectrum of an AM stimulus and behavioural discrimination thresholds is a well known phenomena in psychoacoustics. If the AM used is triangular rather than sinusoidal, then by using an asymmetric triangular modulation and time-reversing it, so called 'ramped' and 'damped' AM can be generated; which have different rates of onset of modulation, but identical AM spectra. These 'ramped' and 'damped' AM are easily discriminated [15][16], and the discrimination of ramped AM can be predicted from the change in the slope of an onset ramp, relative to the modulation cycle and independent of modulation rate [17]. Hence, there is strong perceptual evidence that modulation envelope processing is dependent on the shape of a modulation envelope, and independent of modulation rate; which is analogous to what we observe in this study. Whilst this study may appear to be consistent with a model of modulation processing based on modulation waveform shape, rather than the modulation spectrum of the stimulus, the most parsimonious explanation for the relatively greater response to the variable mixed-width stimuli may be due to adaptation in the fixed-width ASSR. Psychoacoustically, modulation detection thresholds to sinusoidal AM are known to be affected by preexposure to both sinusoidal and non-sinusoidal AM stimuli [10][11][12][13]31]. Green & Kay [11] also demonstrate, using sinusoidal, triangular and square wave AM adaptors, that the degree of adaptation is also dependent on the shape of the adapting waveform. Adaptation to AM stimuli is also seen neurophysiologically. 
Bartlett & Wang [14] studied AM adaptation in the auditory cortex of marmoset monkeys and found that the spiking of neurones in response to sinusoidal AM stimuli could be both suppressed and facilitated by pre-exposure to another sinusoidal AM stimuli. The observed suppression was tuned to modulation frequency, and they note that the suppression was not solely based upon spectral properties of the stimuli, but was sensitive in particular to the temporal characteristics of preceding stimuli. They also note that the pattern of suppression was not related to spiking habituation. An alternative explanation to for the greater response to the mixed-width stimuli may come from studies of ordered and disordered tone-pips. Chait et al. [32] studied the transition from either constant or regularly alternating tones, to a random sequence of tone-pips which alternate in frequency. The study found that there was an extra component in the average MEG response at the transition from the constant or regularly alternating tones to the random tone, with respect to what is observed at the transition from random tones to constant or regularly alternating tones. The inference from Chait et al. [32] is that in this study there may be extra components in the MEG response to the mixed-width AM stimuli. However, there is little evidence for such an interpretation, as this study specifically looks at the response at the modulation rate, and also, when we look at the averaged waveform to the mixed-width, and fixed-width AM stimuli, no extra component in the averaged waveform is observed. With respect to the locus in the ROI at which we find the significant difference in ASSR between the mixed-width and fixedwidth AM, it is at the medial boundary of the STG and MTG, at the MNI coordinate 46, 226. 22. However, the most consistent responses to cosinusoidal modulation pulse widths, as observed by Prendergast et al. [18] was at the MNI coordinate 70, 226, 22; and when we compare the 4 Hz component of the ASSR to each of the four AM stimuli in this study, with the 4 Hz component of the response to a pure tone, the peak response locations were similar to Prendergast et al. [18]. Hence, the location at which we observe the significantly greater ASSR to the mixed-width stimuli is different to the location of the greatest response to each of the respective cosinusoidal amplitude modulations. The contrasts between each of the four AM stimuli and a pure tone, in Figure 4, suggest that whilst the peaks to the three fixedwidth AM stimuli are relatively focal, the peak to the mixed-width AM stimuli is less focal, and more disparate. Hence, in the grouplevel contrast between the mixed width ASSR and the pooled fixed width ASSR, where there is a greater response to the mixed-width AM stimuli at the medial boundary of the STG and MTG, the greater response may be explained by the mixed-width AM stimuli stimulating a greater area of cortex. Alternatively, it may be that these medial and lateral loci have different functional roles, with the medial loci being selective for waveform shape. There are functional consistencies in the temporal processing literature with the locus of the peak in the ROI at which we see the significant differences between the ASSRs to the mixed-width and fixed-width conditions. Boemio et al. 
[33], using fMRI to study the spectro-temporal properties of auditory processing, observe that the STG in both hemispheres is sensitive to the local temporal structure of a stimulus, but the right-hemisphere superior temporal sulcus (STS) shows selectivity for slow temporal cues, of the order of 200-300 ms, and the left-hemisphere STS shows selectivity for rapid temporal cues, of the order of 25-30 ms. Consistent with this is the Abrams et al. [34] evoked-potential study, which finds slow temporal features of speech (3-5 Hz) lateralizing to the right hemisphere and rapid temporal features of speech lateralizing to the left hemisphere. We caution against making too close a comparison between the Boemio et al. [33] and Abrams et al. [34] studies and this study, as this study used monaural presentation of the stimuli, to the left ear only. In this study, the same component modulation stimuli, cosinusoidal AM pulses with half-width durations of 16 ms, 24 ms and 32 ms, were presented either repetitively (the fixed-width AM stimuli) or pseudo-randomly (the mixed-width AM stimuli; see Figure 1). This internally controlled presentation of the same pulsed AM stimuli generated a significantly greater response when the stimuli were presented in a pseudo-random order. Hence, by using pulsed AM rather than continuous sinusoidal AM, and by adding variability into the presentation of the AM, we are able to observe changes in the ASSR. Therefore, whilst one-component sinusoidal modulations of auditory and visual cues are highly desirable for their simplicity, we feel it is important to acknowledge, as Georgeson et al. [35] observed in vision, that to understand the complexity of neural processing in the brain we may need to use complex stimuli and look for non-linearities in sensory processing mechanisms. Conclusion We find that stimulus variability does affect the amplitude of the auditory steady-state response. We therefore reject our null hypothesis, as we find that the ASSR to the mixed-width AM stimuli is greater than the average ASSR to the fixed-width AM stimuli. This finding is consistent with previous studies of AM adaptation, and suggests that analysis of waveform shape is a key feature of acoustic processing. The location at which we find the greater response to the mixed-width AM stimuli is different to the location where we find the greatest response to periodicity.
7,578.2
2012-04-03T00:00:00.000
[ "Physics" ]
A Simple and Transparent Alternative to Repeated Measures ANOVA Observation Oriented Modeling is a novel approach toward conceptualizing and analyzing data. Compared with traditional parametric statistics, Observation Oriented Modeling is more intuitive, relatively free of assumptions, and encourages researchers to stay close to their data. Rather than estimating abstract population parameters, the overarching goal of the analysis is to identify and explain distinct patterns within the observations. Selected data from a recent study by Craig et al. were analyzed using Observation Oriented Modeling; this analysis was contrasted with a traditional repeated measures ANOVA assessment. Various pitfalls in traditional parametric analyses were avoided when using Observation Oriented Modeling, including the presence of outliers and missing data. The differences between Observation Oriented Modeling and various parametric and nonparametric statistical methods were finally discussed. Creative Commons CC-BY: This article is distributed under the terms of the Creative Commons Attribution 3.0 License (http://www.creativecommons.org/licenses/by/3.0/) which permits any use, reproduction and distribution of the work without further permission provided the original work is attributed as specified on the SAGE and Open Access pages (https://us.sagepub.com/en-us/nam/open-access-at-sage). Article Repeated measures ANOVA is a popular statistical technique widely used by a variety of research scientists. In most applications, the goal of the researcher is to examine changes over time on a single variable. For instance, a clinical psychologist may examine the long-term effectiveness of a therapeutic intervention by acquiring scores from a depression inventory on three different occasions: (a) 1 hr prior to the intervention, (b) 1 month post intervention, and (c) 1 year post intervention. By using repeated measures ANOVA, the researcher can assess the equivalence of mean levels of depression across the three measurement occasions. Of course, repeated measures ANOVA can also be used to address a much wider and more complex array of questions, especially when additional variables are assessed over time or when grouping variables are also examined. Researchers understand, nonetheless, that even for a simple study design, conducting a repeated measures ANOVA can be tricky business. One must worry about outliers, missing data, and a variety of assumptions (e.g., continuity, normality, homogeneity, and particularly sphericity). A choice must also be made between the traditional univariate, multivariate, and mixed-model approaches toward the analysis (Misangyi, LePine, Algina, & Goeddeke, 2006), and with this choice come additional concerns regarding corrections for Type I error inflation and the potential loss of statistical power (see Maxwell & Delaney, 2004). 1 In this article, we introduce a simple alternative to repeated measures ANOVA that requires fewer assumptions, is immune to outliers, and allows researchers to focus on the observations in hand rather than on the estimation of abstract population parameters. Researchers can also focus on assessing the accuracy of predicted patterns within the data rather than on the computation and interpretation of means and variances. Consequently, the issues underlying the choice between the univariate, multivariate, and mixed-model approaches to repeated measures ANOVA are completely eschewed. 
This novel and parsimonious alternative is referred to as an Ordinal Pattern Analysis in the context of Observation Oriented Modeling (Grice, 2011, 2014). Below, we demonstrate its features using a published data set that presented numerous problems for repeated measures ANOVA but that in contrast was easily analyzed using an Ordinal Pattern Analysis. Repeated Measures ANOVA In a study on social reinforcement learning, Craig et al. (2012) captured and tagged honeybees to track their visits to an "artificial flower" in which the bees could consume a rewarding sucrose-rich solution. For the first six recorded visits, the bees were permitted to consume the sucrose solution and freely fly from the flower to return to the hive. Based on simple laws of learning, the bees were expected to return to the mechanical flower (an excellent source of food) at shorter and shorter intervals from the first visit to the sixth. The intervals between visits were referred to as Inter-Visit-Intervals, or IVIs. The bees were then "frustrated" for six consecutive visits by trapping them for several minutes in the mechanical flower after they had consumed the sucrose solution, thus delaying their return to the hive. For these trials, the bees were expected to return to the flower at longer and longer intervals as the "frustration" of being trapped in the flower affected their behavior. IVI measurements, reported in seconds, for 10 bees across the 12 trials are presented in Table 1. The maximum interval for a bee to return to the mechanical flower was set at 60 min (3,600 s), and all subsequent trials were treated as missing data if the bee returned several hours later, failed to return at all, or was observed foraging at a nearby open feeder with a less attractive source of sucrose (note Bees 7, 8, and 10 in Table 1). The recorded IVIs are considered independent between bees but dependent across trials and are thus suitable for analysis using a repeated measures ANOVA. As can be seen in Table 1, however, several features of the data are alarming. First, the 3,600 values are problematic because, although they fulfill a useful data management function, they represent extreme cases that could unduly affect the means for any given trial. Second, a single missing value on any one of the 12 trials could result in that bee being omitted depending on the type of analysis chosen. Finally, extreme times other than the 3,600 values may unduly influence the mean for any particular trial and consequently affect the ANOVA results as well. Indeed, the means and standard errors plotted in Figure 1 show the impact of the extreme times (>1,000 s) for Trials 8 through 12. As might be expected, the results from the omnibus ANOVA with all of the extreme values included yielded a highly significant univariate omnibus F value, F(11, 88) = 4.61, p < .001, η2 = .37. However, a number of questions raise concerns about this result. Because the data were analyzed in SPSS using the General Linear Model option (i.e., the traditional univariate approach to repeated measures ANOVA), all of the data for the seventh honeybee were excluded automatically from the analysis. Should the missing data be replaced with estimated IVI times?
Replacing missing data routinely rests on the Missing at Random or more restrictive Missing Completely at Random assumptions (see Fox-Wasylyshyn & El-Masri, 2005, for a pithy review), which may not be the case for these data. However, no matter how the missing data are finally handled, the extreme 3,600 values and other outliers are still present. Perhaps these values should be deleted and replaced as well, or perhaps some type of transformation should be applied to the data. Matters are made more difficult by the small number of honeybees relative to the number of trials. Even with a complete set of observations, the data could not be analyzed with the mixed-models approach (e.g., using the Mixed Model option in SPSS on restructured data; see Little, Milliken, Stroup, Wolfinger, & Schabenberger, 2006; Pinheiro & Bates, 2000, for discussions of mixed-model ANOVA), and the multivariate F value using the General Linear Model option in SPSS could not be computed due to insufficient degrees of freedom. The traditional univariate approach to repeated measures ANOVA was therefore the only option for testing the omnibus null hypothesis (as stated below), and this approach routinely requires an adjustment for violation of the sphericity assumption. When the Greenhouse-Geisser correction is applied, the degrees of freedom decrease to 1.46 and 11.68, yielding a p value of .042. The Huynh-Feldt correction reduces the degrees of freedom to 1.70 and 13.62, yielding a p value of .034. Finally, the lower-bound correction reduces the degrees of freedom to 1 and 8, yielding a non-significant p value of .064. It is widely recommended that researchers use one of these corrections to the univariate F value due to its sensitivity to violations of the sphericity assumption (Maxwell & Delaney, 2004). Which correction should be chosen here, and what of the other assumptions underlying the accuracy of the p value? Are additional transformations to the data or statistical adjustments necessary? With these questions for these troublesome data, and without even arriving at more interesting specific mean comparisons, it should be clear that Frankenstein's monster is potentially at hand . . . and he will be all too willing to lead his creator into misguided interpretations and conclusions. Observation Oriented Modeling What is needed for these types of "difficult" data is an Ordinal Pattern Analysis (OPA), which is simple, relatively free of assumptions, and yields results that are transparent and easily interpretable (Thorngate & Edmonds, 2013). Conducted within the wider context of Observation Oriented Modeling (Grice, 2011), this analysis also prompts an overall shift in perspective. Traditional statistics, such as the ANOVA example above, represent what Breiman (2001) refers to as the modeling approach to data. This approach regards data as the result of stochastic processes, and analyses are centered on model fitting and parameter estimation.
Binary decisions are also often made with regard to the fitted models (e.g., "the linear model was statistically significant") and estimated parameters ("the null hypothesis, µ = 0, was rejected"). In contrast, the observation oriented modeler regards data as the result of a generative causal mechanism. In the ideal case, the researcher will in fact construct an iconic model describing the structures and processes underlying the data (e.g., Grice, 2015; Grice, Barrett, Schlimgen, & Abramson, 2012). The goal of the analysis is therefore to identify theoretically meaningful and robust patterns within the given observations (data), which is more akin to what Breiman (2001) referred to as algorithmic modeling. Because patterns are sought, the computation of means, variances, covariances, and so on, is unnecessary, nor is it necessary to use a particular statistical model (e.g., the General Linear Model). The entire null hypothesis significance testing paradigm, which underlies the binary decisions in traditional statistics, is also replaced by analyses that (a) involve careful visual examination of data using the "eye test" (or "interocular traumatic test," Edwards, Lindman, & Savage, 1963) and (b) provide the tools necessary for determining which observations are consistent with the theoretically meaningful pattern. Determining the overall accuracy of the explanatory model is therefore paramount, leading to an increase or decrease of confidence in the model. To demonstrate the shift in perspective from data modeling and repeated measures ANOVA to Observation Oriented Modeling and OPA, we reanalyzed portions of Craig et al.'s (2012) original data. The complete design of Craig et al.'s study was quite complex, including 23 IVIs and 3 experimental and 1 control group of honeybees (total N = 50). For the purposes of demonstrating OPA and comparing it with ANOVA, we examined only the first 12 IVIs and only 2 groups of honeybees. This subset of the original data was sufficient to demonstrate the occurrence of learning in the honeybees (OPA results for the complete design can be found in Craig et al.), and it facilitated the presentation of various predicted patterns (via simpler graphs) as well as the exploration of novel analyses not reported by Craig et al. (viz., analyses involving predicted stability of IVIs). Testing Patterns As demonstrated in Craig et al.'s article, the shift in perspective from data modeling to Observation Oriented Modeling starts at the beginning, that is, with the hypotheses. With a repeated measures ANOVA, the goal is to estimate population parameters from the observed data, and in the framework of null hypothesis significance testing the null hypothesis is stated as follows: H0: µ1 = µ2 = µ3 = µ4 = µ5 = µ6 = µ7 = µ8 = µ9 = µ10 = µ11 = µ12. In other words, all 12 population means are hypothesized to be equal. The omnibus alternative hypothesis is that 2 or more of these 12 population means are not equal. More specific alternative hypotheses could be advanced contrasting pairs or groups of population means. Given these hypotheses, however, it is important to realize that Craig et al. (2012) had no interest in estimating population parameters. Indeed, it is not clear what honeybees would constitute the imaginary population.
Would the population only include forager honeybees (Apis mellifera ligustica) of approximately 3 to 6 weeks of age from the two hives that were sampled in the study, or would the population include nurse bees, drones, and the queens of the two hives? Should the population instead equal the global population of Apis mellifera ligustica, or should it include all subspecies of Apis mellifera in the world? How far, exactly, should the sample be generalized? Unfortunately, there is simply no way to answer these questions in a non-arbitrary manner, which is not to suggest that populations are always defined arbitrarily or that estimating population parameters is never worthwhile. In political polling, games of chance, and research in which a random or representative sample can be drawn from a clearly specified population, for example, parameter estimation can prove fruitful; but none of these instances covers Craig et al.'s study, nor are they representative of the authors' scientific goals. Like the majority of behavioral researchers, Craig et al. (2012) were instead attempting to explain the behavior of honeybees. In other words, they were interested in making an abductive inference (Douven, 2011) to the causes underlying honeybee behavior rather than making a statistical inference to population parameters (Grice, 2015; Haig, 2005, 2014). Abductive inference involves reasoning from claims about phenomena, understood as presumed effects, to their theoretical explanation in terms of underlying causal mechanisms. Upon positive judgments of the initial plausibility of these explanatory theories, attempts are made to elaborate on the nature of the causal mechanisms in question. (Haig, 2008, pp. 1019-1020) Abduction is also consistent with the philosophical realism underlying Observation Oriented Modeling, which encourages scientists to develop integrated, causal models that explain the observations made in a given study (see Grice, 2011, 2014, 2015; Grice et al., 2012, for examples). Turning one's focus to abduction rather than statistical inference leads to a number of startling and liberating realizations. First, because population parameters are not necessarily being estimated, issues such as inferential errors (Type I, II, or III; Harris, 1997), statistical power, and parameter bias can fall by the wayside. As will be made explicit below, the goal in Observation Oriented Modeling is to identify meaningful and improbable patterns of observations (i.e., behaviors) of individual honeybees. Second, aggregate statistics such as means, variances, and covariances can be avoided not only as population parameters, but avoided in the analyses altogether. Causes operate at the level of the individuals and serve as the necessary conditions for each bee's behavior. Causes do not affect means or other aggregate statistics directly, and hence the traditional, Galtonian/Fisherian/Pearsonian ways of thinking about data are not required. The Observation Oriented Modeler is thus not restricted to the use of inferential statistical methods and traditional mean- and variance-based analyses, and is instead ready to approach the order of nature in novel ways. As mentioned above, in Observation Oriented Modeling, the focus is placed on patterns of observations. Craig et al. (2012) clearly expected a particular pattern in the recorded IVIs based on their understanding of the laws of learning.
The pattern they expected and tested is ordinal, and it is a pattern that should match the observed values for every honeybee in their sample; hence, the proper level of analysis (see Trafimow, 2014) is the recorded IVI times for each bee. With this in mind, the expected pattern of ordinal relations can be defined in the OOM software as shown in Figure 2. As can be seen, the matrix in the figure comprises 12 rows, with the top and bottom rows labeled as "Highest" and "Lowest," respectively. The columns correspond to the 12 trials, and the shaded cells indicate the expected pattern of ordinal relations among the observed IVI times. As described above, each bee was expected to return to the mechanical flower faster and faster from Trials 1 through 6 (i.e., the IVIs were expected to decrease). After being "frustrated" by being trapped in the flower, each bee was then expected to take increasingly longer times to return to the flower (i.e., the IVIs were expected to increase). Moreover, the longest time for Trials 1 through 6 was expected to be shorter than the shortest time for Trials 7 through 12; in other words, every IVI for Trials 1 to 6 was expected to be shorter than every IVI for Trials 7 to 12. With the expected pattern determined based on the researchers' understanding of the causes behind the data, the observations for each honeybee can be examined. Figure 3 shows the results for the sixth bee in Table 1. As can be seen, her recorded IVI times matched the ordinal pattern closely. In OOM, there are two ways to quantify the fit between the observations and the expected ordinal pattern. First, only adjacent trials (i.e., the columns in Figure 3) can be considered, namely, 1 versus 2, 2 versus 3, 3 versus 4, and so on. With 12 trials, there are 11 pairs of adjacent trials. If the ordinal relation for a bee's observations matches the expected ordinal relation for any pair of adjacent trials, then the pair of observations is considered as "correctly classified." Using this adjacent counting method, the honeybee in Figure 3 produced eight correctly classified pairs of observations. Converting this result to a percentage yields the Percent Correct Classification (PCC) index for this honeybee, which is the primary numerical value to be obtained from analyses in OOM. For this bee, the PCC index is 72.73% (8/11) and is fairly impressive. Second, all possible pairs of observations can be considered, namely, 1 versus 2, 1 versus 3, . . . 1 versus 12, 2 versus 3, 2 versus 4, . . . 2 versus 12, and so on. With 12 trials, there are 66 (12C2 = 66) pairs of observations that can be classified as correctly matching the ordinal pattern. The number of correct classifications for the honeybee in Figure 3 using this more stringent criterion is 54, yielding a PCC index equal to 81.82%, an impressive result. This second approach for classifying pairs of observations is said to be more stringent because it takes into account the entire pattern rather than only adjacent pairs of observations. The hypothetical data in Figure 4, for instance, fit the expected pattern perfectly (PCC = 100%) when considering only adjacent observations, but only 36 of the 66 pairs of observations (PCC = 54.55%) are correctly classified if the entire pattern is considered. The decision to use either method would ideally be driven by a causal, integrated model (Grice, 2011), but as will be shown below, using both methods for computing the PCC values for the same data may be pragmatically beneficial.
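To make the two counting methods concrete, the following minimal sketch (not the OOM software itself) computes both PCC indices for the sixth bee's IVI times from Table 1 against the expected pattern in Figure 2; it reproduces the 72.73% and 81.82% values reported above.

```python
import itertools

# Sixth bee's IVI times in seconds, Trials 1-12 (from Table 1).
ivi = [128, 131, 103, 102, 137, 106, 149, 155, 170, 1769, 138, 308]

def expected_increase(i, j):
    """Expected ordering of trial i vs. later trial j (0-based), per Figure 2:
    IVIs decrease within Trials 1-6; otherwise the later IVI should be larger."""
    return not (i < 6 and j < 6)

def correctly_classified(values, i, j):
    if expected_increase(i, j):
        return values[j] > values[i]
    return values[j] < values[i]

# Method 1: adjacent pairs only (11 pairs for 12 trials).
adjacent_pairs = [(i, i + 1) for i in range(len(ivi) - 1)]
hits = sum(correctly_classified(ivi, i, j) for i, j in adjacent_pairs)
print(f"adjacent PCC: {hits}/{len(adjacent_pairs)} = {100 * hits / len(adjacent_pairs):.2f}%")

# Method 2: all possible pairs (66 pairs for 12 trials).
all_pairs = list(itertools.combinations(range(len(ivi)), 2))
hits = sum(correctly_classified(ivi, i, j) for i, j in all_pairs)
print(f"complete PCC: {hits}/{len(all_pairs)} = {100 * hits / len(all_pairs):.2f}%")
```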
Using the option to match the entire pattern, the results for each honeybee are summarized in Table 2. As can be seen, the number of correctly classified pairs of observations is tallied and converted to the PCC index. Obviously, the PCC index ranges from 0 to 100, and the expectation here is that each value will equal 100, indicating perfect accuracy for the ordinal pattern. Not a single honeybee matched the pattern perfectly, all but one yielded PCC indices of at least 60%, whereas four bees' PCC indices were at least 80%, which is fairly impressive given the strict method of attempting to match the entire pattern. A probability statistic, referred to as a c-value (or chance value), is also reported in Table 2 for each honeybee. It was computed by first randomizing the data within trials for each bee. Consider, for example, values for a single bee from three consecutive trials: 140, 200, and 100. A randomized version of these data could be 200, 100, and 140. The PCC index is next computed for the randomized data on the basis of the expected ordinal pattern, and the process is repeated for a set number of times (1,000 randomized trials for the current analysis). The PCC indices are recorded, and the number of times the actual observed PCC index is equaled or exceeded for each honeybee is tallied and converted to a proportion. A high c-value near one therefore indicates that randomized versions of the same data routinely yielded PCC values as high or higher than the actual PCC value. In other words, the observed PCC index was not unusual. A low c-value near zero, on the other hand, indicates that the observed PCC index was rather unusual because it was not readily equaled or exceeded by PCC indices from randomized versions of the actual data. The chance value is therefore a randomization test, and it is reported as a "c-value" rather than a "p value" to remind researchers that it is an assumption-free probability derived from repeated randomizations of the actual data (e.g., no assumptions of normality, homogeneity, and so on, are made; see Winch & Campbell, 1969, for an early discussion of randomization tests; see Edgington & Onghena, 2007;Manly, 1997, for more recent treatments). Examination of the c-values in Table 2 reveals that the observed results were highly improbable for all but the fifth honeybee. The PCC index for this bee was only 39.39, and her results in Figure 5 show that, for some unknown reason, her IVIs were often shorter (indicating faster returns to the flower) after being trapped in the flower. No other information about this particular bee helped to explain her unexpected behavior, but it is clear OOM provides the tools necessary for focusing on the individual honeybees rather than on means or the estimation of abstract population parameters. Table 2 also shows that missing data can be handled at the level of the individual bees. IVI times were not recorded for Trials 10 to 12 for the seventh bee. The number of possible pairs of observations was thus reduced from 66 to 36, and the PCC index was computed on the basis of the 36 possible correct classifications. The result was impressive (83.33%) for the seventh honeybee, and the c-value (.01) again indicated that the result was improbable. As another option for handling missing data, the PCC index can be computed on the basis of the original possible pairs of observations, 66 in this case. 
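As an illustration of the randomization test just described, the following minimal sketch (a hypothetical implementation, not the OOM software's own routine) permutes a bee's observed times across the trials and counts how often the complete-pattern PCC of a permuted ordering equals or exceeds the observed PCC.

```python
import itertools
import random

def pcc_complete(values):
    """Complete-pattern PCC (%) against the Figure 2 pattern: IVIs decrease
    within Trials 1-6, and otherwise the later IVI is expected to be larger."""
    def ok(i, j):
        if i < 6 and j < 6:
            return values[j] < values[i]
        return values[j] > values[i]
    pairs = list(itertools.combinations(range(len(values)), 2))
    return 100.0 * sum(ok(i, j) for i, j in pairs) / len(pairs)

def c_value(values, n_randomizations=1000, seed=1):
    rng = random.Random(seed)
    observed = pcc_complete(values)
    hits = 0
    for _ in range(n_randomizations):
        shuffled = list(values)
        rng.shuffle(shuffled)              # random reordering across the trials
        if pcc_complete(shuffled) >= observed:
            hits += 1
    return observed, hits / n_randomizations

# Sixth bee's IVI times (Table 1): the observed PCC is high, so few or no
# random reorderings should reach it and the c-value should be near zero.
obs, c = c_value([128, 131, 103, 102, 137, 106, 149, 155, 170, 1769, 138, 308])
print(f"observed PCC = {obs:.2f}%, c-value = {c:.3f}")
```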
Using this second option for the seventh honeybee, the PCC index drops to 45.45% (30/66), with a c-value equal to .61, thus indicating the impact of the missing data. As with the missing data, the extreme values in the data set (see Table 1) posed no problems for the analysis because it is based on ordinal relations, much like other non-parametric statistical procedures (see Cliff, 1996). No adjustments were necessary, either to the data themselves or to the analysis itself. The results for all of the bees considered together can also be examined. The OOM software prints these results as a set of summary quantities. The Classifiable Pairs of Observations represents the total number of pairs of observations that can be classified correctly in the analysis. The value here, 630, equals the sum of the Classifiable Pairs of Observations reported in Table 2, and it can be seen that the missing pairs of observations for the seventh bee have been excluded. Of the 630 non-missing pairs across all of the bees and all of the trials, 438 were consistent with the ordinal pattern in Figure 2 and reported as Correct Classifications, yielding a fairly impressive PCC index of almost 70% (PCC = 69.52). The results from a randomization test show that not a single PCC index from 1,000 randomized versions of the data equaled or exceeded the observed PCC index of 69.52 (min = 38.10, max = 60.16). The c-value is thus less than 1 in 1,000, or c-value < .001. (With additional randomizations of the data, a PCC value of at least 69.52 could conceivably still be observed, which is why the c-value is reported as a bound rather than as zero.) Finally, the Correctly Classified Complete Cases value shows that not one of the nine honeybees (or cases) with non-missing data fit the entire pattern perfectly; PCC = 0, with a c-value necessarily equal to 1. The results for all 10 honeybees are arguably good with the overall PCC index equal to 69.52% and most of the individual bee PCC indices greater than 60%. Still, as with an omnibus ANOVA, a question to ask of the current analysis is whether or not most of the correct classifications are a result of comparing the trials prior to and after trapping the bees in the artificial flower. It may be the case, for instance, that the observations (recorded IVI times) do not fit the pattern in Figure 2 very well either for Trials 1 to 6 or for Trials 7 to 12, but that the first 6 IVIs are generally lower than the 6 IVIs after the bees were trapped in the flower. To address this possibility, the IVIs were analyzed for only the first six trials, using the expected decreasing ordinal pattern for only those trials. The overall PCC index was low (PCC = 42.67%, c-value = .90), and the PCC indices for 8 of the 10 bees were less than 50%. The remaining two PCC indices were 53.33% and 60.00%. Next, the last six trials were analyzed, again using the expected increasing ordinal pattern for those trials. The overall PCC index was higher (PCC = 73.19%, c-value < .001), and the PCC indices for all but one of the bees (the fourth bee) were greater than 65%, with four values equal to or greater than 80%. These results, considered with those for the complete pattern above, indicate that the IVIs did not generally decrease from the first to sixth trials, but the times were greater for Trials 7 through 12 while also increasing in a generally monotonic fashion for all but one bee. Last, much like pairwise comparisons following an omnibus repeated measures ANOVA, all pairs of IVI values across all 12 trials can be examined in a manner consistent with the expected ordinal relations shown in Figure 2.
Table 3 reports the results from these analyses, which clearly show that the ordinal predictions for the first 6 trials do not fit the observations very well. Most PCC indices for these pairwise comparisons were 50% or lower, again indicating that the IVI times did not decrease in a monotonic fashion. When comparing IVI values from Trials 1 to 6 with Trials 7 to 12 in pairwise fashion, the PCC indices were all quite high. Most percentages were 80% or higher, and five values were 100%. Consistent with the conclusions above, trapping the honeybees in the mechanical flower led to higher IVI values. Last, pairwise comparisons for the last six trials showed the IVI values to be increasing in a somewhat monotonic fashion, with most PCC indices equal to or greater than 66%. Testing Equivalence Craig et al. (2012) also examined a group of 10 honeybees (see Table 4) who were allowed to fly freely from the mechanical flower for all 12 trials. These bees were not expected to demonstrate learned "frustration" like the other bees. Craig et al.'s analysis of the observed IVI times for these bees indeed revealed remarkably poor fit to the ordinal pattern in Figure 5. For the sake of demonstration in this article, let us now suppose that the IVI times were not expected to change substantially from trial to trial. Indeed, if the times for each bee were expected to be exactly equal across all trials, then the predicted ordinal pattern would be defined as shown in Figure 6. It would of course be unrealistic to expect perfect equality for each bee in this study. Numerous other causes are at work influencing the bees' behavior, including activity in the hive, fatigue, weather conditions, and temperature. Consequently, the pattern in Figure 6 is tested in the OOM software using an imprecision setting. Ideally, such a setting would be based on previous observations, studies, or precise theory, but here it will be set in a somewhat arbitrary fashion. Specifically, it is reasonable to expect each honeybee's return times to agree to within 2 min across trials, given the proximity of the hive to the mechanical flower and the sorts of delays any given bee might encounter when flying, entering the hive, and unloading her social crop. The question then is, given a range of ±120 s, will the IVI times match the ordinal pattern in Figure 6? In other words, will each bee's returns to the mechanical flower stay within 2 min of one another, from the first trial to the last? The results from the OOM software for the individual honeybees are shown in Table 5 and indicate an impressive degree of conformity between the observations and the expected pattern with the ±120 s imprecision setting. The PCC index was equal to 100% for the fourth bee, and the indices for four other bees were more than 80%. Only the IVI times for the 10th bee revealed very poor agreement with the expected ordinal pattern. As can be seen in Table 4, her IVIs ranged from 387 to 1,377 s and did not reveal any clear pattern across the 12 trials. The magnitudes of her times were generally high when compared with the other 9 bees, but no explanation could be found for why she was slow or for the variability in her IVI times. When equivalence patterns such as the one shown in Figure 6 are examined, the c-values will always equal 1 if all pairs of observations are being compared and the data are randomized as described above. Imagine, for example, IVI times for 3 trials equal to 100, 125, and 115, and a randomized order of these values as 125, 115, and 100.
The absolute differences between all possible pairs of the original observations will equal the differences for the randomized values, namely, 25, 15, and 10. No matter how many times the IVI observations are randomized, this equality will always result, yielding the same PCC index in every case and a c-value of 1. Consequently, a randomization method that randomizes across (not just within) the bees is necessary for these types of patterns. For these data, we randomized both within all 12 IVI trials and across all 10 honeybees to compute the c-values reported in Table 5. As can be seen, 6 of the 10 values were .20 or lower, indicating improbable PCC indices. Not surprisingly, the c-value for the 10th bee was high (1.0 for 1,000 randomized trials). Finally, as with the original analyses above, OOM permits the researcher to focus on the individual bees under investigation as well as allows the researcher to examine the overall data, even with this type of pattern. The results for all 10 bees yielded an impressive PCC index of 75.45% when comparing all possible pairs of IVI times for the 12 trials. The accompanying c-value was also impressively low (<.001). Comparative ordinal patterns can also be constructed and evaluated to support the equality pattern shown in Figure 6. For instance, a monotonically decreasing pattern from Trials 1 through 12, with the imprecision setting of ±120 s, yielded an overall PCC index equal to 13.18%, c-value = .35. Tellingly, not one randomized PCC value equaled or exceeded 75.45%, the result for the equivalence pattern in Figure 6. Moreover, the individual PCC indices were below 50% for each of the 10 bees, and 9 of the 10 values were below 30%. Similarly dismal results were obtained when examining a monotonically increasing pattern across the 12 trials (overall PCC = 11.36, c-value = .69). In summary, the IVI times for the free-flying (non-frustrated) honeybees conformed well to the ordinal pattern of equivalence in Figure 6 with the imprecision setting of ±120 s. Data for only 1 of the 10 bees clearly did not fit this pattern. All other individual PCC indices were 59% or higher, and five were at least 80% (see Table 5). Comparing Groups Data from repeated measures and between-participants experimental designs can be combined in one analysis commonly referred to as a split-plot ANOVA, although it is sometimes referred to as a mixed-design ANOVA (Maxwell & Delaney, 2004). The data in Tables 1 and 4 are from 2 different groups of honeybees, and when compared across the 12 trials, constitute a 2 × (12) split-plot, factorial design. The primary reason for creating a factorial design is to assess the interaction between the two variables. The main effects may also be of interest but are often considered secondary and must be interpreted in the context of the interaction. With ANOVA, a statistically significant interaction is followed by either a simple-main-effects breakdown or the construction of interaction contrasts to understand the exact nature of the effect. Simple-main-effect breakdowns are more prevalent in the literature, but they conflate the variance of the interaction with the variance from one of the main effects. Interaction contrasts, by comparison, are "pure" follow-up tests of the interaction and have consequently been endorsed by a number of prominent methodologists (Harris, 1994;Rosnow & Rosenthal, 1995). 
An interaction contrast essentially describes how the pattern of means for one of the independent variables differs across levels of the other independent variable. For example, for Craig et al.'s data, a positive linear trend in the IVI means across all 12 trials for the frustrated honeybees could be contrasted with a negative linear trend for the free-flying bees. Analysis of split-plot designs in Observation Oriented Modeling and the OOM software is akin to testing an interaction contrast, except the analysis is not based on means and variances and does not involve the estimation of population parameters. As with the OOM analyses above, ordinal patterns are constructed and then evaluated against the observations themselves. Beginning with free-flying bees, recall the ordinal pattern in Figure 6 (with the ±120 s imprecision setting) was a fairly accurate representation of the IVI times: overall PCC = 75.45, c-value < .001. All but 1 of the 10 bees conformed to this pattern at least reasonably well. In the spirit of an interaction contrast, how well does this pattern fit the IVI times for the frustrated honeybees? The results indicated overall unimpressive accuracy (PCC = 52.54, c-value = .03), although the IVI times for the fifth frustrated honeybee (see Table 1) were remarkably stable within ±120 s (PCC = 92.42). IVI times from three other frustrated honeybees were also somewhat consistent with the pattern (PCCs = 61%-68%), but a small majority nonetheless were not consistent with the pattern (i.e., six frustrated bees with PCCs < 50%). Drawing on the original analyses above for the frustrated bees, an alternative competing pattern is shown in Figure 7. As can be seen, the IVI times are expected to be relatively stable across the first six trials and then increase monotonically from Trials 7 through 12. Again, the imprecision setting can be used, and for this analysis, any increases in IVI times for the last 6 trials must therefore exceed 120 s to be considered as correct classifications. The overall results for the frustrated honeybees using this ordinal pattern were good, but not highly impressive (PCC = 63.02%, c-value < .001). The fourth, fifth, and sixth bees' IVI times (see Table 1) did not fit the pattern very well (PCCs < 52%), whereas the remaining majority of individual IVI times yielded PCC indices that exceeded 60%. Evaluating the IVI times for the free-flying honeybees on the basis of the predicted ordinal pattern in Figure 7 yielded a very low overall PCC index (24.55%, c-value = .78). The PCC indices for the 10 free-flying bees were all below 50%, with seven values below 30%. Comparing the two distributions of overall PCC indices from the randomization test of each analysis furthermore revealed that an absolute difference of at least 38.47% (63.02 − 24.55) did not occur once in 1,000 trials (max = 20.35). The c-value for the difference between the overall PCC indices for the frustrated and freeflying bees was thus less than .001. In summary, the free-flying honeybees returned to the mechanical flower at a fairly steady rate (±120 s), as measured by their IVI times. Only one bee showed a high degree of variability in her IVI times, and no discernable pattern was noticeable in her observations. A slight majority of the frustrated honeybees did not conform to the equivalent ordinal pattern (see Figure 6) that captured the free-flying bees so well. 
The ordinal pattern in Figure 7 offered a better explanation of the observations for these bees as it demarcated when they were frustrated by being trapped in the flower after the sixth trial. The IVI times for a slight majority of the bees (6 of the 10 bees) conformed to this predicted pattern with an imprecision setting of ±120 s. By successfully contrasting the patterns of observations for the free-flying and frustrated honeybees, results from these analyses support Craig et al.'s theoretical goal of demonstrating the occurrence of learning. Discussion Comparing parametric ANOVA with OPA in the novel analysis of data from Craig et al.'s (2012) study revealed a number of distinct advantages for the latter approach. First and foremost was the relative ease of conducting the analyses. When performing a repeated measures ANOVA, a seemingly ambiguous choice must be made between the traditional univariate, multivariate, and mixed-model approaches toward analyzing the data. Interestingly, this choice was made academic for Craig et al.'s data because of insufficient degrees of freedom for computing the omnibus test statistics for the multivariate and mixed-model approaches. Despite its many limitations, the traditional univariate approach therefore had to be used and one of the adjustments for Type I error inflation (e.g., Greenhouse-Geisser) chosen and applied. The necessity of these adjustments in turn points to the sensitivity of the univariate F test to violations of assumptions, particularly the sphericity assumption. Moreover, the analysis ignored the missing data problem, the 3,600 values, and the numerous influential IVI times. Addressing these issues would require difficult and sometimes assumption-laden decisions to make the data more suitable for analysis, including analyses involving more specific hypotheses about the 12 means. The conclusion is that conducting a repeated measures ANOVA for even a straightforward experimental design such as Craig et al.'s can be a complicated statistical affair. By comparison, the analyses in the OOM software were simple and unambiguous. The expected pattern of ordinal relations was defined for the group of honeybees being analyzed, and the conformity between the actual observations and the pattern for each bee was summarized with the PCC index. A simple, assumption-free and distribution-free randomization test was used in an entirely secondary role to help evaluate each individual and group-level PCC index. The homogeneity of treatment-difference population variances (sphericity) assumption and other assumptions underlying repeated measures ANOVA were therefore avoided entirely. The PCC index itself is transparent and readily interpretable by scientists and lay people alike. The η2 value for the first group of honeybees analyzed above (the "frustrated" group) was equal to .37, indicating 37% overlap between the trials (the independent variable) and the IVI times (the dependent variable). It is difficult to interpret exactly what this measure of effect size means without the aid of arbitrary conventions (e.g., Cohen, 1988; Ferguson, 2009), and it is impossible to apply this effect size to any given bee in the sample. The individual PCC indices, however, are clearly interpretable, ranging from 0% to 100%, and indicate how well a bee's IVI times matched the predicted ordinal pattern.
All that is needed is the predicted pattern and the options chosen to compute the PCC index (i.e., the adjacent or complete options, and the imprecision value); otherwise, it requires no special knowledge or conventions to be interpreted and conveyed to a lay person. Its meaning can be made even more obvious when presented with a graphic like Figure 3. Recent scholarship (Kazdin, 1999;Thompson, 2002) points to the importance of transparent indices of practical and clinical "significance" (or relevance), and the PCC index and visual features of OPA are well suited for conveying such information. The capability of examining all of the honeybees' IVI times as a group as well as examining each individual bee is another advantage of the Observation Oriented Modeling approach. This is particularly important for Craig et al. because their goal, like most scientists, was abduction rather than statistical inference. The predicted ordinal patterns were based on simple laws of learning derived from other species (e.g., decreasing run times for a rat in a maze across trials; Greenough, Madden, & Fleischmann, 1972). The laws are general in the sense that they should apply to any given honeybee, not because they are descriptions of population parameters. In other words, the laws are causal, and the causes inhere in the honeybees themselves, not in abstract population parameters. Another way to understand the point being made here is that in repeated measures ANOVA, the goal is to describe patterns of sample and inferred population means, and these patterns may not match a single honeybee's pattern of IVI times. For Craig et al., the proper level of analysis (Trafimow, 2014) is the individual bee, lest in describing the average, they end up describing "nobody in particular" (Vautier, Lacot, & Veldhuis, 2014, p. 51). Seeking an inference to best explanation (i.e., abduction), it is noteworthy an alternative predicted pattern for the frustrated group of bees was constructed. Specifically, the original predicted decline in IVI times for the first six trials was replaced by an unchanging ordinal pattern (±120 s), and then the frustrated and free-flying bees were compared with regard to the new pattern. The unchanging part of the pattern can be explained sufficiently when considering the study more closely. There is a limit to how fast honeybees can fly, and each bee must make her way through the active hive to unload her crop. Such factors may essentially cancel out any decreases in IVI times across the first six trials due to learning. In addition, prior to data collection, Craig et al. were obliged to shape multiple participants' responding in the artificial flower. After participants learned to reliably respond, two participants were concurrently run and the remaining trained participants were allowed to revisit the artificial flower before the next two participants' data collection commenced. In short, the amount of pre-training was not controlled between participants, nor was the number of reinforcers and returns to the artificial flower. Consequently, the bees may have had sufficient pre-training such that decreases in IVI times across the first six trials would be minimal, at best, a conjecture supported by the relatively stable IVI times for the free-flying bees. 
Additional experimental work would be required to fully support these explanations, but the point is clear: In Observation Oriented Modeling, traditional statistical concerns are minimized, whereas efforts to seek theoretical explanations are magnified. Another statistical issue that was eschewed in the OOM software was the influence of outliers in Craig et al.'s data. The analyses reported above were based on ordinal relations between trials and could therefore be regarded as a type of nonparametric statistical analysis, without the necessity of estimating population parameters. When considering the bees individually or all together in the context of the predicted ordinal pattern, it is difficult to relate the ordinal analysis to any specific nonparametric technique. Friedman's Test and Kendall's W (Siegel, 1956), for instance, are nonparametric alternatives to repeated measures ANOVA, but they are based on analysis of ranks and do not provide the means for assessing predicted ordinal patterns such as those posited by Craig et al. Nonetheless, it is well known that nonparametric statistics are generally less restricted by assumptions and are relatively immune to outliers (Cliff, 1996;Siegel, 1956). The OPA feature in the OOM software shares these advantages as it provides a flexible method for testing complex ordinal patterns that can be defined and assessed visually (e.g., see Figure 3) and numerically (viz., the PCC indices) for each case or for all the cases combined. Thorngate and Carroll (1986) long ago described and called for such ordinal methods they hoped would return the emphasis of theory and analysis to individuals and away from aggregate statistics and the estimation of abstract population parameters. Thorngate and Edmonds (2013) provide more recent examples of how their own OPA technique can be used to model crime rates and ratings of happiness. It should be mentioned in closing that OPA can be used to test predicted parametric patterns, such as linear or quadratic functions. Describing how this is done is beyond the scope of this article, but it entails testing ordinal patterns for difference scores (assuming equal intervals between observations) using the imprecision option if necessary. Specific parametric predictions can moreover be tested. For instance, suppose Craig et al. possessed sufficient experimental control and theoretical power to predict the exact IVI values for each honeybee. The predicted values could be compared with the actual values across all 12 trials either exactly or within a set range (e.g., ±10 s) to obtain the PCC indices and c-values for each bee and for all of the observations combined. The techniques demonstrated in this article are thus flexible and capable of modeling different types of data and a wide array of patterns. These techniques are also easy to use and yield results that are transparent and readily interpretable. Declaration of Conflicting Interests The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article. Funding The author(s) received no financial support for the research and/or authorship of this article. Note 1. The focus of Misangyi, LePine, Algina, and Goeddeke's (2006) article is repeated measures regression, which is not listed here. This approach is also not discussed in this article because it is nearly equivalent to the traditional univariate approach to repeated measures. Misangyi et al. 
also offer a general recommendation for the univariate, multivariate, and mixed-model approaches over repeated measures regression.
10,931.2
2015-07-01T00:00:00.000
[ "Mathematics" ]
Binary Tree Pricing to Convertible Bonds with Credit Risk under Stochastic Interest Rates Convertible bonds usually have multiple additional provisions that make their pricing problem more difficult than that of straight bonds and options. This paper uses the binary tree method to model the financial market. As the underlying stock price and the interest rate are both important to convertible bonds, we describe their dynamic processes by different binary trees. Moreover, we consider the influence of credit risk on convertible bonds, described by the default rate and the recovery rate; a two-factor binary tree model involving credit risk is then established. On the basis of the theoretical analysis, we perform numerical simulations and obtain the pricing results when the stock price follows the CRR model and the interest rate follows the constant-volatility and the time-varying-volatility binary tree models, respectively. This model can be extended to other financial derivative instruments. Introduction Convertible bonds, which combine the characteristics of bonds and stock, are a complex financial derivative. They give holders the right to give up the bond's future coupon payments in order to obtain a specified quantity of stock. The pricing of convertible bonds is more difficult than that of straight bonds and options; the main reason is that a convertible bond not only has a bond value but also involves various embedded option values arising from the conversion, call, put, and other provisions. What is more, the embedded options are in most cases American options. Generally speaking, therefore, the pricing of convertible bonds admits no closed-form solution, and in most cases numerical methods are adopted, for example, the binary tree method, the Monte Carlo method, and the finite difference method. The Monte Carlo approach first uses stochastic differential equations to describe and simulate the pricing factors in the market, and then prices the convertible bond from its characteristics, for example, the boundary conditions arising from the various provisions (Ammann et al. [1], Guzhva et al. [2], Kimura and Shinohara [3], Yang et al. [4], and Siddiqi [5]). But because the assumed parameters of the stochastic differential equations are exogenous, this method does not necessarily fit the existing market conditions well. The binary tree method can overcome these problems. The binary tree method was first put forward by Cox et al. [6] as the Cox-Ross-Rubinstein (CRR) binomial option pricing model. After that, many researchers revised and popularized it. Cheung and Nelken [7] first applied the binary tree to convertible bond pricing and obtained the pricing solution of a two-factor model based on stock prices and interest rates. Carayannopoulos and Kalimipalli [8] applied a trinomial tree pricing model to convertible bond pricing with a single factor. Hung and Wang [9] also applied a binary tree model to convertible bond pricing that embodies default risk and considers the influences of stock prices and interest rates. Chambers and Lu [10] further considered the correlation between stock prices and interest rates and extended the model of Hung and Wang. Binary tree models have been widely used in the pricing of contingent claims such as stock options, currency options, stock index options, and futures options. Xu [11] proposes a trinomial lattice model to price convertible bonds and asset swaps with market risk and counterparty risk.
Interest rates are a very important factor in the financial market; all security prices and yields are related to them. Interest rate models include equilibrium models, no-arbitrage models, and so on. In equilibrium models, interest rates are generally described by a mean-reverting stochastic process, so that they show a tendency to converge to a long-term average as time passes; examples include the Vasicek model, the Rendleman-Bartter model, and the CIR model. The parameters of these models must be estimated from historical data, but generally this fitting of the observed term structure is not accurate, and sometimes no reasonable fitting formula can be found. No-arbitrage models take the initial term structure as a model input and construct a binary tree, analogous to the CRR model, for the interest rate process, so that the term structure fits reality better and more concisely. Widely applied no-arbitrage models include the Ho-Lee model, the Hull-White model, the Black-Derman-Toy model, and the Heath-Jarrow-Morton model. Different from the interest rate models of Hung and Wang and of Chambers and Lu, this paper uses constant-volatility and time-varying-volatility binary tree models to describe short-term interest rates, which is more intuitive and convenient. This paper studies the pricing of convertible bonds with call and put provisions and uses the binary tree method to model the state variables in the financial market. As the duration of convertible bonds is relatively long compared with straight bonds, their prices are strongly affected by interest rates. Moreover, as a kind of corporate bond, convertible bonds may carry credit risk. This paper therefore uses different binary trees to model the stock price and interest rate processes and, considering the impact of stock dividends and credit risk on convertible bonds, adopts the default rate and the recovery rate to describe credit risk, obtaining a two-factor binary tree model involving credit risk; on this basis, numerical examples give the convertible bond pricing results when the stock price follows the CRR model and the interest rate follows the constant-volatility and the time-varying-volatility binary tree models, respectively. Market Model. The continuous form of the short-term interest rate r(t) in the Ho-Lee model [13], as derived in [12], satisfies the stochastic differential equation dr(t) = θ(t)dt + σ(t)dW(t), (1) where θ(t) is the drift, σ(t) is the instantaneous volatility (both can be functions of time t), and W(t) is Brownian motion. Grant and Vora [14] give a discrete form of (1). Let f(t) denote the forward interest rate over the interval [t, t + 1]; the drift of the discrete short-rate process can then be expressed in terms of the forward rates and the volatility. Suppose that the volatility is constant, that is, σ(t) = σ for all t > 0; the constant-volatility interest rate binary tree shown in Figure 1 is then obtained. Time-Varying Volatility Binary Tree Model. Jarrow and Turnbull [15] supposed that the volatility of the short-term rate changes across time intervals but is constant within each interval. Let Δt = 1; the discrete form of the interest rate process and the variance of the sum of the short-term rates can then be written accordingly, and the time-varying-volatility interest rate tree shown in Figure 2 is obtained. Stock Price Binary Model. The stock price has two states in each period: if the current stock price is S, then the next-period price takes one of two values, Su = S × u or Sd = S × d, where u and d are the up and down move factors; the probability of an up move is p, so the probability of a down move is 1 − p.
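As an illustration of how a constant-volatility short-rate tree of the kind shown in Figure 1 can be fitted to an initial zero-coupon curve, the sketch below uses forward induction with state prices. It assumes continuously compounded yields, annual steps, equal risk-neutral branching probabilities, and node rates of the form m(n) + sigma*(2j - n); these are standard Ho-Lee-style conventions and not necessarily the exact construction used in the paper.

```python
import numpy as np

def calibrate_constant_vol_tree(zero_yields, sigma, dt=1.0):
    """Build a recombining short-rate tree whose node rates are
    r[n][j] = m[n] + sigma * (2*j - n), calibrating each m[n] so the tree
    reprices the zero-coupon bonds implied by the input yields."""
    prices = [np.exp(-y * dt * (i + 1)) for i, y in enumerate(zero_yields)]
    rates = [np.array([zero_yields[0]])]   # step 0 rate matches the 1-period bond
    state_prices = np.array([1.0])         # Arrow-Debreu prices at step 0
    for n in range(1, len(zero_yields)):
        # Roll the state prices forward one step, discounting at step n-1 rates
        # and splitting each node 50/50 between its two successors.
        disc = np.exp(-rates[n - 1] * dt)
        q = np.zeros(n + 1)
        q[:n] += 0.5 * state_prices * disc   # down moves keep the node index
        q[1:] += 0.5 * state_prices * disc   # up moves shift the index by one
        state_prices = q
        # Choose m[n] so that the (n+1)-period zero-coupon bond is repriced.
        offsets = sigma * (2 * np.arange(n + 1) - n)
        s = np.sum(state_prices * np.exp(-offsets * dt))
        m_n = np.log(s / prices[n]) / dt
        rates.append(m_n + offsets)
    return rates

# Risk-free zero yields and short-rate volatility quoted in the numerical
# example later in the paper.
tree = calibrate_constant_vol_tree([0.06145, 0.06366, 0.06837, 0.06953], sigma=0.025)
for n, level in enumerate(tree):
    print(f"period {n}:", np.round(level, 5))
```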
If the initial stock price is known, the stock price tree is determined by the model parameters u, d, and q. These parameters directly affect the results of the binary tree, and their selection must respect the no-arbitrage principle. Two standard choices are the CRR model [6] and the equal-probability binomial model (Roman [16], Hull [17]). This paper adopts the CRR model to describe the stock price process, which selects the parameters as u = e^{σ√Δt} and d = e^{−σ√Δt}. In particular, if the actual market is replaced by the risk-neutral market, the expected return on the stock becomes the risk-free rate r while the volatility is unchanged. The up probability in this model is q = (e^{rΔt} − d)/(u − d), where r is the risk-free interest rate. Under this condition the pricing result admits no arbitrage.

Credit Risk. Consider convertible bonds with credit risk. We adopt the method of Jarrow and Turnbull [18]: from market data we obtain the risk-free term structure and the risky (corporate) term structure, and if the recovery rate δ̄ is known, the default rate λ_i of the bond in period i can be extracted. The analysis proceeds as follows. If the one-year risk-free rate is r_0 and the one-year risky rate is r*_1, then the one-year risky and riskless zero-coupon bond prices must satisfy e^{−r*_1} = e^{−r_0}[(1 − λ_1) + λ_1 δ̄], which determines λ_1. If the two-year risky rate is r*_2, the analogous relation equates the two-year risky bond price with the expected recovery-adjusted payoff discounted along the risk-free interest rate tree, which determines λ_2 once λ_1 is known. When λ_1 and λ_2 have been obtained in this way, the same procedure yields the default rate of every period, {λ_i, i ≥ 1}.

Stock and Interest Rate Binary Tree Model with Credit Risk. For convertible bonds with credit risk, suppose that both the underlying stock price and the risk-free interest rate are random, and that the stock price follows the CRR model with up and down magnitudes u = e^{σ√Δt} and d = e^{−σ√Δt}, respectively. Suppose the stock price drops to 0 upon default, so the possible next-period prices are 0, S_u, and S_d. In the risk-neutral world the expected rate of return is the risk-free rate r; with a continuous dividend yield δ the expected growth rate is r − δ, so the no-arbitrage condition reads q̃u + (1 − λ − q̃)d = e^{(r−δ)Δt}, that is, q̃ = (e^{(r−δ)Δt} − (1 − λ)d)/(u − d), where q̃ is the up probability of the stock with credit risk and λ is the one-period default rate. Since the risk-free rate of each period is random, let r_i denote the risk-free rate of period i, let the stock volatility σ be constant, and let the dividend yield be δ; the stock price parameters of period i are then u = e^{σ√Δt}, d = e^{−σ√Δt}, and q̃_i = (e^{(r_i−δ)Δt} − (1 − λ_i)d)/(u − d). Suppose the risk-free rates are stochastic and described by the binary tree model; the stock tree and the interest rate tree with credit risk are then combined as shown in Figure 3. In this paper the correlation coefficient between the interest rate and the stock price is assumed to be 0. After obtaining the stock price and risk-free rate processes, the value of the convertible bond is computed by backward induction. The value is divided into two parts: the equity value obtained by converting to stock or exercising the embedded options, and the bond value.

Numerical Examples

We take a four-period binary tree model as an example to illustrate the pricing of a convertible bond with call and put provisions under the above models, and we compare the results under the constant-volatility and the time-varying-volatility interest rate models. Process of Interest Rate and Stock. The initial parameters of the convertible bond are the same for the constant-volatility and the time-varying-volatility interest rate models.
Suppose the time interval is Δt = 1, the up probability of the interest rate is q = 1/2, the 1- to 4-year yields of risk-free zero-coupon bonds are 6.145%, 6.366%, 6.837%, and 6.953%, respectively, and the volatility of the short-term interest rate is 2.5%. The remaining parameters of the constant-volatility interest rate binary tree then follow, as shown in Table 1, and the parameters of the time-varying-volatility interest rate binary tree follow in the same way, as shown in Table 2. This yields the two interest rate binary trees. The corresponding values are 0.8764, 0.8027, 0.7307, and 0.6604, and we then obtain the four-period stock price binary tree.

Default Rates. We take corporate bonds as the reference risky bonds. Suppose the 1- to 4-year yields of corporate zero-coupon bonds are 7.645%, 8.155%, 8.557%, and 9.128%, respectively, the recovery rate of the convertible bond is a constant δ̄ = 45%, and the one-year risk-free rate is r_0 = 6.145%. In the interest rate binary tree, the nodes of period i are indexed by the path of moves leading to them: if a node of period i carries the rate determined by the sequence of up and down moves from the root, the two nodes derived from it in period i + 1 are obtained by appending one further up or down move. Since q = 1/2 = 1 − q, the no-arbitrage principle requires the parameters {λ_1, λ_2, λ_3, λ_4} to satisfy four equations, one per maturity, equating the observed risky zero-coupon bond prices with the expected recovery-adjusted payoffs discounted along the risk-free rate tree. From these equations and the constant-volatility interest rate binary tree, the default rates of the bond in each period are obtained as shown in Table 3; similarly, the default rates of the corporate bond in each period under the time-varying-volatility interest rate binary tree are shown in Table 4.

Price Process of Convertible Bonds. The convertible bond contains call and put provisions; the maturity is T = 4, the face value paid at maturity is 100, the conversion ratio is 3, the call price is P_call = 106, and the put price is P_put = 80. We suppose investors may exercise the put right after one year. We now use the convertible bond under the time-varying-volatility interest rate binary tree to explain the pricing process, and consider four points A, B, C, and D in the pricing tree of Figure 4: C and D lie at the end of period 4, B lies at the end of period 3, and A lies at the end of period 2.

Conclusions. The binary tree method is a classical pricing method: it constructs a binary tree for each state variable to describe the possible paths of that variable over the life of the contingent claim and then prices the claim on the tree. The method handles path-dependent option pricing effectively and is intuitive and easy to implement. Because the options embedded in convertible bonds are American-style, the binary tree method is one of the main techniques for pricing convertible bonds. The interest rate is a principal factor affecting the price of a convertible bond, and specifying its binary tree model is a central problem in convertible bond pricing. This paper adopts constant-volatility and time-varying-volatility binary tree models to describe interest rates, further accounts for the effect of stock dividends and credit risk on the convertible bond price, describes credit risk through the default rate and the recovery rate, and obtains a two-factor binary tree model with credit risk. On this basis, a numerical example gives the convertible bond price when the stock price follows the CRR model and the interest rate follows the constant-volatility and the time-varying-volatility binary tree models.
The model can be extended to the pricing of convertible bonds with more complex provisions and to other financial derivatives such as bond options, catastrophe bonds, and mortgage-backed securities.
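To make the backward-induction step concrete, the following is a minimal sketch of convertible-bond pricing on a single recombining stock lattice. It uses the contract terms of the example above (face value 100, conversion ratio 3, call price 106, put price 80, four periods) but deliberately simplifies to a constant risk-free rate and no default, so it illustrates only the recursion, not the paper's full two-factor model with credit risk; the stock price and volatility are assumptions.

```python
import numpy as np

def price_convertible(s0, sigma, r, dt, n, face, conv_ratio, call_px, put_px):
    """Price a callable/puttable convertible bond on a CRR stock lattice.

    Simplifications: constant risk-free rate, no default risk, and conversion,
    call, and put decisions checked at every node (put only after period 1).
    The paper's full model instead discounts along a stochastic interest-rate
    tree and adjusts the stock dynamics for default and recovery.
    """
    u = np.exp(sigma * np.sqrt(dt))
    d = 1.0 / u
    q = (np.exp(r * dt) - d) / (u - d)
    disc = np.exp(-r * dt)

    # Terminal payoff: max(redemption value, conversion value).
    j = np.arange(n + 1)
    stock = s0 * u**j * d**(n - j)
    value = np.maximum(face, conv_ratio * stock)

    # Backward induction through the lattice.
    for step in range(n - 1, -1, -1):
        j = np.arange(step + 1)
        stock = s0 * u**j * d**(step - j)
        cont = disc * (q * value[1:step + 2] + (1 - q) * value[:step + 1])
        conv = conv_ratio * stock
        hold = cont
        if step >= 1:
            hold = np.maximum(hold, put_px)               # holder may put after one year
        hold = np.minimum(hold, np.maximum(call_px, conv))  # issuer may call; holder converts if better
        value = np.maximum(hold, conv)                     # holder may convert voluntarily
    return value[0]

# Contract terms follow the paper's example; stock inputs are assumed for illustration.
cb = price_convertible(s0=30.0, sigma=0.25, r=0.06145, dt=1.0, n=4,
                       face=100.0, conv_ratio=3, call_px=106.0, put_px=80.0)
print("convertible bond value:", round(cb, 2))
```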
A preliminary evaluation of surface latent heat flux as an earthquake precursor Introduction Conclusions References Introduction Among a large number of so-called earthquake precursors (such as geomagnetism, gas composition and electromagnetic radiation), thermal variations have been of particular interest in the last several decades.In the earlier 1980s, temperature data obtained from ground meteorological stations were used to study the relationship between earthquakes and soil or air temperature changes at different depths and elevations (Hao et al., 1982;Wang and Zhu, 1984).In recent years, the development of satellite and sensor technologies has allowed observation at much higher spatial and temporal resolutions.By using NOAA AVHRR satellite thermal images, Tronin used thermal remote sensing data to observe abnormal infrared radiation in a seismically active region in central Asia (Tronin, 1996).Analogous remotely sensed images were also used in Russia, China, India, Mexico and other countries (Choudhury et al., 2006;Genzano et al., 2007;Ouzounov and Freund, 2004;Ouzounov et al., 2007;Pulinets et al., 2006;Qiang et al., 1997;Tronin, 2000).Furthermore, thermal remote sensing products have also been employed in the study of the relationship between thermal variations and seismic activity, such as outgoing longwave radiation (OLR) and temperature of a black body (TBB) (Ouzounov et al., 2007;Zhang et al., 2010). As a key component of Earth's energy budget, SLHF (surface latent heat flux), which represents the heat flux resulting from changes in water phase, has been recently proposed as a possible precursor to marine/coastal earthquakes.Dey and Singh firstly found some anomalous SLHF peaks a few days prior to five earthquakes that occurred near the ocean, causing them to propose SLHF as a precursor to seismic activity in coastal regions (Dey and Singh, 2003).Based on their discovery, although some data-mining technologies including wavelet transformation and spatiotemporal continuity analysis have been consequently introduced to explore the temporal and spatial variations of SLHF before and after earthquakes (Cervone et al., 2004(Cervone et al., , 2005;;Singh et al., 2007), there are quite a few of scientists still focusing on point and shortterm analysis.Most of the present study of relationships between seismic activity and SLHF precursors generally consists of focusing on one or more specific earthquakes, comparing their individual daily SLHF for several months before the earthquake to background values (calculated differently by different authors), declaring anomalies, displaying several images of the variation in SLHF prior to and following the earthquake, and analyzing the spatial patterns of SLHF variations in a certain area (Chen et al., 2006;Dey and Singh, 2003;Li et al., 2008;Pulinets et al., 2006;Qin et al., 2010Qin et al., , 2008)). 
As a potential earthquake precursor, SLHF variation is urged to be evaluated statistically.Although many scientists have studied the theory of pre-seismic thermal variations (Freund et al., 2007;Pulinets et al., 2006;Saraf et al., 2009), there is still no comprehensive and widely accepted geophysical explanation for thermal changes prior to seismic activity.To get rid of false predictions caused by random noise or by chance coincidence, any earthquake-predicting method (whether short-term or long-term) needs to be evaluated statistically (Kagan, 1997;Geller, 1997).Kagan and Jackson proposed a set of rules for evaluating earthquake forecasting methods during the famous VAN debate (Jackson, 1996;Kagan and Jackson, 1996).According to their research, any possible earthquake-predicting method should satisfy two basic standards: (1) that the suitability of the method be ascertained and values of adjustable parameters be established during the learning period; and (2) that no parameter fitting is allowed in the control stage.So far, there is hardly any published paper focusing on the evaluation of the so-called earthquake precursor SLHF variation. In this study, the evaluation procedure was carried out in three steps: identifying short-term anomalies based on other studies, determining if they are earthquake-induced anomalies using long-term data, and changing some parameters to analyze their effect on the correlation foundation.As a result, this paper is organized as follows: earthquakes and SLHF products are introduced in Sect.2; the quantitative short-and long-term relationships are illustrated, classified and evaluated in Sect.3; the discussion is extended to SLHF data and related parameters to address the importance of data applicability and threshold settings in Sect.4; and concluding remarks are given in Sect. 5. Earthquakes During the past decade, dozens of disastrous earthquakes occurred in close proximity to an ocean or below the seafloor.In this paper, we take six earthquakes into consideration: Sumatra, Papua, Samoa, Haiti, Tohoku and one east of the South Sandwich Islands (hereafter referred to as ESSI).The main selection criteria include a magnitude of M w = 7.0 or larger, similar focal depth in the crust and near or beneath an ocean.Figure 1 shows the epicentral locations of the selected earthquakes, and Table 1 gives their basic information (http://earthquake.usgs.gov/). 
Surface latent heat flux data Earth's surface not only absorbs and releases heat by electromagnetic radiation but also exchanges energy with the atmosphere through sensible and latent heat exchange.The former is caused by air turbulence or convection, and the latter is mainly caused by water phase changes.The term "surface latent heat flux" is used to describe the flux of heat from the surface of the land or ocean to the atmosphere that is associated with the solidification, melting and transpiration of water (Bourras, 2006;Schulz et al., 1997).Due to the homogeneity of ocean medium, SLHF can be easily used to monitor heat variations at the ocean-atmosphere interface.SLHF data can be obtained in various ways.Traditionally, SLHF has been computed from bulk formulas that use ship-or ground-based measurements.However, due to the low temporal and spatial resolution of this point-type data, the availability and accuracy of station-derived fluxes are relatively limited (Singh et al., 2001).By assimilating land surface, ship, rawinsonde, pibal, aircraft, remote sensing data and other available data, the NCEP/NCAR (National Centers for Environmental Prediction/National Center for Atmospheric Research) reanalysis system provides global integrated reanalysis data series at an accuracy of 10-30 W m −2 , suitable for long-term surveys (1979 and newer data -the third phase of the evolution of the global observing system, i.e., the "modern satellite era").The data employed in this paper were downloaded from the FTP server ftp://ftp.cdc.noaa.gov.Daily mean SLHF data are represented by a Gaussian grid of 94 lines from 88.542 • S to 88.542 • N, with regular 1.875 • longitudinal spacing and projected onto a rectangular grid (Kalnay et al., 1996;Kistler et al., 2001).Corresponding NCEP grid values can be calculated from the longitude and latitude of individual earthquake epicenters (refer the last column in Table 1). Classification of relationship To evaluate the correlation between earthquakes and SLHF anomalies statistically, we assumed their behaviors to be two independent events and classified their relationships into four categories: 00, 01, 10 and 11 (see Table 2).To our concerns, only anomalies that occurred within a specified time window before a given earthquake were considered.The definition of "anomaly" as well as "time window" will be given in Sect.3.2. Figure 2 shows the four types of relationships in the area of the Tohoku earthquake over a period of more than 20 yr.DOT stands for "day of total years", which spans from 1 January 1991 (DOT = 1) to 1 January 2012 (DOT = 4383).Dark triangles mark values that surpass the anomaly threshold, which could be interpreted as anomalous signals.The arrows indicate specific earthquakes during the study period.As the 00 category indicates a period of no seismicity or anomalies, only categories 01, 10 and 11 are discussed in the following sections. 
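As a concrete illustration of this pairing of anomalies and earthquakes under a fixed time window, the sketch below counts earthquakes preceded by an anomaly, earthquakes without one, and anomalies not followed by any earthquake. The day indices and window length are hypothetical, and the mapping of these three counts onto the paper's 01/10/11 codes is defined in its Table 2, which is not reproduced here.

```python
import numpy as np

def relate_anomalies_quakes(anomaly_days, quake_days, window):
    """Pair SLHF anomaly days with earthquake days under a fixed time window.

    Returns counts of: earthquakes preceded by at least one anomaly within
    `window` days, earthquakes with no such precursor, and anomalies not
    followed by an earthquake within `window` days.
    """
    a = np.asarray(anomaly_days)
    q = np.asarray(quake_days)
    quakes_with_precursor = sum(np.any((a < t) & (a >= t - window)) for t in q)
    quakes_without = len(q) - quakes_with_precursor
    anomalies_without_quake = sum(not np.any((q > t) & (q <= t + window)) for t in a)
    return {
        "quake_with_preceding_anomaly": int(quakes_with_precursor),
        "quake_without_anomaly": int(quakes_without),
        "anomaly_without_following_quake": int(anomalies_without_quake),
    }

# Hypothetical example: anomalies on days 10 and 200, earthquakes on days 50 and 400, 70-day window.
print(relate_anomalies_quakes([10, 200], [50, 400], window=70))
```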
Establishment of parameters To define the thermal anomaly precisely, we firstly selected four adjustable parameters before the formal evaluating procedure: (1) M -an earthquake with a magnitude M (M w ) or larger is included in the earthquake list and is considered for correlation examination; (2) anomaly threshold -values beyond this threshold are considered as anomalies; (3) time window -the length of days between the beginning of an anomaly and an earthquake; and (4) E -the extent/amplitude of an anomalous value.For SLHF data, the unit of Secondly, for all of these earthquakes, the values of former two parameters were preliminarily fixed according to previous researches.A comprehensive review of the literature on the identification of thermal anomalies, coupled with knowledge of seismology and statistics, suggests that (1) the parameter M can be set at a magnitude of 5.0, which is a moderately sized earthquake; and (2) the anomaly threshold can be defined as the mean value of SLHF data over tens of years, including the study period, plus 2.0 times the standard deviation (i.e., µ + 2.0σ ). Thirdly, considering the various geological and climatic backgrounds of the six earthquakes considered here, the values of time window, DOT and E were established based on the short-term SLHF variations corresponding to each earthquake.The variation in SLHF for 90 days prior to and 30 days following each main shock is displayed in Fig. 3.The upper gray line shows the reference maximum values (i.e., anomaly thresholds).The lower black line represents the daily values of NCEP SLHF grid points encompassing the epicenter of each earthquake.The bold black arrow indicates the date of each earthquake and the triangle highlights SLHF anomalies.For the Sumatra earthquake, there was only one anomaly 69 days before the main shock.This anomaly lasted for 6 days and had an average value of 22.79 W m −2 .Compared to the anomaly before the Sumatra earthquake, the anomaly associated with the ESSI earthquake was less significant; it lasted only 2 days and had a mean value of 7.77 W m −2 .However, given the amplitude of the SLHF variations in the ESSI area, this anomaly is still notable.The two anomalies before the Papua earthquake are difficult to identify, and both have low DOT and abnormal ranges.Interestingly, an anomaly occurred 7 days after the main shock, which was near the peak value for the 3 months surrounding the main shock.However, we only focus on precursory SLHF anomalies and do not discuss this anomaly further.Seventy days prior to the Samoa earthquake, there was one obvious anomaly that continued into the next day and averaged 18.04 W m −2 , which is relatively significant.Two peaks occurred before the Haiti earthquake, but they are both small.Three peaks exceed the background level before the Tohoku earthquake.The mean values of these anomalies are larger than 30 W m −2 , exceeding its µ + 2.0σ threshold by nearly 200 %.For each main shock, the values of time window and DOT are the maximum, while E is the average of anomaly values.Individual values of the four parameters for each studied earthquake can be found in Table 3. Identification and long-term evaluation Based on the parameters established earlier in this paper, the long-term analysis for related SLHF variations and seismicity was carried out in two stages of comparison: "01" vs. "11" and "10" vs. "11". 
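Before turning to the two-stage comparison, here is a minimal sketch of the anomaly definition just established: flag days whose SLHF exceeds the long-term mean plus 2.0 standard deviations and summarize each run of consecutive flagged days by its start, duration, and mean exceedance (the amplitude parameter E). The synthetic series is purely illustrative.

```python
import numpy as np

def find_slhf_anomalies(slhf, n_sigma=2.0):
    """Flag SLHF anomalies as runs of days exceeding mu + n_sigma * sigma.

    Returns a list of (start_day, duration_days, mean_exceedance) tuples; the
    mean exceedance plays the role of the amplitude parameter E.
    """
    slhf = np.asarray(slhf, dtype=float)
    threshold = slhf.mean() + n_sigma * slhf.std()
    above = slhf > threshold

    anomalies, day = [], 0
    while day < len(slhf):
        if above[day]:
            start = day
            while day < len(slhf) and above[day]:
                day += 1
            excess = slhf[start:day] - threshold
            anomalies.append((start, day - start, float(excess.mean())))
        else:
            day += 1
    return anomalies

# Synthetic daily SLHF series (W m^-2): seasonal cycle plus noise plus one injected spike.
rng = np.random.default_rng(0)
days = np.arange(7300)  # roughly 20 yr of daily values
series = 120 + 40 * np.sin(2 * np.pi * days / 365.25) + rng.normal(0, 15, days.size)
series[5000:5004] += 150  # inject a short spike for illustration
print(find_slhf_anomalies(series)[:5])
```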
In the first stage, we computed the occurrence probability of "01" and "11", i.e., we calculated the number of times that precursory changes in SLHF satisfied the standards of being an anomaly and the probability of earthquakes that occurred within the given time window.The variations in SLHF within one NCEP grid cell might be affected by several factors, including seasonal changes, monsoons, and seismic activity.To assess the impact of earthquakes near the epicentral NCEP grid area, all earthquakes larger than a given magnitude (M) and within an area of approximately 1 million km 2 around the epicenter of each of the six earthquakes (roughly 10 • longitude by 10 • latitude; the individual area varies with the latitude of each epicenter) were taken into consideration.The results of this analysis are given in Table 4.To remove the foreshock-main shock-aftershock effect and its influence on later changes in SLHF, we also combined earthquakes within 30 days of each other (referred to as solo earthquakes). Table 4 gives the probabilities of "01" and "11" scenarios of relationships between SLHF anomalies and earthquakes.There are many instances in which the SLHF value surpassed the anomaly threshold.Haiti had the fewest anomalies.Even so, it had 42 abnormal variations over the past 20 yr.Before the removal of the earthquake clustering effect, the numbers of earthquakes larger than M for each of the six cases were remarkable large.Except for Haiti, the percentages of "11" scenarios were significant, indicating that many earthquakes occurred after SLHF anomalies.After the de-clustering process, both the number of earthquakes and the percentage of "11" scenarios decreased significantly, and the correlation is statistically insignificant (see Table 5).Comparing the average surpassing values of SLHF variations, which were without a related earthquake to the anomalies prior to these earthquakes (i.e., "01"), shows that the anomalous peaks before the ESSI, Samoa and Haiti earthquakes are numerically insignificant.The values of E for ESSI, Samoa and Haiti in the short term were successively 7.77, 18.04 and 11.48, while in the long term were 9.57, 26.17 and 17.04.In other words, these SLHF fluctuations at such degrees may be very normal for these areas. 
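To mirror the de-clustering step (merging events within 30 days of each other into "solo earthquakes"), one simple sketch is given below. Real catalog de-clustering is more involved (magnitude-dependent time and space windows), and the day indices and magnitudes here are hypothetical.

```python
import numpy as np

def merge_into_solo_events(quake_days, quake_mags, gap=30):
    """Collapse earthquakes occurring within `gap` days of the previous kept
    event into a single "solo" event, keeping the largest magnitude as the
    representative shock. A simple proxy for removing the
    foreshock-mainshock-aftershock effect."""
    order = np.argsort(quake_days)
    days = np.asarray(quake_days)[order]
    mags = np.asarray(quake_mags)[order]

    solo_days, solo_mags = [], []
    for d, m in zip(days, mags):
        if solo_days and d - solo_days[-1] <= gap:
            # Same cluster: keep the larger magnitude as the representative shock.
            solo_mags[-1] = max(solo_mags[-1], float(m))
        else:
            solo_days.append(int(d))
            solo_mags.append(float(m))
    return solo_days, solo_mags

# Hypothetical catalog: a mainshock on day 100 with aftershocks, then an isolated event.
days, mags = merge_into_solo_events([100, 103, 120, 410], [6.8, 5.2, 5.5, 5.9])
print(days, mags)  # -> [100, 410] [6.8, 5.9]
```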
In the second stage of comparison, the probabilities of "10" and corresponding "11" scenarios were assessed.Instead of considering the seismicity in a 10 • by 10 • area surrounding each of the six epicenters, we focused on the earthquakes within the NCEP grid cell containing each epicenter.For each of the six earthquakes, an individual set of seismic events in the preceding 20 yr was constructed.The percentages of SLHF anomalies that were within the specified time window before the earthquakes are given in Table 6.Due to the SLHF data giving only daily mean values, any earthquakes that occurred on the same day were merged into one event to prevent duplicate computations.For the ESSI case, the number of earthquakes was small while the number of anomalies was relatively very large.It is interesting that there were not any anomalies that correspond with those three earthquakes in time sequence.Except for the Haiti case in which all of those anomalies occurred within its specific time window of that certain earthquake, the probabilities of "11" for the Sumatra, Papua and Tohoku cases were less than 50 %.In the area surrounding the Samoa earthquake epicenter, 12 earthquakes occurred during the study period, while half of the earthquakes occurred after SLHF anomalies.In other words, for all earthquakes other than the Samoa and ESSI case, the percentages of "10" scenarios were distinctly higher than their counterparts, i.e., most of the earthquakes were not sensitive to variations in SLHF, even in the very near vicinity.What should be paid attention is that the average magnitudes of earthquakes that belonged to "10" for five cases were all more than M w = 5.3. Data applicability Although the use of a homogeneous data set (i.e., NCEP-SLHF) would have alleviated the error due to different SLHF observations, the NCEP data set contains assimilative data whose accuracy relies on several factors.The accuracy of a single variable at different periods varies depending on the original data collection method.Although the NCEP reanalysis data assimilation system is consistent, the observing system has evolved substantially over time.The evolution of the global observing system is divided into three major phases: the "early" period from the 1940s through the International Geophysical Year in 1957, when the first upper-air observations were made; the "modern rawinsonde network" from 1958 to 1978; and the "modern satellite era" from 1979 to the present (Kalnay et al., 1996).Therefore, the accuracy of reanalyzed surface latent heat fluxes is naturally time-dependent.Given the evolution of data accuracy, the SLHF anomalies preceding the ESSI, Papua and Haiti earthquakes were measured using less accurate NCEP SLHF data, i.e., 10-30 W m −2 accuracy.Therefore, these variations may not be true anomalies.Because the history of NCEP data is very short compared with the earthquake catalog, the date of a given earthquake should be considered before employing the NCEP/NCAR data in the study of SLHF variations prior to earthquakes.The output variables in NCEP/NCAR data are classified into four classes, depending on the degree to which they are influenced by the observational data and/or the assimilation model.Unfortunately, surface fluxes are among the "C" variables, which means that they depend heavily on the model during data assimilation (subject to the assimilation of other observations) and should be used with caution (Kistler et al., 2001).If the model and its physical parameterizations are realistic, the SLHF data 
can provide accurate estimates, even on a daily timescale.However, it will be regionally biased if the model is biased.Hence, the model feasibility should be checked before using SLHF data from NCEP/NCAR to study any SLHF variations in a specific area. Several scientists have misused NCEP/NCAR data when studying the relationships between SLHF variations and seismicity.To correctly identify and detect direct or indirect earthquake-induced changes in SLHF using NCEP SLHF data, we suggest that long-term analysis be carried out for the study area to establish the background levels and check if any variations in them correlate with the SLHF changes and earthquakes. Parameter settings Like other thermal precursors, despite of several years of intense work focusing on the application of SLHF data to the prediction of coastal earthquakes, obvious precursor anomalies are generally found retrospectively after the events.To find these anomalies the evaluation criteria might be determined retroactively or adjusted, and there are no established and accepted parameters. The anomaly threshold is the most important parameter to establish.When Dey and Singh proposed a probable relationship between SLHF anomalies and earthquakes, they accounted for seasonal effects by subtracting the monthly mean from the daily value and dividing the daily SLHF value by the standard deviation of the SLHF data for that specific day within each year from a 10 yr data set.The background noise was calculated as the mean SLHF plus 1.5 times the standard deviation of SLHF (Dey and Singh, 2003). Other analogous thresholds have been given by other scientists, such as µ + σ (Li et al., 2008;Pulinets et al., 2006) or µ + 2.0σ (Qin et al., 2010).SLHF fluctuations occur continuously in any area.Daily SLHF values in a given area over several years results in a large data set that conforms to a normal distribution.According to the well-known 68-95-99.7 rule (three sigma rule), only approximately 86.6 % of values are within µ ± 1.5σ (Larson and Farber, 2009).The remaining 13.4 % of the SLHF anomalies might be regular fluctuations due to seasonal factors and not seismicity.As Henk Tijms said, "the theory of probabilities . . .teaches us to avoid the illusions which often mislead us" (Tijms, 2004).Keeping the concept of normal distributions in mind may assist in determining the validity of seismic precursors.The conditions used to determine whether a specific variation in SLHF is anomalous behavior is one of the key issues in the study of correlations between SLHF and earthquakes.Methodical detection of SLHF anomalies should be achieved when investigating the relationship between SLHF variations and earthquakes. 
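To put numbers on the threshold discussion, a short sketch computing, under a normality assumption, the fraction of days expected to exceed a one-sided mu + k*sigma threshold and the corresponding number of flagged days in a roughly 20-year daily record; these are back-of-the-envelope figures for illustration, not values from the paper.

```python
from scipy.stats import norm

record_days = 20 * 365  # roughly a 20-year daily record

for k in (1.5, 2.0):
    p_one_sided = norm.sf(k)            # P(X > mu + k*sigma) for normally distributed data
    expected_flags = p_one_sided * record_days
    print(f"mu + {k}*sigma: {p_one_sided:.3%} of days exceed the threshold "
          f"(~{expected_flags:.0f} days in 20 yr)")
```

For k = 1.5 this gives about 6.7% of days (consistent with the 86.6% two-sided coverage quoted above), and for k = 2.0 about 2.3%, which is why even the stricter threshold still flags on the order of a hundred "anomalous" days per grid cell over two decades purely by chance.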
The time window is also fundamental to the correlation evaluation.Individual time windows for six cases were fixed according to the short-term SLHF variations.In fact, there is no known way to determine this parameter other than empirically.It might vary with the earthquake location, time and other related factors.To determine how the time window affects the calculated correlation between SLHF anomalies and earthquakes, we set 10 time windows (of 90, 80, 70, 60, 50, 40, 30, 20, 10 and 5 days) and computed the proportions of "01" and "10" scenarios for each window.Figure 4a illustrates the negative correlation between time window and the proportion of "01" scenarios.For a time window of 5 days, the proportions of all the six cases was 1, i.e., almost no earthquakes occurred within 5 days of each anomaly.Figure 4b shows the SLHF anomaly that occurred 5-10 days before the Haiti earthquake.For the ESSI earthquake, the percentage of "10" scenarios is 100 % for all of these time windows, which indicates that 107 anomalies occurred more than 90 days before the three qualified earthquakes.For the other four earthquakes, the proportions decreased with longer time windows.As shown in Fig. 4, the correlation between thermal anomalies and seismicity is highly dependent on the length of the time window.The longer time window is set, the more thermal anomalies can be considered precursors of a specific earthquake.Therefore, the percentage of "11" scenarios increases with longer time windows and causes precursory activity to appear more likely. Summary and conclusion In light of these evaluation results obtained from this study, several conclusions can be drawn: (1) although some SLHF variations may surpass the background varying level, they still cannot be recognized as thermal anomalies according to their tiny surpassing amplitudes and SLHF data accuracy; (2) the clustering effect of earthquake sequence should be paid enough attention during the evaluation of relationship between SLHF variations and earthquakes; (3) the correlation of SLHF anomalies and seismic activity is relatively low (due to chance) and largely depends on several factors including data and parameters. We strongly recommend that standard SLHF anomaly detecting criteria should be established.While several adjustments to parameters at the learning stage are acceptable, one must ensure that the corresponding criteria have been clearly set and strictly employed before any phenomenon is formally defined as a precursor.Even if the geophysical theory is not understood thoroughly, predetermined identifying and analyzing procedures still need to be taken into account and addressed. Based on the above findings, much further work can be effectively carried out.We will perform more evaluations on several other related thermal parameters that are derived by remote sensing or assimilation technology.Other related factors including the seasonal variations in wind and ocean current, regional salinity concentration and relative humidity will be taken into account.Moreover, keeping the advantage of remote sensing data in spatial resolution in mind, further long-term spatial analysis for the mentioned earthquakes will be carried out.Similar to one single NCEP grid analysis, parameters such as time window and anomaly threshold will be selected to study the spatial and temporal relationship between earthquakes and thermal variations.More data mining technologies will also contribute to the following work. Fig. 2 . Fig. 2. 

The four types of relationships between SLHF anomalies and earthquakes. Table 1. Basic information of the studied earthquakes. Table 2. Four categories of the relationship between earthquakes and anomalies. Table 3. Four selected parameters of the six earthquakes.
Fast inverse design of microstructures via generative invariance networks The problem of the efficient design of material microstructures exhibiting desired properties spans a variety of engineering and science applications. The ability to rapidly generate microstructures that exhibit user-specified property distributions can transform the iterative process of traditional microstructure-sensitive design. We reformulate the microstructure design process using a constrained generative adversarial network (GAN) model. This approach explicitly encodes invariance constraints within GANs to generate two-phase morphologies for photovoltaic applications obeying design specifications: specifically, user-defined short-circuit current density and fill factor combinations. Such invariance constraints can be represented by differentiable, deep learning-based surrogates of full physics models mapping microstructures to photovoltaic properties. Furthermore, we propose a multi-fidelity surrogate that reduces expensive label requirements by a factor of five. Our framework enables the incorporation of expensive or non-differentiable constraints for the fast generation of microstructures (in 190 ms) with user-defined properties. Such proposed physics-aware data-driven methods for inverse design problems can be used to considerably accelerate the field of microstructure-sensitive design. Physics-aware deep generative models are used to design material microstructures exhibiting tailored properties. Multi-fidelity data are used to create inexpensive yet accurate machine learning surrogates for evaluating the physics-based constraints within such design frameworks. Introduction R MF models are similar although the label requirements of the multi-fidelity model is reduced by 80%. We stress that while the 126 low-fidelity network was trained using the entire dataset, the multi-fidelity model was only trained with 20% of the high-fidelity 127 labels, which are significantly more expensive to generate (e.g., evaluating the J sc and FF of one morphology needs about 1 128 cpu-hr, whereas the low fidelity metrics can be computed in less than a minute). Hence, by using the multi-fidelity network, 129 we alleviate the problem of requiring a large labelled dataset to train a surrogate physics model as the invariance constraint 130 evaluator in the InvNet. 133 We present the results of generating targeted morphologies that are tailored to design specifications using our proposed 134 InvNet with multi-fidelity surrogate model framework. In Figure 3(a), we show samples of microstructures generated with 135 InvNet for different design specifications. In the top row, we show examples of morphologies with low J sc values and high FF 136 values. As we traverse down the rows of Figure 3(a), the specified J sc values are increased while the FF values are decreased. 137 It is observed that the InvNet-trained generator is able to generate a variety of candidate microstructures with different 138 morphologies given the same design specifications. This signifies that the generator has learnt the underlying distribution of the 139 actual data and no mode collapse occurred during training which can result in only similar morphologies being generated. This 140 also anecdotally validates a hypothesis in the OPV community that there exist multiple families of morphologies that produce 141 identical performance. 
142 To further verify that the generated morphologies satisfy the imposed design constraints, we generated an additional 1000 143 morphologies for different ranges of J sc and FF values and compared the estimated properties of these morphologies with the 144 actual design specifications. The values of these estimated properties and design specifications are plotted as densities and 145 shown in Figure 3(b). We observe that the specified values and generated values for both J sc and FF have highly overlapping 146 densities. These overlapping densities show that generator is capable of creating morphologies that satisfy the imposed design 147 specifications, hence enabling targeted design of candidate two-phases microstructures. 148 Nonetheless, we observe that there are situations where the generated morphologies do not adhere to the design specifications, 149 as seen in the first row of Figure 3(b), where the density of generated morphologies (in solid green) had a range of J sc values that 150 are higher than the specified range of J sc values (in dotted blue). Since the proposed framework is fundamentally data-driven, 151 we hypothesize that this failure mode was caused by an imbalanced dataset where samples from the low J sc and high FF 152 regions might be sparse. To confirm this hypothesis, we visualize the training data distribution in Figure 3(c). Based on the 153 visualization of the joint density, we observe that there are indeed very few samples in the top left region, where morphologies 154 have a low J sc and high FF values. However, it is interesting to recognize that even when the generator fails to generate 155 morphologies with specified J sc in such sparse training data regions, the rank order of the morphologies' J sc are still preserved. 156 Instead of generating morphologies with random J sc s', the generated morphologies defaulted to morphologies with low J sc and 157 high FF values which are well supported with data. 158 Comparing high-fidelity and multi-fidelity InvNets 159 Next, we provide qualitative results to compare the effects of using the high-fidelity, R HF , and multi-fidelity R MF surrogate 160 model as the invariance constraint evaluator in InvNet framework. In Figure 2, we have shown that the performances of the high-161 and multi-fidelity surrogate models are comparable. Moreover, we are also interested in investigating if the higher variance 162 of the multi-fidelity surrogate will compound and affect the results of the generated morphologies. To study this, we trained 163 InvNet with the same network architecture and replaced the R MF with R HF . We illustrate the results from both methods in 164 Figure 4. In terms of the generated morphologies, we do not observe any significant difference between the two methods. Both 165 the high-and multi-fidelity InvNets are capable of generating microstructures of varying morphologies without signs of mode 166 collapse. However, the density plots which are used to validate the constraint invariances reveal two interesting observations. 167 First, we observe that the high-fidelity InvNet is more capable of generating low J sc /high FF morphologies in comparison 168 with the multi-fidelity InvNet. This is evident in the first row, where the density of morphologies generated by high-fidelity 169 InvNet has a higher overlapping area with the design specifications as compared to the density of morphologies created by 170 multi-fidelity InvNet. 
We attribute this to the fact that R HF was exposed to a much larger and diverse set of morphologies as 171 compared to R MF , which results in the high-fidelity InvNet being able to learn the underlying structure of the low J sc /high FF 172 morphologies better when training for the invariance. Thus, this suggests we can expect the performance of high-fidelity InvNet 173 to be more robust and consistent when queried in regions where training data is sparser. 174 The second interesting observation we make is that the high-fidelity InvNet also tends to generate morphologies that are 175 a little more biased in terms of the FF. This can be observed in the second, third, and fourth rows where the densities of 176 high-fidelity FF are slightly shifted from the FF design specifications. Referring back to Figure 3(c), we observe that the 177 marginal density of FF data is highly skewed towards the lower regions. Therefore, it is possible that by training R HF on 178 the entire high-fidelity dataset and subsequently using it as the invariance constraint evaluator to train InvNet does result in 179 generated morphologies that are more biased in terms of the design specifications. This highlights the importance of having a 180 balanced dataset when using our proposed framework for morphology generation. Efficiency of neural-network based methods versus physics-based models In Table 1, we compare the wall-clock running times of our proposed neural-network based methods with physics-based 184 methods for a few different scenarios. All timings were performed on the same platform using a NVIDIA Titan RTX GPU and 185 averaged across 100 function evaluations. In the first two columns, we show the average computation times for evaluating 186 the J sc and FF properties of a given morphology. We observe that both multi-and high-fidelity methods are several orders 187 of magnitude faster than a high-fidelity physics simulation. A second advantage is that with the surrogate models, only one 188 evaluation is required to estimate both J sc and FF simultaneously. In comparison, performing the physics simulation requires 189 separate individual evaluations for J sc and FF. Comparing the multi-fidelity surrogate model R MF with the high-fidelity 190 surrogate model R HF , we note that R HF is an order of magnitude faster than R MF . However, training R HF comes at the cost of 191 requiring a large dataset with high-fidelity labels. On the other hand, R MF requires a smaller amount of high-fidelity labels, but 192 requires training a more complex model architecture, which increases computation time. Hence, we view the benefits of each 193 method as a trade-off between availability of data with computation time. 194 In the third column, we show the total time required to train InvNet for 1E5 epochs. We observe that the high-fidelity 195 InvNet is ≈ 3X faster than multi-fidelity InvNet, which is expected since the training of InvNet is dependent on the surrogate 196 model to compute the invariance loss. We also include an estimate of the time required to train the InvNet if we were to replace 197 the invariance constraint evaluator with an actual physics-based model to compute the invariance loss. As observed, training 198 such an InvNet will require ≈ 60k hours, which is not tractable in compared to using a neural network-based surrogate model. 199 Last but not least, we provide the morphology generation time for a single morphology. 
Since the process of generating a 200 morphology using InvNet during inference is independent of surrogate model, there is no significant difference time difference 201 between using the high-fidelity versus multi-fidelity InvNet. In summary, we conclude that there is no significant difference 202 in terms of the querying a trained high-fidelity versus multi-fidelity InvNet to generate targeted morphologies. Instead, the 203 deciding factor of which model to apply depends on the availability of high-fidelity labels or computation resources. The 204 high-fidelity InvNet framework is faster to train but requires a large dataset of high-fidelity labels to pre-train the surrogate 205 model. Conversely, the multi-fidelity InvNet model requires less high-fidelity labels but requires a more complex network 206 architecture which results in longer training times. 209 The ability to rapidly synthesize targeted microstructure designs is essential in a broad range of scientific and engineering 210 applications. We propose a data-efficient generative framework (InvNet) that casts user-specifications as explicit invariance 211 constraints to generate candidate two-phase microstructures that adheres to design specifications. While recent works with 212 similar objectives have proposed frameworks that demonstrated promising results 12, 22 , we highlight that those approaches 213 is not capable of solving our specific application in a tractable manner. This is particularly due to the extremely long and 214 expensive computation required to evaluate the constraints, which is a common bottleneck in the community. Hence, to remedy 215 this challenge, we leverage neural network-based surrogates for the purpose of fast constraint evaluation. Using a surrogate, 216 our framework addresses the challenge of expensive constraint evaluation while simultaneously circumventing the need of 217 having a differentiable and explicit, closed-form expression of the constraints. Combining these advantages, we believe that 218 our method results in a far more general-purpose framework that is applicable to a wider range of inverse design problems. 219 Additionally, we have also supplemented our surrogate-based generative framework with a multi-fidelity approach to improve 220 the data requirements of the model. This multi-fidelity approach reduces foreseeable expensive label generation procedures, While we have demonstrated our proposed framework through the lens of a material microstructure design problem that 230 uses a data-driven surrogate, we emphasize that our InvNet framework is certainly not limited to purely data-driven surrogate 231 approaches. Since the invariance constraint of InvNet is explicit, it can be easily replaced or combined with other data-free 232 approaches. In this regard, a key future direction is to develop InvNets that explicitly incorporate complex physics/domain 233 knowledge in a computationally tractable manner. This approach will significantly reduce the dependency of the proposed 234 framework on data availability and extend the capability of the framework to extrapolate beyond the support of data. Other 235 5/10 promising directions include extending the current framework to generate morphologies with more than two phases as well as validating the generalizability of the framework on a dataset with more than two target properties. 
To conclude, our vision is 237 that the computational tools developed in this paper will serve to democratize and accelerate the area of microstructure-sensitive Here, X, n, p represent the exciton, electron and hole distributions respectively. ϕ represents the electric potential. q morphologies. To ensure a stable training process, we also scaled the labels of J sc and FF to belong in the same numerical 305 range. Following standard practices, we partitioned 80% of the data as training data and reserved 20% of data as a test data. 306 Since the task of the surrogate model is to essentially perform a multi-target regression, the loss function of the regressor is where R HF denotes the high-fidelity surrogate model, parameterized by parameters φ , I is the input image of the microstruc- Multi-fidelity surrogate model: Before describing the training details, we briefly justify the need to replace the graph-based 317 computation of low-fidelity descriptors with another neural network surrogate, R g in the multi-fidelity model. While multi-318 fidelity frameworks are effective in reducing the requirement of expensive labels 32 , they are currently not tractable for application 319 as an invariance constraint in InvNets. This is because updating the generator's parameters in InvNet requires the gradient 320 computation of the invariance-loss function. However, graph-based methods used to compute the low-fidelity descriptors are 321 often non-differentiable. Therefore, optimizing the parameters of the generator via conventional back-propagation becomes 322 a non-trivial problem. Additionally, evaluating the low-fidelity descriptors using previously proposed graph-based method 323 requires that the generated images be converted into nodes and edges on-the-fly during training, which incurs additional 324 computational cost and time. Hence, a neural network surrogate which is differentiable and can directly evaluate graph features 325 of morphologies in the pixel domain circumvents both of these challenges. 7/10 As illustrated in Figure 1(c), the multi-fidelity network encompasses both low-fidelity network (described in SI) and a shared-embedding network. The purpose of the shared-embedding network is to learn additional features that are not already 328 captured by the low-fidelity network for estimating J sc and FF. During training of the multi-fidelity network, the low-fidelity 329 network predicts the low-fidelity descriptors of a given microstructure, which are combined with the image embeddings from 330 the shared embedding network. These two vectors are then passed through a dense layer to estimate J sc and FF. As we are only 331 using a limited amount of high-fidelity labels, it is possible that training the multi-fidelity network might lead to a biased model 332 due to label imbalance. To avoid such issues, we constructed the following weighted loss function with empirically-determined To reduce the requirements of expensive, high-fidelity labels to train the surrogate model, we propose a multi-fidelity network which attains the same predictive accuracy as training the network on high-fidelity data by combining information from cheap, low-fidelity labels and a fraction of high-fidelity labels. Figure 2. Results of high-fidelity and multi-fidelity surrogate models. (a) Left figures summarize the distribution of errors for both J sc and FF estimation using the high-fidelity surrogate physics model. 
Bottom plots visualize the correlation plot of the estimated properties with respect to the ground truth values. In both cases, the predicted values have high correlation coefficients, R 2 values of greater than 0.9. (b) Summary of error distributions for J sc and FF estimation using the multi-fidelity surrogate model which was trained with only 20% of high-fidelity labels. We observe that while there is slight drop in R 2 and increase in variance, there is a huge marginal gain in terms of decreasing the amount of expensive simulations required to generate the high-fidelity labels. Figure 3. Results of targeted microstructure design using multi-fidelity InvNet. (a) Examples of morphologies generated by InvNet for the specified J sc and FF ranges shown on the right densities. (b) Densities of estimated J sc and FF from generated morphologies compared with a range of respective design specifications for 1000 samples. Observe that the densities of the design specifications and generated morphologies properties in the mid-and high-ranges (rows 2 to 7) are highly overlapping, signifying that the invariances are satisfied. In contrast, the densities at the region of low J sc are more deviated, signifying a more biased model at the region where the training data is sparse. (c) Visualization of joint and marginal densities of training data for both J sc and FF. Notice that the marginal density of J sc labels is relatively well balanced, while the marginal density of FF is extremely skewed, resulting in sparser data around certain regions. Visually, we observe that both models are capable of generating varying morphologies which follows a similar trend as we varied the design specifications. Looking at the densities of property invariances, we observe that the high-fidelity InvNet performs slightly better than multi-fidelity InvNet by generating morphologies which are closer to design specifications in the low J sc high FF regions where training data is sparse. However, the high-fidelity InvNet also tend to generate morphologies which are slightly biased in terms of the FF, as observed in rows 3, 4 and 5. Table 1. Comparison of average computation times of neural network-based methods vs physics-based methods for different processes. J sc and FF columns denotes the time required to evaluate the corresponding properties given a morphology. InvNet training times are based on our training scheme of 1E5 epochs. *Physics model-based InvNet training is based on an estimate if the invariance loss were to be computed using high-fidelity physics simulation. Morphology Generation column denotes the time required for a trained InvNet to generate a single morphology given design specification values of J sc and FF.
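As a rough illustration of the multi-fidelity surrogate described above (a low-fidelity branch standing in for the graph-based descriptors, a shared-embedding network, and a dense head predicting J_sc and FF, trained with a weighted loss so that all samples supervise the cheap descriptors while only the labelled fraction supervises the expensive targets), here is a hedged PyTorch sketch. The layer sizes, image size, number of descriptors, and loss weights are assumptions for illustration, not the authors' architecture.

```python
import torch
import torch.nn as nn

class MultiFidelitySurrogate(nn.Module):
    """Sketch: low-fidelity branch predicts cheap descriptors from the morphology,
    a shared embedding adds extra features, and a dense head maps both to (Jsc, FF)."""
    def __init__(self, n_lf_descriptors=8):
        super().__init__()
        def conv_branch(out_dim):
            return nn.Sequential(
                nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(4), nn.Flatten(),
                nn.Linear(32 * 16, out_dim),
            )
        self.low_fidelity = conv_branch(n_lf_descriptors)  # differentiable stand-in for graph descriptors
        self.shared_embed = conv_branch(32)                 # features not captured by the descriptors
        self.head = nn.Sequential(nn.Linear(n_lf_descriptors + 32, 64), nn.ReLU(), nn.Linear(64, 2))

    def forward(self, x):
        lf = self.low_fidelity(x)
        z = torch.cat([lf, self.shared_embed(x)], dim=1)
        return lf, self.head(z)   # (low-fidelity descriptors, [Jsc, FF])

def weighted_loss(lf_pred, lf_true, hf_pred, hf_true, has_hf, w_lf=1.0, w_hf=5.0):
    """All samples supervise the low-fidelity branch; only samples with expensive
    high-fidelity labels supervise the (Jsc, FF) head. Weights are placeholders."""
    loss = w_lf * nn.functional.mse_loss(lf_pred, lf_true)
    if has_hf.any():
        loss = loss + w_hf * nn.functional.mse_loss(hf_pred[has_hf], hf_true[has_hf])
    return loss

# Smoke test with random tensors standing in for 64x64 two-phase morphologies.
model = MultiFidelitySurrogate()
x = torch.rand(4, 1, 64, 64)
lf_pred, hf_pred = model(x)
loss = weighted_loss(lf_pred, torch.rand(4, 8), hf_pred, torch.rand(4, 2),
                     has_hf=torch.tensor([True, False, False, True]))
loss.backward()
print(float(loss))
```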
Automatic Breast Tumor Diagnosis in MRI Based on a Hybrid CNN and Feature-Based Method Using Improved Deer Hunting Optimization Algorithm Breast cancer is an unusual mass of the breast texture. It begins with an abnormal change in cell structure. This disease may increase uncontrollably and affects neighboring textures. Early diagnosis of this cancer (abnormal cell changes) can help definitively treat it. Also, prevention of this cancer can help to decrease the high cost of medical caring for breast cancer patients. In recent years, the computer-aided technique is an important active field for automatic cancer detection. In this study, an automatic breast tumor diagnosis system is introduced. An improved Deer Hunting Optimization Algorithm (DHOA) is used as the optimization algorithm. The presented method utilized a hybrid feature-based technique and a new optimized convolutional neural network (CNN). Simulations are applied to the DCE-MRI dataset based on some performance indexes. The novel contribution of this paper is to apply the preprocessing stage to simplifying the classification. Besides, we used a new metaheuristic algorithm. Also, the feature extraction by Haralick texture and local binary pattern (LBP) is recommended. Due to the obtained results, the accuracy of this method is 98.89%, which represents the high potential and efficiency of this method. Introduction e term "cancer" refers to the abnormal growth of some cells. ese cells can spread and invade other sections of the human body. e occurrence of this in the breast texture as a mass is called breast cancer. Breast cancer occurs mainly in women, but it may be observed in men, too. After lung cancer, the second cause of death in women is breast cancer. Generally, there are two types of breast cancer based on primary origin: primary tumors and secondary tumors. e primary tumor originates from breast texture cells. In metastatic tumors, the cells become cancerous in another section of the body and spread to the breast through the lymphatic system or bloodstream. Based on the American Cancer Society (ACS) statistics, the incidence rate for new cases of this disease is 125.3 for females each year (per 100,000 men and women). e death rate value is 20.3 for females per 100,000 men and women each year. e rates are given age-arranged and according to 2012-2016 cases and 2013-2017 death cases [1]. Invasive Ductal Carcinoma (IDC) is the most communal type of this cancer. It begins from cells that put the milk duct into the breast. en destroys the duct wall and spreads to the nearby breast textures. At this point, it spreads (metastasize) through the lymph system and stream of the blood. Many women with breast cancer diseases are treated by (1) hormone therapy and (2) chemotherapy. Similarly, targeted therapy or radiation is the local treatment. Sometimes, a combination of these treats is used. Early diagnosis of this disease can be very important to cure. e breast imaging methods commonly used at this time are mammograms, ultrasound, breast MRI, breast tomosynthesis (3D mammography), positron emission tomography (PET), computed tomography (CT) scan, optical imaging tests, electrical impedance imaging (EIT) scans, contrast-enhanced mammography (CEM), and, at last, the chest X-rays. ese techniques are used for careful observation of significant things like the shape, size, location, the exact kind of cancer, more details about the stage of cancer or how fast it is growing, and metabolism of breast tumors. 
Sometimes, a combination of these methods is used for a more accurate diagnosis. According to recent researches, MRI may locate certain small breast lesions that are occasionally missed by the mammography method. erefore, it can be a useful diagnostic tool. Nowadays, computer-aided diagnosis (CAD) based on MRI images is used to detect tumors. Hence, this efficient method is more important. Indeed, combining the CAD systems with MRI images is caused to decrease the useless data and aided in fast detection of the tumor. Recently, artificial intelligence based on CAD has been used to improve detection. Formerly, mammography and MRI image processing for breast tumor diagnosis were based on machine learning techniques and extraction of geometric features. In deeplearning algorithms, the convolutional neural networks (CNNs) are speedily becoming a prevalent technique to process medical images. Deep learning is hierarchical learning and one of the subbranches of machine learning which is for learning high-level data summaries. is emerging method has been noticed more in the artificial intelligence field. In previous articles, several techniques were presented for breast tumor diagnosis. For instance, Hu et al. [2] proposed a method for feature extractions of (1) (DCE)-MRI and (2) (T2w) sequences to improve breast cancer detection. (DCE)-MRI sequence is the dynamic contrast-enhanced and T2 is the weighted (T2w) MRI sequence for each MR study. Based on the mentioned features, this method was used as a pretrained convolutional neural network (CNN) for classification and final detection between benign and malignant tumors. In conclusion, feature fusion using DCE (P value < 0.001) (95% confidence intervals) had statistically better performance. Ibrahim et al. [3] introduced a segmentation approach for breast tumors in thermal images. ey used the chaotic salp swarm algorithm (CSSA) to this. is segmentation algorithm uses the quick-shift technique which clusters the breast thermal image pixels to reach the optimal superpixels. e final results showed that removing the extra parts of the image and keep the breast area. is leads to improving the detection accuracy (92%). Ibraheem et al. [4] presented a median 2D filter which used to preprocess breast cancer images. Feature extraction was performed by the DWT (discrete wavelet transform) method and then reduced to 13 features. Eventually, a support vector machine (SVM) has been utilized to detect the cancerous mass. Simulation and test results have shown 98.03% accuracy. Navid et al. [5] recommended a method that uses a threshold-based WCO optimization algorithm. WCO is a metaheuristic algorithm inspired by the FIFA World Cup challenge. en, the Kapur approach was used to define an objective function. Finally, the candidate solutions were selected from random samples of the search space in the image histogram. Togaçar et al. [6] introduced novel deep learning which was developed based on the convolutional neural network. ey proposed a model called BreastNet to improve the quality of the classification. e BreastNet was built based on attention modules. e data has been processed by augmentation techniques. e image features are exchanged by various augmentation techniques like shift, rotation, flip change, and brightness. Also, they used the hypercolumn method for the accurate classification of the data. Other sections of the BreastNet pattern model include the pooling, convolutional, dense blocks, and residual. e method obtained 98.80% accuracy. Alanazi et al. 
[7] proposed a CNN method that analyzes hostile ductal carcinoma tissue regions in whole-slide images (WSIs) for automatic detection of breast cancer. The suggested system using CNN Model 3 obtains 87% accuracy; the five-layer CNN in Model 3 is best suited for this detection. The paper studies the presented technique, which applies various convolutional neural network (CNN) architectures to automatically detect breast cancer, comparing the results with those from machine learning (ML) algorithms. Ma et al. [8] developed and trained a 1D-CNN model for classification. Fisher discriminant analysis (FDA) and support vector machine (SVM) classifiers were trained and tested with the same spectral data for comparison. The best classification performance, namely an overall diagnostic accuracy of 92%, a sensitivity of 98%, and a specificity of 86%, was achieved by the 1D-CNN model. Table 1 lists some recent studies in breast tumor identification. In this study, our main purpose is to provide a twofold system for better diagnosis. The proposed approach includes a CNN model optimized with an improved algorithm, which is run alongside a texture feature-based technique as a sequential method; the results are then combined to achieve the best outcome. This computer system reduces complexity and improves computational performance. Additionally, it addresses the problems in the previous literature to achieve the best results [9]. Figure 1 is an overview of the proposed method. The main contributions of this study are highlighted as follows: (i) an optimal comprehensive approach for the automatic detection of breast cancer by CAD; (ii) a hybrid method to improve classification performance and efficiency; (iii) noise reduction and normalization of the data; (iv) a CNN classifier tuned by the Balanced Deer Hunting Optimization Algorithm (BDHOA) for classification; (v) Haralick texture and local binary pattern features for feature extraction. Image Preprocessing. First, the input breast MRI image data should be simplified and prepared for the next steps. Thus, in the first step, normalization is applied: the intensity values are normalized by the min-max method to the range 0-1. Here, the size of the image is 250 × 250. Then, a noise reduction method is used to eliminate undesired distortions. Noise reduction is the most important phase of preprocessing. In recent studies, partial differential equations have been used to reconstruct images. MR images also suffer from problems such as electromagnetic (EM) noise emitted from circuits. The main causes of noise in MRI imaging are of two types: (1) hardware and (2) subject (physiological noise, body motions, cardiac pulsation, respiratory motions, etc.). To overcome the noise problems of breast MR images, they must be filtered. Acoustic noise is the main noise in MRI, so noise removal is important in medical image processing. In this regard, an Intelligent Hybrid Filter is used. This fuzzy-based filter is utilized to eliminate the noise of images and is used in particular for the preprocessing of medical images [8].
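Before the filter itself is detailed in the next paragraph, the normalization step described above can be illustrated with a minimal Python sketch. The function names, the synthetic input, and the use of a simple median filter as a stand-in denoiser are assumptions for illustration only; the study's actual denoiser is the fuzzy-neural Intelligent Hybrid Filter, which is not reproduced here.

```python
import numpy as np
from scipy.ndimage import median_filter  # simple stand-in, not the paper's fuzzy-neural filter

def min_max_normalize(image):
    """Rescale intensity values to the range [0, 1] (min-max method)."""
    image = image.astype(np.float64)
    lo, hi = image.min(), image.max()
    if hi == lo:                         # guard against constant images
        return np.zeros_like(image)
    return (image - lo) / (hi - lo)

def preprocess(image, target_shape=(250, 250)):
    """Normalize a breast MR slice and apply a basic denoising filter."""
    assert image.shape == target_shape, "the paper assumes 250 x 250 input slices"
    normalized = min_max_normalize(image)
    denoised = median_filter(normalized, size=3)   # placeholder for the Intelligent Hybrid Filter
    return denoised

# Example on a synthetic noisy slice with arbitrary raw MR intensities
slice_ = np.random.default_rng(0).random((250, 250)) * 4000
clean = preprocess(slice_)
print(clean.min(), clean.max())                    # values now lie in [0, 1]
```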
The procedure of this filter is summarized as follows [7]: (1) the noisy image is passed in parallel through four noise removal filters; (2) X is the input image, and X0, X1, X2, and X3 are the outputs of the filters; (3) the outputs of the filters enter the fuzzy-neural system as input; (4) finally, Y is the output of the fuzzy-neural system and is the final improved image. Convolutional Neural Networks. CNN is an abbreviation of convolutional neural network, one of the branches of deep neural networks, and it is highly accurate in image processing, classification, and segmentation. CNNs are mainly used in machine learning for visual or speech analysis and diagnosis. Convolutional networks were inspired by biological processes, namely the connection pattern between neurons in the cat's visual cortex. It is a significant approach in deep learning, wherein several layers are trained purposefully and powerfully [10]. It is efficient because of its accuracy and fast operation, and in computer vision CNN is one of the most important methods. Generally, all CNN models contain three key parts: (1) the convolution layer, (2) the pooling layer, and (3) the fully connected layer, where each layer has a definite task. CNN training alternates between two steps. At first, the input image is injected into the CNN with a simple dot product between the input and the neuron parameters, followed by a convolutional multiplication in each layer. The network error is then computed from the output for network training: the network output is compared with the correct solution using a loss (error) function, and the error rate is computed. Then, the backpropagation phase starts based on the calculated error rate, where the derivative of each parameter is obtained using the chain rule, and the parameters are changed according to their effect on the network error [9]. After updating the parameters, the next stage is a new feedforward pass. These steps are repeated a sufficient number of times until the network training is completed. The learning is used to obtain a certain number of kernel matrices; in this case, gradient descent is utilized to select the optimal network weights. In the network, a ReLU (rectified linear unit) function, f(z) = max(z, 0), is utilized to activate neurons, and the output size is strongly reduced by max pooling [10]. The training error is evaluated to adapt the neuron weights and obtain the desired output. The backpropagation step minimizes the cross-entropy loss [11]. In the loss formula, D_j defines the achieved output vector for the m-th class, d_j = (0, ..., 0, 1, ..., 1, 0, ..., 0) with k ones is the desired output vector, and z_j^(i) denotes the Softmax function, in which M describes the sample number. A weight penalty (ρ) has been utilized for extending the loss function by penalizing large weight values; here, the connection weight is indicated by V_k,k in layer l, L is the total number of layers, and K is the number of connections of layer l. Given that CNN architectures are designed by trial and error, this approach also has problems; in recent years, various automatic approaches have been presented for tuning the network using bioinspired optimization algorithms [12]. Deer Hunting Optimization Algorithm (DHOA). One of the steps is optimization, which is the process of obtaining the "best available" values of a problem.
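Before describing the optimization algorithm in detail, the activation and loss computations summarized above can be illustrated with a minimal NumPy sketch. This is only a generic rendering of ReLU, Softmax, and the cross-entropy loss; the variable names and toy data are assumptions and do not reproduce the paper's exact notation or network.

```python
import numpy as np

def relu(z):
    """ReLU activation: f(z) = max(z, 0), applied element-wise."""
    return np.maximum(z, 0.0)

def softmax(logits):
    """Row-wise Softmax; subtracting the row maximum improves numerical stability."""
    shifted = logits - logits.max(axis=1, keepdims=True)
    exp = np.exp(shifted)
    return exp / exp.sum(axis=1, keepdims=True)

def cross_entropy(probs, desired):
    """Mean cross-entropy between predicted class probabilities and one-hot targets."""
    eps = 1e-12                                   # avoid log(0)
    return -np.mean(np.sum(desired * np.log(probs + eps), axis=1))

# Tiny example: 3 samples, 4 classes, with a toy linear output layer
rng = np.random.default_rng(0)
hidden = relu(np.array([[0.4, -1.2, 2.0], [0.1, 0.3, -0.7], [1.5, -0.2, 0.6]]))
logits = hidden @ rng.normal(size=(3, 4))
targets = np.eye(4)[[0, 2, 1]]                    # one-hot desired vectors
print(cross_entropy(softmax(logits), targets))
```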
Sometimes, conventional classical optimization algorithms are not able to solve problems correctly and quickly [13]. To overcome this issue, a class of techniques called metaheuristics exists for fast solutions of problems such as NP-hard (nondeterministic polynomial-time hard) ones. Metaheuristics can imitate different phenomena, from the hunting behavior of animals to the social behavior of humans [14]. In some cases, the algorithms are further improved to find the best optimum response. For example, the harmony search algorithm, dolphin swarm algorithm, genetic algorithm, symbiotic organism search, and the world-cup optimization algorithm are used to solve various types of complex problems [15]. In addition, Yin and Navid suggested a modern bioinspired algorithm that mimics deer hunting [16]. The deer's features make the hunting process more difficult. An important feature of deer is their vision, which is about five times stronger than human vision. The other remarkable feature of the deer is its sense of smell, which is about sixty times stronger than the human sense of smell. The deer snorts loudly and stamps heavily when it senses danger, and this reaction alerts other deer. The deer can also detect very high sound frequencies well. In the following text, the deer hunting system is described in detail. Initialization. The metaheuristic deer hunting algorithm starts with the set known as the hunters, a group of random solutions, defined as Z = {y_1, y_2, ..., y_n}, 1 < j ≤ n, where n indicates the number of hunters (candidate solutions) and Z refers to the total hunter population. Initializing the Parameters. The second stage involves quantifying the main components, the angle of the position of the deer and the angle of the wind. The search space is considered a circle; thus, the wind angle is written with the formula of a circle, where α is a random number within the range [0, 1] and j describes the present iteration. The angle of the deer location is defined analogously, where β shows the angle of the wind. Position Propagation. During the first iteration, it is usually not possible to find the best solution for the algorithm [17]. However, after generating a random candidate and evaluating the cost function on it, the best candidate is considered the current optimal solution value [18]. Here, we assume two parameters: the leader position (z_l), the best location of the hunter so far, and the successor position (z_s), the succeeding hunter position. Propagation Based on the Leader's Position. Starting from the initial iteration, the entire population tries to reach the best position by updating their locations. Hence, the "encircling behavior" is mathematically formulated by the corresponding update equation, in which Z_j and Z_j+1 indicate the present and next locations, S_w is a random number based on wind velocity in the range [0, 2], and the coefficient vectors are denoted by L and K. In their defining formulas, I_max denotes the maximum number of iterations, c is a random component in the range [−1, 1], and δ is a random number in the range from 0 to 1. Figure 2 presents the updating of position Z*, where (Z, Y) shows the initial location of the hunter, which is updated depending on the prey location. The update continues toward the best position (Z*, Y*) based on L and K.
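The leader-based encircling update just outlined, and continued in the next paragraph, can be sketched in simplified form as follows. The coefficient choices, the example cost function, and the acceptance rule are illustrative assumptions only; the exact DHOA coefficient formulas are those referenced in the text and are not reproduced here.

```python
import numpy as np
rng = np.random.default_rng(0)

def sphere(z):                                    # example cost function to be minimized
    return float(np.sum(z ** 2))

dim, n_hunters, iters = 5, 20, 100
hunters = rng.uniform(-5, 5, size=(n_hunters, dim))      # random initial population Z
leader = hunters[np.argmin([sphere(h) for h in hunters])].copy()   # best position so far (z_l)

for _ in range(iters):
    for i in range(n_hunters):
        L = rng.uniform(-1, 1, size=dim)          # illustrative coefficient vector
        k = rng.uniform(0, 2)                     # illustrative coefficient
        s_w = rng.uniform(0, 2)                   # wind-velocity-based random number
        if s_w < 1:                               # encircling-style move toward the leader
            candidate = leader - L * np.abs(k * leader - hunters[i])
        else:                                     # otherwise move in a random direction
            candidate = hunters[i] + L * rng.uniform(-1, 1, size=dim)
        if sphere(candidate) < sphere(hunters[i]):   # keep the fitter position
            hunters[i] = candidate
    leader = hunters[np.argmin([sphere(h) for h in hunters])].copy()

print(sphere(leader))                             # should approach 0 on this toy problem
```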
The hunters move to the place where the leader is located. If the leader's move was not successful, the hunter stays in its previous position. The position is updated according to (9) when S_w < 1. Indeed, hunters can move in all directions regardless of the position angle; therefore, according to (9) and (10), the hunters can update their locations from any random position. Propagation Based on the Position Angle. The solution space can also be expanded by considering the location angle. Angle assessment is very important in determining the position of the hunter, so that a successful attack is not visible to the prey. The visualization angle of the deer (prey) is given by its own formula. Because of the difference between the angle of the wind and the angle at which the prey is seen, a parameter u is introduced for updating the position angle, where β illustrates the angle of the blowing wind; the position angle parameter is then updated accordingly. After obtaining the location angle, the new location can be calculated with the corresponding formula, and the prey does not see the hunter because of the viewing angle. Propagation Based on the Position of the Successor. To make use of exploration, it is possible to adjust L in the encircling behavior. In the initial random search, the value of vector L cannot be assumed to be more than 1; thus, the successor location is used to provide a new update of the best solution through an exploration updating formula, where Z_s denotes the successor location of the hunters at any moment. In this algorithm, the location of the hunters is updated toward the best solution in each iteration. The best solution is obtained while |L| ≥ 1; if |L| < 1, one of the hunters is randomly selected. This mechanism creates an L switch, which can move the algorithm between the exploitation and exploration stages. Getting stuck in a local optimum is a shortcoming of the original DHO algorithm [19]. In the following, a new modification is proposed to remedy this problem. The Balanced DHO Algorithm. Here, Lévy flight (LF) is used to evolve the DHO algorithm. Lévy flight is a method that addresses the problem of premature convergence; it creates a random-walk system that helps control the local search correctly. In its formula, 0 < δ ≤ 2, R ∼ N(0, σ²) and T ∼ N(0, σ²), Γ(·) is the gamma function, D describes the step size, μ denotes the Lévy index, and R, T ∼ N(0, σ²) indicates that a Gaussian distribution with zero mean and variance σ² is used for generating the samples. Here, μ = 3/2. According to the Lévy flight system, a further equation gives the new enhanced hunter location, where Z^l_(j+1) is the new location of search agent Z_(j+1), u is limited to [0, 2], r indicates a random number in the range from 0 to 1, and Z′(t) represents a random location vector selected from the current population. To guarantee that the best solution candidates are provided, the fitter agents are kept. The flow of the balanced DHOA (BDHOA), which illustrates the steps of this process, is given in Figure 3. Validation of the BDHO Algorithm. Here, four benchmarks are proposed to analyze the BDHO algorithm, and several metaheuristic algorithms are compared with BDHOA.
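Before the benchmark comparison, the Lévy-flight perturbation that distinguishes the balanced variant can be sketched as follows. This is a generic Mantegna-style construction under the stated assumption μ = 3/2; the helper names, the step scale, and the exact way the step is mixed into the position update are illustrative assumptions rather than the paper's precise formulation.

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(dim, mu=1.5, rng=np.random.default_rng()):
    """Draw one Levy-flight step (Mantegna-style construction, Levy index mu = 3/2)."""
    sigma = (gamma(1 + mu) * sin(pi * mu / 2) /
             (gamma((1 + mu) / 2) * mu * 2 ** ((mu - 1) / 2))) ** (1 / mu)
    r = rng.normal(0.0, sigma, size=dim)          # R drawn from a zero-mean Gaussian
    t = rng.normal(0.0, 1.0, size=dim)            # T drawn from a zero-mean Gaussian
    return r / np.abs(t) ** (1 / mu)

def balanced_update(position, best, step_scale=0.01, rng=np.random.default_rng()):
    """Perturb a hunter position around the current best with a Levy step;
    the caller keeps the fitter of the old and new positions."""
    return best + step_scale * levy_step(position.size, rng=rng) * (position - best)

pos = np.array([1.0, -2.0, 0.5])
print(balanced_update(pos, best=np.zeros(3)))
```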
To do this, the benchmarks have been validated on the balanced DHOA, the ant colony optimization algorithm (ACO) [20], the gray wolf optimization algorithm (GWO) [21], the grasshopper optimization algorithm (GOA) [22,23], and particle swarm optimization (PSO) [24]. The original DHOA is also given in the table to show the capabilities of BDHOA. We simulated in Matlab R2016b on a laptop with a 2.20 GHz CPU and 6.00 GB RAM. In this section, the first benchmark function is Rastrigin, with the constraint [−512, 512] and dimension 30-50. The second benchmark function is Rosenbrock, which lies within [−2.045, 2.045] and has a dimension of 30 to 50. The third benchmark function, with a dimension of 30-50 and the constraint [−10, 10], is Ackley. The Sphere benchmark function is the fourth, with the constraint [−512, 512] and 30-50 dimensions. The comparison results, in terms of (1) mean deviation (MD) and (2) standard deviation (SD), are demonstrated in Table 2. The table shows that the mean deviation and standard deviation of the BDHOA method are lower, which is a favorable result. It can also be observed that BDHOA gives the best results compared with the original DHOA; therefore, it can be useful for obtaining an optimum solution. Breast Tumor Classification Based on the Proposed Method. CNN training is mostly performed with backpropagation. To overcome the issue of getting stuck in a local optimum, several methods have been presented. In this section, to reduce the network error, the proposed BDHO algorithm is employed instead of the backpropagation approach. The purpose of using this metaheuristic algorithm for the CNN is to minimize the value of the mean square error (MSE) function, that is, the mean of the squared differences between the network outputs and the desired values, where a_j^i represents the j-th network output and b_j^i the j-th desired value of the CNN during period t; N signifies the number of output layers and M indicates the number of data points. The CNN technique can be very useful in the rapid detection of breast tumors in MR images. In this study, CNN classification is used with two CNN models and different classifiers [5]. This classification includes (1) extraction of the features and (2) reduction of the feature dimension, briefly exhibited in the following. Extracting Features. In image processing, feature extraction means converting image data into usable information for the next stages [25]. This is performed by extracting some general or particular features of the input image [26]. Among the various feature extraction techniques, the texture technique gives more detailed information on the spatial arrangement and intensities of colors, and it is popular in medical imaging. For feature extraction in this research, two kinds of features are used: (1) Haralick features and (2) the local binary pattern (LBP). In the following, these two methods are briefly explained; a minimal code sketch of the LBP operator is also given after this paragraph. Local Binary Pattern Features. The LBP operator takes the pixel values, compares each pixel with its neighbors, and encodes the surrounding local structure of each pixel as a decimal number. In the binary labeling step, strictly negative differences are encoded as 0 and the other values as 1.
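As a minimal sketch of the LBP operator just described, and continued in the next paragraph, the example below uses scikit-image to compute LBP codes and summarize them as a histogram feature vector. The parameter choices (8 neighbors, radius 1, the "uniform" variant) and the synthetic input are assumptions for illustration, not values reported by the study.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(image, n_points=8, radius=1):
    """Compute an LBP code image and return its normalized histogram as a feature vector."""
    codes = local_binary_pattern(image, n_points, radius, method="uniform")
    n_bins = n_points + 2                          # number of 'uniform' patterns
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
    return hist

img = (np.random.default_rng(0).random((250, 250)) * 255).astype(np.uint8)   # stand-in MR slice
print(lbp_histogram(img).shape)                     # a 10-dimensional texture feature vector
```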
The binary numbers (codes) obtained from the LBP operator are read in the clockwise direction; the final extracted binary values are called local binary pattern (LBP) codes. Haralick Texture Features. The Haralick features are statistical features evaluated from gray level co-occurrence matrices (GLCM). The purpose is to evaluate the matrix and compute the neighboring gray level co-occurrences in the input image. The GLCM is a square matrix over the region of interest (ROI) that describes the correlation between a reference pixel with a given intensity value and the pixels around it in various directions. In this study, four directions, 0°, 45°, 90°, and 135°, have been employed, and the averaged values have been used as the final Haralick features. Dimension Reduction of the Features Based on ICA. At this stage, given that the feature data volume is high, a data reduction method is used to reach the desired volume, which also leads to simplification. To do this, the feature dimensions are reduced by the independent component analysis (ICA) method. ICA is a computational methodology for estimating the hidden factors that underlie a set of signals. ICA introduces a generative model for the large MRI database. Considering that ICA is a blind source separation method, and assuming that the underlying factors are non-Gaussian signals and that these subcomponents are independent of each other, ICA is a very powerful algorithm for analyzing and evaluating principal parameters. The difference is that ICA is able to find the underlying sources even where classic methods fail. In this algorithm, the measurements are given as an array of time series; the phrase Blind Source Separation (BSS) is employed for characterizing the breast signals recorded by several sensors. Finally, the input of the classifier is the image data, which is divided into two sections: training and testing images. After the data are fed into the classifier, the classifier is trained on them and predicts the labels of the test images. Figure 4 shows a diagram of the feature extraction-based method. Final Simulations. The final step is to combine the results obtained from the proposed feature-based technique and the CNN model. The main goal is to propose an accurate and efficient method to detect breast tumors from MR images. The classification approach is briefly described in the following. In the first step, the diagnosis results of the hybrid technique (proposed CNN and feature extraction-based method) are collected. After that, the results of the suggested CNN are checked: if the output is labeled as a tumor, the output is reported as cancer. Otherwise, if the output of the presented CNN is labeled as healthy, the features of the MR image are checked again according to the feature extraction-based method. In this condition, if the image is diagnosed as a tumor, the output is labeled as cancer; otherwise, it is diagnosed as healthy. Database Description. This method aims to quickly detect breast tumors in MRI using MATLAB R2016b software with a system configuration of 2.20 GHz CPU and 6.00 GB RAM. The main idea is to design an optimized CNN (convolutional neural network) to achieve promising results.
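Before the dataset details, the GLCM-based Haralick-style statistics described earlier in this section can be sketched as follows, using scikit-image (version 0.19 or later, where the functions are named graycomatrix and graycoprops). The chosen distance, the gray-level count, and the specific subset of texture properties are assumptions for illustration, averaged over the four directions stated in the text (0°, 45°, 90°, and 135°).

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def haralick_like_features(image, levels=32):
    """Average GLCM contrast, correlation, energy and homogeneity over four directions."""
    quantized = (image.astype(np.float64) / image.max() * (levels - 1)).astype(np.uint8)
    angles = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]      # 0, 45, 90, 135 degrees
    glcm = graycomatrix(quantized, distances=[1], angles=angles,
                        levels=levels, symmetric=True, normed=True)
    props = ["contrast", "correlation", "energy", "homogeneity"]
    return np.array([graycoprops(glcm, p).mean() for p in props])

roi = (np.random.default_rng(0).random((64, 64)) * 255).astype(np.uint8)   # stand-in ROI
print(haralick_like_features(roi))                 # four averaged texture features
```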
To validate this technique, it is implemented on the DCE-MRI dataset, which is commonly used for analyses of classification efficiency. The DCE-MRI dataset includes a set of 219 breast MR images collected from 105 different patients with breast cancer (angiosarcoma, inflammatory, DCIS, ILC, and LCIS) (55 tumor-like and 50 non-tumor-like malignant lesions), and 114 DCE-MRI images were found to be normal. In MATLAB, the image size is 512 × 512 pixels. The presented scheme (optimized CNN) and the feature extraction-based method are designed for analyzing the MR images. There are several types of performance analysis for evaluating classification; one of these analyses is accuracy. The accuracy determines the proportion of the number of correctly classified images to the total number of images. The accuracy results for the studied methods, including the feature-based method, the optimized CNN, and the hybrid method (feature-based and optimized CNN), are indicated in Table 3. As can be observed from Table 3, the highest efficiency is achieved when the hybrid feature-based/optimized CNN is utilized for classification. Classification accuracy can be considered an efficient indicator of the performance of a method when the test dataset contains equal numbers of samples from the classes. The results indicate the efficiency of the proposed system in the rapid diagnosis and timely treatment of the patient. To obtain further evaluations, confusion matrices have been used for performance analysis of the breast tumor classification. A confusion matrix is a two-dimensional table that is usually employed for determining classification efficiency and performance on a test set against the true values. Table 4 illustrates a sample confusion matrix for the hybrid feature-based/optimized CNN. This table is based on an investigation of breast cancers: angiosarcoma, inflammatory, and ductal carcinoma in situ (DCIS). Several indicators have been used for determining the efficiency of the classifier, particularly for each cancer tumor class [27]. The critical indicators in the classification report are specificity, precision, and sensitivity, obtained as specificity = TN/(TN + FP), precision = TP/(TP + FP), and sensitivity = TP/(TP + FN), where FP denotes false positives; TP, true positives; FN, false negatives; and TN, true negatives. Table 5 shows the final results of using the proposed technique when the optimized CNN and the feature-based classifier are used together for the detection goal. In Table 5, the value of the specificity for all the datasets is high, which illustrates the correct identification of samples without the specific disease. The proposed method is also compared with two types of well-known methods. Table 6 provides a comprehensive comparison against several state-of-the-art classification techniques [28]. In Table 6, it is clear that the precision of the proposed method is better and higher than that of the other methods. Briefly, it is observed that when using the suggested method, the values of the system efficiency indicators (precision, sensitivity, and specificity) increase. Conclusions. A new comprehensive approach was proposed for the automatic detection of breast tumors. The method is a hybrid model, including an optimized design of a convolutional neural network and a feature extraction-based technique to improve the classification efficiency.
In this study, preprocessing steps are applied, which eliminate noise and simplify classification; additionally, they increase the quality of the dataset. Thus, the data values are also normalized. The feature extraction-based method was based on Haralick texture features and was used together with independent component analysis (ICA) to reduce the dimension of the features. Simulations were performed on the DCE-MRI dataset. The results were compared across various configurations of the method, and other methods were also compared to indicate the system's efficiency. It is also possible to increase the accuracy of the study by using a variety of other metaheuristic algorithms. Furthermore, a deeper convolutional neural network model can be used in further research to classify breast cancer images. The final satisfactory results demonstrated the advantage of the suggested approach over the other methods. In the future, we will examine the proposed technique on a different dataset. The proposed method can also be generalized to the design of high-performance computer-aided diagnosis systems for other medical imaging tasks.
7,306.6
2021-07-16T00:00:00.000
[ "Medicine", "Computer Science" ]
ANALYSIS OF TOURISM SECTOR ON COMMUNITY INCOME IN GORONTALO PROVINCE IN 2015-2019 ________________________________________________________________ This study aims to determine (1) the effect of the number of tourists on people's income in the province of Gorontalo, (2) the effect of the number of hotel accommodations on the income of the people in the province of Gorontalo, and (3) the effect of the average length of stay on the income of the people in the province of Gorontalo. This study uses quantitative methods based on panel data. INTRODUCTION. The tourism sector has now developed into one of the largest industries for economic growth and community welfare in Indonesia and is expected to be a foreign exchange earner. Regions that have assets in the form of tourist attractions that are in demand can bring benefits to the tourism sector itself, where the number of tourist visits is very meaningful for the development of the tourism industry and local revenue (Purwanti, Novi dwi Dewi, 2014). Tourism is considered one of the development sectors that can spur economic growth in certain areas that have tourist attractions, while maintaining the preservation of the natural, physical, social and cultural environment. The tourism sector is a potential sector to be developed as a source of regional income (Zhang, 2020). To increase local revenue, the government needs to develop and facilitate tourism sites so that the tourism sector can contribute to economic development. The successful development of the tourism sector can increase its role in regional revenue, where tourism is the main component, also taking into account the factors that influence it, such as the number of attractions offered, the number of tourists visiting, both domestic and international, and the hotel occupancy rate (Nyoman, 2003). The support of funding allocations from the government every year allows the tourism sector to develop tourist attractions to be visited by many tourists. The number of visiting tourists gives the tourism sector the potential to increase local revenue; as a result, the number of tourist visits makes a positive contribution to local revenue. Tourist length of stay is the number of nights or days that a foreign tourist spends outside his or her country of residence. The length of stay of tourists is indeed one of the factors that determine the amount of revenue or foreign exchange received by countries that rely on foreign exchange from the tourism industry. Theoretically, the longer a tourist stays in a tourist destination, the more money is spent in the area, at least for the purposes of food and drink and hotel accommodation while staying there (Wijaya & Cynthia, 2017). Hotel occupancy rate describes the extent to which rooms are sold, compared to the entire number of rooms available for sale (Akoit & Babulu, 2021). With the availability of adequate hotel rooms, tourists are not reluctant to visit an area, especially if the hotel is comfortable to stay in. The tourists will feel safer, more comfortable, and at home, and will stay longer in the tourist destination. The tourism industry, especially activities related to lodging, will earn more income the longer the tourists stay.
As with other sectors, tourism also affects the economy of a tourist destination area or country. The size of the influence differs between one region and another or between one country and another (Muljadi, 2012; Sammeng, 2001). Based on a study conducted by the World Travel and Tourism Council (WTTC) in 2004, the tourism sector can increase regional income because of its nature as a Quick Yielding Industry. The successful development of the tourism sector means that it will increase its role in regional revenue, where tourism is the main component, also paying attention to the factors that influence it, such as the number of tourists visiting, both domestic and international, the number of hotels, and the average length of stay, which is very decisive for unstable or new residents to support tourism (Kusuma et al., 2021). Besides that, the per capita income of a tourist determines the length of stay and the ability to shop at the tourist attractions visited. Finally, it can increase regional revenue; in particular, the tourism sector aims, among other things, to expand business opportunities and create jobs. Government support is contained in Gorontalo Province regional regulation number 4 of 2011 concerning the regional spatial plan (RTRW), which stipulates several potential areas to be developed as tourism areas in Gorontalo. From Table 6, it can be concluded that the amount of Regional Original Revenue of districts and cities in Gorontalo Province has increased every year from 2015 to 2019. Local Revenue can be influenced by the number and types of taxes and levies collected by local governments, as well as the intensiveness of the management apparatus in implementing tax and levy collection. Local Revenue that tends to increase every year is expected to improve the economic conditions of districts and cities in Gorontalo Province. With the tourism industry as one of the sectors relied upon for regional revenue, the Gorontalo Provincial Government is required to be able to explore and manage its tourism potential in an effort to obtain funding sources through new breakthroughs. One of the breakthroughs is to improve the quality of existing tourism objects and to develop new ones in Gorontalo. This will encourage an increase in the number of foreign and domestic tourists, which will increase regional revenue, especially tourism levies, and will also affect the economic activities of the surrounding community, so that it can later finance the implementation of regional development. Relationship between Tourist Visits and Community Income. Soekadijo (2001) stated that tourists are people who travel from their place of residence only to stay temporarily in the place they visit. Those who are considered tourists are people who travel for pleasure, for health reasons and so on, and people who travel for meetings as representatives (for scientific, administrative, event, religious, athletic and business reasons). Consumption of the tourism sector comprises the goods and services consumed by tourists in meeting their needs, wants, and expectations during their stay in the tourist destinations they visit, ranging from travel packages, accommodation, food and beverages, transportation, recreation, culture and sports, to shopping and others (Soekadijo, 2001). Previous studies have shown that there is a relationship between tourist visits and people's income, including studies by (Nuhung et al., 2013), (Ardianti, 2017) and (Purwanti & Dewi, 2014).
Relationship between the number of hotels and people's income. Hotels play an important role in the tourism industry, because quite a few people are reluctant to visit tourist areas due to the lack of adequate hotel facilities. A hotel, according to (Batafi, 2006), is a type of accommodation that uses part or all of a building to provide commercially managed lodging, dining and drinking services, and that meets the requirements set by the government. The number of hotels can be interpreted as the number of commercially managed accommodations used for overnight stays. With regard to tax receipts, the number of hotels can determine the size of local tax revenue. This is because the amount of hotel tax revenue is determined by the tax rate, which is 10% of total hotel revenue. So the more hotels there are, the more people will be attracted to stay and the more taxes will be remitted to the government. Studies by (Alyani & Siwi, 2020), (Dewi et al., 2020) and (Sofinatun najjah, luluk fadliyanti, 2022) show that the number of hotels has a relationship with people's income. Relationship between length of stay of tourists and people's income. The length of stay of tourists is the number of nights or days spent by tourists in an accommodation such as a hotel or villa. The longer and the higher the occupancy of a hotel room or villa, the greater the hotel tax that will be paid (Gede Yoga Suastika & Nyoman Mahendra Yasa, 2017). Previous studies have shown that there is a relationship between the length of stay of tourists and people's income, including studies by (Yanti et al., 2021) and (Hanafi Ahmad, 2022). Regional Original Revenue (PAD). Regional Original Revenue (PAD) is revenue obtained by the region from sources within its own territory, which is levied based on local regulations in accordance with applicable laws and regulations (Halim, 2009: 54). Tourism Object Retribution Revenue. Revenue from tourism objects is a source of revenue originating from entry ticket fees, parking fees and other legitimate income from tourism objects. According to Law No. 34 of 2000 concerning amendments to Law No. 18 of 1997, Regional Taxes and Regional Levies are among the important sources of regional income to finance the implementation of regional government and regional development. According to (Munawir, 1997), levies are contributions to the government that can be imposed and for which direct services can be identified. The coercion here is economic, because anyone who does not benefit from the government's services will not be charged the fee. The definition and understanding of retribution have also been described elsewhere: (Sproule & White, 1997) say that levies are all payments made by individuals for using services that generate direct benefits from the service. Growth. Tourism and economic growth are linked by the various ways in which tourism can contribute to the economic development of tourist destinations. The relationship between tourism and economic growth is the basis for the dependence of various tourism-based economies on the impact of tourism for their economic development; for example, tourism provides more jobs for local residents, helps local residents start businesses that cater to tourists, leads to revenue generation from tourist spending and fiscal policy, and assists in infrastructure development (Sadono Sukirno, 2000).
One of the benefits of the tourism sector for economic development is the fact that tourist areas are keen to provide jobs for local residents. Tourism requires many services in order to sustain the industry (Nur et al., 2022). Employment is a macroeconomic factor that contributes to economic growth by providing workers with disposable income and consequently leads to an increase in the Regional Gross Domestic Product (GDP). RESEARCH METHOD. Research Type and Design. The type of research used in this study is a quantitative descriptive approach, because it provides a description of the research results. The research design used is quantitative research using the panel data method. Quantitative research is research that shows and proves theories, explains a true event or fact, and develops and describes statistics to show the relationship between variables (Narbuko & Achmadi, 2013). Based on this understanding, the researcher wants to determine whether the development of tourism sector income and the number of tourist visits have an effect on economic growth in Gorontalo Province. Research Time and Place. This research was conducted from January to October 2022, which includes all steps from preparation to research implementation. The problem formulation in this study asks about the relationship between two or more variables, where the causal relationship is a cause-and-effect relationship (Sugiyono, 2017). Data Analysis Technique. Operational Research Variables. An operational research variable is a definition, nature or value of people, objects, organizations or activities that has been determined by researchers to be studied and from which conclusions are then drawn. The variables used in this study consisted of two types, namely the dependent variable and the independent variable. The following is an explanation of the two variables. The independent variable is the variable that causes the dependent variable to arise. The dependent variables in this study consisted of Community Welfare (Y1) and Economic Growth (Y2), while the independent variable, which gives rise to the dependent variables mentioned above, is Tourism Sector Income in North Gorontalo Regency (X). Population and Research Sample. The research population used in this study was in Gorontalo Province. Data Analysis Methods and Techniques. Panel data regression analysis is a regression analysis whose data structure is panel data. Generally, parameter estimation in regression analysis with cross-section data is carried out using the least squares estimation method, called Ordinary Least Squares (OLS). The general regression equation is as follows: Y = f(X) (3.1), where Y is the dependent variable and X is the independent variable. Multiple Linear Regression Analysis. A multiple linear regression equation with 2 (two) independent variables relates Y to the regressors: if an X changes value, it will also affect the value of Y. The panel data estimation gives the following results (a minimal code sketch of this estimation step is given after this paragraph): 2. TW = Total tourists have a positive effect on people's income, meaning that for every increase of 1 tourist, income will increase by 0.001757 thousand Rupiah. 3. JH = The number of hotels has a positive effect on community income, meaning that for every increase of 1 in the number of hotels, income will increase by 6.634391 thousand rupiah.
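As a sketch of the estimation step described above, the example below fits a pooled OLS regression of community income on the three regressors (number of tourists, number of hotels, average length of stay) and checks multicollinearity with variance inflation factors. The synthetic data, the variable names, and the use of statsmodels are assumptions for illustration only; they are not the study's actual data, software, or estimates (the coefficients 0.001757 and 6.634391 come from the authors' own panel).

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 30                                       # e.g. 6 regions x 5 years of pooled observations
JW  = rng.uniform(1e4, 1e5, n)               # number of tourists
JH  = rng.uniform(10, 60, n)                 # number of hotels
RLM = rng.uniform(1.0, 3.0, n)               # average length of stay (nights)
income = 500 + 0.0018 * JW + 6.6 * JH - 40 * RLM + rng.normal(0, 50, n)   # toy outcome

X = sm.add_constant(np.column_stack([JW, JH, RLM]))
fit = sm.OLS(income, X).fit()                # pooled OLS; fixed or random effects would refine this
print(fit.params)                            # intercept and slope estimates
print(fit.pvalues)                           # compare each p-value with alpha (1%, 5%, 10%)

# Variance inflation factors (VIF < 10 suggests no serious multicollinearity)
for j, name in enumerate(["JW", "JH", "RLM"], start=1):
    others = np.delete(X, j, axis=1)
    r2 = sm.OLS(X[:, j], others).fit().rsquared
    print(name, "VIF =", round(1.0 / (1.0 - r2), 2))
```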
Number of Tourists and Community Income. From the analysis results, the probability value of the variable number of tourists (JW) is 0.0011. Comparing the p-value with the α value, the p-value is less than 1%, so H0 is rejected: the number of tourists has a significant effect on community income from 2015 to 2019. Number of Hotels and Community Income. From the analysis results, the probability value of the variable number of hotels (JH) is 0.2370. Comparing the p-value with the α value results in a p-value of more than 10%, so H0 is accepted: the number of hotels does not significantly affect community income from 2015 to 2019. Average Length of Stay and Community Income. From the analysis results, the probability value of the average length of stay (RLM) variable is 0.0000. Comparing the p-value with the α value results in a p-value of less than 1%, so H0 is rejected: the average length of stay has a significant effect on community income from 2015 to 2019. The test results above show that the VIF values of the three independent variables are less than ten (VIF < 10), so there is no evidence of multicollinearity between the independent variables in the model. Tourist Visits and Community Income. There is a positive and significant relationship between the number of tourist visits and per capita community income. This means that if the number of domestic and foreign tourists visiting Gorontalo Province increases, the per capita income of the community will increase. The increasing number of visiting tourists will increase demand in the tourism sector, which will lead to the consumption of various kinds of goods and services in tourist destinations. Gorontalo's position between the two provinces of Central Sulawesi and North Sulawesi is very strategic as a connecting route for foreign tourists traveling through the two provinces. This can be an advantage that should be utilized by Gorontalo Province, because artificial and natural tourism objects are found in Gorontalo Province, spread across almost all regencies and cities. An increase in the number of tourist visits, both foreigners and local residents, will have implications for the provision of tourism infrastructure, because tourism activities create demand for both consumption and investment, which in principle will lead to the production of goods and services for tourism. In an effort to respond to tourist demand, investment is needed in the fields of transportation, accommodation, handicraft industries and consumer products, service industries and restaurants. The development of the tourism sector in this area is highly dependent on the number of tourists visiting the area. The arrival of tourists will increase the income of the area visited; foreign tourists will bring foreign currency into the country. The more tourist visits, the more positive the impact on the tourist attraction, especially as a source of regional income.
The regression coefficient given by the number of positive tourists on public income is 0.001757 thousand rupiah.If multiplied by one thousand rupiah, every increase of 1 tourist, the community will receive 1,757 Rupiah per tourist who comes to Gorontalo Province.This finding is in line with previous research conducted by Ramadhany & Ridlwan, (2018) which found that the number of tourist arrivals will have positive implications for community income.Not only that, Holik (2016) also found that the increasing number of tourist visits will increase people's per capita income. However, this research is not in line with research conducted by Nepal et al (2019),, stating that the arrival of tourists will reduce the community's per capita income, this is due to the presence of tourists who do not allocate part of their funds to buy the production of goods and services that have been prepared by the community and the community loses with this impact.Nepal et al (2019), also provide another reason, namely that the more tourists visit one tourist attraction, the more it will damage the tourist attraction. Number of Hotels to Community Income The number of hotels has a positive and significant effect on community per capita income.Tourists who visit to enjoy tourism destinations will tend to look for a place to stay, one of which is a hotel.The number of hotels proxied in this study based on data released by the Central Bureau of Statistics is the number of hotel buildings in each regency and city in Gorontalo Province. A brief explanation of why the number of hotels can have a positive and significant effect on people's per capita income, because basically, this segment includes short-term stays for tourists and fellow travelers, as well as food and beverages for direct consumption. The number and type of ancillary services offered in this category vary widely.providing long-term accommodation, such as primary residence, or preparing food or beverages not intended for direct consumption or sale in retail trade so that an increase in the number of hotels will result in an increase in the contribution of the accommodation and food and beverage sector in each region. Hotels are capital-intensive and laborintensive service businesses, in the sense that they require large capital with a large number of workers as well.Labor is part of the population that can produce goods and services if there is demand for goods and services.The more hotels that are built in an area, the more it will increase people's income (Ghofur, 2016). 
This finding is in line with previous research conducted by Anjasmara & Setiawina (2019), who found that the number of hotels has positive implications for community income. The number of hotels will stimulate an increase in the quantity of labor; the more labor is needed, the higher the wage level, thus increasing people's income. This research can be used as a guide in developing similar research. Practically, this study provides benefits for other related parties. The advantage of tourism development, according to Sulaiman in (Khoir, Fawaidul Ani, Hety Mutika Hartanto, 2018), is that it can open up employment opportunities, increase the income of the community or region, support regional development movements, and stimulate the growth of indigenous Indonesian culture. In the economic development aspect, the development of tourist attractions in an area will encourage tourism demand. This can trigger the arrival of tourists, which is an opportunity for local people to open a business, so that the community will earn income from the effects of tourism itself. Historically, tourism has been a development priority for Gorontalo Province. In terms of tourism potential, Gorontalo Province has a lot of diverse tourism potential spread throughout the region, and this potential has a competitive advantage. The increase in tourist visits in Gorontalo is influenced by the richness and beauty of its natural scenery as well as the diversity of ethnicities and cultures in the region, nicknamed the city of Serambi Madinah. In addition, government support for the development of tourist destinations in Gorontalo is implemented very intensively, considering that Gorontalo is a very promising area for growing tourist destinations. Gorontalo's strategic location between Central Sulawesi Province and North Sulawesi Province is very favorable as a connecting route passed by foreign tourists in the two provinces. Previous studies have examined the role of tourism on community income from different perspectives: Ramadhany & Ridlwan (2018), Holik (2016), (Ghofur, 2016), Anjasmara & Setiawina (2019), Hermanto (2020) and Alghifari (2018). However, studies that comprehensively investigate the tourism sector's effect on people's income are still limited. This study aims to investigate the effect of the total number of tourists, the number of hotel accommodations and the average length of stay on people's income in Gorontalo Province.
The regional revenue sector plays a very important role, because through this sector it can be seen to what extent a region can finance government activities and regional development. According to Law Number 33 of 2004 concerning the financial balance between central and regional governments, local revenue is revenue obtained from the local tax sector, local levies, the results of regionally owned companies, the results of managing separated regional assets, and other legitimate local revenue. Per Capita Income. Regional Original Revenue (PAD) is revenue obtained by the region from sources within its own territory, which is levied based on local regulations in accordance with applicable laws and regulations (Halim, 2009: 54). Tourism is an opportunity especially for marginalized areas with several export options. Tourists who are interested in the cultural values and assets that exist in the country, for example the culture in developing countries, promote tourism through the preservation of heritage values, thus enabling the poor to increase their income through their culture and assets (Honey & Gilpin, 2009). 2. Tourism is one of the export sectors through which poor people in a country can become exporters by selling goods to foreign tourists. The location of this research is the tourism sector in Gorontalo Province. The population in this study consists of the annual data on Economic Growth, the annual data on the Revenue Development of the Tourism Sector, and the annual data on the Number of Tourist Visits, while the samples used in this study are the annual data on economic growth, the development of tourism sector income, and the number of tourist visits in Gorontalo Province. The sampling method used is purposive sampling, a sampling method in which sample members are chosen at the discretion of the data collectors based on considerations that are in accordance with certain aims and objectives. The data used in this sampling are Economic Growth, the Number of Tourist Visits and the Revenue Development of the Tourism Sector in Gorontalo Province in 2015-2019. Data Collection Technique. The data collection technique used in this research is documentation: the researchers collected archival data at the Central Statistics Agency (BPS) of Gorontalo Province. In addition, this research also uses data collection with library research, which is done by reading, searching and analyzing books and other scientific references related to the topic being studied. The function is then formed as an econometric panel model of community income on the three regressors, of the form Y_it = α + β1 JW_it + β2 JH_it + β3 RLM_it + e_it. From the regions of Gorontalo Province we can see the contribution made by each region for the three variables (Number of Tourists, Number of Hotels, and Average Length of Stay) that affect community income.
Figure 1. Data Normality. Source: Estimation Output Processed, 2022 (Attached). From the results of the covariance test for heteroscedasticity with the Glejser method, it can be concluded that the p-value > α (0.05). This means that there is no heteroscedasticity in the model, or the variables in the estimated model have no relationship with the residual values across the variations of each variable. Average Length of Stay and Community Income. The average length of stay has a negative and significant effect on community per capita income: an increase in the average length of stay can reduce the per capita income of the community. This is certainly interesting, because the estimates for tourist visits and the number of hotels have a positive effect on people's per capita income, while the average length of stay has not been able to increase the contribution of the accommodation and food and beverage sector in each region under study. This can happen because in 5 regencies and 1 city from 2015 to 2018 there was no significant increase in the average length of stay of tourists; for example, Bone-bolango Regency has only 1 hotel and no place to stop or rest provided by the local government. Another reason the average length of stay has a negative and significant impact on per capita income is that the development and promotion of attractions, facilities and accessibility are not yet robust and optimal. These three pillars are central themes for local governments because they are closely related to the average length of stay. Overlapping and multiple interests in the tourism sector make cross-sector coordination more difficult. When a tourist destination does not have attractive attractions and is merely monotonous, tourists will not stay long at the tourist attraction. The attraction is an important component in drawing tourists to visit, and it can be improved by innovating with the resources in each region, because each region in Gorontalo Province has its own uniqueness. Then, in terms of the supporting facilities needed for tourism activities, such as accommodation, entertainment and culinary needs, these can have an impact on the length of stay of tourists. This research shows that these facilities have not been optimally promoted by travel agencies, so tourists do not stay in a place for long. In addition, the lack of maintenance of tourism facilities will also reduce the length of stay of tourists. Another component is accessibility, related to transportation and supporting infrastructure such as roads to tourist attractions. Each region is likely to have its own accessibility conditions; if communications and road infrastructure are adequate, the distance traveled by tourists to tourist attractions will not be a big problem. However, the lack and difficulty of finding transportation make it impossible to connect one place to another. Road infrastructure is also an important factor, because if the road to a tourist destination is bad, tourists will be discouraged from going to that destination. The three components mentioned earlier are some of the reasons why the average length of stay of tourists has implications for a decrease in the per capita income of the community. Therefore, local governments need to encourage investment in the context of destination development in the region. The existing tourism potential always needs to be explored, both in terms of nature and culture,
arts, crafts and culinary. In addition, tourists staying more than one day will usually get a discount, and this rate reduction will reduce hotel revenue. Tourists will also choose a place to stay not far from the destination location, which will also reduce the accommodation spending of tourists. The results of this study are in line with research conducted by Hermanto (2020), which stated that the length of stay of tourists has a negative impact on community per capita income. Alghifari (2018) likewise agrees with the results of this study that the length of stay responds negatively to community per capita income. CONCLUSION. After the research results have been presented and discussed in detail, the researcher can draw conclusions from the research entitled "Analysis of the Tourism Sector on Community Income in Gorontalo Province", as follows: 1) There is a positive and significant relationship between the number of tourist visits and community per capita income. This means that if the number of domestic and foreign tourists visiting Gorontalo Province increases, the community's per capita income will increase; 2) The number of hotels has a positive and significant effect on people's per capita income. This means that if the number of hotels increases, the per capita income of the community will increase; 3) The average length of stay has a negative and significant effect on the per capita income of the community. An increase in the average length of stay can reduce the community's per capita income, because tourists staying more than one day will usually get a discount, and this rate reduction will reduce hotel revenue; tourists will also choose a place to stay not far from the destination location, which reduces their accommodation spending. Suggestion. From the results of the study and its conclusions, the researchers made several recommendations so that the tourism sector could increase the per capita income of the community: 1) There is a need to update the data on tourists visiting all tourist facilities in the area and to record the different forms of travel spending at each attraction. This is necessary to evaluate and formulate a better tourism development strategy in the future, without forgetting innovation; 2) Investment decisions in the tourism sector must consider the components of attractions, facilities and accessibility. This encourages tourism to contribute to sustainable economic growth; 3) Local governments must also encourage investment in the context of developing regional destinations. Existing tourism potential must always be explored, both in terms of nature and culture, crafts and culinary. Implications and Limitations. This research has both theoretical and practical implications. Theoretically, this research adds knowledge about tourism development and adds insight, especially for the writers and generally for readers, about the number of tourist visits and how the development of the tourism sector affects people's income in Gorontalo Province. Table 1. Domestic Tourist Visit Data for Districts and Cities in Gorontalo Province. Table 2. Data on Foreign Tourist Visits for Districts and Cities in Gorontalo Province. Table 4. Average Length of Stay of Domestic Tourists by Regency and City in Gorontalo. Table 6. Regional Original Revenue of Districts and Cities in Gorontalo Province (in million rupiah). Table 9. Region Intercept Table. Table 10. t-Statistic Test Table.
6,946.8
2022-10-23T00:00:00.000
[ "Economics" ]
Gene–gene and gene-environment interactions on cord blood total IgE in Chinese Han children Background IL13, IL4, IL4RA, FCER1B and ADRB2 are susceptible genes of asthma and atopy. Our previous study has found gene–gene interactions on asthma between these genes in Chinese Han children. Whether the interactions begin in fetal stage, and whether these genes interact with prenatal environment to enhance cord blood IgE (CBIgE) levels and then cause subsequent allergic diseases have yet to be determined. This study aimed to determine whether there are gene–gene and gene-environment interactions on CBIgE elevation among the aforementioned five genes and prenatal environmental factors in Chinese Han population. Methods 989 cord blood samples from a Chinese birth cohort were genotyped for nine single-nucleotide polymorphisms (SNPs) in the five genes, and measured for CBIgE levels. Prenatal environmental factors were collected using a questionnaire. Gene–gene and gene-environment interactions were analyzed with generalized multifactor dimensionality methods. Results A four-way gene–gene interaction model (IL13 rs20541, IL13 rs1800925, IL4 rs2243250 and ADRB2 rs1042713) was regarded as the optimal one for CBIgE elevation (testing balanced accuracy = 0.5805, P = 9.03 × 10–4). Among the four SNPs, only IL13 rs20541 was identified to have an independent effect on elevated CBIgE (odds ratio (OR) = 1.36, P = 3.57 × 10–3), while the other three had small but synergistic effects. Carriers of IL13 rs20541 TT, IL13 rs1800925 CT/TT, IL4 rs2243250 TT and ADRB2 rs1042713 AA were estimated to be at more than fourfold higher risk for CBIgE elevation (OR = 4.14, P = 2.69 × 10–2). Gene-environment interaction on elevated CBIgE was found between IL4 rs2243250 and maternal atopy (OR = 1.41, P = 2.65 × 10–2). Conclusions Gene–gene interaction between IL13 rs20541, IL13 rs1800925, IL4 rs2243250 and ADRB2 rs1042713, and gene-environment interaction between IL4 rs2243250 and maternal atopy begin in prenatal stage to augment IgE production in Chinese Han children. Introduction The worldwide prevalence of allergic diseases has dramatically increased during the past few decades, resulting in heavy burden to the whole society and huge medical expenditure around the world [1]. Allergic diseases have long been attributed to IgE-mediated inflammatory responses [2]. Evidence has demonstrated that regulation of IgE production may begin in utero, reflected in the levels of cord blood IgE (CBIgE) [3]. Elevated CBIgE has been shown to be a risk factor for the subsequent development of allergic diseases [4]. Recent studies have indicated that certain genes and environmental factors may interact to elevate CBIgE levels [5][6][7], with the heritability estimated around 84-95% [8]. Ober and Hoffjan reviewed 118 genes associated with asthma or atopy, among which 25 have been replicated in six or more independent samples and thus are considered to be true susceptibility genes [9]. The elite group of susceptible genes of asthma and atopy replicated in more than ten different studies include IL13, IL4, IL4RA, FCER1B and ADRB2, five important inflammatory genes associated with IgE levels [10][11][12]. Our previous study has found that gene-gene interactions on childhood asthma exist between these genes in Chinese Han population [13]. 
Whether the gene-gene interactions among the aforementioned five genes begin in fetal stage, and whether these genes interact with prenatal environment to enhance CBIgE production and then cause subsequent allergic diseases have yet to be determined. This study attempts to explore whether there are gene-gene and gene-environment interactions on CBIgE elevation among genetic variants in IL13, IL4, IL4RA, FCER1B and ADRB2 genes and prenatal environmental factors in Chinese Han population. This is the first study to investigate gene-gene and gene-environment interactions on CBIgE in the mainland of China. Elucidation of genetic and environmental determinants of CBIgE may allow for detection and prevention of allergic sensitization in early life. Study participants This study included 989 Chinese Han children from the Shanghai Allergy Cohort, which was a prospective birth cohort with infants recruited between 2012 and 2013 at two large tertiary hospitals in Shanghai, Xinhua Hospital and the International Peace Maternity & Child Health Hospital. Written informed consent was obtained from the mothers prior to delivery. Prenatal and perinatal epidemiologic and clinical information along with cord blood samples were collected by trained research nurses. The study was approved by the Ethics Committee of Xinhua Hospital and the International Peace Maternity & Child Health Hospital (approval number: XHEC-C-2012-003), and conducted according to the principles in the Declaration of Helsinki. Epidemiologic and clinical information collection Trained research nurses conducted face-to-face interviews using structured questionnaires, collecting information on maternal age, height, prepregnancy weight, education level, maternal atopy, prenatal pet exposure, prenatal active or secondhand smoking, and family income. Maternal atopy was referred to those mothers who had asthma, allergic rhinitis or atopic dermatitis along with detectable specific IgE. Prenatal pet exposure was defined as keeping cats or dogs at home during pregnancy. Information on parity, previous pregnancy, gestational age, date of birth, delivery mode, infants' gender, birth weight and antenatal complications was obtained from medical records. CBIgE measurement CBIgE levels were determined by using ImmunoCAP Total IgE Low Range Assay [5] on the Phadia 250 (Thermo Scientific ™ , Waltham, Massachusetts, USA) according to the standard manufacturer's protocols. Elevation of CBIgE levels was cut-off at ≥ 0.5 KU/L as previously described [5,6]. Selection of genes and single nucleotide polymorphisms This study focused on five candidate genes, including IL13, IL4, IL4RA, FCER1B and ADRB2, which are key inflammatory genes affecting IgE levels [10][11][12] and had been found associated with asthma or atopy by more than ten different studies [9]. Our previous study had identified gene-gene interactions on asthma between these genes in Chinese Han children [13]. Within these genes, nine known functional single-nucleotide polymorphisms (SNPs) [13] with minor allele frequency greater than 10% were chosen for analysis, as shown in Table 1. Genotyping Genomic DNA was extracted from cord blood using QIAamp DNA Blood Mini Kit (QIAGEN, Hilden, Germany). Genotyping of the nine SNPs was performed by matrix-assisted laser desorption / ionization time of flight mass spectrometry (MALDI-TOF MS) [14] using the MassARRAY iPLEX platform (Sequenom Inc, San Diego, CA, USA) according to the manufacturer's instructions. Laboratory personnel were blinded to CBIgE status. 
The overall call rate was 98.6%. Genotyping quality control included 5% duplicate and negative samples. Genotyping concordance rate was higher than 98%. Statistical analysis Associations between CBIgE elevation and the epidemiologic characteristics of the study subjects were assessed by the χ 2 test. The Hardy-Weinberg equilibrium test for each of the nine SNPs was performed in the total population with the χ 2 statistics. Association of elevated CBIgE in subjects with each SNP was analyzed by using the Pearson's χ 2 test. In addition to the allelic test of association, dominant and recessive genetic models were tested for the nine SNPs by logistic regression analysis. P value, odds ratio (OR) and 95% confidence interval (95% CI) were calculated by using the PLINK program (http:// pngu. mgh. harva rd. edu/ ~purce ll/ plink/). A twotailed P value ≤ 0.0055 after Bonferroni Multiple Testing correction was considered statistically significant. Gene-gene interactions were analyzed with GMDR (Version 1.0), which is a free, open-source interaction analysis tool, aimed to perform gene-gene interaction with generalized multifactor dimensionality reduction (GMDR) methods [15]. The model that maximizes the testing balanced accuracy (TBA) and minimizes the statistical significance is selected. TBA indicates the accuracy of classification of cases and controls. Heuristically, a satisfactory TBA is higher than 0.55. Gene-gene interactions revealed by GMDR analyses were validated by χ 2 tests. Gene-environment interactions were evaluated by logistic regression analysis and GMDR approach. Linkage disequilibrium (LD) was calculated for the SNPs located on one chromosome. The detection power of the sample size in this study was 0.88 based on the minor allele frequency of 0.25 and its OR for CBIgE elevation at 1.30. Association between CBIgE elevation and the epidemiologic characteristics of the study subjects There were 989 Chinese Han infants in this study, of whom 27.1% had elevated CBIgE levels. Table 2 presents the distribution of CBIgE concentrations by epidemiologic characteristics of the study subjects. Cesarean section and male gender were associated with elevated CBIgE levels (P < 0.05). Association between CBIgE elevation and single SNPs All the nine SNPs were in Hardy-Weinberg equilibrium (P > 0.05). As shown in Table 3, SNPs IL13 rs1295686 and IL13 rs20541 were solely associated with CBIgE elevation. The A allele of rs1295686 (OR = 1.37, P = 2.73 × 10 -3 ) and T allele of rs20541 (OR = 1.36, P = 3.57 × 10 -3 ) were significantly increased in elevated CBIgE group compared with normal group. The most significant association with CBIgE elevation was found under recessive model for the two SNPs. Significant association with CBIgE elevation was not found among the other seven loci (P > 0.0055, after Bonferroni Multiple Testing correction). Gene-gene interactions on CBIgE elevation Gene-gene interactions on CBIgE elevation were explored among all the nine SNPs by GMDR approach. Totally, there were four models exhibiting a TBA higher than 0.55, as shown in Table 4. Based on the TBA and P values, significant multi-loci interactions were found in the four models (P < 0.05). Among them, the fourway interaction model (IL13 rs20541, IL13 rs1800925, IL4 rs2243250 and ADRB2 rs1042713) which showed the highest TBA and lowest P value (TBA = 0.5805, P = 9.03 × 10 -4 ), was regarded as the optimal one. 
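To make the single-SNP analysis pipeline concrete, the following minimal sketch illustrates the kind of tests described above — a Hardy-Weinberg equilibrium χ² test, an allelic 2 × 2 comparison and a dominant-model logistic regression with the Bonferroni threshold of 0.0055 — on synthetic genotype and CBIgE-status data. The study itself used PLINK and GMDR; this Python re-creation is only illustrative, and the genotype frequencies, effect size and variable names are assumptions, not values from the cohort.

```python
# Illustrative re-implementation of the per-SNP analysis described above:
# Hardy-Weinberg equilibrium (chi-square), an allelic 2x2 test, and a
# dominant-model logistic regression with a Bonferroni threshold of 0.0055.
# Genotypes are coded 0/1/2 copies of the minor allele; the data are synthetic.
import numpy as np
from scipy import stats
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 989                                   # cohort size reported in the paper
geno = rng.choice([0, 1, 2], size=n, p=[0.56, 0.38, 0.06])   # toy SNP, MAF ~ 0.25
elevated = rng.random(n) < (0.22 + 0.05 * geno)              # toy CBIgE >= 0.5 kU/L flag

# --- Hardy-Weinberg equilibrium test ---
p_minor = geno.mean() / 2
expected = n * np.array([(1 - p_minor) ** 2, 2 * p_minor * (1 - p_minor), p_minor ** 2])
observed = np.bincount(geno, minlength=3)
hwe_chi2 = ((observed - expected) ** 2 / expected).sum()
hwe_p = stats.chi2.sf(hwe_chi2, df=1)     # 1 d.f. for a biallelic HWE test

# --- allelic chi-square test (2 x 2: allele counts vs CBIgE status) ---
alleles_case = np.array([geno[elevated].sum(), 2 * elevated.sum() - geno[elevated].sum()])
alleles_ctrl = np.array([geno[~elevated].sum(), 2 * (~elevated).sum() - geno[~elevated].sum()])
chi2, allelic_p, _, _ = stats.chi2_contingency(np.vstack([alleles_case, alleles_ctrl]))

# --- dominant-model logistic regression (carrier of >= 1 minor allele) ---
carrier = (geno >= 1).astype(float)
X = sm.add_constant(carrier)
fit = sm.Logit(elevated.astype(float), X).fit(disp=False)
odds_ratio = np.exp(fit.params[1])

bonferroni_alpha = 0.05 / 9               # nine SNPs -> 0.0055, as in the text
print(f"HWE p = {hwe_p:.3f}, allelic p = {allelic_p:.3g}, "
      f"dominant OR = {odds_ratio:.2f}, significance threshold = {bonferroni_alpha:.4f}")
```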
As the four SNPs that made up the optimal model are located on one chromosome, their pairwise LD was calculated (r² < 0.3), indicating low LD between them. Interactions between the four SNPs of the optimal model were further validated by χ² tests. Table 5 shows that individuals carrying IL13 rs20541 TT, IL13 rs1800925 CT/TT, IL4 rs2243250 TT and ADRB2 rs1042713 AA had a significantly higher risk of CBIgE elevation compared with those without any of the four risk genotypes (OR = 4.14, P = 2.69 × 10⁻²), and also greater than those with fewer than four risk genotypes. Gene-environment interactions on CBIgE elevation Logistic regression analysis and the GMDR approach were applied to search for potential gene-environment interactions on CBIgE elevation between the nine SNPs and environmental factors including prenatal pet exposure, prenatal active or secondhand smoking, maternal atopy, maternal age, maternal prepregnancy BMI, delivery mode, infants' gender and season of birth. By logistic regression analysis, it was found that the C allele of IL4 rs2243250 interacted with maternal atopy to elevate CBIgE levels (OR = 1.41, P = 2.65 × 10⁻²), as shown in Table 6. However, no significant gene-environment interaction was found by GMDR analysis. Discussion IgE-mediated reaction is the central component of allergic diseases. Five key inflammatory genes affecting IgE levels, including IL13, IL4, IL4RA, FCER1B and ADRB2 [10][11][12], have been demonstrated to be associated with asthma or atopy by more than ten different studies [9]. Our previous study has found that gene-gene interactions on asthma exist between these genes in Chinese Han children [13]. This study attempted to determine whether the interactions begin in utero, and whether these genes interact with prenatal environmental factors to increase CBIgE levels and induce subsequent allergic diseases. Of the models tested using the GMDR approach, the four-way gene-gene interaction model consisting of IL13 rs20541, IL13 rs1800925, IL4 rs2243250 and ADRB2 rs1042713 was chosen as the optimal one for CBIgE elevation based on its TBA and P value. Among the four SNPs, only IL13 rs20541 was identified to have an independent effect on CBIgE elevation, while the other three had small but synergistic effects. Carriers of IL13 rs20541 TT, IL13 rs1800925 CT/TT, IL4 rs2243250 TT and ADRB2 rs1042713 AA were estimated to be at more than fourfold higher risk for CBIgE elevation. Among these genes and prenatal environmental factors, only IL4 rs2243250 and maternal atopy were found to interact on elevated CBIgE. This is the first study to investigate gene-gene and gene-environment interactions on CBIgE in the mainland of China. (Table 2 notes: missing data — maternal age (n = 25); maternal prepregnancy BMI (n = 27); maternal education (n = 24); maternal atopy (n = 36); prenatal pet exposure (n = 27); prenatal active or secondhand smoking (n = 25); parity (n = 23); previous pregnancy (n = 23); gestational age (n = 23); season of birth (n = 31); delivery mode (n = 23); gender (n = 24); birth weight (n = 23); antenatal complications (n = 49).
The missing data were not from the same individuals for each variable. CBIgE: cord blood IgE. a Maternal atopy referred to mothers who had asthma, allergic rhinitis or atopic dermatitis along with detectable specific IgE. b Keeping cats or dogs at home during pregnancy. c Pregnancy hypertension, diabetes, infection or intrauterine growth retardation. * The χ² test was used to analyze associations between CBIgE elevation and the epidemiologic characteristics.) To our knowledge, this study is also the first to identify gene-gene interactions between IL13 rs20541, IL13 rs1800925, IL4 rs2243250 and ADRB2 rs1042713 on CBIgE elevation. The IL13 and IL4 genes encode the cytokines interleukin 13 (IL13) and IL4, which share a common signaling pathway in binding to their receptors on human B cells, and switch immunoglobulin production from IgM to IgE [16]. The ADRB2 gene encodes the beta2-adrenergic receptor (ADRB2). Stimulation of ADRB2 on B cells responding to allergen enhances IgE production via a unique signaling pathway, independently of class switch recombination [17,18]. IL13, IL4 and ADRB2 are all associated with IgE levels. The IL13 rs20541 TT genotype, the IL13 rs1800925 T allele, the IL4 rs2243250 TT genotype and the ADRB2 rs1042713 AA genotype have been associated with increased IL13 concentration [19], enhanced IL13 promoter activity [20], augmented IL4 levels [21], and decreased downregulation of ADRB2 [22], respectively. How these four variants interact with each other biologically to promote IgE production in the prenatal stage needs further functional studies in vitro and in vivo. In this study, a gene-environment interaction on elevated CBIgE was found between IL4 rs2243250 and maternal atopy. Maternal atopy has been reported to modify the cord blood immune response, and it may provide an intrauterine environment that influences fetal immune development and results in allergic predisposition [23][24][25]. IL4 gene polymorphism affects cytokine IL4 levels [26]. How maternal atopy interacts with IL4 gene variants to enhance antenatal IgE production needs future biological studies. Our study confirmed the independent role of IL13 rs20541 and rs1295686 on CBIgE elevation, and also found the association of cesarean section and male gender with elevated CBIgE levels, consistent with previous reports [3,[5][6][7]27]. However, no interactions were identified among them. To date, only a few studies have explored gene-gene and gene-environment interactions on CBIgE elevation. One study in a predominantly black sample reported that three IL13 SNPs (rs1295686, rs1800925 and rs206974) could jointly influence CBIgE concentration [3]. One study in a birth cohort in Korea identified interactions between reactive oxygen species genes, prenatal exposure to home renovation and maternal atopy on CBIgE response [28]. Another study, in a Chinese population in Taiwan, found that IL13 rs20541, male sex and prenatal environmental tobacco smoke interacted on antenatal IgE production [5]. In this study, we found a four-way genetic interaction among IL13 rs20541, IL13 rs1800925, IL4 rs2243250 and ADRB2 rs1042713, and a two-way gene-environment interaction between IL4 rs2243250 and maternal atopy on CBIgE elevation. The variation of the gene-gene and gene-environment interactions on fetal IgE production may be partly explained by the different populations and the different genetic and environmental factors examined in different studies.
Therefore, when we move forward to identify constellations of interacting genes and environments that influence antenatal IgE production, replication studies in different populations are required. There are some limitations in this study. First, only five genes (IL13, IL4, IL4RA, FCER1B and ADRB2) were chosen as candidate genes. However, these five genes are susceptible genes of asthma and atopy replicated in more than ten different studies [9], and our previous study has found that gene-gene interactions on asthma exist between these genes in Chinese Han children [13]. Second, the subjects' environmental exposures were evaluated using a self-reported questionnaire, which might lead to an underestimation of the associations of certain environmental exposures. Genes and environmental factors interact to elevate CBIgE concentrations [5][6][7], with the heritability estimated around 84-95% [8]. In our future studies, more candidate genes especially those from genomewide association studies should be included and direct measurement of certain environmental exposures is needed. Third, cord blood IgA concentrations were not measured to exclude subjects whose circulation was contaminated by maternal blood. However, previous studies using cord blood IgA levels as an indicator of maternal contamination have reported a very low rate of contamination [29]. Therefore CBIgE is unlikely to be contaminated by maternal IgE [3]. In summary, Gene-gene interaction between IL13 rs20541, IL13 rs1800925, IL4 rs2243250 and ADRB2 rs1042713, and gene-environment interaction between IL4 rs2243250 and maternal atopy begin in fetal stage to increase IgE production in Chinese Han children. After future functional and replication studies, these findings may be translated into specific strategies for early prediction and prevention of allergy.
3,914.4
2021-07-09T00:00:00.000
[ "Medicine", "Biology" ]
Radiation reaction in electron-beam interactions with high-intensity lasers Charged particles accelerated by electromagnetic fields emit radiation, which must, by the conservation of momentum, exert a recoil on the emitting particle. The force of this recoil, known as radiation reaction, strongly affects the dynamics of ultrarelativistic electrons in intense electromagnetic fields. Such environments are found astrophysically, e.g. in neutron star magnetospheres, and will be created in laser-matter experiments in the next generation of high-intensity laser facilities. In many of these scenarios, the energy of an individual photon of the radiation can be comparable to the energy of the emitting particle, which necessitates modelling not only of radiation reaction, but of quantum radiation reaction. The worldwide development of multi-petawatt laser systems in large-scale facilities, and the expectation that they will create focussed electromagnetic fields with unprecedented intensities > 10²³ W cm⁻², has motivated renewed interest in these effects. In this paper I review theoretical and experimental progress towards understanding radiation reaction, and quantum effects on the same, in high-intensity laser fields that are probed with ultrarelativistic electron beams. In particular, we will discuss how analytical and numerical methods give insight into new kinds of radiation-reaction-induced dynamics, as well as how the same physics can be explored in experiments at currently existing laser facilities. I. INTRODUCTION It is a well-established experimental fact that charged particles, accelerating under the action of externally imposed electromagnetic fields, emit radiation [1]. The characteristics of this radiation depend strongly upon the magnitude of the acceleration as well as the shape of the particle trajectory. For example, if relativistic electrons are made to oscillate transversely by a field configuration that has some characteristic frequency ω₀, they will emit radiation with characteristic frequency 2γ²ω₀, where γ is their Lorentz factor. Given an ω₀ corresponding to a wavelength of one micron and an electron energy of order 100 MeV, this easily approaches the 100s of keV or multi-MeV range [2]. The total power radiated, as we shall see, increases strongly with γ and the magnitude of the acceleration. We can then ask: as radiation carries energy and momentum, how do we account for the recoil it must exert on the particle? Equivalently, how do we determine the trajectory when one electromagnetic force acting on the particle is imposed externally and the other arises from the particle itself? That this remains an active and interesting area of research is a testament not only to the challenges in measuring radiation reaction effects experimentally [3], but also to the difficulties of the theory itself [4,5]. The 'correct' formulation of radiation reaction within classical electrodynamics has not yet been established, nor has the complete corresponding theory in quantum electrodynamics. While these points are undoubtedly of fundamental interest, it is important to note that radiation reaction and quantum effects will be unavoidable in experiments with high-intensity lasers, and therefore these questions are of immense practical interest as well.
This is motivated by the fast-paced development of large-scale, multipetawatt laser facilities [6]: today's facilities reach focussed intensities of order 10 22 Wcm −2 [7][8][9], and those upcoming, such as Apollon [10], ELI-Beamlines [11] and Nuclear Physics [12], aim to reach more than 10 23 Wcm −2 , with the added capability of providing multiple laser pulses to the same target chamber. At these intensities, radiation reaction will be comparable in magnitude to the Lorentz force, rather than being a small correction, as is familiar from storage rings or synchrotrons. Furthermore, significant quantum corrections to radiation reaction are expected [4], which profoundly alters the nature of particle dynamics in strong fields. The purpose of this review is to introduce the means by which radiation reaction, and quantum effects on the same, are understood, how they are incorporated into numerical simulations, and how they can be measured in experiments. While there is now an extensive body of literature considering experimental prospects with future laser systems, our particular focus will be the relevance to today's high-intensity lasers. It is important to note that much of the same physics can be explored by probing such a laser with an ultrarelativistic electron beam. Previously such experiments demanded a large conventional accelerator [13,14], but now 'all-optical' realization of the colliding beams geometry is possible thanks to ongoing advances in laser-wakefield acceleration [15,16]. Indeed, the first experiments to measure radiation-reaction effects in this configuration have recently been reported by Cole et al. [17], Poder et al. [18]. This review attempts to provide the theory context for the interest in their results. Let us begin by introducing the various parameters that determine the importance of radiation emission, radiation reaction, and quantum effects. It will be helpful to consider the concrete and it can be shown to be both Lorentz-and gauge-invariant [19]. The solution to the equations of motion, where the force is given by the Lorentz force only, can be found in many textbooks (see Gibbon [20] for example), so we will only summarize it here. The electromagnetic field tensor for the wave is eF µν = ma 0 ∑ i f i (φ )(k µ ε i ν − k ν ε i µ ), where k is the wavevector, primes denote differentiation with respect to phase φ = k.x, and the ε 1,2 are constant polarization vectors that satisfy ε 2 i = −1 and k.ε i = 0. Then the four-momentum of the electron p may be written in terms of the potential eA µ = ma 0 ∑ i f i (φ )ε i µ : Translational symmetry guarantees that k.p = k.p 0 . The electron trajectory x µ (φ ) = (p µ /k.p) dφ . Let us say that the electron initially counterpropagates into a circularly polarized, monochromatic wave, with velocity β 0 and Lorentz factor γ 0 . The electron is accelerated by the wave in the longitudinal direction, parallel to its wavevector, reaching a steady drift velocity of β d . Transforming to the electron's average rest frame (ARF), as shown in fig. 1, we find that the electron executes circular motion with Lorentz factor γ = (1 + a 2 0 ) 1/2 , velocity β = a 0 (1 + a 2 0 ) −1/2 and radius r = a 0 /[γ 0 (1 + β 0 )ω 0 ]. That γ is constant tells us that there is a phase shift of π/2 between the rotation of the velocity and electric field vectors v and E, so v · E = 0 and the external field does no work on the charge. The instantaneous acceleration of the charge is non-zero so the electron emits radiation while describing this orbit. 
We can use classical synchrotron theory [21] to calculate the energy radiated in a single cycle E rad , as a fraction f of the electron energy in the ARF γ m, with the result f = E rad /(γ m) = 4πR c /3. The magnitude of the radiation losses is controlled by the invariant classical radiation reaction parameter [22] R c ≡ αa 2 0.13 E 0 500 MeV Here E 0 is the initial energy of the electron, I 0 = E 2 0 the laser intensity and λ = 2π/ω 0 its wavelength. If we define 'significant' radiation damping to be an energy loss of approximately 10% per period [23], we find the threshold to be R c 0.024, or a 0 γ 1/2 0 7 × 10 2 for a laser with a wavelength of 0.8 µm. At this point the force on the electron due to radiative losses must be included in the equations of motion. We can see this directly by comparing the magnitudes of the radiation reaction and Lorentz forces. Estimating the former as F rad = E rad /(2πr ) and the Lorentz force as F ext = γ m/r , we have that F rad /F ext 2R c /3. For R c 1 we enter the radiation-dominated regime [24][25][26]. We will discuss how the recoil due to radiation emission is included in classical electrodynamics in section II A. Before doing so, let us also consider the spectral characteristics of the radiation emitted by the accelerated electron. In principle the periodicity of the motion, and its infinite duration, means that the frequency spectrum is made up of harmonics of the ARF cyclotron frequency. However, recall that at large γ a 0 , relativistic beaming means that most of the radiation is emitted in the forward direction into a cone with half-angle 1/γ . The length of the overlap between the electron trajectory and this cone defines the formation length l f , which is the characteristic distance over which radiation is emitted. A straightforward geometrical calculation gives the ratio between l f and the circumference of the orbit C = 2πr The invariance of a 0 suggests we could have reached this result in a covariant way; indeed, a full determination of the size of the phase interval that contributes to emission gives the same result, even quantum mechanically [27]. The smallness of the formation zone means that the spectrum is broadband, with frequency components up to a characteristic value ω γ 3 /r . Comparing this characteristic frequency to the cyclotron frequency (in the average rest frame) ω c = 1/r gives us a measure of the classical nonlinearity: ω ω c a 3 0 . At a 0 1, the radiation is made up of very high harmonics and is therefore well-separated from the background. The ratio between the frequency ω and the electron energy in the ARF χ = ω /(γ m) is another useful invariant parameter Restoring factors ofh and c we can show that χ ∝h, unlike R c . It therefore parametrizes the importance of quantum effects on radiation reaction [27,28], as can be seen by the fact that if χ ∼ 1, an individual photon of the radiation can carry off a substantial fraction of the electron's energy. By setting γ 0 = 1 in eq. (6), we can show that χ is equal to the ratio of the electric field in the instantaneous rest frame of the electron to the so-called critical field of QED [29,30] E cr ≡ m 2 e = 1.326 × 10 18 Vm −1 , which famously marks the threshold for nonperturbative electron-positron pair creation from the vacuum [31]. The two parameters R c and χ allow us to characterize the importance of classical and quantum radiation reaction respectively. We show these as functions of a 0 and χ, the classical and quantum nonlinearity parameters, in fig. 2. 
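As a rough numerical companion to fig. 2, the sketch below estimates a₀, χ and R_c for a head-on collision. It assumes the common engineering formula a₀ ≈ 0.855 λ[µm] √(I/10¹⁸ W cm⁻²) for linear polarization, the head-on scaling χ ≈ 2γa₀ħω₀/(m_ec²), and R_c = αa₀χ as quoted in the caption of fig. 2; the function names and example parameters are illustrative, not taken from the text.

```python
# Order-of-magnitude estimate of the classical (R_c) and quantum (chi) radiation
# reaction parameters for an electron beam colliding head-on with a laser pulse.
# Assumed relations (not quoted verbatim from the text): a0 ~ 0.855 * lambda[um] *
# sqrt(I / 1e18 W cm^-2) for linear polarisation, chi ~ 2 * gamma * a0 * hbar*w0 / (me c^2),
# and R_c = alpha * a0 * chi as in the figure 2 caption.
import math

ALPHA = 1.0 / 137.036          # fine-structure constant
ME_C2_EV = 0.511e6             # electron rest energy (eV)
HBAR_C_EV_UM = 0.19733         # hbar*c in eV um

def laser_a0(intensity_w_cm2: float, wavelength_um: float) -> float:
    """Normalised amplitude for a linearly polarised laser (common engineering formula)."""
    return 0.855 * wavelength_um * math.sqrt(intensity_w_cm2 / 1e18)

def collision_parameters(electron_energy_mev: float, intensity_w_cm2: float,
                         wavelength_um: float) -> tuple[float, float, float]:
    gamma = electron_energy_mev * 1e6 / ME_C2_EV
    a0 = laser_a0(intensity_w_cm2, wavelength_um)
    photon_energy_ev = 2 * math.pi * HBAR_C_EV_UM / wavelength_um
    chi = 2 * gamma * a0 * photon_energy_ev / ME_C2_EV   # head-on geometry
    r_c = ALPHA * a0 * chi
    return a0, chi, r_c

# Example: a 500 MeV beam meeting a 10^21 W cm^-2, 0.8 um pulse head-on.
a0, chi, r_c = collision_parameters(500.0, 1e21, 0.8)
print(f"a0 = {a0:.1f}, chi = {chi:.3f}, R_c = {r_c:.4f}")
```

For these example numbers the sketch gives a₀ ≈ 22, χ ≈ 0.13 and R_c ≈ 0.02, i.e. close to the per-cycle threshold for significant classical damping quoted above.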
It is evident that, as a₀ increases, it requires less and less electron energy to enter the radiation-dominated regime. Indeed, if the acceleration is provided entirely by the laser, so that γ ∼ a₀, radiation reaction becomes dominant at about the same a₀ at which quantum effects become important, assuming that ω₀ corresponds to a wavelength of 0.8 µm. However, for a₀ ≲ 50, as is accessible with existing lasers [7][8][9], it is not possible to probe radiation reaction via direct illumination of a plasma. Instead, the experiments illustrated in fig. 2 have used preaccelerated electrons to explore the strong-field regime, thereby boosting both R_c and χ. Fig. 2 (caption): The importance, and type, of radiation reaction effects can be parametrized by a₀, the normalized intensity of the laser field or classical nonlinearity parameter, and χ, the quantum nonlinearity parameter. Classical radiation damping becomes strong when R_c = αa₀χ > 0.01 (light blue) and dominates when R_c > 0.1 (darker blue). Quantum corrections to the spectrum become necessary when χ > 0.1. Electron-positron pair creation and QED cascades are important for χ > 1. Experiments that have explored quantum effects with intense lasers are shown by open circles [13,14,17,18], along with a recent result with aligned crystals [32]. (Note that, as R_c is defined on a per-cycle basis, it would be possible for classical radiation reaction effects to be large in long laser pulses while remaining below the threshold for quantum effects.) The next generation of laser facilities will reach a₀ in excess of 100, perhaps even 1000 [10][11][12]. The plasma dynamics explored in such experiments will be strongly affected by radiation reaction and quantum effects. A. Classical radiation reaction In classical electrodynamics, radiation reaction is the response of a charged particle to the field of its own radiation. The first equation of motion to include both the external and self-induced electromagnetic forces in a manifestly covariant and self-consistent way was obtained by Dirac [33]. This solution starts from the coupled Maxwell and Lorentz equations and features a mass renormalization that is needed to eliminate divergences associated with a point-like charge [34,35]. The result is generally referred to as the Lorentz-Abraham-Dirac (LAD) equation. For an electron with four-velocity u, charge −e and mass m, it comprises the Lorentz force, due to the field tensor F^μν of the externally applied electromagnetic field, and a second, self-force term that involves the derivative d²u^μ/dτ², where τ is the proper time. Although the LAD equation is an exact solution of the Maxwell-Lorentz system, using it directly turns out to be problematic. The momentum derivative d²u^μ/dτ² in the RR term leads to so-called runaway solutions, in which the electron energy increases exponentially in the absence of external fields, and to pre-acceleration, in which the momentum changes in advance of a change in the applied field [36]. These issues have prompted searches for alternative classical theories of radiation reaction [37][38][39][40][41] that have more satisfactory properties (see the review by [5] for details). The most widely used classical theory is that proposed by Landau and Lifshitz [42]. They realized that if the second (RR) term in eq. (9) were much smaller than the first in the instantaneous rest frame of the charge, it would be possible to reduce the order of the LAD equation by substituting the Lorentz-force acceleration, du^μ/dτ → (e/m)F^μν u_ν, in the RR term.
The result, called the Landau-Lifshitz equation, is first-order in the electron momentum and free from the pathological solutions of the LAD equation [5]: The following two conditions for the characteristic length scale over which the field varies L and its magnitude E must be fulfilled in the instantaneous rest frame for the order reduction procedure to be valid: L λ C and E E cr /α, where λ C = 1/m is the Compton length. Note that both of these are automatically fulfilled in the realm of classical electrodynamics [4], as quantum effects can only be neglected when L λ C and E E cr . The former condition ensures that the electron wavefunction is well-localized and the latter means recoil at the level of the individual photon is negligible [4]. One reason to favour the Landau-Lifshitz equation is that all physical solutions of the LAD equation are solutions of the Landau-Lifshitz equation [43]. Once the trajectories are determined, the self-consistent radiation is obtained from the Liénard-Wiechart potentials, which give the electric and magnetic fields of a charge in arbitrary motion [1]. The spectral intensity of the radiation from an ensemble of N e electrons, the energy radiated per unit frequency ω and solid angle Ω, is given in the far field by where n the observation direction, and r k and v k are the position and velocity of the kth particle at time t [1]. In plane electromagnetic waves Among the other useful properties of eq. (10) is that it can solved exactly if the external field is a plane electromagnetic wave [44]. Taking this field to be eF , using the same definitions as in section I, eq. (10) is most conveniently expressed in terms of the lightfront momentum u − ≡ k.p/(mω 0 ), scaled perpendicular momentaũ x,y ≡ u x,y /u − , and phase φ : and The remaining component u + is determined by the mass-shell condition u − u + − u 2 x − u 2 y = 1 and the position by integration of where u − 0 is the initial lightfront momentum, the classical radiation reaction parameter R c = a 2 0 u − 0 ω 0 /m as in eq. (2), and dψ. The choice of notation here reflects the fact that f (φ ) is proportional to the electric field and so I(φ ) is like an integrated energy flux. We use eq. (14) to solve eq. (13), obtainingũ i (φ ) and then where u i,0 is the initial value of the perpendicular momentum component i and H i (φ ) = φ −∞ f i (ψ)I(ψ) dψ. The electron trajectory in the absence of radiation reaction is obtained by setting α = 0, in which case we recover eq. (1) as expected. In section I we estimated that the electron would radiate in a single cycle a fraction 4πR c /3 of its total energy. Using our analytical result eq. (14) and assuming γ 1 so that u − 2γ, we can show this fraction is actually E rad /(γ 0 m) = (4πR c /3)/(1 + 4πR c /3). Here the denominator represents radiation-reaction corrections to the energy loss, guaranteeing that E rad /(γ 0 m) < 1. With these corrections, the energy emitted, according to the Larmor formula, is equal to the energy lost, according to the Landau-Lifshitz equation (see Appendix A of Di Piazza [45] for a direct calculation of momentum conservation). The emission spectrum eq. (11) may also be expressed in terms of an integral over phase. The number of photons scattered per unit (scaled) frequency s = ω/ω 0 and solid angle is [46,47] where the scaled four-position ξ ≡ ω 0 x, and ε and n are the four-polarization and propagation direction of the scattered photon. 
Given these relations and the analytically determined trajectory, we can numerically evaluate the number of photons scattered to given frequency and polar angle by integrating eq. (16), summed over polarizations, over all azimuthal angles 0 ≤ ϕ < 2π. B. Quantum corrections: suppression and stochasticity We showed in fig. 2 that in many scenarios of interest, reaching the regime where radiation reaction becomes important automatically makes quantum effects important as well. This raises the question: what is the quantum picture of radiation reaction? Let us revisit the example we studied classically in section I, that of an electron emitting radiation under acceleration by a strong electromagnetic wave. One might instinctively liken this scenario to inverse Compton scattering, as energy and momentum are automatically conserved when the electron absorbs a photon (or photons) from the plane wave and emits another, higher energy photon. However, the recoil is proportional toh and vanishes in the classical limit; we would then recover Thomson scattering rather than radiation reaction. The solution is that, in the regime a 0 1 and χ 1, quantum radiation reaction can be identified with the recoil on the electron due its emission of multiple, incoherent photons [22]. These conditions express the following: a 0 1 means that the formation length is much smaller than the wavelength of the external field, by eq. (4), so the coherent contribution is suppressed; and χ 1 means pair creation can be neglected. The latter is important because QED is inherently a many-body theory and it is possible for the final state to contain many more electrons than the initial state. As the number of photons N γ ∝ α ∝ 1/h and the momentum change of the electron ∝h for each photon, we have that the total momentum change ∝h 0 and therefore a classical limit exists [4]. This suggests that one way to determine the 'correct' theory of classical radiation reaction is to start with a QED result and take the limith → 0. This has been accomplished for both the momentum change [48,49] and the position [50]. In particular, Ilderton and Torgrimsson [50] were able to show that, to first order in α, only the LAD, Landau-Lifshitz and Eliezer-Ford and O'Connell formulations of radiation reaction were consistent with QED. In both the classical and quantum regimes, the force of radiation reaction is directed antiparallel to the electron's instantaneous momentum, and its magnitude depends on the parameter χ. We defined this earlier for the particular case of an electron in an electromagnetic wave [see eq. (6)]. In a general electromagnetic field F µν , where p = γm(1, v) is the electron four-momentum. χ depends on the instantaneous transverse acceleration induced by the external field: in a plane EM wave, where E and B have the same magnitude and are perpendicular to each other, χ = γ |E| (1 − cos θ )/E cr , where θ is the angle between the electron momentum and the laser wavevector, and it is therefore largest in largest in counterpropagation. A curious consequence of eq. (17) is the existence of a radiation-free direction: no matter the configuration of E and B, there exists a particular v that makes χ vanish [51]. Electrons in extremely strong fields tend to align themselves with this direction, any transverse momentum they have being rapidly radiated away [51]. As this direction is determined purely by the fields, the self-consistent evolutions of particles and fields is determined by hydrodynamic equations [52]. 
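The radiation-free direction mentioned above can also be illustrated numerically. For an ultrarelativistic electron moving along a unit vector n, χ is controlled by the transverse part of the Lorentz force, |n × (E + n × B)|, and the radiation-free direction is the n that drives this to zero. The brute-force search below is only a sketch under that interpretation (field values and sample count are arbitrary); it is not the method of ref. [51].

```python
# Numerical illustration of the 'radiation-free direction' discussed above.
# For an ultrarelativistic electron moving along unit vector n, the quantum
# parameter chi is controlled by the transverse Lorentz force per unit charge,
# |n x (E + n x B)|; the radiation-free direction is the n that nulls it.
# Brute-force search over directions -- a sketch, not an optimised solver.
import numpy as np

def transverse_force(n: np.ndarray, e_field: np.ndarray, b_field: np.ndarray) -> float:
    n = n / np.linalg.norm(n)
    force = e_field + np.cross(n, b_field)        # Lorentz force per unit charge, v ~ c
    return float(np.linalg.norm(np.cross(n, force)))

def radiation_free_direction(e_field, b_field, n_samples=100_000, seed=1):
    rng = np.random.default_rng(seed)
    candidates = rng.normal(size=(n_samples, 3))
    candidates /= np.linalg.norm(candidates, axis=1, keepdims=True)
    residuals = np.array([transverse_force(n, e_field, b_field) for n in candidates])
    best = int(np.argmin(residuals))
    return candidates[best], residuals[best]

# Arbitrary field configuration (arbitrary units):
E = np.array([0.3, 0.1, 0.0])
B = np.array([0.0, 0.2, 0.4])
n_best, residual = radiation_free_direction(E, B)
print("radiation-free direction ~", np.round(n_best, 3), "| residual transverse force:", residual)
```

The residual printed at the end is limited only by the angular resolution of the random sampling; a finer search (or a proper root-finder) drives it towards zero, consistent with the statement that such a direction exists for any E and B.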
The larger the value of χ, the greater the differences between the quantum and classical predictions of radiation emission. Classically there is no upper limit on the frequency spectrum, whereas in the quantum theory there appears a cutoff that guarantees ω < γm. Besides this cutoff, spinflip transitions enhance the spectrum at high energy [53]. Let us work in the synchrotron limit, wherein the field may be considered constant over the formation length (i.e. l f λ , using eq. (4)). The classical emission spectrum, the energy radiated per unit frequency ω = f γm and time by an electron with quantum parameter χ and Lorentz factor γ, is Two quantum corrections emerge when χ is no longer much smaller than one: the non-neglible recoil of an individual photon means that the spectrum has a cutoff at f = 1; and the spin contribution to the radiation must be included. The former can be included directly by modifying (18), which yields the spectrum of a spinless electron (shown in orange in fig. 3). A neat exposition of this simple substitution is given by Lindhard [54] in terms of the correspondence principle (see also Sørensen [55]). Then when the spin contribution is added, we obtain the full QED result [27,28,56] where we quote the spin-averaged and polarization-summed result. This is shown in blue in fig. 3. The number spectrum dN γ dω = ω −1 dP q dω (χ, γ) has an integrable singularity ∝ ω −2/3 in the limit ω → 0. The total number of photons N γ = dN γ dω dω is finite. The combined effect of these corrections is to reduce the instantaneous power radiated by an electron. This reduction is quantified by the factor g(χ) = P q /P cl , which takes the form [21,28] where K is a modified Bessel function of the second kind and Γ(2/3) 1.354. The limiting expressions given in eq. (21) are within 5% of the full result for χ < 0.05 and χ > 200 respectively. A simple analytical approximation to eq. (20) that is accurate to 2% for arbitrary χ is g(χ) [56]. The changes to the classical radiation spectrum and the magnitude of g(χ) are shown in fig. 3. Note that the total power P q = 2αm 2 χ 2 g(χ)/3 always increases with increasing χ. Figure 3 shows that the radiated power at χ ∼ 1 is less than 20% its classically predicted value. While this suppression does have a marked effect on the particle dynamics, it is not the only quantum effect. As is discussed in section I, χ is the ratio between the energies of the typical photon and the emitting electron. When this approaches unity, even a single emission can carry off a large fraction of the electron energy, and the concept of a continuously radiating particle breaks down. Instead, electrons lose energy probabilistically, in discrete portions. The importance of this discreteness may be estimated by comparing the typical time interval between emissions, ∆t = ω /P, with the timescale of the laser field 1/ω 0 [57]. Equation (19) yields for the average photon energy ω 0.429γ χm for χ 1 and 0.25γm for χ 1; the radiated power P = 2αm 2 χ 2 g(χ)/3. We find We expect stochastic effects to be at their most significant when ω 0 ∆t 1, which implies that the total number of emissions in an interaction is relatively small but χ is large. Consider, for example, the interaction of a beam of electrons with a plane electromagnetic wave, where the Lorentz factors of the electrons are distributed γ ∼ dN e dγ . The distribution is characterized by a mean µ ≡ γ and variance σ 2 ≡ γ 2 − µ 2 . 
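Before turning to how the moments of the electron energy distribution evolve, it is convenient to have a numerical handle on the suppression factor g(χ) introduced above. The sketch below uses a closed-form fit that is widely quoted in the QED-PIC literature — an assumption on my part, not necessarily the approximation cited alongside eq. (20) — and it reproduces the statement that the radiated power at χ ≈ 1 is below 20% of the classical value.

```python
# The quantum 'Gaunt factor' g(chi) = P_quantum / P_classical suppresses the
# radiated power. The closed-form fit below is one widely used in QED-PIC codes
# (an assumption here -- it is not necessarily the exact expression of eq. (20)):
#   g(chi) ~ [1 + 4.8 (1 + chi) ln(1 + 1.7 chi) + 2.44 chi^2]^(-2/3)
import math

def gaunt_factor(chi: float) -> float:
    """Approximate ratio of quantum to classical synchrotron power."""
    return (1.0 + 4.8 * (1.0 + chi) * math.log(1.0 + 1.7 * chi) + 2.44 * chi ** 2) ** (-2.0 / 3.0)

for chi in (0.01, 0.1, 0.5, 1.0, 2.0, 10.0):
    print(f"chi = {chi:5.2f}  ->  g(chi) ~ {gaunt_factor(chi):.3f}")
```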
Under classical radiation reaction, higher energy electrons are guaranteed to radiate more than their lower energy counterparts (P ∝ γ 2 ), with the result that both the mean and the variance of γ decrease over the course of the interaction [59]. This is still the case if the radiated power is reduced by the Gaunt factor g(χ), i.e. a 'modified classical' model is assumed (see section III A), because radiation losses remain deterministic [60]. Under quantum radiation reaction, radiation losses are inherently probabilistic. While µ will still decrease (more energetic electrons radiate more energy on average), the width of the distribution σ 2 can actually grow [59,61]. Ridgers et al. [62] derive the following equations for the temporal evolution of these quantities, under quantum radiation reaction: where · · · denotes the population average and g 2 (χ) = χ dP q / χ dP cl is the second moment of the emission spectrum. Only the first term of eq. (24) is non-zero in the classical limit, and it is guaranteed to be negative. The second term represents stochastic effects and is always positive. Broadly speaking, the latter is dominant if χ is large, the interaction is short, or the initial variance is small [62,63]. The evolution of higher order moments, such as the skewness of the distribution, are considered in Niel et al. [63]. A distinct consequence of stochasticity is straggling [64], where an electron that radiates less (or no) energy than expected enters regions of phase space that would otherwise be forbidden. Unlike stochastic broadening, which can occur in a static, homogeneous electromagnetic field, straggling requires the field to have some non-trivial spatiotemporal structure. If an unusually long interval passes between emissions, an electron may be accelerated to a higher energy or sample the fields at locations other than those along the classical trajectory [65]. In a laser pulse with a temporal envelope, for example, electrons that traverse the intensity ramp without radiating reach larger values of χ than would be possible under continuous radiation reaction; this enhances high-energy photon production and electron-positron pair creation [66]. If the laser duration is short enough, it is probable that the electron passes through the pulse without emitting at all, in so-called quenching of radiation losses [67]. The quantum effects we have discussed in this section emerge, in principle, from analytical results including the emission spectrum [eq. (19)]. While further analytical progress can be made in the quantum regime, using the theory of strong-field QED (see section III B), modelling more realistic laser-electron-beam or laser-plasma interactions generally requires numerical simulations. Much effort has been devoted to the development, improvement, benchmarking and deployment of such simulation tools over the last few years. In the following section we review these continuing developments. A. Classical regime A natural starting point is the modelling of classical radiation reaction effects. In the absence of quantum corrections, we have all the ingredients we need to formulate a self-consistent picture of radiation emission and radiation reaction. We showed in section I how using only the Lorentz force to determine the charge's motion and therefore its emission led to an inconsistency in energy balance. This is remedied by using either eq. (9) or eq. (10) as the equation of motion, in which case the energy carried away in radiation matches that which is lost by the electron. 
Implementations of classical radiation reaction in plasma simulation codes have largely favoured the Landau-Lifshitz equation (or a high-energy approximation thereto), as it is first-order in the momentum and the additional computational cost is not large [68][69][70][71]. These codes have not only been used to study radiation reaction effects in laser-plasma interactions [60,[72][73][74][75][76], but also whether there are observable differences between models of the same [77,78]. The radiation reaction force proposed by Sokolov [41] has also been implemented in some codes [79,80], but note that it is not consistent with the classical limit of QED [50]. It is also possible to solve LAD equation numerically via integration backward in time [81]. Given data on the trajectories of an ensemble of electrons (usually a subset of the all electrons in the simulations), eq. (11) can be used to obtain the far-field spectrum in a simulation where classical radiation reaction effects are included [23,82,83]. Equation (11) is valid across the full range of ω (pace the quantum cutoff at ω = γm), including the low-frequency region of the spectrum where collective effects are important: ω < n 1/3 e , where n e is the electron number density. This region does not, however, contribute very much to radiation reaction; this is dominated by photons near the synchrotron critical energy ω c n 1/3 e . Thus the spectrum can be divided into coherent and incoherent parts, that are well separated in terms of their energy [57]. In the latter region, the order of the summation and integration in eq. (11) can be exchanged, and the total spectrum determined by summing over the single-particle spectra. In a particle-in-cell code for example, the electromagnetic field is defined on a grid of discrete points and advanced self-consistently using currents that are deposited onto the same grid [84]. Defining the grid spacing to be ∆, this scheme will directly resolve electromagnetic radiation that has a frequency less than the Nyquist frequency π/∆. Given appropriately high resolution, this accounts for the coherent radiation generated by the collective dynamics of the ensemble of particles. The recoil arising from higher frequency components, which cannot be resolved on the grid, and in any case as a self-interaction is neglected, is accounted for by the radiation reaction force. Further simplification is possible if the interference of emission from different parts of the trajectory is negligible. As indicated in section I, at high intensity a 0 1, the formation length of the radiation is much smaller than the timescale of the external field (see eq. (4)). This being the case, rather than using eq. (11), we may integrate the local emission spectrum eq. (18) over the particle trajectory, assuming that, at high γ, the radiation is emitted predominantly in direction parallel to the electron's instantaneous velocity [46,85,86]. The approach is naturally extended to account for quantum effects, by substituting for the classical synchrotron spectrum eq. (18) the equivalent result in QED, eq. (19). One consequence of doing so is that the radiated power is reduced by the factor g(χ), given in eq. (20). This should be reflected in a reduction in the magnitude of the radiation-reaction force. Consequently, a straightforward, phenomenological way to model quantum radiation reaction is to use a version of eq. (10) where the second term is scaled by g(χ). 
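A minimal sketch of this 'modified classical' prescription is given below: the mean energy loss of an electron colliding head-on with a constant-amplitude wave is integrated with the classical synchrotron power scaled by g(χ). The field model (fixed a₀ with no pulse envelope), the g(χ) fit and the numerical constants are illustrative assumptions rather than the implementation of any particular code.

```python
# A minimal sketch of the deterministic 'modified classical' model: the electron's
# mean energy loss follows the classical synchrotron power scaled by g(chi).
# Setup assumed here (not from the text): head-on collision with a constant-amplitude
# wave of normalised intensity a0 and photon energy hw0, so chi ~ 2*gamma*a0*hw0/(me c^2),
# and radiated power P = (2/3) * alpha * (me c^2)^2 / hbar * chi^2 * g(chi).
import math

ALPHA = 1.0 / 137.036
ME_C2_J = 8.187e-14            # electron rest energy (J)
HBAR_J_S = 1.055e-34           # reduced Planck constant (J s)
HW0_EV = 1.55                  # photon energy for a 0.8 um laser (eV)
ME_C2_EV = 0.511e6

def gaunt_factor(chi: float) -> float:
    # Widely used fit for the quantum suppression of the radiated power (assumption).
    return (1.0 + 4.8 * (1.0 + chi) * math.log(1.0 + 1.7 * chi) + 2.44 * chi ** 2) ** (-2.0 / 3.0)

def evolve_gamma(gamma0: float, a0: float, duration_fs: float, dt_fs: float = 0.01,
                 quantum: bool = True) -> float:
    """Integrate d(gamma)/dt = -P/(me c^2) with forward Euler steps."""
    gamma = gamma0
    for _ in range(int(duration_fs / dt_fs)):
        chi = 2.0 * gamma * a0 * HW0_EV / ME_C2_EV
        suppression = gaunt_factor(chi) if quantum else 1.0
        power = (2.0 / 3.0) * ALPHA * ME_C2_J ** 2 / HBAR_J_S * chi ** 2 * suppression
        gamma -= power / ME_C2_J * dt_fs * 1e-15
        if gamma <= 1.0:
            return 1.0
    return gamma

gamma0 = 2000.0                 # ~1 GeV electrons
for label, quantum in (("classical", False), ("modified classical", True)):
    final = evolve_gamma(gamma0, a0=20.0, duration_fs=100.0, quantum=quantum)
    print(f"{label:>18}: gamma {gamma0:.0f} -> {final:.0f} after 100 fs at a0 = 20")
```

Because radiation losses are applied continuously, this model captures the reduction of the mean energy but, as noted above, none of the stochastic broadening.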
This 'modified classical' model has been used in studies of laser-electron-beam [23,60,66] and laser-plasma interactions [87,88] as a basis of comparison with a fully stochastic model (shortly to be introduced), as well as in experimental data analysis [18,32]. It has been shown that this approach yields the correct equation of motion for the average energy of an ensemble of electrons in the quantum regime [62,63]. It is, however, deterministic, and therefore neglects the stochastic effects we discussed in section II B. B. Quantum regime: the 'semiclassical' approach In section II B we discussed how 'quantum radiation reaction' could be identified with the recoil arising from multiple, incoherent emission of photons. Indeed, if χ 1, any or all of these photons can exert a significant momentum change individually. Figure 2 tells us that we generally require a 0 1 to enter the quantum radiation reaction regime with lasers, which necessitates a non-perturbative approach to the theory. This is provided by strong-field QED, which separates the the electromagnetic field into a fixed background, treated exactly, and a fluctuating part, treated perturbatively [89]; see the reviews by Di Piazza et al. [4], Ritus [27], Heinzl [90] or a tutorial overview by Seipt [91] which discusses photon emission in particular. Although it is the most general and accurate approach, strong-field QED is seldom used to model experimentally relevant configurations of laser-electron interaction [92]. In a scatteringmatrix calculation, the object is to obtain the probability of transition between asymptotic free states; as such, complete information about the spatiotemporal structure of the background field is required. Analytical results have only been obtained in field configurations that possess high symmetry [93], e.g. plane EM waves [27] or static magnetic fields [28]. The assumption that the background is fixed also means that back-reaction effects are neglected, even though it is expected that QED cascades will cause significant depletion of energy from those background fields [94][95][96]. Futhermore, the expected number of interactions per initial particle (the multiplicity) is much greater than one in many interaction scenarios. At present, cutting-edge results are those in which the final state contains only two additional particles, e.g. double Compton scattering [97][98][99][100] and trident pair creation [101][102][103][104][105], due to the complexity of the calculations. The need to overcome these issues has motivated the development of numerical schemes that can model quantum processes at high multiplicity in general electromagnetic fields. In this article we characterize these schemes as 'semiclassical', by virtue of the fact that they factorize a QED process into a chain of first-order processes that occur in vanishingly small regions linked by classically determined trajectories, as illustrated in fig. 5. The rates and spectra for the individual interactions are calculated for the equivalent interaction in a constant, crossed field, which may be generalized to an arbitrary field configuration under certain conditions. The first key result is that, at a 0 1, the formation length of a photon (or an electron-positron pair) is much smaller than the length scale over which the background field varies (see section I) and so emission may be treated as occurring instantaneously [27]. 
The second is that if χ² ≫ |f|, |g| and f², g² ≪ 1, where f = (E² − B²)/E²_cr and g = E·B/E²_cr are the two field invariants, the probability of a QED process is well approximated by its value in a constant, crossed field: P(χ, f, g) ≃ P(χ, 0, 0) + O(f) + O(g) [see Appendix B of Baier et al. [56]]. The combination of the two is called the locally constant, crossed field approximation (LCFA). The first requires the laser intensity to be large, whereas the second requires the particle to be ultrarelativistic and the background to be weak (as compared to the critical field of QED). We will discuss the validity of these approximations, and efforts to benchmark them, in section III C. Within this framework, the laser-beam (or laser-plasma) interaction is essentially treated classically, and quantum interactions such as high-energy photon emission are added by hand. The evolution of the electron distribution function f = f(t, x, p), including the classical effect of the background field and stochastic photon emission, is governed by a kinetic equation [62,106] in which W_γ(p, k′) is the probability rate for an electron with momentum p to emit a photon with momentum k′. A direct approach to kinetic equations of this kind is to solve them numerically [59,107,108], or to reduce them by means of a Fokker-Planck expansion in the limit χ ≪ 1 [63]. However, the most popular is a Monte Carlo implementation of the emission operator [the right hand side of eq. (25)], which naturally extends single-particle or particle-in-cell codes that solve for the classical evolution of the distribution function in the presence of externally prescribed, or self-consistent, electromagnetic fields [65,106]. This method is discussed in detail in Gonoskov et al. [57], Ridgers et al. [109], so we only summarize it here for photon emission. The electron distribution function is represented by an ensemble of macroparticles, each of which represents a large number w of real particles (w is often called the weight). The trajectory of a macroelectron between discrete emission events is determined solely by the Lorentz force. Each is assigned an optical depth against emission τ = −log(1 − R) for pseudorandom 0 ≤ R < 1, which evolves as dτ/dt = −W_γ, where W_γ is the probability rate of emission, until the point where it falls below zero. Emission is deemed to occur instantaneously at this point and τ is reset. The energy of the photon ω′ = |k′| is pseudorandomly sampled from the quantum emission spectrum dN_γ/dω′ = ω′⁻¹ dP_q/dω′(χ, γ) [see eq. (19)] and the electron recoil is determined by the conservation of momentum p = p′ + k′ and the assumption that k′ ∥ p if γ ≫ 1. If desired, a macrophoton with the same weight as the emitting macroelectron can be added to the simulation. Electron-positron pair creation by photons in strong electromagnetic fields is modelled in an analogous way to photon emission [57,109]. Thus there are two distinct descriptions of the electromagnetic field. One component is treated as a classical field (in a PIC code, this would be discretized on the simulation grid) and the other as a set of particles. In principle this leads to double-counting; however, as we discussed in section III A, the former lies at much lower frequency than the photons that make up synchrotron emission, and has a distinct origin in the form of externally generated fields (such as a laser pulse) or the collective motion of a plasma. Coherent effects are much less important for the high-frequency components, which justifies describing them as particles [57].
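The optical-depth algorithm described above can be sketched schematically as follows. The emission rate and the spectral sampling are represented by toy stand-in functions (real codes evaluate the strong-field QED rate and the spectrum of eq. (19)); only the control flow — accumulate dτ/dt = −W_γ, emit and apply the recoil when τ crosses zero, then reset τ — follows the text.

```python
# Schematic Monte Carlo emission loop following the optical-depth method described
# above. The physics inputs emission_rate (W_gamma) and sample_photon_fraction
# (a draw of f = omega'/(gamma m) from the quantum spectrum) are stand-ins here:
# real codes evaluate the strong-field QED expressions of eq. (19).
import math
import random

def emission_rate(chi: float, gamma: float) -> float:
    """Placeholder probability rate of photon emission per unit time (s^-1)."""
    return 1.5 * chi / (1.0 + chi) * 1.0e14          # toy model, NOT the QED rate

def sample_photon_fraction(chi: float, rng: random.Random) -> float:
    """Placeholder draw of f = omega'/(gamma m) in (0, 1); softer photons are favoured."""
    return rng.random() ** 3 * chi / (1.0 + chi)     # toy spectrum, NOT eq. (19)

def run_macroelectron(gamma0: float, chi_of_gamma, duration: float, dt: float,
                      seed: int = 0) -> tuple[float, int]:
    """Advance one macroelectron, emitting photons via the optical-depth method."""
    rng = random.Random(seed)
    gamma = gamma0
    tau = -math.log(1.0 - rng.random())              # optical depth against emission
    n_photons = 0
    t = 0.0
    while t < duration and gamma > 1.0:
        chi = chi_of_gamma(gamma)
        tau -= emission_rate(chi, gamma) * dt        # d(tau)/dt = -W_gamma
        if tau <= 0.0:                               # emission event: sample and recoil
            f = sample_photon_fraction(chi, rng)
            gamma *= (1.0 - f)                       # photon assumed parallel to p (gamma >> 1)
            n_photons += 1
            tau = -math.log(1.0 - rng.random())      # reset the optical depth
        # (between emissions a real code would also apply the Lorentz-force push here)
        t += dt
    return gamma, n_photons

# Example: chi proportional to gamma, as for a fixed counterpropagating field.
final_gamma, n = run_macroelectron(gamma0=2000.0, chi_of_gamma=lambda g: 1.0e-4 * g,
                                   duration=1.0e-13, dt=1.0e-16)
print(f"final gamma = {final_gamma:.0f} after {n} emissions")
```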
C. Benchmarking, extensions and open questions The validity of the simulation approach discussed in section III B relies on the assumption that a high-order QED process in a strong electromagnetic background field may be factorized into a chain of first-order processes, each of which is well approximated by the equivalent process in a constant, crossed field. It is generally expected that this reduction works in scenarios where a_0 ≫ 1 and χ² ≫ |f|, |g| [27,56]. However, these asymptotic conditions do not give quantitative bounds on the error made by semiclassical simulations. As these are the primary tool by which we predict radiation reaction effects in high-intensity lasers, it is important that they are benchmarked and that the approximations are examined. One approach is to compare, directly, the predictions of strong-field QED and simulations. We focus here on results for single nonlinear Compton scattering [92,110,111], the emission of one and only one photon in the interaction of an electron with an intense, pulsed plane EM wave, by virtue of its close relation to radiation reaction. It is shown that the condition a_0 ≫ 1 is necessary, but not sufficient, for the applicability of the LCFA: we also require that a_0³/χ ≫ 1 for interference effects to be suppressed [112]. These interference effects are manifest in the low-energy part of the photon emission spectrum, f = ω′/(γm) < χ/a_0³, as the formation length for such photons is comparable in size to the wavelength of the background field. Semiclassical simulations strongly overestimate the number of photons emitted in this part of the spectrum because they exclude nonlocal effects [110,111]. Nevertheless, they are much more accurate with respect to the total energy loss (and therefore to radiation reaction), because this depends on the power spectrum, to which the low-energy photons do not contribute significantly [92]. This is shown in fig. 6, which compares the predictions of exact QED and semiclassical simulations for an electron with p₀⁻/m ≃ 2γ₀ = 2000 colliding with a two-cycle laser pulse with normalized amplitude a_0 and wavelength λ = 0.8 µm. There is remarkably good agreement between the two even for a_0 = 5. An additional point of comparison in Blackburn et al. [92] is the number of photons absorbed from the background field in the process of emitting a high-energy photon. This transfer of energy from the background field to the electron is required by momentum conservation. Without emission, there would be no such transfer of energy. This is consistent with the classical picture, in which plane waves do no work in the absence of radiation reaction. Strong-field QED calculations depend crucially on the fixed nature of the background field; however, for single nonlinear Compton scattering, near-total depletion of the field is predicted at a_0 ≳ 1000 [113]. The theory must therefore allow for changes to the background [114]. Within the semiclassical approach, depletion is accounted for by the action of the classical currents through the j · E term in Poynting's theorem. Quantum effects are manifest in how photon emission (and pair creation) modify those classical currents, as illustrated in fig. 5. In Blackburn et al. [92], the classical work done on the electron is shown to agree well with the number of absorbed photons predicted by exact QED. This is consistent with the results of Meuren et al.
[115], which indicate that the 'classical' dominates the 'quantum' component of depletion, the latter associated with absorption over the formation length, if a 0 1. The failure of the semiclassical approach to reproduce the low-energy part of the photon spectrum arises from the localization of emission. Most notably, the number spectrum (19)] diverges as ω −2/3 as ω → 0. This can be partially ameliorated by the use of emission rates that take nonlocal effects into account. Di Piazza et al. [116] suggest replacing the LCFA spectrum in the region f χ/a 3 0 with the equivalent, finite, result for a monochromatic plane wave, which they adapt for use in arbitrary electromagnetic field configurations. Ilderton et al. [117] propose an approach based on formal corrections to the LCFA, in which the emission rates depend on the field gradients as well as magnitudes. While the studies discussed above have given insight into the limitations of the LCFA, they do not examine the applicability of the factorization shown in fig. 5, as this requires by definition the calculation of a higher order QED process. At the time of writing, there are no direct comparisons of semiclassical simulations and strong-field QED for either double Compton scattering (emission of two photons) or trident pair creation (emission of a photon which decays into an electronpositron pair). Factorization, also called the cascade approximation, has been examined directly within strong-field QED for the trident process in a constant crossed field [103] and in a pulsed plane wave [104,105]. In the latter it is shown that at a 0 = 50 and an electron energy of 5 GeV, the error is approximately one part in a thousand. The dominance of the cascade contribution makes it important to consider whether the propagation of the electron between individual tree-level process, as shown in fig. 5, is done accurately. In the standard implementation, this is done by solving a classical equation of motion including only the Lorentz force [57,109]. The evolution of the electron's spin is usually neglected and emission calculated using unpolarized rates, such as eq. (19). King [99] show that the accuracy of modelling double Compton scattering in a constant crossed field as two sequential emissions with unpolarized rates is better than a few per cent. There are, however, scenarios, where the spin degree of freedom influences the dynamics to a larger degree. Modelling these interactions with semiclassical simulations requires spin-resolved emission rates [21,118] and an equation of motion for the electron spin [119,120]. In a rotating electric field, as found at the magnetic node of an electromagnetic standing wave [94], where the spin does not precess between emissions, the asymmetric probability of emission between different spin states leads to rapid, near-complete polarization of the electron population [121,122]. Similarly, an electron beam interacting with a linearly polarized laser pulse can acquire a polarization of a few per cent [118]. To make this larger, it is necessary to break the symmetry in the field oscillations, which can be accomplished by introducing a small ellipticity to the pulse [123], or by superposition of a second colour [124]. A more fundamental limitation on the applicability of the LCFA is that the emission rates are calculated at tree level only. 
The importance of loop corrections to the strong-field QED vertex grows as α χ 2/3 in a constant, crossed field [125], leading to speculation that α χ 2/3 is the 'true' expansion parameter of strong-field QED [126]. When χ 1600, this parameter becomes of order unity and the meaning of a perturbative expansion in the dynamical electromagnetic field breaks down. The recent review by Fedotov [127] has prompted renewed interest in this regime; recent calculations of the one-loop polarization and mass operators [128] and photon emission and helicity flip [129] in a general plane-wave background have confirmed that the power-law scaling of radiative corrections pertains strictly to the high-intensity limit a 3 0 /χ 1. In the high-energy limit, radiative corrections grow logarithmically, as in ordinary (i.e., non-strong-field) QED [128,129]. The difficulty in probing the regime α χ 2/3 1 is the associated strength of radiative energy losses, which suppress γ and so χ [127]. Overcoming this barrier at the desired χ requires the interaction duration to be very short. The beam-beam geometry proposed by Yakimenko et al. [130] exploits the Lorentz contraction of the Coulomb field of a compressed (100 nm), ultrarelativistic (100 GeV) electron beam, which is probed another beam of the same energy. In the laser-electronbeam scenario considered by Blackburn et al. [131], collisions at oblique incidence are proposed for reaching χ 100, exploiting the fact that the diameter of a laser focal spot is typically much smaller than the duration of its temporal profile. Even higher χ is reached in the combined laserplasma, laser-beam interaction proposed by Baumann and Pukhov [132]. While it seems possible to approach the fully non-perturbative regime experimentally, albeit for extreme collision parameters, there is no suitable theory at α χ 2/3 1, and quantitative predictions are lacking in this area. A. Geometries It may be appreciated that the radiation-reaction and quantum effects under consideration here, as particle-driven processes, can only become important if electrons or positrons are actually embedded within electromagnetic fields of suitable strength. However, the estimates in section I were made for a plane EM wave, in which case the electron is guaranteed to interact with the entire wave, including the point of highest intensity. In reality, such intensities are reached by compressing the laser energy into ultrashort pulses [133] that are focussed to spot sizes close to the diffraction limit [7][8][9]. The steep spatiotemporal gradients in intensity that result mean that laser pulses can ponderomotively expel electrons from the focal region, in both vacuum [134,135] and plasma [15], curtailing the interaction long before the particles experience high a 0 or χ. The literature contains many possible experimental configurations designed to explore or exploit radiation reaction and quantum effects. These configurations can be divided, broadly, into three categories, based on how they ensure the spatial coincidence between particles and strong fields. Figure 7 illustrates the three categories. In the first (laser-particle-beam), the electrons are accelerated to ultrarelativistic energies before they encounter the laser pulse. The effective 'mass increase' makes the beam rigid and so it passes through the entirety of the laser pulse, avoiding substantial deflection and ensuring that it is exposed to the strongest electromagnetic fields. 
Concretely, the ponderomotive force is suppressed at high γ: d⟨p⟩/dt = −m∇⟨a²⟩/(2⟨γ⟩), where ⟨·⟩ denotes a cycle-averaged quantity [136]. It should be noted that it is possible for radiation reaction to amplify this force to the point that it can prevent an arbitrarily energetic electron from penetrating the laser field [137,138]; however, this requires a_0 ≳ 300, far in excess of what is available at present. In today's high-intensity lasers, ultrarelativistic electrons can reach the nonlinear quantum regime χ ∼ 1 even for a_0 ∼ 10 (see fig. 2). In the second (laser-plasma), the electrons are electrostatically bound to a population of ions, which are substantially more massive and therefore less mobile. Large-scale displacement of the electrons away from the laser fields is then suppressed by the emergence of plasma fields. If the plasma is overdense, i.e. opaque to the laser light, then only electrons in a thin layer near the surface experience the full laser intensity and are accelerated to relativistic energies. However, the high density of electrons in this region means that a significant fraction of the laser energy is converted to high-energy radiation, leading to, for example, dense bursts of γ rays and positrons [139,140], reduced efficiency of ion acceleration [68] and the generation of long-lived quasistatic magnetic fields [76]. If the target is close to underdense, by contrast, the laser can propagate through the plasma bulk and the interaction is volumetric in nature. The combination of laser and induced plasma fields, as well as radiation reaction, leads to confinement and acceleration of the electrons, and copious emission of radiation [141][142][143]. Finally, electrons can be trapped in the collision of more than one laser pulse (laser-laser), where they interact with an electromagnetic standing, rather than travelling, wave [96]. Radiation reaction induces a rich set of dynamics in this configuration [144][145][146][147]. The fact that standing waves can do work in reaccelerating the particles after they recoil means that, at intensities ≳ 10²⁴ W cm⁻², the emitted photons seed avalanches of electron-positron pair creation [94]; this intensity threshold is lowered in suitable multibeam setups [148][149][150]. The case of optimal focussing is achieved in a dipole field [151], where the peak a_0 ≃ 780 P^{1/2} [PW] [152]. Such extreme intensities, at moderate power, are the reason this configuration has been studied as a means of high-energy photon production [153,154]. It is important to note that the distinction between the three categories defined here is not absolute. Mixing between them occurs in, for example, the interaction of a linearly polarized laser pulse with relativistically underdense plasma: here re-injected electron synchrotron emission, the radiation emission when electrons are pulled backwards into the oncoming laser by a charge-separation field [140], exhibits features of both the 'laser-plasma' and 'laser-beam' geometries. Furthermore, the exponential growth of particle number in a QED cascade driven by multiple laser pulses can create an electron-positron plasma of sufficient density to shield the interior from the laser pulse [155], leading to a transition between the 'laser-laser' and 'laser-plasma' categories. Twin-sided illumination of a foil has features of both the 'laser-laser' and 'laser-plasma' categories, as explored in ab initio modelling [88]. B.
'All-optical' colliding beams This paper focuses on the first of the three configurations discussed in section IV A, laser-particle-beam, for the reason that it allows χ > 0.1 to be reached at lower a_0 than would be required in a laser-plasma or laser-laser interaction. As is shown in fig. 2 and by eq. (6), given a 500-MeV electron beam, quantum effects on radiation reaction become significant even at an intensity of 10²¹ W cm⁻². The colliding beams geometry therefore represents a promising first step towards experimental exploration of the radiation-dominated or nonlinear quantum regimes. Thus far we have not specified the source of ultrarelativistic electrons. The theoretical description of the interaction does not depend on the source, of course, but it is of immense practical importance. Furthermore, the characteristics of the source (its energy, bandwidth, emittance, etc.) are key determining factors in the viability of measuring radiation-reaction or quantum effects. For example, the fact that electron beam energy spectra are expected to broaden due to stochastic effects makes the variance of the spectrum, σ², an attractive signature of the quantum nature of radiation reaction [59,61]. However, such broadening can occur classically in the interaction of an electron beam with a focussed laser pulse, because components of the beam can encounter different intensities and therefore lose different amounts of energy [156]. Thus a crucial role is played by the initial energy spread and size of the incident electron beam [3]. In fact, it was pioneering experiments with a conventional, radio-frequency (RF), linear accelerator that provided the first demonstration of nonlinear quantum effects in a strong laser field: nonlinearities were measured in Compton scattering [13] and Breit-Wheeler electron-positron pair creation [14]. The yield was strongly limited because, even though the electron energy was sufficient to reach a quantum parameter χ ∼ 0.3, in the regime a_0 ≪ 1 the pair creation probability is suppressed as a_0^{2n}, where n, the number of participating laser photons, was found to be n ≃ 5 [14]. Similarly, the photon emission process was weakly nonlinear, with harmonics of the fundamental Compton energy up to n = 4 observed [13]. At the time of writing, this experiment had yet to be repeated at a conventional linear accelerator, though concrete proposals have now been made to do so at DESY [158] and FACET-II [159]. While the electron beams will be less energetic (17.5 and 10 GeV respectively), the laser intensity will be higher (2 ≲ a_0 ≲ 10), so the transition from the multiphoton to the tunnelling regimes could be explored. One of the challenges that must be overcome in realizing these experiments is that, as discussed in section IV A, lasers reach high intensity by focussing and compressing energy into a small spatiotemporal volume. Thus the region in which the electromagnetic fields are strong is only a few microns in radius, assuming diffraction-limited focussing and optical drivers (ω ∼ 1 eV), which is much smaller than the size of the focussed electron beam from a conventional accelerator. This limits the number of electrons that interact with the laser, reducing the relevant signal, as well as making the alignment and timing of the beams more difficult [3]. In the 'all-optical' geometry, these are overcome by using a dual-laser setup [16]: one laser provides the high-intensity 'target', and the other is used to accelerate electrons in a plasma wakefield.
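As a quick numerical check of these statements, the quantum parameter for a head-on collision can be estimated from χ ≃ 2γ a_0 ħω_0/(mc²), with a_0 obtained from the intensity by the commonly used engineering formula a_0 ≈ 0.85 λ[µm] √(I/10¹⁸ W cm⁻²) for linear polarization. The short script below uses only standard physical constants (nothing specific to the article) and reproduces χ ≈ 0.1 for a 500-MeV beam at 10²¹ W cm⁻².

```python
import math

def a0_from_intensity(intensity_Wcm2, wavelength_um):
    """Peak normalized amplitude a0 ~ 0.85 * lambda[um] * sqrt(I / 1e18 W/cm^2),
    a commonly used estimate for linear polarization."""
    return 0.85 * wavelength_um * math.sqrt(intensity_Wcm2 / 1e18)

def chi_head_on(electron_energy_MeV, intensity_Wcm2, wavelength_um=0.8):
    """Quantum parameter chi ~ 2 * gamma * a0 * (hbar*omega0) / (m_e c^2)
    for an electron counterpropagating against the laser pulse."""
    mc2_MeV = 0.511
    gamma = electron_energy_MeV / mc2_MeV
    a0 = a0_from_intensity(intensity_Wcm2, wavelength_um)
    photon_energy_MeV = 1.24e-6 / wavelength_um   # hbar*omega0 in MeV
    return 2.0 * gamma * a0 * photon_energy_MeV / mc2_MeV

print(chi_head_on(500, 1e21))    # ~0.13: quantum corrections already relevant
print(chi_head_on(2000, 4e20))   # ~0.3 for a 2-GeV beam at moderate intensity
```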
Laser-driven wakefield acceleration has undergone remarkable progress over the last two decades: from the first quasi-monoenergetic relativistic beams [160][161][162], wakefield accelerators now produce electron beams with near- to multi-GeV energies [163][164][165]. Briefly, an intense laser pulse travels through a low-density plasma, exciting, via its ponderomotive effect, a trailing nonlinear plasma wave that traps and accelerates electrons [15]. As the medium is a plasma, already ionized and therefore immune to electrostatic breakdown, the accelerating gradients are much higher than in a conventional RF accelerator: GeV energies can be reached in only a few centimetres of propagation. Furthermore, as the size of the accelerating structure is only a few microns (at typical plasma densities n_e ∼ 10¹⁸ cm⁻³, the plasma wavelength is ∼ 20 µm), the electron beams produced in wakefield acceleration are similarly micron-scale, with durations of order 10 fs. Besides the high energy and the small size of the electron beam, we have the intrinsic synchronization of the electron beam with the accelerating laser pulse, and of the accelerating laser pulse with the target laser pulse, if the two emerge from amplifier chains that are seeded by the same oscillator. Thus the 'all-optical' laser-electron-beam collision is promising as a compact source of bright, ultrashort bursts of high-energy γ rays [166][167][168]. Now, with advances in laser technology, multibeam facilities are capable of reaching the radiation-reaction and nonlinear quantum regimes. Recently two such experiments were performed using the Gemini laser at the Rutherford Appleton Laboratory [17,18], a dual-beam system that delivers twin synchronized pulses of duration 45 fs and energy ∼ 10 J, with a peak a_0 ≈ 20: we discuss these experiments in detail in section IV C. Upcoming laser facilities, such as Apollon [10] or ELI [11,12], aim for laser-electron collisions at even higher intensity: see, for example, Lobet et al. [169] for simulations of dual-beam interactions at a_0 ≈ 200. Not all high-intensity laser facilities have dual-beam capability. An alternative all-optical configuration, introduced by Ta Phuoc et al. [170], employs a single laser pulse as accelerator and target: a foil is placed at the end of a gas jet, into which a laser is focussed to drive a wakefield and accelerate electrons; when the laser pulse reaches the foil, it is reflected from the ionized surface back onto the trailing electrons. This guarantees temporal and spatial overlap of the two beams, but precludes the possibility of separately optimizing the two laser pulses; in Ta Phuoc et al. [170] the electron energies were ≲ 100 MeV and the peak a_0 ≈ 1.5, so radiation reaction effects were negligible. Simulations of similar single-pulse geometries predict the efficient production of multi-MeV photons at a_0 > 50 [171] and electron-positron pairs at a_0 ≳ 300 [172,173]. [Fig. 8 caption: a holed focussing optic allows for counterpropagation of the electron beam, which is accelerated by a laser wakefield in a gas jet, and the high-intensity laser pulse; the decelerated electrons, the γ rays they emit in the collision, and the accelerating laser pulse pass through this hole before being blocked or diagnosed as appropriate.] C. Recent results The Gemini laser of the Central Laser Facility (Rutherford Appleton Laboratory, UK) is a petawatt-class dual-beam system [174], well-suited for the all-optical colliding beams experiments discussed in section IV B.
It delivers two, synchronized, linearly polarized laser pulses of duration 45 fs, energy 10 J and wavelength 0.8 µm. Available focussing optics include long-focal-length mirrors ( f /20 and f /40) for laser-wakefield acceleration and, most importantly, a short-focallength ( f /2) off-axis parabolic mirror with a f /7 hole in its centre [17,18]. The latter allows for direct counterpropagation of the two laser beams, the geometry in which χ is largest (see eq. (17)): the more weakly focussed laser that drives the wakefield passes through the hole and is subsequently blocked, avoiding backreflection in the amplifier chains; the accelerated electron beam, and any radiation produced in the collision with the tightly focussed laser, can pass through to reach the diagnostics. Both the experiments that will be described in this section used this geometry, which is illustrated in fig. 8, but with different electron acceleration stages. In Cole et al. [17], the accelerating laser pulse was focussed onto the leading edge of a supersonic helium gas jet, producing a ∼15 mm plasma acceleration stage with peak density n e 3.7 × 10 18 cm −3 . The use of a gas jet allowed the second laser pulse to be focussed close to the point where the electron beam emerges from the plasma (at the rear edge), so the collision between electron beam and laser pulse took place when the former was much smaller than the latter (approximately 1 µm 2 rather than 20 µm 2 , which includes the effect of a systematic time delay between the two). The advantages of using a gas cell, as in Poder et al. [18], are the higher electron beam energies and significantly better shot-to-shot stability. However, in this case, the second laser pulse must be focussed further downstream of the acceleration stage (approximately 1 cm), by which point the electron beam has expanded to become comparable in size to the laser. Thus full 3D simulations were required for theoretical modelling of the interaction, whereas 1D (plane-wave) simulations were sufficient in Cole et al. [17]. Fluctuations in the pointing and timing of the two lasers, as well as systematic drifts in the latter, mean that the overlap between electron beam and target laser pulse varies from shot to shot. It is helpful, therefore, to gather as large a dataset as possible (with the second, high-intensity, laser pulse both on and off), in which case high-repetition-rate laser systems are at a clear advantage. However, this is not nearly so important as being able to identify 'successful' collisions when they occur; even a small set of collisions (N ∼ 10) can provide statistically significant evidence of radiation reaction when this is done. This speaks to the importance of measuring both the electron and γ-ray spectra on a shot-to-shot basis; identifying coincidences between the two provides stronger evidence of radiation reaction than could be obtained by either alone. In Cole et al. [17], successful shots were distinguished by measuring the total signal in the γ-ray detector S γ ∝ N e a 2 γ 2 (background-corrected), where N e is the total number of electrons in the beam, γ 2 their mean squared Lorentz factor, and a an overlap-dependent, effective value for a 0 (the former two can be extracted from the measured electron spectra). Over a sequence of 18 shots (eight beam-on, i.e. with the f /2 beam on, ten beam-off ), four were measured with a normalized CsI signal,Ŝ γ = S γ /(N e γ 2 ) ∝ a 2 , four standard deviations above the background level. 
These four also had electron beam energies below 500 MeV (as identified by a strong peak feature in the measured spectra), whereas the ten beam-off shots had a mean energy of 550 ± 20 MeV. The probability of measuring four or more beams with energy below 500 MeV in a sample of eight, given this fluctuation alone, is approximately 10%. However, the probability that four beams have this lower energy and a significantly higher γ-ray signal is the considerably smaller 0.3%. Statistically significant evidence of radiation reaction was obtained by correlating the electron beam energy with the critical energy of the γ-ray spectrum ε_crit, a parameter characterizing the hardness of the spectrum. This was accomplished by fitting the depth-resolved scintillator output to a parametrized spectrum dN_γ/dω ∝ ω^{−2/3} exp(−ω/ε_crit), having first characterized its response to monoenergetic photons in the energy range 2 < ω[MeV] < 500 with GEANT4 simulations (see details in Behm et al. [175]). The four successful shots demonstrate a negative correlation between the final electron energy and ε_crit, as is shown in fig. 9; this is consistent with radiation-reaction effects, as the hardest photon spectra should come from electron beams that have lost the most energy. The probability of observing this negative correlation and of having electron energy lower than 500 MeV on all four successful shots is 0.03%, which qualifies, under the usual three-sigma threshold, as evidence of radiation reaction. Simulations of the collision confirmed that the critical energies and electron energy loss were consistent with theoretical expectations of radiation reaction. The coloured regions in fig. 9 give the areas in which 68% (i.e. one sigma) of results would be found for a large ensemble of 'numerical experiments', given the measured fluctuations in the pre-collision electron energy spectra and the collision a_0, and under specific models of radiation reaction. The results exclude the 'no RR' model, in which the electrons radiate, but do not recoil. They are more consistent with the stochastic, quantum model discussed in section III B than with the deterministic, classical model of Landau-Lifshitz: however, it is important to note that both models are consistent with the data at the two-sigma level. Subsequent analysis has confirmed that the 'modified classical' model discussed in section III A, which includes the quantum suppression factor g(χ), given in eq. (20), but not the stochasticity of emission, gives practically the same region as the quantum model [176], as is stated in Cole et al. [17]. This is because the electron beam energy effectively parametrizes the mean of the spectrum, the evolution of which depends only on g(χ) according to eq. (23); to see stochastic effects, we must consider instead the width of the distribution [59,61,62]. Given electron beams with narrower initial energy spectra, it would be possible to identify stochastic effects (or their absence) by correlating the mean and variance of the final electron energy spectra [177]. Evidence of radiation reaction was also obtained in the experiment reported by Poder et al. [18]. [Figure caption: in Poder et al. [18], the fractional reduction in the total electron beam energy is correlated with the total γ-ray signal, with the best agreement with theory given by the 'modified classical' model (see section III A); details are given in the main text.]
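The spectral-fit procedure described above can be illustrated with a short least-squares sketch. The parametrization dN_γ/dω ∝ ω^{−2/3} exp(−ω/ε_crit) is taken from the text; the synthetic data, binning and use of scipy below are illustrative assumptions, not the experimental analysis chain, which fits depth-resolved scintillator signals through a GEANT4-derived detector response.

```python
import numpy as np
from scipy.optimize import curve_fit

def photon_spectrum(omega, amplitude, eps_crit):
    """Parametrized gamma-ray spectrum dN/domega ~ omega^(-2/3) * exp(-omega/eps_crit)."""
    return amplitude * omega ** (-2.0 / 3.0) * np.exp(-omega / eps_crit)

# Illustrative "measured" spectrum: photon energies in MeV, with noise added.
rng = np.random.default_rng(0)
omega = np.geomspace(2.0, 500.0, 40)              # MeV, the range quoted in the text
truth = photon_spectrum(omega, 1.0e4, 30.0)       # eps_crit = 30 MeV, assumed value
data = truth * rng.normal(1.0, 0.1, omega.size)   # 10% multiplicative noise

popt, pcov = curve_fit(photon_spectrum, omega, data, p0=(1.0e4, 20.0))
amplitude_fit, eps_crit_fit = popt
print(f"fitted critical energy: {eps_crit_fit:.1f} MeV "
      f"(+/- {np.sqrt(pcov[1, 1]):.1f})")
```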
V. SUMMARY AND OUTLOOK Let us now consider the relation between the results of these two experiments discussed in section IV C. Both present clear evidence that radiation reaction, in some form, has taken place. The reduction in the electron energies, the total γ-ray signal, and, in Cole et al. [17], the spectral shape of the latter, are all broadly consistent with each other. The differences arise in the comparison of different models of radiation reaction, bearing in mind that, in the regime where χ ∼ 0.1 and a_0 ∼ 10, quantum corrections are expected to be non-negligible, but not large, and the intensity is not so large that the LCFA is beyond question. In Cole et al. [17], the shot-to-shot fluctuations in the electron beam energy and alignment, and the fact that the electron spectrum is analyzed by means of a single value rather than its complete shape, mean that the three models (classical, modified classical, and quantum) cannot be distinguished from each other at the level of two standard deviations. At the one-standard-deviation level, the two models that include quantum corrections provide better agreement. Poder et al. [18], with significantly more stable electron beams, are able to confirm that the classical model is not consistent with the data either. However, the fact that neither the modified classical nor the quantum model provides a very good fit to the data leaves open the question of whether it is the failure of the LCFA or, as they state, "incomplete knowledge of the local properties of the laser field." Accurate determination of the initial conditions, in both the electron beam and the laser pulse, will be of unquestioned importance for upcoming experiments that aim to discern the properties of radiation reaction in strong fields. It will be vital to characterize the uncertainties in both the experimental conditions and the theoretical models in our simulations, which are inevitably based on certain approximations. Nevertheless, these results demonstrate the capability of currently available high-intensity lasers to probe new physical regimes, where radiation reaction and quantum processes become important, if not dominant, dynamical effects. These experiments provide vital data in the unexplored region of parameter space χ ≳ 0.1, a_0 ≫ 1 (see fig. 2), allowing us to examine critically our theoretical and simulation approaches to the modelling of particle dynamics in strong electromagnetic fields. The current mismatch between simulations and experimental data has prompted, and will continue to prompt, new ideas in how to resolve the discrepancy: from the development of analysis techniques that are robust against shot-to-shot fluctuations [177,178], to improved simulation methodologies [116,117]. These are accompanied by renewed examination of the approximations underlying our simulations (see section III C). The development of theoretical approaches that can go beyond the plane-wave configuration, the background field approximation, or low multiplicity in strong-field QED is vital if this theory is to be applied directly in experimentally relevant scenarios. There is also undoubtedly a need to gather more experimental data and explore a wider parameter space, increasing the electron beam energy and laser intensity, i.e. χ and a_0. Not only will this make radiation reaction and quantum corrections more distinct, it will also allow us to measure nonlinear electron-positron pair creation by the γ rays emitted by the colliding electron beam [107,108,169,179], a strong-field QED process without classical analogue.
Such findings will underpin the study of particle and plasma dynamics in strong electromagnetic fields for many years to come. ACKNOWLEDGMENTS I am very grateful to Arkady Gonoskov, Mattias Marklund and Stuart Mangles for a critical reading of the manuscript. This work was supported by the Knut and Alice Wallenberg Foundation.
Possibilities of Decreasing Hygroscopicity of Resonance Wood Used in Piano Soundboards Using Thermal Treatment : This article presents the possibilities of decreasing moisture sorption properties via thermal modification of Norway spruce wood in musical instruments. The 202 resonance wood specimens that were used to produce piano soundboards have been conditioned and divided into three density groups. The first specimen group had natural untreated properties, the second was thermally treated at 180 ◦ C, and the third group was treated at 200 ◦ C. All specimens were isothermally conditioned at 20 ◦ C with relative humidity values of 40, 60, and 80%. The equilibrium moisture content ( EMC ), swelling, and acoustical properties, such as the longitudinal dynamic modulus ( E’ L ), bending dynamic modulus ( E b ), damping coefficient ( tan δ ), acoustic conversion efficiency ( ACE L ), and relative acoustic conversion efficiency ( RACE L ) were evaluated on every moisture content level. Treatment at 180 ◦ C caused the EMC to decrease by 36% and the volume swelling to decrease by 9.9%. Treatment at 200 ◦ C decreased the EMC by 42% and the swelling by 39.6%. The 180 ◦ C treatment decreased the value of the longitudinal sound velocity by 1.6%, whereas the treatment at 200 ◦ C increased the velocity by 2.1%. The acoustical properties E L (cid:48) , E b , ACE L , and RACE L were lower due to the higher moisture content of the samples, and only the tan δ increased. Although both treatments significantly affected the swelling and EMC , the treatment at 180 ◦ C did not significantly affect the acoustical properties. Introduction Wood is commonly used to make musical instruments across the world, and until today, there has been no available substitute for this superb material. The natural hygroscopicity and related hygroexpansion are the main disadvantages of using wood in musical instruments. Resonant spruce (high-quality Picea abies L. Karst) is a popular material for piano soundboard fabrication [1]. Resonance spruce wood is generally characterized by narrow annual rings (at least four in one centimeter), and it usually grows on either poor soil in high-altitude plateaus or on north-oriented hillsides. High-quality resonance wood for soundboards is characterized by high sound velocity in a longitudinal direction, low internal friction, relatively low density, high radiation ratio, and a dynamic modulus or specific Young's modulus E L /δ. It is generally known that air humidity causes pianos to become untuned, as swelling causes the soundboard to move, which affects the strings [2]. Therefore, the frequencies change over time. The dimensional instability of wood under different humidity conditions can increase the number of services due to untuning or defects in a piano. There are several possibilities to stabilize the conditions inside the instruments. Appl. Sci. 2021, 11, 475 2 of 10 For example, [3] describes a humidifier/dehumidifier device that can regulate humidity with an accuracy of 1%. Piano users state that these solutions are insufficient, especially in conditions with significant humidity changes. There are several specialized wood modifications that can be applied in order to improve certain wood properties [4]. Active and passive way of wood modification can be applied. In general, the passive one-lumen filling modification highly influences the physical and mechanical properties. 
The less invasive cell wall active modification involves reactions with polymers, cross linking, and degradation of the cell wall. From the group of chemical modifications are known cross-linking, bulking, grafting, and degradation of cell wall [5]. Furfurylation is the impregnation of wood with furfuryl-alcohol whilst increasing the temperature during the reaction [5]. The raised temperature then acts as a reaction catalyst. Although this process prevents humidity absorption, it increases density, which decreases the acoustic conversion efficiency (ACE L ) and relative acoustic conversion efficiency (RACE L ) [6,7]. Additionally, the DiMethylolDihydroxyEthyleneUrea (DMDHEU) resin and thermo-hydro-mechanical (THM) treatment decrease the hygroscopicity; however, this leads to an undesirable increase of the wood density [4,8]. Acetylation is an anhydride vapor curing process that uses catalysts, such as pyridine or sodium acetate. It decreases both swelling and sound velocity. The ACE L and quality factor Q −1 remain almost unchanged [9][10][11]. Moreover, saligenin treatment or nano-solution SurfaPore TM improve the water-related properties and have an insignificant effect on the wood density [12,13]. These treatments are potentially suitable for the soundboard properties; however, they are technically and economically unavailable in piano production. Heat treatment is one of the most commonly used modification to improve dimensional stability and biological durability of wood [4,14]. At higher temperatures (180-260 • C), the hydroxyl groups reduced hygroscopicity and swelling [4,14]. Esteves et al. [15] modified resonance spruce at 190 • C. They found that although the acoustical properties improved, the wood became brittle. In addition, the density, Equilibrium Moisture Content (EMC), and damping coefficient decreased, and the modulus of elasticity and sound velocity increased. Wagenfuehr et al. [16] and Pfriem et al. [17] had similar results. According to Puszynski et al. [18], the most suitable temperature for modification lies between 160-180 • C. As there is a lower content of hydroxyl groups after thermal modification, the sorption is decreased [4]. Zhu et al. [19] measured acoustic-vibration parameters after heat treatment. Specific Young's modulus, the coefficient of sound-radiation resistance and the ratio of Young's modulus to the dynamic stiffness modulus increased, whereas sound resistance decreased. The best vibration performance was obtained by 4 h treatment at 210 • C. Esteves et al. [15] found that thermal modification caused a lower density (by approximately 5%) and lower EMC (about 42%). The modulus of elasticity increased by around 4%; the longitudinal sound velocity increased by around 5%. Almost identical results were published by Pfriem et al. [17] and Pfriem [20]. This kind of modification could be used in piano production. Unfortunately, published studies related to modification techniques of wood for musical instruments are still limited. Therefore, the present study aims to find a way to decrease hygroscopicity and swelling of the piano soundboard wood, focusing on sustainability and maintaining its acoustical properties. At first, all specimens were conditioned at a temperature of 20 • C and an air humidity of 40% in a climatic chamber (Memmert CTC256). These conditions match with approx. EMC 7.7%. 
Moisture content was determined using the oven-dry method according to EN 13183-1 (2002) [21] and the following equation (Equation (1)), where, according to de Boer-Zwicker: A = 7.731706 − 0.014348 * T; B = 0.008746 + 0.000567 * T. The hygroscopicity of the wood was evaluated via the equilibrium moisture content in both the unmodified and modified specimens using the standard gravimetric method (Equation (2)): EMC = (m_w − m_0)/m_0 × 100%, where m_w is the weight of the moisture-saturated wood and m_0 the weight of the bone-dry wood. The main characteristic of dimension change is volume swelling, because it includes all dimensions of the specimen. In this research paper, volume swelling was used to compare unmodified specimens with modified specimens; for this comparison, the increase in air humidity from 40 to 60% was selected. Swelling was determined using the following equation (Equation (3)): α_i = (α_imax − α_i(w))/α_i(w) × 100%, where α_imax is the dimension in any direction after swelling and α_i(w) is the same dimension before swelling. Moderate thermal modifications at 180 °C for 8 h and 200 °C for 10 h were selected for the purpose of this study. The modification course is shown in Figure 1, and the thermal modification chamber scheme is shown in Figure 2. After this modification, all specimens were climatized at a temperature of 20 °C and an air humidity of 60%. The acoustical properties and dimension changes were measured at every moisture content level. To test the dynamic properties, the sample was positioned on two flexible supports (free-free support condition). The points of support were located in the nodes (minimum amplitude) of the fundamental bending mode shape of vibration (at 0.224 and 0.776 of the sample length, i.e. at 100.8 and 349.2 mm along the specimen) [22,23]. The first and second bending frequencies, longitudinal natural frequencies, logarithmic decrement of damping, and the sound velocities were measured. The flexural vibration was induced by the impact of a soft hammer at the centre of the sample, perpendicular to its length (see Figure 3). The sound propagation velocity was measured using the FAKOPP Ultrasonic Timer apparatus with US10 sensors in the longitudinal and transversal directions.
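A minimal sketch of the gravimetric evaluation implied by Equations (2) and (3); the function names and the example masses and dimensions are illustrative, not measured values from this study.

```python
def equilibrium_moisture_content(m_moist_g, m_ovendry_g):
    """EMC (%) from the gravimetric method: (m_w - m_0) / m_0 * 100."""
    return (m_moist_g - m_ovendry_g) / m_ovendry_g * 100.0

def swelling_percent(dim_after_mm, dim_before_mm):
    """Swelling (%) of one dimension between two humidity states."""
    return (dim_after_mm - dim_before_mm) / dim_before_mm * 100.0

# Illustrative numbers only (40% -> 60% relative humidity step).
print(equilibrium_moisture_content(m_moist_g=54.3, m_ovendry_g=50.0))  # ~8.6 %
print(swelling_percent(dim_after_mm=20.10, dim_before_mm=20.00))       # 0.5 %
```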
The derived acoustical properties are as follows. Dynamic Young's modulus (Equation (4)): E'_L = ρ · c², where c is the sound velocity and ρ the density. Dynamic bending modulus of elasticity (Equation (5)), where f_1 is the first bending frequency, m the specimen weight, l the specimen length, w the specimen width and h the specimen thickness. Acoustic conversion efficiency (Equation (6)), where E'_L is the dynamic Young's modulus, ρ the density and tan δ the internal friction. A measured Logarithmic Decrement of Damping (LDD) was used to calculate the internal friction (Equation (7)). Relative acoustic conversion efficiency (Equation (8)). Due to chemical changes, the specimens' weight was reduced after the modification; the weight loss was calculated as a Mass Loss (ML) using Equation (9). Hygroscopicity. Significant differences were found between the modified and unmodified specimens. Specimens conditioned at 60% air humidity were compared, modified against reference, during this process. The multiple comparison was determined using the Scheffe test. The decrease in EMC between the groups was statistically significant. The specimens that were modified at a temperature of 180 °C showed an approximately 36% lower EMC than the unmodified specimens, and the specimens that were modified at 200 °C showed an approximately 42% decrease in water content. The results are shown in Figure 4. The EMC reduction is related to the time and temperature of the process, in correspondence with [4,24,25]. Akylidiz et al. [24] found a reduction of EMC by 25% at 180 °C and 41% at 230 °C for black pine wood (Pinus nigra). Ates et al. [25] found an 18% reduction of EMC for an 8-h/180 °C modification and a 51% reduction of EMC for an 8-h/230 °C modification of Calabrian pine wood (Pinus brutia Ten.).
The EMC values evaluated after modification at 180 °C are slightly higher than the results mentioned above; however, the higher modification temperature brought results comparable to those of other researchers. The slight differences may reflect the individuality of the wood species (e.g., the higher content of extractives in pine wood) as well as differences in the duration and temperature of the treatment processes. Swelling. In the modified specimens, the thermal modification at 180 °C decreased the volume swelling by 9.9% compared to the unmodified specimens, and thermal modification at 200 °C decreased the volume swelling by 39.6% (see Figure 5). This result is crucial, as swelling is the main factor that needs to be eliminated: swelling causes instruments to detune rapidly in changing climatic conditions [2,3], and these air humidity changes can cause various parts of the piano, especially the soundboard, to crack. The Scheffe test showed a statistically significant decrease in swelling in all thermally modified groups of specimens. The degree of swelling reduction, similarly to the EMC, depends on the treatment parameters, e.g., temperature, duration, the inert atmosphere used and other modification conditions [4,15,25,26]. For example, Icel et al. [26] achieved a reduction of volumetric swelling of spruce (Picea abies) of around 53% after modification at 212 °C for 2 h. Ates et al. [25] presented reductions of volumetric swelling of about 13 and 42% after 180 °C/2-h and 230 °C/8-h thermal treatment, respectively. In general, our study also confirmed that longer times and higher temperatures of thermal modification bring higher dimensional stability. Acoustical Properties. The first and second bending resonant frequencies and the longitudinal resonant frequencies were captured using the free-free resonance method. Internal friction was also determined at this time.
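The derived acoustical quantities of Equations (4) to (8) can be computed from the measured velocities, frequencies and damping. The sketch below uses the commonly adopted forms E'_L = ρ·c_L², tan δ = LDD/π and ACE_L = sqrt(E'_L/ρ³)/tan δ; these are assumptions consistent with the variables listed in the text, not a transcription of the paper's equations, and the example values are illustrative rather than measured.

```python
import math

def dynamic_youngs_modulus(density_kg_m3, sound_velocity_m_s):
    """E'_L = rho * c_L^2 (Pa), from the longitudinal sound velocity."""
    return density_kg_m3 * sound_velocity_m_s ** 2

def internal_friction(log_decrement):
    """tan(delta) estimated from the logarithmic decrement of damping."""
    return log_decrement / math.pi

def acoustic_conversion_efficiency(E_pa, density_kg_m3, tan_delta):
    """ACE_L = sqrt(E / rho^3) / tan(delta): radiation ratio over damping."""
    return math.sqrt(E_pa / density_kg_m3 ** 3) / tan_delta

# Illustrative resonance-spruce-like values (not measurements from this study).
rho, c_L, ldd = 420.0, 5600.0, 0.025
E = dynamic_youngs_modulus(rho, c_L)        # ~13 GPa
tan_d = internal_friction(ldd)              # ~0.008
print(E / 1e9, tan_d, acoustic_conversion_efficiency(E, rho, tan_d))
```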
The rubber head stick was used for this task, and the specimen was supported by polyurethane segments using the formula 0.224 × L (L = length of the specimen). These vibrations were captured using a microphone and external sound card (EDIROL FA-101) via the FireWire interface. The sound velocity was determined using the abovementioned ultrasonic method. This measured the ultrasonic wave velocity in longitudinal and transversal directions. Sound velocity in the longitudinal direction is affected by high moisture content. Sound velocity is significantly lower in wood with higher moisture content due to the lower Young's modulus. The higher moisture content wood contains the lower velocity sound it achieves [1,18,27]. A Scheffe test was performed. A total of seven groups were selected for this task, depending on the air humidity. These groups are described in Figure 6. The observation focused on groups 2, 4, 6 and 3, 5, 7. The only statistically significant differences were between groups 2 and 6; it means that the sound velocity was not largely affected by the modification. However, the sound velocity in the transverse direction was affected differently. In dry wood, the velocity in this direction is approximately 1200 m·s −1 . In water, it is 1485 m·s −1 [1]. By comparing these two velocities, it is clear how increasing the water content in the wood increases the sound velocity increases. However, in this study, we measured values with a high variance. Therefore, it is not statistically significant. The results are shown in Figure 6. The derived acoustical properties that were mentioned above are dependent on measured frequencies and velocities. Therefore, there is a strong correlation with these quantities. Figure 7 shows the results. Generally, increasing the air humidity decreases the values of these properties. Otherwise, tan δ increases with higher moisture content and it is reduced by thermal treatment. Tan δ represents damping which correlates with moisture content [16,19,28,29]. Specific Young's modulus is related to sound velocity and density ratio and it was affected especially by the modification at 200 • C. The same results were reported by many other authors [16,18,19,27,29]. ACE and RACE values also depend on the moisture content in the wood. The moisture content influences density, tan δ and Young modulus-all input parameters for ACE and RACE definition (Equations (6) and (8)). The improvement of sonic efficiency especially with the modification at 200 • C was reported in our study. This level of thermal treatment decreases density, tan δ and increases Young modulus. The result is in agreement with other studies [4,19,25,27,29,30]. Treatment at temperature 180 • C did not showed significant change comparing to untreated specimens. Bending modulus of elasticity was affected by moisture content as well (the higher moisture content the lower modulus). Only modification at 200 • C causes significant increase of this parameter. In general, based on outputs of analysis of variance the thermal modification at 180 • C did not cause any noticeable changes to the acoustical properties; however, the modification at 200 • C caused the properties to increase. Weight Loss Weight loss was detected after the thermal modification, and the referenced and modified specimens were compared. The result was an average weight loss of 4.7 and 7% after modification at 180 and 200 • C, respectively. This loss is captured in Figure 8. Alén et al. 
[31] found weight loss 1.5% at 180 • C and 12.5% at 225 • C treatment of spruce wood. Zaman et al. [32] determined a values of weight loss from 5.7 to 7.0% for pine wood treatment at temperature 205 • C. The weight loss is due to chemical changes in the wood. Hemicelluloses are reduced due to their thermal instability [4,33]. The higher modification temperature is used, the higher is reduction of weight [4,15,31,32]. Conclusions The resonance spruce specimens were thermally modified and conditioned in various conditions. The volume swelling in the modified specimens was reduced due to thermal modification at 180 and 200 • C by 9.9 and 39.6%, respectively, in comparison to the unmodified specimens. Sorption was reduced by 36 and 42% at a modification of 180 and 200 • C, respectively. Additionally, the sound velocity in a longitudinal direction decreased with a higher moisture content. The remaining derived acoustical properties depended on the measured frequencies, velocities, and densities. A strong correlation within these parameters was found. Generally, thermal modification at 180 • C did not cause any significant changes to the acoustical properties. The thermal modification at 200 • C had a more significant effect on the measured properties. Thermal modification at 180 • C did not significantly affect the acoustical properties; however, the modification at 200 • C had a more significant effect. Both modifications significantly reduced swelling. The thermal modifications satisfy the requirements regarding the appearance of the musical instrument, sound quality, cost, and feasibility for the sound wood production. Two piano soundboards (for upright piano and grand piano) made of resonance spruce treated by process with parameters based on our study were produced and mounted in pianos for real-life test of sound quality and tuning stability. The darker color of soundboard brought also a positive feedback from piano designers. Data Availability Statement: The data are not publicly available due to privacy restrictions. The data presented in this study are available on request from the corresponding author.
pkudblab at SemEval-2016 Task 6 : A Specific Convolutional Neural Network System for Effective Stance Detection In this paper, we develop a convolutional neural network for stance detection in tweets. According to the official results, our system ranks 1 st on subtask B (among 9 teams) and ranks 2 nd on subtask A (among 19 teams) on the twitter test set of SemEval2016 Task 6. The main contribution of our work is as follows. We design a ”vote scheme” for prediction instead of predicting when the accuracy of validation set reaches its maximum. Besides, we make some improvement on the specific sub-tasks. For subtask A, we separate datasets into five sub-datasets according to their targets, and train and test five separate models. For subtask B, we establish a two-class training dataset from the official domain corpus, and then modify the softmax layer to perform three-class classification. Our system can be easily re-implemented and optimized for other related tasks. Introduction There are several requirements for stance detecting applications on the internet. However it is unpractical for humans to classify massive amounts of tweets. Twitter stance detection aims to automatically determine the emotional tendency of tweets. To classify tweets polarity, mainstream approaches are based on Pang (Pang et al., 2002), like regression problem, using machine learning algorithm to build classifiers from tweets with manually annotated polarity to classify the polarity of a tweet (Jiang et al., 2011;Hu et al., 2013;Dong et al., 2014). In this direction, most studies focus on designing effective features to obtain better classification performance (Pang and Lee, 2008;Liu, 2012;Murakami and Raymond, 2010). For example, Mohammad (Mohammad and Turney, 2013) implements some sentiment lexicons and several manually-selected features. To leverage massive tweets containing positive and negative emoticons for automatically feature learning, Tang proposes to learn sentiment-specific word embedding. We transfer this method to detect tweets stance. In this paper, we develop a specific convolutional neural network learning model for stance detection. Firstly, we learn word embedding from Google News database as the input of our system. Afterwards, we train the CNN model with the Se-mEval2016 Task 6 dataset. Finally, we design a "vote scheme" using the softmax results to predict the label of test set. We also make some task specific improvement. For subtask A, we separate datasets into five sub-dataset, and train and test five separate models. For subtask B, we establish a twoclass training dataset from the official domain corpus based on several special expressions. We evaluate our deep learning system on the test set of Se-mEval2016 Task 6. Our system ranks 1 st on subtask B and 2 nd on subtask A. The good performance in the Task 6 evaluation verifies the effectiveness of our model and schemes. Architecture overview The architecture of our convolutional neural network is mainly inspired by the architecture proposed by Kim, which performs well and efficiently in sentence classification tasks (Kim, 2014). The reason why we base on Kim's model is that there is much in common between stance detection task and sentence classification task when the amount and the distribution of dataset is rather reasonable. Our architecture is shown on Fig. 1. 
In the following, we give a brief introduction to the main components of our network architecture in their connecting order: look-up table, input matrix, convolutional layer, activation function, pooling layer, and softmax layer. We also describe how the model is trained. Look-up table The look-up table is a large word embedding matrix. Each column of the table is d-dimensional and corresponds to a word. The word embeddings in the look-up table are the pre-trained vectors published by the word2vec team (Mikolov et al., 2013) 1 . These vectors were trained on part of the Google News dataset (about 100 billion words). Input matrix An input matrix S ∈ R^{d×|s|} is the representation of an input sentence [w_1, w_2, ..., w_{|s|}], where |s| is the length of the sentence and w_i is the corresponding d-dimensional vector found in the look-up table. If a word does not exist in the look-up table, it is represented by a zero vector or by a vector whose components are randomly generated within a given range. 1 https://code.google.com/archive/p/word2vec/ Convolutional layer The goal of the convolutional layer is to extract patterns, so that common abstract representations can be found across the dataset. A pattern here means a specific sequence of words in a sentence. Patterns are extracted by different filter matrices F, each of which is sensitive to different patterns. More formally, the convolution operation between an input sentence matrix S ∈ R^{d×|s|} and a filter F ∈ R^{d×m}, where m is an assigned width, is defined as c_i = Σ (S_{[:, i:i+m−1]} ⊙ F), where 1 ≤ i ≤ |s| − m + 1, S_{[:, i:i+m−1]} is a matrix slice of width m along the columns, ⊙ is the element-wise multiplication, and the sum runs over all entries of the resulting d×m matrix. Both S and F have the same d rows. As shown in Fig. 1, the filter F slides along the column dimension of S, generating the vector c = [c_1, c_2, ..., c_{|s|−m+1}], called a feature map. So far we have described how to compute the convolution between the input sentence matrix and a single filter. To obtain a richer representation of the data, we apply n filters to every input sentence matrix and compute the feature map matrix C ∈ R^{(|s|−m+1)×n}. Note that every input sentence matrix has a corresponding matrix C, and every column of C is the convolution result between one filter and that input sentence matrix. In practice, we also add a bias vector b ∈ R^n element-wise to every row of C to train a more appropriate model. Activation function To fit non-linear decision boundaries better, the convolutional layer is in practice always followed by a non-linear activation function f(), which is applied element-wise to the feature map matrix C. Among the most popular choices of activation functions (sigmoid, tanh, and ReLU), we choose ReLU, since it is simple and often more efficient 2 . Pooling layer To condense the information in the output of the convolutional layer (passed through the activation function), a pooling layer is used. We adopt max-pooling, which selects the maximum value from every column of f(C), to form a condensed representation vector. More formally, after the max-pooling operation, f(C) ∈ R^{(|s|−m+1)×n} → pool(f(C)) ∈ R^{1×n}, as also shown in Fig. 1. A small numerical sketch of these operations is given below.
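To make the convolution and pooling operations concrete, the following short NumPy sketch reproduces the computation just described for a single sentence. It is an illustrative re-implementation rather than the authors' released code, and the toy dimensions (sentence length 20, filter width 3) are chosen only for readability.

```python
import numpy as np

d, s_len, m, n = 300, 20, 3, 100       # embedding size, sentence length, filter width, filter count

S = np.random.randn(d, s_len)          # input sentence matrix: one d-dimensional column per word
F = np.random.randn(n, d, m)           # n filters, each a d x m matrix
b = np.random.randn(n)                 # one bias value per filter

# Narrow convolution: slide every filter along the columns of S.
# C[i, k] = sum over all entries of (S[:, i:i+m] * filter_k) + b[k]
C = np.empty((s_len - m + 1, n))
for i in range(s_len - m + 1):
    window = S[:, i:i + m]                                        # d x m slice of the sentence
    C[i, :] = np.tensordot(F, window, axes=([1, 2], [0, 1])) + b  # one value per filter

activated = np.maximum(C, 0.0)         # element-wise ReLU, f(C)
pooled = activated.max(axis=0)         # max-pooling over each feature map

print(C.shape, pooled.shape)           # (18, 100) and (100,): the condensed sentence representation
```

The pooled vector of length n is what the fully connected softmax layer described next receives as input.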
Output layer: softmax The fully connected softmax layer performs the classification. For a K-class dataset, the probability of the j-th class is p(y = j | x) = exp(w_j · x + b_j) / Σ_{k=1}^{K} exp(w_k · x + b_k), where x is the input vector (the vector produced by the pooling layer in our network), and w_k and b_k, which have the same dimensionality as the input vector x, are the weight vector and bias of the k-th class, respectively. The softmax layer calculates the probability of each class and then chooses the class with the maximum value as the predicted label. Approach to train the network The parameters trained by our network are θ = {W, F, b, w_k, b_k}, where W is the word embedding matrix of all words in the dataset, including those found in the look-up table and those randomly initialized; F is the set of all filters; b is the bias vector in the convolutional layer; and w_k and b_k are the weight and bias of the k-th class in the softmax layer. We use the backpropagation algorithm to optimize these parameters and adopt the Adadelta (Zeiler, 2012) update rule to automatically tune the learning rate. We also improve our network with two additional methods: l2-norm regularization terms on the parameters to mitigate overfitting, and the dropout scheme (Srivastava et al., 2014), which randomly sets selected values to zero to prevent feature co-adaptation. Improvement for stance detection In this section, we briefly introduce our task-specific improvements on the CNN architecture described above. Vote scheme. We validate our model by cross-validation. For the models of subtask A and subtask B, we run ten parallel epochs whose validation sets are randomly selected from the training set and do not overlap. Unlike a conventional setup, we design a "vote scheme" for prediction instead of predicting when the accuracy on the validation set reaches its maximum. In each epoch, we deliberately choose several iterations at which to predict the test set. When the epoch ends, for every sentence in the test set we take the label that appears most frequently in these predictions as the result of that epoch. Finally, when all ten epochs have finished, we vote over the results of the ten epochs in the same way to determine the final labels. By predicting multiple times independently and voting twice, we obtain a rather robust prediction mechanism; a sketch of this two-level vote is given below. "Divide and conquer" scheme. For subtask A, we separate both the training and test datasets into five sub-datasets according to their targets, and then train and test five separate models on these divided datasets. The contrast experiment between this "divide and conquer" model and the model trained on the whole dataset is shown in Section 4. "2-step" scheme. For subtask B, because the official corpus is unlabeled whereas a training set is necessary for our supervised model, we use a two-step solution: 1. build a two-class training dataset; 2. modify the softmax layer to perform three-class classification using the two-class training dataset. Based on expressions and hashtags that reveal a distinct tendency (for example, "go trump" and "#MakeAmericaGreatAgain" reveal a favor tendency, whereas "idiot" and "fired" reveal an against tendency), we establish a two-class training dataset, with about 2000 favor tweets and about 3000 against tweets, from the domain corpus of subtask B (Mohammad et al., 2016). Then we modify the softmax layer: for a test sentence, if the absolute difference between the probability values of the two classes is less than a randomly selected real number α (α ∈ [0.05, 0.1]), the sentence is predicted as "None" stance; otherwise, the class with the greater probability is predicted.
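The two-level vote can be expressed in a few lines of Python. The sketch below assumes that every epoch has stored the predictions made at the deliberately chosen iterations; the function names and the toy data are ours, introduced only for illustration, and do not come from the released system.

```python
from collections import Counter

def majority(labels):
    """Return the label that appears most frequently in a list of labels."""
    return Counter(labels).most_common(1)[0][0]

def vote_scheme(per_epoch_iteration_preds):
    """Two-level vote.

    per_epoch_iteration_preds: list over epochs; each element is a list over the
    chosen iterations; each iteration holds one predicted label per test sentence.
    Returns the final label for every test sentence.
    """
    n_sentences = len(per_epoch_iteration_preds[0][0])
    epoch_results = []
    for iteration_preds in per_epoch_iteration_preds:          # first vote: within an epoch
        epoch_results.append([majority([it[s] for it in iteration_preds])
                              for s in range(n_sentences)])
    # second vote: across the epoch-level results
    return [majority([ep[s] for ep in epoch_results]) for s in range(n_sentences)]

# Toy example: 2 epochs, 3 chosen iterations each, 2 test sentences.
preds = [[["FAVOR", "AGAINST"], ["FAVOR", "NONE"], ["NONE", "AGAINST"]],
         [["FAVOR", "AGAINST"], ["FAVOR", "AGAINST"], ["AGAINST", "AGAINST"]]]
print(vote_scheme(preds))   # ['FAVOR', 'AGAINST']
```

Voting first within an epoch and then across epochs smooths out both iteration-to-iteration noise and the variability caused by the randomly drawn validation sets.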
Experiments and evaluation Dataset. For subtask A, the training set is the official training data for Task A (Mohammad et al., 2016). For subtask B, the training set is the one described in Section 3. Details of the datasets are shown in Table 1. Parameters setup. The word embedding matrix is described in Section 2.1; the dimensionality d is 300. We use filters of three different widths: 100 of width 3, 100 of width 4, and 100 of width 5, giving 300 filters in total. We choose ReLU as the activation function and use max-pooling. The L2-norm regularization coefficient is set to 1e-6 and the dropout probability to 0.5. The bias vector b, as well as w_k and b_k in the softmax layer, are all initialized to zero. Test result. We perform contrast experiments on subtask A. The results of the "divide and conquer" model and its contrast model trained on the whole dataset, as well as the five separate models, are shown in Table 2. The description of these models is given in Section 3. Table 2 shows that the "divide and conquer" model does not always perform better. However, since the words used in sentences belonging to the same target are expected to be more similar, the "divide" model still performs much better on some datasets (e.g., Atheism). The "divide and conquer" model is the one we submit for evaluation. Official ranking. Part of the official rankings for both subtask A and subtask B is summarized in Table 3. Our model performs well on both subtasks: it ranks 2nd on subtask A, with an official metric only 0.5% lower than the first team, and 1st on subtask B, with an official metric of 56.28%, about 10% higher than the second team. Conclusions In this paper, we develop a specific convolutional neural network system for detecting Twitter stance. We give a detailed description of our model and of the specific adaptations for the different subtasks. Among the 28 submitted systems, ours obtains a good rank on both subtask A and subtask B on the test set of SemEval-2016 Task 6. Our system generalizes readily to other related tasks. Future work Due to the tight schedule, there are still many aspects to explore. For example, why does the Google News word2vec embedding perform well in this context? How much does this word embedding improve the score compared with randomly initialized word embeddings? Does a more suitable word embedding exist? Moreover, the vote scheme is somewhat crude, and we should run more experiments to validate its robustness. Our code is available on GitHub for anyone interested in further exploration 3 .
Development of Interactive Digital Learning Multimedia Applications as Independent Learning Module in 2-Dimensional Game Programming Courses Abstract INTRODUCTION The Covid-19 pandemic has had a significant impact, one of which is in the implementation of learning. All educational institutions, educators and students must be able to adopt with technology and improve their digital competences in line with the new global trends and realities in education (Onyema, et al., 2020). Educators are required to be able to organize face-to-face learning online using the currently developed technology. Some of the online face-to-face platforms that are widely used are zoom and google meet (Surani, Kusuma, & Kusumawati, 2020). Student awareness of online learning practices during the pandemic was highest on the WhatsApp group platform, then Zoom, and finally Google Classroom. It's mean that if the educators implement online learning then the educators should first use the WhatsApp platform, then Zoom, and finally Google Classroom (Fahruddin, et al., 2022). Problems arise in the higher education learning process, such as students having trouble interacting with other students or lectures. Therefore, in helping students interact and improve the learning environment, educators can form social media groups to communicate between students and others to improve the atmosphere of the learning environment (Amin, Alimni, & Lestari, 2021). In addition to the interaction between educators and students, students` understanding of learning material is of course the most important thing. When learning outside the classroom, educators should provide self-contained learning modules that are easily accessible to students. One of the independent learning carried out by students is using interactive multimedia applications based on Android. Interactive multimedia can empower the educational process through increased interaction between teachers and students. The use of technology in the development of learning media has an important role in improving the quality of teaching and learning outcomes for students. The game technology study program that received an operational permit in 2019, is trying to improve the quality of learning, one of which is by increasing the availability of multimedia to support learning. 2-dimensional game programming is one of the core courses in the game technology study program that hones students' skills in the competence of making and developing 2-dimensional educational games. These courses require high logical and critical thinking skills from students. Based on the analysis and discussion during the lecture, many students had difficulty understanding the 2-dimensional game programming lecture. Students who are new to programming courses think that learning programming is something complicated. In response to this, a guided tutorial strategy is needed to gradually change the paradigm. Visual media and creative tasks are one of the tools that are requested by digital natives in learning activities. Previous research on the use of multimedia in programming learning shows that students who are categorized as digital native students are interested in visual media and creative tasks (Saeeda Naz, Iqbal, Irfan, Junaid, & Naseer, 2014). 
The integration of multimedia as a reflection tool in learning is very important to maintain student motivation and involvement in programming classes (Annamalai & Salam, 2017) The application of multimedia technology in the development of learning media is able to integrate aspects of knowledge and skills. The success of multimedia technology has revolutionized teaching and learning methods (Rajendra & Sudana, 2017). In studies on the use of multimedia in education, it has been agreed that multimedia increases student success, positively influences student attitudes, and makes learning more enjoyable and understandable (Ilhan & Oruç, 2016). Based on the theory and problems that have been described previously, it can be concluded that the use of multimedia learning in programming courses can help students to understand the material better. Therefore, this research was carried out to produce The First Android-Based Interactive Learning Multimedia Application product as a new Learning Media of independent learning in Game Programming Courses. The interactive learning multimedia application will be developed as a learning video feature that can be accessed when the user is online. Students can play learning videos according to their needs. In this learning multimedia application, interactive quizzes are also included so that it will increase students' understanding of the content of the material being studied. METHODS This study aims to produce interactive learning multimedia applications for 2-dimensional game programming courses. Sampling was carried out by nonrandom (non-probability) sampling with purposive sampling technique. The population in this study were all students who studied 2D game programming. The sample in this study were 30 students in 3 rd semester of game technology program. This is based on the problems raised in this study to help students in learning 2D game programming. The questionnaire served as a research tool. Data collection is carried out by direct observation when the stages of product implementation in learning process. The data analysis method used is a qualitative analysis using the Likert method. This research was conducted during 2022 from January to October. Application development uses the ADDIE (Analysis, Design, Development, Implementation, Evaluation) approach. The ADDIE approach to the development of learning media refers to the ADDIE approach from Kurt (ADDIE Model: Instructional Design, 2018) also from Branch (Branch, 2013) describe in Figure 1. The research procedure used in this study is described in Figure 1. The first stage is the analysis stage. At the analysis stage, the activities carried out are needs analysis in study programs related to learning media, analysis of learning materials that will be raised in learning multimedia applications, validation of gaps between learning resource needs and current online learning conditions, and analysis of software and device requirements hard to develop products. The second stage is the design stage. At the design stage, the activities carried out are the selection of learning materials for applications, application design, application interface design, trial design and evaluation instruments. The third stage is the development stage. At the development stage, the activities carried out are application development, validation by media experts. 
Alpha testing carried out by developers, revision of test results, beta testing carried out by involving 30 students of game technology study, revision of test results and Apk builds. The fourth stage is the implementation stage. At the implementation stage, the activities carried out were implementing applications in learning by involving two classes of 3 rd -semester students of the game technology study program. Before learning begins, a pre-test is given to students. After the pretest, learning is carried out using applications that have been developed. After the learning was completed, a post-test was conducted and data was collected through a questionnaire to obtain data on students' perceptions of the learning applications used. The last stage that is passed is the evaluation stage. The activities carried out at the evaluation stage are evaluation of the use of applications in the learning process, as well as evaluation of perceptions of applications developed using questionnaire data that has been filled in the previous process. st Stage: Analysis Analysis of study program needs in learning media The first stage in this research activity is the analysis stage. The analysis stage is a very important stage in the development process. At this stage, the author contacted and discussed with the creative media state polytechnic quality assurance agency and the head of the game technology study program. This activity was carried out to obtain information about the need for learning media in the game technology study program. From this activity, information was obtained that many practicum courses do not yet have learning media that support students' independent learning. One of the practicum courses in the game technology study program is 2-Dimensional game programming. Based on the results of the discussion, the researchers intend to develop interactive learning media to support independent learning in 2-Dimensional game programming courses. Analysis of learning materials that will be raised in learning multimedia applications The 2-Dimensional Game Programming Course consist of topics that must be submitted to achieve learning outcomes that have been determined by the study program. topics that must be conveyed in this course are object transformation, collision detection, animation, audio in games, user interface games, making platformer game projects, making puzzle games and making educational games. These materials are taught both online and offline. From the existing materials, the subtopic of making a platformer game was chosen as the material content that will be raised in the interactive multimedia that will be built. The platformer game is the first game that is taught thoroughly from project setting to finalization. Validation of the chosen learning materials The material contained in this course really needs reinforcement apart from the material explained by the lecturer in class. The selected material is material about the practice of making platformer games. The practicum material for making platformer games is a comprehensive material that provides a systematic learning experience on how to make platformer games, from project setup to scoring function creation. This comprehensive material requires independent learning media that can be used by students independently without being accompanied by lecturers or instructors. In validating the material, the researchers discussed with their colleagues and fellow practitioner lecturers who both teach game programming courses. 
Analysis of application requirements The developed application must qualify as an independent learning module. This module must be able to facilitate students to study independently without being accompanied by a tutor or others. Based on this, features are needed that are able to accommodate this. Multimedia is a combination of text, graphics, audio, video and animation. In the developed multimedia application, the following features will be created: 1. Features that can provide information about learning outcomes, course descriptions, assessment systems and assessment criteria. 2. Learning videos that can be accessed through mobile-based applications. Through these learning videos, students can repeat again, if there is material that is left behind or that still needs time to understand it. 3. Structured learning module. Text-based module developed to accommodate when learning videos cannot be played. This text module is made systematically according to the procedures that will be carried out in the platformer game development process. 4. An interactive quiz to measure the level of students' understanding after following the learning process using an application that has been developed. nd Stage: Application Design The author is looking for a design reference or user interface design that is suitable for learning applications. Next, the author discusses with colleagues from the design to obtain input related to the interface that suits the needs. The interface is made adapted to the functional requirements that have been defined previously. The following is a user interface design. The user is a 3rd semester student in the game technology study program and a 4th semester student in the multimedia study program. The interface is designed as follows 1. Color selection. The color chosen is green. This color shows a natural atmosphere and represents the existence of technology in learning. 2. The interface is designed to meet the need to be able to convey learning messages in the selected courses 3. The interface is made to display features that can provide information related to course learning achievements 4. The interface is made to be able to provide features that can provide information related to learning objectives and competency standards and basic competencies of the topics presented 5. The interface must facilitate the video and text used in the digital module. 6. The quiz interface must be able to display the scores that have been achieved by students 7. Learning materials are presented either through learning videos that are not too long in duration, which raise each sub-topic on the project 8. Provide a minisite that presents digital modules on the topics raised User interface Design Figure 2. Loading and register screen Why should there be a registration and login menu? This is an effort to increase awareness of students and increase engagement between applications and students. So that students are more motivated to learn. Button CPL Study Program, CP MK, Sub-CP MK, Description of the Court, and Credits will display a Pop-Up that displays the appropriate information when clicked c. Button Learning material will move scene when clicked d. The evaluation button will switch scenes and display an interactive quiz On this page, the interactions that occur are as follows: a. Name, NIM, Class taken from database b. The title of the material is in the form of a panel, when clicked it only changes color, does not change the scene c. 
The Sub-CPMK button, assessment indicators, assessment criteria and assessment techniques will display a Pop-Up that displays the appropriate information when clicked. d. Button Learning material will move scene when clicked rd Stage: Development Process Learning module Development Develop the Concept of Material in accordance with the Curriculum The author correlates the material on the subject raised in this learning application with the basic concepts that have been explained in the previous meeting. the author Develops a procedure for making a 2-Dimensional platformer game using the Unity 3D game engine. The making of a 2-dimensional platformer game in a learning application is completed by taking 10 procedures that have been defined. Creation of explanatory content on selected subjects Based on the previous activity stages, it has been defined 10 steps to create a 2D platformer game using Unity. The author creates application content in the form of text and graphics which is a narrative explanation of making a 2Dimensional platformer game in Unity 3D. The procedure in developing practicum videos on selected subjects: 1. The author makes a learning video using screen cast o matic 2. The learning video consists of 11 short videos 3. The composition of the 11 videos contains 1 video overview of selected subjects and 10 learning videos about making platformer games using Unity3D 4. The author uploaded the 11 videos into the youtube channel which can be accessed at https://youtu.be/XOOVZlXrseE Application Development In developing a 2-dimensional game programming learning application, the author chose android technology because android users in Indonesia are very superior compared to iphone, blackberry and symbian. The editor used by the author is Android Studio because it is the official Integrated Development Environment (IDE) tool for developing Android applications from Google and Jetbrains. The language used is kotlin because it is a modern programming language through static typing that is used by more than 60% of professional Android developers to help increase productivity, developer satisfaction, and code security. The database uses roomdatabase which is a library part of Android Jetpack which can increase productivity in Android application development. Use case diagram In this context, the researcher chooses Android Smartphone users (users) as actors. The following is a use case diagram that describes user activities: In the use case diagram above, the user as an actor has a login, register, home, material, platformer, evaluation, and credits use case. Database Diagram The following is for the Room Database design that was made: Figure 6. Database Diagram Notes: In the LRS diagram above, the application using the room data base has 3 tables, namely students, evaluations, and quizzes 3. Fitur Development a. A login page, each user must login first before entering the home. b. A register page, if you don't have an account to login, the user can register first. c. Material pages that can be in the form of text, images, and videos. so that it can make it easier for users to understand the material. d. An evaluation page to find out how far the user understands the material. e. A score page to find out how many scores all users get. Alpha Testing Alpha testing is done to test the application to several users, to find out whether there are bugs that occur or not. 
From this test, information was obtained that the application was successfully installed on the Android application with the Samsung, OPPO and Xiao Me brands. All features work well and can convey information well. Beta Testing This test is carried out on students who have the potential to use this application, namely in semester 3 students. From this test, information is obtained that students are greatly helped by this application. The following is the documentation at the time of beta testing. th Stage: Implementation The implementation is carried out in class C game tech. Where students independently work on modules and the results are uploaded into SIAK. The activities carried out by students during implementation are described in the flowchart as follows: The implementation is limited to the material for making platformer games as a prototype of this independent learning module application. th Stage: Evaluation Evaluation activities are carried out to determine the extent to which learning outcomes can be achieved. The instrument used by the author is a multiple-choice assessment to determine the extent to which students understand the material given. From 2 classes with a total of approximately 40 students, the quiz scores in the learning materials presented were obtained an average of 80, which means that students have a good level of understanding of the material provided. DISCUSSION The development of interactive learning multimedia applications has resulted in successful applications that have been tested according to development research criteria. The product developed includes features that are able to present comprehensive information about a course. This application is guaranteed quality because starting from material analysis, multimedia needs analysis and interface design and experience guided by experts in their respective fields. The resulting system is an interactive learning multimedia application that functions as a student self-learning module that can be accessed easily, both online and offline. The system resulting from this research is expected to be a special learning supplement in 2D game programming courses at the vocational level. There are no research products that produce 2D game learning applications that can be used as learning modules. The research that has been done focuses more on interactive learning media (Simanihuruk, Mukhtar, & Tanjung, 2020) and game-based learning multimedia (Hidayanto, Munir, Rahman, & Kusnendar, 2017) for basic programming, and developing web-based learning media to deliver web programming material (Manggopa, Kenap, Manoppo, Batmetan, & Mewengkang, 2019). This shows that the system being built is relatively new in the competence of game programmers. This learning multimedia application has gone through expert testing to verify that the system built really meets the research objectives. The development model used is the ADDIE Model which has been widely used in developing applications. Research conducted by Hidayanto, Munir, Rahman, & Kusnendar (2017) used the ADDIE Model to find out how to design and build adventure game-based multimedia learning and its effect on increasing students' understanding of basic programming in SMK. the results show that the level of understanding of students has increased in the medium criteria. Meanwhile, the resulting media showed very good results in terms of learning and visual communication. 
Meanwhile, Mujib, Widyastuti, Suherman, Mardiyah, & T D Retnosari (2020) Uses the ADDIE model in their research which aims to find out how to develop it, determine its feasibility, and find out the responses of teachers and students to Construct 2 as a medium for learning mathematics on polyhedron material. By implementing the ADDIE model, it is found that the implementation of Construct 2 is categorized as very good based on the results of small group trials and large group trials. Furthermore, the teacher's response during the trial showed that the Construct 2 learning media was included in the very good category. Interactive multimedia as an effective learning media to achieve learning goals (Syahputra & Maksum, 2020). Interactive multimedia teaching materials are very practical and effective for improving learning outcomes and are feasible to apply in the learning process (Krismadinata, Elfizon, & Santika, 2018). It can be used as interactive learning module that very effective to help improve student understanding (Tawardjono, Sulistyo, & Efendi, 2017). Interactive multimedia can also be used in learning and is effective in strengthening student character (Solihah, Septiani, Rejekiningsih, Triyanto, & Rusnaini, 2020). The use of interactive learning modules contributes to showing changes in students' positive attitudes shown by students becoming more active and motivated in the learning process (Leow & Neo, 2014). Interactive multimedia can also be used as a medium to improve basic teaching skills (Sudarman, Riyadi, & Astuti, 2020). The developed application received a fairly good response from students. With today's growing media, more innovation is needed in the development of this independent learning module. There is still a lot of learning content in 2dimensional game programming that has not been added. These materials are material on object transformation, collision detection, in-game animation, audio, user interface, puzzle games, and educational games. Further innovation is needed in this learning media so that it is appropriate to technological developments, although this independent learning application has had a good response from the students. They find it very helpful to understand the material with the learning video in addition to the text module. Moreover, there is an evaluation in the form of a quiz that helps students find out the extent of achievement in the material that has been studied. Among the alternative innovations that can be done is the creation of interactive videos that involve users when users access the video. One form of interaction that can be built, for example, is that users need to respond to questions given by the instructor when the video is played. It takes strategy and knowledge to build or create videos of such specifications. CONCLUSION This research has succeeded in providing an interactive multimedia learning application that can support independent learning in 2D game programming courses. The applications produced in this study have been tested by experts in related fields, namely media, materials, design, and communication experts so as to produce quality learning applications. This application can be used in study programs at vocational colleges that provide game programming competencies. 
The results of the application testing show that the student response to the applications is very good in terms of both design and material, so these applications are ready to be used as independent learning media in 2-dimensional game programming courses. CONFLICT OF INTEREST The author has no conflict of interest in the research conducted. The author developed this application out of a motivation to provide quality learning modules for students, especially 3rd-semester students of the game technology study program.
Li-Doping and Ag-Alloying Interplay Shows the Pathway for Kesterite Solar Cells with Efficiency Over 14% Kesterite photovoltaic technologies are critical for the deployment of light-harvesting devices in buildings and products, enabling energy-sustainable buildings and households. Recent improvements in kesterite power conversion efficiencies have focused on improving solution-based precursors by enhancing the material phase purity, grain quality, and grain boundaries with many extrinsic doping and alloying agents (Ag, Cd, Ge…). The reported progress for solution-based precursors has been achieved through grain growth under more electronically intrinsic conditions. However, kesterite device performance depends on the majority carrier density, and sub-optimal carrier concentrations of 10^14–10^15 cm^−3 have been consistently reported. Increasing the majority carrier density by one order of magnitude would raise the efficiency ceiling of kesterite solar cells, making the 20% target much more realistic. In this work, LiClO4 is introduced as a highly soluble and highly thermally stable Li precursor salt that leads to optimal (>10^16 cm^−3) carrier concentrations without a significant impact on other relevant optoelectronic properties. The findings presented in this work demonstrate that the interplay between Li-doping and Ag-alloying enables a reproducible and statistically significant improvement in device performance, leading to efficiencies up to 14.1%. Introduction The integration of photovoltaic (PV) technology in buildings and products (BIPV, PIPV) for self-sustainable architecture and Internet of Things (IoT) applications is pivotal to making future society sustainable. The perspectives on the potential candidates for light harvesting in devices with in-situ energy consumption have drastically changed due to the progress of the emerging thin-film kesterite (Cu2ZnSn(S,Se)4, CZTSSe) technology. [1,2][13] Furthermore, significant progress in the chemical passivation of grain boundaries with non-isovalent (co-)doping (Ga) has also been achieved. [14] Besides the mentioned strategies, the majority carrier concentration is crucial for the PV performance of the device. However, even though the optimal hole concentration for kesterite solar cell performance has been demonstrated to be 10^16 cm^−3, consistently reported values for high-efficiency devices have remained within the range of only 10^14–10^15 cm^−3. [4,8,15] The lack of significant progress in precise control of the hole density in molecular-ink-based kesterite probably has its origin in the improved grain quality of the material and therefore a less defective, more electronically intrinsic, and more ordered kesterite phase. [16] The hole concentration in kesterite is governed by the presence of abundant I_II (Cu_Zn), II_I (Zn_Cu), and V_I (V_Cu) intrinsic defects. [17] The Cu_Zn antisite presents a relatively high ionization energy, and V_Cu presents a relatively high formation energy even in the typical and optimal Cu-poor and Zn-rich composition. The partial substitution of Cu by Ag leads to a lower concentration of Cu_Zn due to the lower Cu chemical potential. The high formation energy of Ag_Zn leads to Ag atoms being incorporated mostly in Cu positions, leading to a reduction of the hole density. [7,18] Also, the well-known formation of secondary phases prevents the regulation of the hole concentration through compositional optimization.
[17][21][22] In contrast with heavier elements, Li has demonstrated higher solubility in kesterite as well as a shallow donor state Li Zn with lesser ionization energy than Cu Zn and lower formation energies than V Cu . [23][25][26][27] Similarly, a beneficial interaction between Ag and Li soft postdeposition treatments (PDT) has been previously reported. [28,29]owever, previous works demonstrate that the implementation of Ag-alloying during grain growth is critical to improving the solar cell performance due to improvements in the morphology and cation (Cu─Zn) disorder, consequently, the hole density is reduced. [7,30]Similarly, the addition of Li precursor in the Ag-free kesterite solution can also improve the solar cell performance relating to improvements in the morphology and crystalline quality and, in contrast with Ag, a slight increase in the hole density is measured. [26]This work aims to demonstrate the possibility of specifically tailoring the material optoelectronic properties with Ag and Li multinary and isovalent doping and alloying.The incorporation of Li in the Ag-alloyed CZTSSe is achieved via the introduction of the LiClO 4 into the molecular ink, which is highly soluble in organic solvents to make the doping level more flexible.The beneficial effects of Ag alloying in morphology, crystalline quality, and disorder are preserved.Due to the high decomposition temperature of LiClO 4 , it effectively delays the formation of Li 2 Se.Consequently, a postdeposition treatment like (PDT-like) passivation and controlled doping for kesterite in high-temperature selenization processes can be achieved.The reduced occupancy of the Zn sites caused by Ag alloying enables a higher density of shallow acceptor Li Zn defects leading to a remarkable increase in the hole concentration of one order of magnitude.Through this synergy with Ag, the introduction of 2% LiClO 4 in the precursor ink leads to an absorber material with high crystalline quality, low disorder, and optimal hole concentration for photovoltaic performance enabling a maximum efficiency of 14.1%. Results and Discussion The previous report demonstrated a significant exchange between Na and Li during the high-temperature selenization process, [31] complicating the understanding of the true role of Li in kesterite.To mitigate uncertainties and comprehend the specific impact of Li on kesterite, we introduced SiO x as a Na diffusion barrier at the bottom of the Mo layer.The device architecture is depicted in Figure 1a and comprises Soda-lime glass (SLG)/SiO x /Kesterite/CdS/i-ZnO/ITO.With this, the outdiffusion of alkali metals from the SLG can be avoided, however, the exchange of Li and Na through the gaseous Na─Se and Li─Se phases cannot be supressed.The introduction of different LiClO 4 concentrations from 0.015 to 0.78 m, corresponding to ratios of Li/(Cu+Ag) of 2-100% have been explored.In this work, the effects of the addition of LiClO 4 in the solution are investigated by keeping the Cu and Ag concentration constant, implying that the cation ratio [I]/[II]+[IV] is increased from 0.75 to 1.5, depending on the Li concentration.The addition of LiClO 4 in the precursor solution leads to significant changes in the solar cell device performance, as showcased in Figure 1b-e,f. 
The samples with Li concentration in the range of 2% to 10% show a remarkable increase in the open-circuit voltage (V OC ) and fill factor (FF), demonstrating a flexible optimal range of Li incorporation.Specifically, the sample with 2% Li/(Cu+Ag) content shows an impressive increase of FF, improving the reference by almost 10% in absolute value.The slight dispersion of the low Li content devices is an indication of the homogeneity of the beneficial effects induced by the introduction of Li, leading to a significant improvement in the average value of the power conversion efficiency (PCE) from 9.8% to 11.4%.In contrast, the samples with a Li concentration higher than 20% show a steep decrease in all the optoelectronic parameters.The diode parameters were obtained by fitting the dark and illuminated current-voltage (J-V) curves from the second batch of devices in Figure S1 and Table S1 (Supporting Information) using the single diode method. [32]The results are summarized in Table 1.The most relevant change induced by LiClO 4 introduction is the significant increase in the shunt resistance (R sh ) and reduction in recombination current (J 0 ) and series resistance (R s ).These changes correlate with the enhancement in fill factor (FF) and open-circuit voltage (V OC ) of the devices.The reduction in the R s value of the 2% Li device probably results from increased absorber conductivity.A reduced transport barrier for the carriers consequence of modified band alignment could also reduce the R s .These changes could stem from the increased hole concentration by Li incorporation, as shown throughout the paper.For Table 1.Summary of the single diode parameters extracted from illuminated and dark I-V curves, along with the carrier density and space charge region (SCR) width extracted from capacitance-voltage analysis for the devices with the optimal range of Li content.devices with high Li content (>2%), the observed increase in R s can be attributed to the presence of secondary phases resulting from excessive Li incorporation.The small changes in the ideality factor suggest that the dominant recombination mechanism and the region in the device, the SCR are unchanged by the Li incorporation, suggesting that Li has little influence on the recombination rates. The changes in the current collection are characterized by the external quantum efficiency (EQE) and are presented in Figure 1g, showing an improvement of the transport properties for the samples with Li/(Cu+Ag) from 2% to 5% and a slight decrease for 10% content and thereof, consistently with the J-V data.The band-gap energy (E g ) can be extracted from the EQE data and the corresponding Tauc plots are presented in Figure S2 (Supporting Information).With increasing Li content (2% to 20%), E g gradually rises until reaching 50%, suggesting the formation of Li alloyed (Li,Ag,Cu) 2 ZnSn(S,Se) 4 , causing the observed slight increase in E g . [33]In this work, the slight change in E g implies that Li is not extensively incorporated into the kesterite lattice.However, it is evident that the minor change in E g is not the direct cause of the observed current drop in the J-V curve.This will be further discussed throughout the manuscript.Also, it has been previously shown that the implementation of isovalent alloying elements leads to a reduction of the Urbach energy (E U ) and is usually related to improvements in the device performance due to reduced recombination losses. 
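For readers who wish to reproduce the kind of single-diode analysis referred to above, the following Python sketch implements the generic illuminated single-diode equation, J = J_L − J_0[exp(q(V + J·R_s)/(n·k_B·T)) − 1] − (V + J·R_s)/R_sh, and solves it numerically at each bias so that it can be fed to a least-squares fit. It is not the authors' fitting code, and the parameter values shown are arbitrary placeholders in consistent units (V, A cm^−2, Ω cm^2).

```python
import numpy as np
from scipy.optimize import brentq, curve_fit

q, kB = 1.602e-19, 1.381e-23          # elementary charge (C), Boltzmann constant (J/K)

def diode_current(V, JL, J0, n, Rs, Rsh, T=300.0):
    """Solve the implicit single-diode equation for the delivered current density J.

    Units must be consistent, e.g. V in volts, JL and J0 in A/cm^2, Rs and Rsh in Ohm*cm^2.
    """
    Vt = n * kB * T / q                                  # ideality factor times thermal voltage
    def residual(J):
        return JL - J0 * np.expm1((V + J * Rs) / Vt) - (V + J * Rs) / Rsh - J
    return brentq(residual, -5.0, 5.0)                   # generous bracket for thin-film cells

def jv_model(V, JL, J0, n, Rs, Rsh):
    return np.array([diode_current(v, JL, J0, n, Rs, Rsh) for v in np.atleast_1d(V)])

# Example evaluation with placeholder parameters (JL, J0, n, Rs, Rsh)
V = np.linspace(-0.2, 0.5, 8)
print(jv_model(V, 0.035, 1e-9, 1.5, 0.5, 500.0))

# Fitting measured data would then look like (V_meas, J_meas are the measured arrays):
# popt, pcov = curve_fit(jv_model, V_meas, J_meas, p0=[0.035, 1e-9, 1.5, 0.5, 500.0])
```

Fitting the dark and illuminated curves separately with such a model is what yields the R_s, R_sh, J_0, and ideality-factor trends discussed in the text.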
[7,34,35]In this case, the E U is unchanged suggesting that the grain quality is similar for all the samples, regardless of the Li content, which indicates that the effects induced by Ag on the Cu─Zn disorder dominates over the effects of Li.Therefore, it seems that Li doping is not significantly impacting the grain quality and consequently the changes T (K) in the device performance are related to changes in other optoelectronic properties. The capacitance-voltage profiling analysis shown in Figure 2a reveals a clear increase in the apparent carrier concentration (N CV ), leading to one order of magnitude increase from 10 15 to 10 16 cm −3 for the devices with Li content from 2% to 10%, as can be seen in Table 1.The small difference in the C-V profiling values close to the junction suggests that the main effect induced by Li incorporation is a bulk-related effect while the interface quality is similar. To further confirm that, the activation energy (extracted from the extrapolation of the V OC vs T data) as shown in Figure 2b, is practically unchanged with values very close to the E g showing that, if any, the changes at the interface induced by the Li doping do not have a remarkable impact in the device performance.Nevertheless, the increase in the bulk density of the majority carrier can explain the improvements in the device performance, since, for sufficiently high lifetimes, the measured increase in carrier concentration can lead to a remarkable performance improvement, as demonstrated in the literature. [15,36]Moreover, the impact of alkaline treatments on the intra-grain lifetime and grain boundary recombination rates could be masked by a low carrier concentration. [37]Therefore, the implementation of a Li precursor salt in the solution precursor, leading to ACZTSSe with carrier concentrations near 10 16 cm −3 , is critical to significantly improve the device performance.On the other hand, other bulk properties might be affected by the increase in hole density therefore the impact of Li doping on the minority carrier lifetime should be assessed.[40] Figure 2c shows the TRPL decay of samples with 0, 2, and 5% Li content and a slight decrease of lifetime for the concentration of 5% can be observed.Hence, our investigation reveals that the threshold of Li content required to influence minority carrier lifetime is higher compared to the threshold affecting hole density.Interestingly, the interplay between hole density and lifetime has not been observed for several post-deposition heat treatment (PDT) strategies, which have demonstrated modifications of the hole concentration without detrimental effects on the lifetime and V OC [41,42] In light of these findings, it becomes apparent that the incorporation kinetics of Li in the solution-processed kesterite material is dependent on the Li content.Consequently, further insight into the incorporation mechanism is essential for a comprehensive understanding. Currently, there is no absolute consensus on the mechanism by which the hole density increases nor is there a universal correlation between carrier density and lifetime due to the complex behavior of kesterite solar cells.However, different mechanisms have been proposed to explain the increase in hole density with alkali doping in Cu-based chalcogenides: i) an increase in the Cu vacancies due to a decreasing Li solubility and the segregation to the grain boundaries during the cooldown or ii) the formation of easily ionized acceptor Li Zn antisites. 
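As a complement to the capacitance-voltage discussion above, the sketch below shows the standard Mott-Schottky-type extraction of an apparent carrier density profile, N_CV = C^3 / (q·ε0·εr·A^2·dC/dV), together with the depletion (SCR) width W = ε0·εr·A/C. It is a generic illustration rather than the authors' analysis script; the relative permittivity, device area, and the synthetic test data are assumed values.

```python
import numpy as np

q = 1.602e-19          # elementary charge (C)
eps0 = 8.854e-14       # vacuum permittivity (F/cm)

def cv_profile(V, C, area_cm2, eps_r=8.6):
    """Apparent carrier density N_CV and profiling depth from C-V data.

    V        : applied bias (V), monotonically increasing
    C        : junction capacitance (F) at each bias
    area_cm2 : device area (cm^2)
    eps_r    : relative permittivity of the absorber (assumed value)
    Returns (depth in cm, N_CV in cm^-3).
    """
    dCdV = np.gradient(C, V)
    N_cv = C**3 / (q * eps0 * eps_r * area_cm2**2 * dCdV)
    depth = eps0 * eps_r * area_cm2 / C          # depletion (SCR) width at each bias
    return depth, N_cv

# Toy usage with synthetic Mott-Schottky-like data for a 0.25 cm^2 cell doped at 1e16 cm^-3
N_true, A, eps_r, Vbi = 1e16, 0.25, 8.6, 0.7
V = np.linspace(-1.0, 0.3, 50)
C = A * np.sqrt(q * eps0 * eps_r * N_true / (2 * (Vbi - V)))
depth, N_cv = cv_profile(V, C, A, eps_r)
print(np.median(N_cv))   # recovers ~1e16 cm^-3
```

Plotting N_cv against depth gives the profile shown in plots like Figure 2a, where the one-order-of-magnitude increase in hole density with Li doping becomes visible.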
[19,23,40]In this work, we note that these mechanisms might be valid depending on the initial conditions of the kesterite material and that, furthermore, the effect on the lifetime should be different for both cases.In case (i) it is assumed that the defects are defined during the absorber formation from a chemical reaction at high temperatures.Interestingly, case (i) would affect the lifetime of kesterite since the Li atoms are incorporated in the chalcogenide phase prior to the formation of the material.In case (ii) the kesterite phase is already formed and well-crystalized.Then, only the Cu and Zn defects with much higher mobility can be affected by Li incorporation and change concentrations, leading to a practically unchanged lifetime dominated by the Sn-related defects.This differentiation stems from the substantial disparity in activation energies required for Sn diffusion in the kesterite material as compared to that of Cu and Zn.The different transition temperatures for the Cu─Zn disordered kesterite phase (at 200-250 ˚C) and the Cu-Zn-Sn disordered cubic phase (1000 ˚C) provide clear evidence of this characteristic. [43]Therefore, we refer to the terms formationlike for case (i) and PDT-like for case (ii).In this work, the lifetimes of the samples with 2% Li remained unchanged compared to the reference samples, however, the hole density of the 2% and 5% samples are similar, suggesting that the recombination centre density of the 2% Li sample is similar to the reference case while the shallow acceptor defect density is increased.This hints at a PDT-like Li effect for low Li/(Cu+Ag) ratios which becomes increasingly more formation-like with increasing Li content, which clearly implies that the kinetics of Li incorporation is affected by the Li concentration.To understand the underlying mechanism driving the N CV increase with different effects in the minority carrier lifetime further characterization is necessary to shed light on the role of Li during the high-temperature step. It has been previously reported that the addition of alkali in the kesterite system leads to a performance improvement correlated to morphology improvements for formation-like alkali incorporation. [27,36]However, a decoupling of the efficiency enhancements due to changes in optoelectronic properties and the morphology effect with evaporated NaF doping, revealing that the former had a more significant contribution via an increase in the hole density from 10 15 to 10 16 cm −3 . [44]It is thus also necessary here to assess the influence of morphology first to properly understand the observed PCE improvements.It has been previously demonstrated that the introduction of Ag has a significant impact on grain size and morphology. 
[7,45]The effects of Li incorporation on the absorber morphology for different concentrations are shown in Figure 3, with almost no impact observed for concentrations below 20% indicating that, if present during the grain crystallization, the Li-(S,Se) phases do not have a direct impact on the grain growth mechanism, confirming that the dominant grain growth mechanism is not affected by the presence of chalcogen-rich and low melting point phases or the incorporation of Li in the lattice.This hints at a PDT-like behavior, where the Li content only affects the shallow defect structure of the material.In contrast, the samples with Li/(Cu+Ag) above 50%, show bigger grains as well as a morphology degradation.These results are typically observed with the introduction of high concentrations of alkali chalcogenide species. [21]It has been previously shown that the formation of the top grains occurs during the first stages of the high temperature step. [6]herefore, as shown in Figure 3 and Figure S3 (Supporting Information), the unchanged morphology of low Li content samples suggests that the formation of Li-Se phases is posterior to the top grain growth.When the Li content is sufficient, the formation of Li-Se phases seems to be accelerated, being able to impact the morphology of the absorber film.It has been reported that high concentrations of Li introduction can lead to the formation of Li-alloyed kesterite by substituting the Cu position. [33]In this investigation, introducing Li without reducing the concentrations of Cu and Ag may lead to an overabundance of [I] elements in the film.This elucidates why the 100% Li films display a large and compact grain morphology, as depicted in Figure 3, yet exhibit a lack of PV performance in the device. The Li loss during the different stages of the processing of kesterite materials has been reported previously. [21]It is wellknown that due to the inability of Li to complex with thiourea major loss of Li is observed during the spin-coating of the precursor film due to the volatilization of uncoordinated metal-chloride compounds.Also, Li re-dissolution during the deposition of the following layers has been observed. [21]In this work, we characterize the different Li content by ICP-MS for the different Li composition targets, the results of which are presented in Table S2 (Supporting Information).Aside from the outlier behavior observed in samples with lower Li content, the use of LiClO 4 as the Li precursor, instead of LiCl, led to approximately an order of magnitude higher Li detection in the absorber at similar molecular inks concentrations (e.g., 20%), compared to the literature. [21]his phenomenon can be attributed to the very low hygroscopicity of LiClO 4 , preventing substantial loss of Li during re-spin coating.In previous studies, LiCl doping solutions were spincoated in ambient air.We speculate that the loss of Li is facilitated by LiCl's ability to readily absorb moisture from the air, forming LiCl-H 2 O microdroplets that easily mix with DMSO, leading to significant Li loss during the re-spin-coating process.Also, the Li present in the precursor film is expected to be in the LiClO 4 phase which has high thermal stability, with thermal decomposition starting at temperature of 420˚C and peaking at 500˚C, well beyond those used during spin-coating and similar to the selenization highest setpoint. 
[46]Finally, a minor Li loss during the CdS deposition has been reported and is also observed in this work, as shown in Table S3 (Supporting Information), showing that excess Li can be dissolved during the CBD, suggesting that excess Li is removed during this process.To assess the distribution of Li during the grain growth stage the Li content of the absorber films prior to the CdS deposition and after the CdS+JHT has been characterized by ToF-SIMS and presented in Figure S4a (Supporting Information), the distribution of Li over any of the other metals (Cu, Ag, Zn, and Sn) changes with Li concentration.The Li content of the samples is homogeneously distributed across the absorber.In the case of the 2% samples, the distribution remains unchanged after the CdS+JHT, indicating that the Li is effectively incorporated in the kesterite lattice.When the Li content is increased to 5%, the ToF-SIMS shows a reduction in signal after the CdS+JHT, suggesting that excess Li segregates in the film as a secondary phase, which is (at least partially) dissolved during the CBD.The high thermal stability of LiClO 4 can prevent its decomposition during the first stages of the thermal process, which are also more chalcogen-poor (and therefore the formation of Li-Se phases is not displaced toward the product). [46]Also, it has been previously shown that the formation of ACZTSSe topmost layer occurs within the first minutes of the high-temperature step. [6]he observed Li loss from the CBD process but the unchanged ToF-SIMS profile for the 2% Li content suggests that (naturally) Li-Se phases are formed at the surface and incorporate to ACZTSSe diffusing from the top, consistently with the unidentified phase observed in the SEM top-view of Figure 3. Also, the reduced J sc with increasing Li content could be explained by the parasitic absorption of secondary phases.Therefore, the Li incorporation, typically expected from Li-Se liquid phases, seems to be posterior to the top grain growth and kinetically limited by the down-diffusion and reaction of Li-Se phases with the kesterite absorber at the surface and grain boundaries, leading to remaining unreacted Li-Se phases within the absorber film.The PL emission peaks measured in the ACZTSSe-Li/CdS (w/JHT) configuration are shown in Figure S4b (Supporting Information) and show a slight blueshift at 5% content, which is related to an increase in the E g due to Li incorporation. Besides the determination of how much Li is incorporated in the absorber material, it is also important to understand the lattice position occupation of the Li atoms and its effects on the distribution of other cations such as Ag, Cu, and Zn.Even though Li has the highest solubility in CZTSSe among all the alkali it is still relatively low.Then, excess Li can form Li-Se phases and partially displace Cu or Zn to other kesterite lattice positions.Completely understanding these effects is complex since we explore for the first time the effects of triple cation competition in kesterite materials by changing the Li/(Li+Cu+Ag) ratio from Li-poor (0.017) to Li-rich (0.43).The competition of the Cu, Zn, Ag, and Li atoms for the occupation of Cu and Zn positions will result in a complex interplay with the increasing Li content.In this sense, due to the large Ag atomic size, the formation energy of Ag Zn is much higher compared to the formation of Cu Zn and Li Zn antisites.Therefore it can be assumed that only Ag Cu defects are formed and that their concentration is not affected by the Li content. 
Li will naturally tend to occupy Cu positions by forming Li_Cu neutral defects, which have been found to show negative formation energy in ab initio studies, indicating the spontaneous formation of this defect in the dilute limit. [23] The charge neutrality of the Li_Cu defect implies a formation energy independent of the Fermi energy, and therefore only the composition determines the concentration of this defect. The effects of the chemical potential of Cu on alkali incorporation have been studied by Haass et al., [27] showing that a Li-induced increase in E_g could only be achieved in Cu-poor compositions. The presence of Li_Zn defects is also expected in our samples, consistent with their low formation energy, which is further reduced under intrinsically low p-type growth conditions. In addition, the different antisites that can form in the kesterite lattice may have a completely different impact on the optoelectronic parameters of the material, as well as a different response in the Raman spectra. The Raman spectra obtained for all the devices are shown in Figure S5a (Supporting Information). First, in Figure 4a we show the position of the main peak of (A)CZT(S)Se for the different Li/(Cu+Ag) ratios. The peak typically centered at ≈196 cm−1 corresponds to Se-Se vibrations, and similar peaks can be detected for many Se-based compounds. The incorporation of Li tends to shift the peaks toward higher wavenumbers, indicating the incorporation of a lighter element in the lattice. Surprisingly, for concentrations above 50%, the position of the peak starts to shift toward lower wavenumbers, indicating the segregation of two phases. The area of the 176 cm−1 peak shown in Figure 4b presents a similar behavior and is correlated with a decreasing density of the [V_Cu+Zn_Cu] defect cluster. [47] It can be easily observed that the density of this cluster increases up to 50% Li content and then drastically decreases toward 100% Li content. Therefore, a decreased density of [V_Cu+Zn_Cu] in the system induced by the Li incorporation can be concluded. The formation of Li_Cu through a reduction in the concentration of V_Cu is consistent with the slight increase in E_g and is therefore likely confirmed by this analysis. However, an increased density of Zn_Cu defects due to Li_Zn formation and the displacement of Zn and/or Cu from Zn positions could also be influencing the area of the 176 cm−1 peak. To verify this, the area of the 250 cm−1 peak is analyzed and correlated with an increasing density of the [Zn_Sn+2Zn_Cu] (ZnSe-like) defect cluster at higher Li content, as shown in Figure 4c. [47]
The area of this peak increases directly up to 80% Li content and then drops to very low values at 100% Li content. The latter indicates an increased content of Zn displaced from its lattice position, suggesting an increased presence of Li on Zn positions. Therefore, a reduced density of V_Cu and an increased density of Zn_Cu defects can be extracted from the Raman peak area analysis. It has been previously shown that an increase in the concentration of acceptor Li_Zn defects can have a first-order impact on the hole density due to their very low formation and ionization energies. Furthermore, the formation of Li_Zn could also reduce the density of less ionizable Cu_Zn defects, since the measured reduction of V_Cu could be related to the formation of both Li_Cu and Cu_Cu. The stabilization of Zn_Cu with increasing Li content is expected due to free-charge self-compensation effects, which would justify the saturation of the hole concentration at higher Li contents. Therefore, according to the Raman analysis, the shallow defects driving the increase in hole density in Li-doped ACZTSSe are neither V_Cu, whose density appears to decrease, nor Zn_Cu, whose concentration increases. The most likely option is therefore an increase in the Li_Zn defect density. Cu_Zn defects could also be present in higher concentrations; however, considering their high ionization energy and the tendency to form [Cu_Zn+Zn_Cu] clusters, this becomes a less plausible option. The formation of Li_Zn is thus likely the main reason for the increase in carrier concentration observed in the C-V profile. Li_Zn also has the potential to mitigate the Fermi-level pinning issues at the kesterite/CdS interface, but further investigations are needed. These results indicate that the incorporation mechanism of Li introduces an additional shallow defect state without impacting the morphology, that is, a PDT-like mechanism, consistent with all the previous results.
Hence, the incorporation of Li in Cu and Zn positions seems evident; however, an outlier behavior for the Li 5% and Li 10% samples is observed, which could be explained by a weak contribution to the 196 cm−1 peak area (A196) from an overlapping Sn-Se phase-related peak, which also correlates with the emergence of two peaks centered at ≈95 and 125 cm−1 for Li concentrations of 10% and above. Furthermore, the intensity of the peak at 95 cm−1, shown in Figure S5b (Supporting Information), increases proportionally with the Li content; hence, the presence of a Sn-related secondary phase is expected. The only reasonable explanation for this behavior is an enhanced decomposition of the absorber induced by excess Li, as indicated by the enhanced Sn loss detected by ICP-MS shown in Table S2 (Supporting Information), which is known to be related to the formation of volatile Sn-Se species, consistent with the observed morphological changes. The formation of SnSe2 species at high alkali contents has been previously reported in the kesterite system. [22] Therefore, with the presented characterization and the effects observed in previous studies, it is possible to understand the Li dynamics from the solution to the final device. In the solution, Li cannot form complexes with thiourea, so Li is present as solvated LiClO4 in the precursor solution. Due to the high decomposition temperature (420 °C) of LiClO4, the incorporated Li is primarily present as LiClO4 in the precursor film. [46]
As described in step 1 of Figure 5, the LiClO4 present in the precursor film decomposes and reacts with Se during the thermal-treatment ramp starting above 420 °C, following the reactions outlined in Equations (3) and (4). According to the Li-Se phase diagram, the Li2Se phase can form a liquid phase at the annealing temperatures used under Se-rich conditions. The high thermal stability of LiClO4 and the Se-poor conditions during the ramp lead to the formation of liquid Li-Se phases during the first stages of the high-temperature step, which can be consumed by the already crystallized ACZTSSe following Equations (3) and (4), as shown in step 2 of Figure 5. The formation of liquid Li-Se phases and their subsequent reaction with the kesterite material at high temperature explains the delayed incorporation of significant quantities of Li into the kesterite phase after crystallization is completed, as denoted in step 3 of Figure 5, consistent with the increased hole density without impact on the lifetime and with the unchanged morphology, explaining the PDT-like behavior of the system at low Li contents. It also seems that with increasing Li content the incorporation of Li is accelerated, hinting that a low Li concentration is necessary to control the kinetics of Li incorporation prior to grain formation. Naturally, during the cool-down stage the Li solubility decreases with temperature and the excess Li segregates to the grain boundaries and surface; however, this effect does not seem to dominate the hole concentration in this system.
Hence, according to the proposed model based on the experimental observations, Ag alloying determines the recrystallization mechanism, crystalline quality, and disorder level of the kesterite material, while Li doping only contributes to an increase in the hole density. It has been previously demonstrated that Ag alloying contributes to a reduction of Cu_Zn defects while the density of V_Cu and Zn_Cu defects remains constant. [46] Besides the well-known formation of Li_Cu defects, the Ag-induced reduction of Cu_Zn defects enables the efficient incorporation of Li in unoccupied Zn positions, increasing the hole density. Hence, the synergistic interaction between Li and Ag enhances the formation of beneficial Li_Zn shallow acceptor defects, enabling optimal hole densities for solar cell performance.
For Li-rich conditions (above 20%), the presence of a Li2Se phase is also expected as an intermediate. The low density of the Li2Se phase (1.66 g cm−3), compared to LiClO4 and LiCl (2.42 and 2.07 g cm−3, respectively), combined with the much higher content of LiClO4 in the precursor film, can result in the formation of micro- or nano-sized Li2Se grains. These grains may react with the chalcogen gas and the kesterite material before ACZTS recrystallization, capturing Sn and forming Li-intercalated Sn(S,Se) species. These species are susceptible to volatilization, explaining the enhanced Sn and Li loss, as well as the presence of voids and pinholes and the larger, less faceted grains.
After this detailed study of the influence of LiClO4 as the Li precursor salt in high-efficiency ACZTSSe processed in DMSO, the method has been transferred to similar solvents such as 2-methoxyethanol (MOE) and N,N-dimethylformamide (DMF), which are known to result in practically identical amorphous precursor films. [5,6,48]
The utilization of DMF and MOE solvents facilitates the spin-coating process in an ambient atmosphere. It is noteworthy that all characterizations conducted in this study were performed on samples prepared using DMSO as the solvent. When combined with optimized window-layer deposition, this approach results in a maximum device performance of 14.1% with MOE as the solvent. The J-V, EQE, and C-V data of the device are shown in Figure 6. The reproducibility of the process and its statistical significance have been assessed by repeating the same process several times and monitoring the optoelectronic parameters of every batch. The improvement in all the optoelectronic parameters upon the implementation of Li doping can be observed in Figure 7. The small dispersion in the measured optoelectronic parameters reveals the resilience of the method to batch-to-batch variations. The implementation of Li doping leads to a remarkable average efficiency of 12% for almost 300 cells.
In recent years, as shown in Figure 8a, the implementation of Ag alloying together with other extrinsic doping and alloying agents has been key to achieving high-efficiency kesterite solar cells. In this work, we demonstrate that the introduction of Li as a co-dopant leads to record efficiencies with the highest V_OC for Se-rich kesterite materials (573 mV). The voltage deficit (V_OC^def = V_OC^SQ − V_OC^measured, where V_OC^SQ = 0.932·E_g/q − 0.1667 V) of the champion device of this work is one of the lowest in the literature, and the lowest for its bandgap (E_g = 1.15 eV), as shown in Figure 8b, indicating the high quality of the material and the optimized device performance; a short numeric check of this deficit is sketched after the figure captions below. Further improvement could be achieved by implementing an antireflective coating (ARC), increasing the J_SC by at least 1 mA cm−2 and leading to a projected efficiency of 14.5%.
Conclusion
In conclusion, this work demonstrates that the incorporation of the high-solubility group I element Li via easily dissolved and thermally stable LiClO4 can be adopted to enhance the hole density of Ag-alloyed kesterite material without severely impacting the lifetime. Consequently, this brings large performance improvements and raises the efficiency limit for strategies aiming to increase the effective minority-carrier lifetime. The performed characterization reveals that the beneficial changes in crystal quality, morphology, and cation disorder induced by Ag alloying remain unchanged with the Li incorporation. The interplay between Ag alloying, reducing the density of Cu_Zn defects, and Li doping, seemingly promoting the formation of beneficial Li_Zn defects, enables devices with PCEs above 14%. These findings demonstrate that both multinary and isovalent doping and alloying are critical to tune and improve the properties of kesterite absorbers for photovoltaic applications.
Figure 1. a) Schematic structure of the kesterite device based on a soda-lime glass substrate with a SiOxNa barrier layer. b-e) Statistical distribution of the photovoltaic parameters for devices with varying Li content. f) Representative illuminated and dark J-V curves of the devices for varying Li content. g) EQE spectra of the devices for varying Li content.
Figure 2. a) Carrier density profiles extracted by C-V analysis. b) Plots of V_OC versus temperature (V_OC-T) and linear fits of the recombination E_a of the devices. c) TRPL decay for the Ref and low-Li-content devices.
Figure 3.
Top-view scanning electron microscopy images of the absorbers with varying amounts of added LiClO4 in the precursor solution.
Figure 4. a) Plot of the ACZTSSe main peak position in the Raman spectra as a function of Li content. b) Plot of the area ratio of the 176 cm−1 peak to the main peak. c) Plot of the area ratio of the 250 cm−1 peak to the main peak.
Figure 5. Schematic of the Li incorporation dynamics during the selenization step at the atomic scale (top) and macroscopic scale (bottom). In (1), the conditions during the heating ramp are schematized: the Se is evaporated, forming a saturated Se atmosphere, while the amorphous ACZTS layer and the LiClO4 inclusions remain unreacted. In (2), during the first minutes of the dwelling step, the recrystallization of the top and bottom layers starts; simultaneously, the LiClO4 in the film decomposes and reacts with Se, forming Li-Se phases. The Li-Se phases are consumed during the later stages of the dwelling, where Li incorporates into the kesterite lattice forming the Li_Cu and Li_Zn defects of (3).
Figure 6. a) Light and dark J-V curves, b) EQE, and c) C-V profile of the champion ACZTSSe-Li device.
Figure 7. PV parameters of different batches of devices before and after the implementation of optimal Li doping conditions: a) J_SC, b) V_OC, c) FF, and d) PCE. Dashed lines are included as a visual aid.
Figure 8. a) Record efficiencies of kesterite solar cells as a function of E_g. b) V_OC deficit of the corresponding devices as a function of E_g.
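As an illustration of the voltage-deficit figure of merit quoted above, the following short Python snippet (a minimal sketch; the numerical inputs are taken from the text, while the helper function name is ours) evaluates V_OC^SQ and the resulting deficit for the champion device.

```python
# Minimal sketch: voltage deficit of the champion device, using the empirical
# approximation quoted in the text: V_OC^SQ = 0.932 * E_g / q - 0.1667 V.
# Inputs below (E_g = 1.15 eV, measured V_OC = 573 mV) are taken from the text.

def voc_deficit(e_g_ev: float, voc_measured_v: float) -> tuple[float, float]:
    """Return (V_OC^SQ, V_OC deficit) in volts."""
    voc_sq = 0.932 * e_g_ev - 0.1667  # E_g/q equals E_g in volts when E_g is in eV
    return voc_sq, voc_sq - voc_measured_v

voc_sq, deficit = voc_deficit(e_g_ev=1.15, voc_measured_v=0.573)
print(f"V_OC^SQ = {voc_sq * 1e3:.0f} mV")   # ~905 mV
print(f"deficit = {deficit * 1e3:.0f} mV")  # ~332 mV
```

Under these assumptions the deficit comes out at roughly 330 mV, consistent with the claim that it is the lowest reported for this bandgap.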
8,359.4
2024-06-07T00:00:00.000
[ "Materials Science", "Engineering", "Physics" ]
ANALYSING FEATURES OF E-COMMERCE SYSTEMS ARCHITECTURE The object of the research is the process of designing the architecture of high-load systems. The conducted research is based on a system approach to designing the architecture of e-commerce systems, which are characterized by a high workload due to the large number of users working simultaneously with the system, a large amount of data, and a significant number of complex calculations. The main hypothesis of the research is that the efficiency of such systems depends on the efficiency of each individual step to scale up the system and the consistency of these steps. The maximum efficiency can be achieved only if the resource constraints and requirements, which are determined by the key stakeholders of the projects, consider the specifics of the business system. This paper examines the methodological support of developing high-load systems architecture. Within this research, we analyze such specific features of high-load systems as scalability, rigidity, and response time, and demonstrate the importance of considering these features when designing the architecture of high-load systems. This paper analyzes approaches to developing high-load systems architecture, their advantages, and disadvantages. It is suggested to use a hybrid scaling method, which is based on combining two approaches: microservices and monolithic. It is also suggested to use a microservices approach for high-loaded parts that require scaling and a monolithic approach for non-loaded parts of the system. The research indicates the parts of the system that are usually highly loaded in e-commerce systems and require a microservices approach to design their architecture. This paper analyzes approaches to database scaling and the organization of data replication. The application of the proposed approach to designing the architecture of high-load systems, including e-commerce systems, allows designing a system that can be easily scaled when necessary. At the same time, the system can be improved and further developed.
Introduction
The pandemic, quarantine restrictions, and the necessity of organizing remote work have increased the demand for the development of e-commerce systems and business process automation. These systems are often classified as high-load systems. High-load systems are applications with a high workload, which arises from:
- many users simultaneously working with the system;
- a high volume of data to be processed;
- the presence of numerous complex calculations [1].
The above factors are typical for high-load systems, both separately and jointly. Such a system requires a significant amount of resources to operate. The development of high-load systems has certain peculiarities [1][2][3]:
- The main feature of high-load business systems is their rigidity: it is possible to modify only some parts, because the flexibility of such systems requires a significant amount of resources. For example, it is impossible to make access to the data flexible. It is necessary to clearly define the database for the system to work with, considering the amount of data and the frequency of requests, in order to ensure its stable performance.
- The response time is another important factor. The interaction between users and the application is carried out by submitting requests, which should be responded to within a suitable time span.
- Scalability is a necessary feature of high-load systems, which determines their ability to increase the maximum allowable workload (the number of users working simultaneously with the system, the amount of data, etc.).
These peculiarities require critical analysis when developing the architecture of e-commerce systems. Therefore, the aim of this research is to develop the architecture of e-commerce systems, considering the peculiarities of high-load systems operation. The object of this research is the process of developing the architecture of high-load systems.
Research methodology
The conducted research is based on the approaches described in papers [2,4,5]. The key hypothesis of the research is that the efficiency of such systems depends on the efficiency of each individual step to scale up the system and the consistency of these steps. The maximum efficiency can be achieved only under consideration of resource constraints and requirements, determined by the key stakeholders of the projects, and the specifics of the business system. The principles of developing the architecture of high-load systems, presented in papers [2,5,6], are analyzed. Based on this analysis, and considering the architecture of data processing mechanisms and module synchronization [4,7] and database architecture [8], the necessity of using a systematic approach to develop the architecture of high-load systems is determined.
Research results and discussion
The development of web-based applications is the optimal solution for building e-commerce systems. A web-based application is a client-server application (the client is a browser, and the server is a web server) for which data is stored on the server, and data exchange takes place over the network. Their important advantages for e-commerce systems are the following:
- The system can be operated by a great number of users at once.
- They do not require installation on users' devices, so they can be used whenever needed and do not require additional workstations, increased hardware power, etc. Users need only a browser and access to the Internet to work with the system.
- Developing web-based applications is cheaper.
- All updates and changes to the web application become automatically available to all users.
A web-based application comprises the application code and a database. When a user wants to use it, he or she sends a request to the server. The server processes the request, selects the necessary data from the database, generates a response, and sends it to the user. According to research results, the maximum time to perform these operations should not exceed 6 seconds for complex requests. The optimal average time to process a request for comfortable user experience, which indicates normal functioning of the system, is 3 seconds. A longer response time is unacceptable for such systems due to the high probability of losing potential customers. Another important characteristic of system performance is the maximum number of requests processed per second (Requests Per Second, RPS). Therefore, the most important property is the scalability of the system, which gives the opportunity to manage its performance indicators: request processing time and RPS. Let's analyze the approaches to scaling. There are two approaches to scaling [9,10]:
- Vertical scaling involves increasing the capacity of some components of the system to increase the overall capacity.
As a rule, vertical scaling is performed by replacing some devices with more powerful ones. This is the simplest way of scaling, which does not require any changes to the program.
- Horizontal scaling involves splitting the system into structural components distributed among different computers and increasing the number of servers to perform specific functions in parallel. Consequently, it is necessary to add nodes to the system, working as a single unit, and to modify the program for efficient use of the additional resources.
The development of a high-load system requires a flexible approach to scaling and a combination of both approaches. Obviously, vertical scaling is not endless [11]. It is optimal to use it when the server is too old to bear the workload: it is quicker and cheaper to replace it with a new one than to change the program code. The horizontal approach to scaling involves splitting the application into several modules, which can be distributed among the servers, or multiplying the highest-loaded part of the application. For example, the most loaded part of e-commerce systems can be the catalog. In this case, the task is to share the work of this module between different servers, i.e., to organize parallel computing of this function. This solution also has certain restrictions, because parallel computing creates the problem of data synchronization: each server, each stream, has its own data, and they are no longer synchronized. The growing workload leads to a haphazard data state: users can change the same data simultaneously, which is not acceptable. Implementing a synchronizer removes the parallelism, but the load on the synchronizer grows. Therefore, the optimal solution is to isolate those high-load components that must run synchronously and to use a vertical scaling approach for them. Thus, the second approach in combination with vertical scaling is optimal for scaling high-load systems. However, it should be mentioned that the application of this approach can differ for a particular system depending on its specific features and business requirements. Another peculiarity to consider when scaling the system is the load on the integrating module. The greater the number of separate modules, the more the load on the integrating module grows as the load on the modules increases. Obviously, the goal of scaling is to get a well-performing system rather than a set of separate modules. Therefore, one should consider that the complexity of communication and the load on the integrating module grow in geometric progression with an increasing number of system modules. In high-load systems, it is also important to duplicate critical components. All the critical components of a high-load system which affect its functionality must be duplicated both in software and in hardware (duplicate the equipment). These duplicates do not necessarily work simultaneously, but they must be available for use when the primary components are not able to handle the load. An important component of high-load systems is the monitoring system. As a rule, problems in the system operation arise at moments of peak load, i.e., when the business earns the most. The monitoring system allows identifying the issue causing the failure and immediately proceeding to fix it. Naturally, it is possible to identify the issue without the monitoring system.
But in this case, it is necessary to involve highly qualified specialists and spend time finding the reason. Moreover, the monitoring system allows analyzing a module's performance and preventing possible failures, and thus minimizing the probability of profit loss. The system's inflexibility allows saving money on equipment. The cost of the hardware for a high-load system is always much higher than the cost of a typical application. When developing a flexible high-load system, the amount of required hardware increases significantly. So, the first method to develop the architecture for e-commerce systems is the method of monolithic architecture: creating the program code using a set of plugins. Each plugin provides the implementation of a particular feature, for example, integration with external solutions, personalization of payment terms, integration of payment systems, a discount calculation system, etc. An alternative approach is the microservice architecture, under which the program is composed of a set of microservices, and each microservice has its own database and operates independently. The advantage is the ability to develop, maintain, scale, and improve each of the microservices separately. But an important disadvantage is its cost. For example, to improve a particular microservice, it is necessary to analyze the code, make the appropriate changes, plan updates, test the system to ensure the absence of conflicts, prepare documentation, etc. Developing monolithic applications is much cheaper, since they have a single codebase and the application works with a single database. In the case of a microservices architecture, it is necessary to develop, for each microservice, a protocol for interconnection with the core, the main product page. In this case, when processing a user's request, the program works with several databases of different microservices, all of which provide their own response to the request, which the application consolidates into a single response for the user. It is obvious that the optimal solution is a hybrid (combined) architecture. A monolithic part is developed for unloaded functions of e-commerce systems, and microservices are developed for functions which are high-loaded and could require scaling. For e-commerce systems, as a rule, microservices are required to implement such functions:
- catalog;
- data buses for importing data from external software;
- API for interaction with mobile devices and other services;
- mailer and other services to communicate with clients.
The advantages of this approach are, foremost, the reduction of costs for development and support of the system. The hybrid method requires less time for development, and it is cheaper. For that reason, it is the optimal solution for startup projects, companies with limited budgets, companies willing to test a business hypothesis, etc. Moreover, the application has no disadvantages in functionality and scalability. An example of such a hybrid architecture for an e-commerce system is shown in Fig. 1. Fig. 2 shows an example of the infrastructural architecture of the e-commerce system. The principle of development and performance of such a system is as follows. It is recommended to use Python with the Django framework within the development process, which is optimal regarding development time.
The microservices architecture is used to implement such functions:
- notification service;
- search engine based on Elasticsearch;
- product matching service (finding analogues for products that are not available in the catalog);
- bus for input and output data processing;
- REST API;
- signing documents with EDI.
PostgreSQL, as an enterprise-level database with significant scalability, is optimal for an e-commerce system. Elasticsearch is used to scale the catalog: all complex requests are sent to Elasticsearch, and then the results of processing are shown in the catalog. Search within the system is full-text and implemented with Elasticsearch. Data import is arranged using the Celery task broker. Each file for import or export is sent to the task broker, which creates a list of such files and, for each of them, determines the process performing the import or export procedure. This approach to organizing data import and export enables horizontal scaling of this microservice. E-commerce systems, as a rule, have a complex data storage structure, complex personalization logic, and huge amounts of data. Therefore, the in-memory database Redis is suitable for scaling the catalog, as it stores data in memory in non-relational form. The implementation of a log monitoring system enables identifying errors in the system. Automated monitoring of projects and errors allows responding immediately: renewing projects or, when possible, fixing errors. Similarly, monitoring is used for integration processes with external programs and services. The Jenkins system could be used to ensure continuous integration of the e-commerce system with external software. An example is integration with the accounting system to display real-time data on the availability and stock of goods. Another example is transferring information about orders from the e-commerce system to the accounting system for document processing, updating stock amounts, etc. Scaling is implemented by containerization with Docker technology. This system allows quickly setting up a server environment for web applications and controlling resources. The approach has a significant advantage compared to virtualization. Virtual computers are created with software to perform some operations; as each virtual computer requires a separate operating system, which runs on the server using its resources, the efficiency of using the server's resources is low. Containerization, however, allows placing programs and processes in a separate container, reserving a certain amount of resources for it, managing its performance, accessibility, and processes, transferring the container among servers, performing monitoring, and so on. The Docker Swarm system is used for orchestration, that is, management of the containerization process. This system allows creating clusters (sets of servers) and controlling their performance. The software is distributed over these clusters: the master application, the database, and the microservices, each element having its own cluster. The Docker Swarm orchestration system provides monitoring and management of the servers. When a cluster is unavailable, the system relocates containers to other servers and resumes their operation, thus ensuring the system's availability and the continuity of its performance. Moreover, the orchestration system allows online replication of data. Docker Swarm routes traffic in such a way that requests to read data go to the Slave database and requests to write data go to the Master database (a minimal application-level sketch of the same read/write split is given below).
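The same read/write split can also be expressed at the application level. The following is a minimal sketch of a Django database router, assuming that database aliases "master" and "slave" are defined in the project settings; the class and alias names are illustrative and not taken from the paper, which describes the routing at the Docker Swarm level.

```python
# Minimal sketch of application-level read/write splitting for a Django project
# whose settings define two database aliases, "master" and "slave"
# (alias names are illustrative, not taken from the paper).
class ReadWriteRouter:
    """Send read queries to the replica and write queries to the primary."""

    def db_for_read(self, model, **hints):
        # SELECT-type queries go to the replica ("Slave") database.
        return "slave"

    def db_for_write(self, model, **hints):
        # INSERT/UPDATE/DELETE queries go to the primary ("Master") database.
        return "master"

    def allow_relation(self, obj1, obj2, **hints):
        # Both aliases hold replicas of the same data, so relations are allowed.
        return True

    def allow_migrate(self, db, app_label, model_name=None, **hints):
        # Apply schema migrations only on the primary.
        return db == "master"

# In settings.py:  DATABASE_ROUTERS = ["path.to.ReadWriteRouter"]
```

Infrastructure-level routing, as described above for Docker Swarm, and application-level routing are complementary; the router simply makes the split explicit in the code.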
At the same time, the Master database constantly transfers all updates to the Slave databases. Due to regular replication in the cluster, there is always a stable version of the database from which all the data can be restored. The specific features of the suggested architecture for e-commerce solutions are relevant for the development of mobile and web applications. It is reasonable to use such principles to develop high-load systems, or systems with a potential need for scaling. A probable development of this research could be further improvement of the architecture with the aim of increasing the speed and permissible loads.
Conclusions
The paper analyzes the approaches to designing high-load systems, their advantages, and disadvantages. It is suggested to use a hybrid method for scaling, which is based on combining two approaches: microservices and monolithic. The microservices method could be used for high-loaded parts of the system that require scaling, while the monolithic method could be applied to non-loaded parts. It is found that the usually high-loaded parts of the system are the catalog, data buses for data import, API support for interaction with mobile devices and other services, and services to communicate with users. These are the parts of e-commerce systems requiring a microservices approach to develop the architecture. The paper provides an analysis of approaches to database scaling and the organization of data replication. Scaling can be implemented with the tools of the containerization technology Docker, and the management of the containerization process could be implemented using Docker Swarm. The routing system distributes traffic from the application and sends all reading requests to the Slave database and all writing requests to the Master database. The information in the databases is constantly updated due to regular replication. As a result, the cluster always has a stable version of the database from which all the data can be restored. The implementation of the suggested approach to designing high-load systems architecture, including e-commerce systems, enables developing a system which can be easily scaled up at any time. At the same time, the system can be improved and updated.
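As an illustration of the data-import bus described in the architecture above, the following is a minimal Celery sketch; the broker URL, task, and helper names are illustrative and not taken from the paper. Each file becomes an independent task, so the import/export microservice can be scaled horizontally simply by adding worker nodes.

```python
# Minimal sketch of the described import/export flow, assuming Celery with a
# Redis broker; all names below are illustrative placeholders.
from celery import Celery

app = Celery("importer", broker="redis://localhost:6379/0")

def load_into_catalog(path: str) -> None:
    # Placeholder for the actual import logic (parsing the file, writing to the DB).
    print(f"importing {path}")

def export_from_catalog(path: str) -> None:
    # Placeholder for the actual export logic.
    print(f"exporting {path}")

@app.task
def process_file(path: str, direction: str) -> None:
    """Import or export a single file; each file is an independent task."""
    if direction == "import":
        load_into_catalog(path)
    else:
        export_from_catalog(path)

def enqueue_files(paths: list[str], direction: str = "import") -> None:
    # The broker distributes the tasks across workers, so throughput grows
    # with the number of worker nodes (horizontal scaling).
    for path in paths:
        process_file.delay(path, direction)
```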
4,193
2022-01-01T00:00:00.000
[ "Computer Science" ]
A story of (non)compliance, bias, and conspiracies: How Google and Yandex represented Smart Voting during the 2021 parliamentary elections in Russia On 3 September 2021, the Russian court forbade Google and Yandex to display search results for "Smart Voting," the query referring to a tactical voting project by the jailed Russian opposition leader Alexei Navalny. To examine whether the two search engines complied with the court order, we collected top search outputs for the query from Google and Yandex. Our analysis demonstrates the lack of compliance from both engines; however, while Google continued prioritizing outputs related to the opposition's web resources, Yandex removed links to them and, in some cases, promoted conspiratorial claims aligning with the Russian authorities' anti-Western narrative.
• We examined how the "Smart Voting" query was represented by Google and Yandex, the two leading search engines in Russia, on the eve of the Russian parliamentary elections in September 2021.
• Despite being ordered by the Russian court not to display results for "Smart Voting," both search engines kept retrieving results for the query. However, Yandex did remove links to the opposition's web resources dealing with Smart Voting and, in some cases, prioritized conspiratorial claims aligning with the anti-Western narrative of the Russian authorities. This finding corroborates earlier research which suggested that Yandex is susceptible to governmental pressure and highlights the possibility of the corporation's algorithms enabling a less pervasive form of censorship in favor of the Kremlin.
• Our observations stress the importance of scrutinizing how platforms implement government-driven content moderation, in particular in authoritarian contexts where declared anti-disinformation efforts can facilitate censorship. The presence of political bias in Google and Yandex outputs, together with conspiratorial information in the case of Yandex, also raises concerns about platform-wide filter bubbles and the current state of information freedom in Russia. This stresses the importance of mechanisms to monitor how platforms' algorithms distribute (politically) contentious content and how this process is affected by state regulation and other forms of political pressure.
Implications
The Smart Voting project was initiated by the currently jailed Russian opposition leader Alexei Navalny and his associates (Team Navalny) in 2018. Its aim is to undermine the political monopoly of the ruling United Russia party by preventing the dispersion of opposition votes across multiple candidates through voter coordination. The effectiveness of this strategy has been demonstrated, for instance, during the 2019 local elections in Saint Petersburg (Golosov & Turchenko, 2021). Via a dedicated website and a mobile app, Smart Voting shows voters which alternative candidate is most likely to defeat the candidate of the ruling party in their polling district. In the lead-up to the parliamentary elections (September 17-19, 2021), the Russian authorities sought to limit the effect of Smart Voting: access to the app was restricted by the Russian telecommunications regulator Roskomnadzor on August 23, and the Smart Voting website was blocked on September 7. The Russian authorities also put pressure on Western and domestic online intermediaries to undermine the effectiveness of Smart Voting. Google and Apple complied with a government request to remove the Smart Voting app from their app stores (Lokot & Wijermars, 2021).
At the same time, the Moscow arbitration court requested the two leading search engines in Russia, Yandex and Google, to remove Smart Voting from their search results. The request was presented as an interim measure in a case filed by the Russian company Woolintertrade, which had registered Smart Voting as its own trademark in July. Specifically, the court ruled to "Prohibit [Google/Yandex] the use of the 'Smart Voting' designation in search results of a search engine owned by the defendant [Google/Yandex] as one of the search keywords" (Elektronnoe pravosudie, 2021a, p. 1; 2021b, p. 1). How exactly the search engines should have implemented this order was not specified. The growing pressure on both Google and Yandex ahead of the 2021 elections highlights the increasing importance the Russian government attributes to expanding its influence over online intermediaries, including search engines. While some years ago the Russian authorities aimed to increase their control over online information distribution by (unsuccessfully) trying to create a state-controlled and a state-owned search engine (Sanovich et al., 2018), by now their strategy has shifted towards putting pressure on existing platforms to make them comply with the regime's demands. In recent years, Russia's capacity to exert control over online information distribution has been strengthened through increasingly restrictive Internet regulation and enhanced state control over internet infrastructures (Ermoshina & Musiani, 2017; Sivetc, 2021; Stadnik, 2021). Since Yandex is a Russian domestic corporation, it is generally thought to be more responsive to government pressure (Daucé & Loveluck, 2021), while Google is seen by many Russian NGOs as a protector of civil liberties (Bronnikova & Zaytseva, 2021). Compared with Google, Yandex was more intensively targeted by Russian regulatory mechanisms (Wijermars, 2021) and demonstrated more politically biased performance during periods of political contention (Kravets & Toepfl, 2021). However, the Russian authorities also put increasing pressure on Google, as demonstrated by the growing number of requests for removing results from its search, particularly since the second half of 2020 (Figure 1).
Figure 1. Number of Russian government removal requests for Google (web search). The graph is created using data from Google's Transparency Report (Google, 2021).
Across all Google services, Russia is responsible for the highest number of removal requests (Google, 2021). While more than 90% of these requests are related to copyright issues, it is impossible to estimate what share of copyright requests might be politics-related. Furthermore, Google is increasingly complying with the Russian government's requests (Figure 2). This trend seems to be specific to Russia: in the case of Turkey, for example, Google's compliance rate is around 40%, while for China it is close to 0% (Google, 2021). Given the limited information, it is difficult to ascertain whether the high compliance rate indicates a change in Google's treatment of Russian removal requests (i.e., becoming more likely to comply with them) or a qualitative change in these requests (e.g., a higher proportion of evidently illegal content which is requested to be removed).
Figure 2. Share of content removed by Google in response to the Russian government's requests (all services including web search). The graph is created using data from Google's Transparency Report (Google, 2021).
Note that the beginning of 2019 is the earliest period for which data is available in the report. Against this backdrop of growing governmental pressure, we looked at how Google and Yandex dealt with the above-mentioned court order and found that both search engines continued to display results for the "Smart Voting" query. However, Yandex did limit access to the opposition's web resources dealing with Smart Voting, which, presumably, was the main aim of the politically motivated court case. By contrast, Google did not remove Smart Voting from its search results and continued to link to Team Navalny resources. These differences in (non)compliance were amplified by political bias in the outputs, with Google outputs including more pro-Smart Voting content (and vice versa for Yandex), as well as Yandex tending to promote conspiratorial claims about Smart Voting coming from the Russian authorities (e.g., it being a project of the Pentagon aimed at harming the Russian people) and Google usually omitting such claims from its top results. There are several implications of our observations. First, the example of Yandex illustrates the importance of scrutinizing how compliance with government removal requests, as well as government-driven content moderation, is implemented. Particularly in the case of "informational autocracies" (Guriev & Treisman, 2020, p. 1) in restricted media systems, such as Russia, where the preservation of the regime is largely contingent on successful censorship, the selective removal of links from search results can be used to facilitate the dissemination of misinformation and anti-Western conspiracy theories. It can also serve as a form of "masked censorship" (Makhortykh & Bastian, 2020, p. 14), namely a less intrusive way of filtering out undesired information. The potentially damaging effects extend far beyond the removal of the links themselves, as information suppression can erode democratization processes (Stoycheff et al., 2018). Under these circumstances, the decisions of online intermediaries concerning the implementation of removal requests play a key role in mediating the impact of censorship. Depending on these decisions, intermediaries might limit the visibility of outlets spreading extremist or false claims, but also facilitate state censorship by integrating it with "private censorship" (i.e., censorship efforts by private companies) (see Beazer et al., 2021; Crabtree et al., 2015) or by amplifying self-censorship practices (e.g., of Russian journalists in the aftermath of the annexation of Crimea) (Schimpfössl et al., 2020; Zeveleva, 2020). This raises questions about the feasibility of one-size-fits-all regulation of intermediaries' activities that would be applicable to their operations across democratic and non-democratic contexts, especially in the case of countries with long-standing censorship traditions (e.g., Russia and many other post-Communist states) (Ognyanova, 2019). Second, our observations demonstrate the possibility of profound information inequalities forming between the users of Russia's two largest search engines. While the mere presence of such inequalities is not unexpected, especially in the context of the increasingly fragmented and polarized Eastern European digital ecosystems (Urman, 2019; Urman & Makhortykh, 2021a), the empirical evidence of their amplification by algorithmic mechanisms is concerning.
Considering that these inequalities in the case of Smart Voting (and, potentially, other political matters in the region) are subject to political bias, such amplification may encase Yandex and Google users in platform-wide filter bubbles (Pariser, 2011), in particular as the Russian search market is roughly split between Google and Yandex (Statscounter, 2021). Third, while one might argue that, in the case of Smart Voting, Google search algorithms enabled a less censored selection of information, future cases may be less clear-cut. In particular, specific features of non-free media systems raise concerns about the universality of source prioritization principles. While both Google and Yandex tend to prioritize journalistic media (Zavadski & Toepfl, 2019), our observations demonstrate that their selection of specific outlets is subject to political bias, with the former prioritizing less state-dependent outlets and the latter giving priority to more Kremlin-oriented media. It is less clear what Google will do if the number of independent media in Russia continues to decrease, considering the ongoing campaign against journalistic "foreign agents" (Roth, 2021, para 1). Will it shift towards prioritizing the state-sponsored media, following the general policies of the company, which are also reflected in its algorithmic systems? Or will these policies be adapted for the case of Russia (and, potentially, other non-free media systems)? To answer these questions, it is important to keep monitoring both how platform algorithms distribute contentious political content and how such distribution is affected by regulation in both Western and non-Western contexts. Together, our findings provide empirical evidence that may lay the ground for a public debate on how and to what degree online intermediaries collaborate with authoritarian governments in enabling censorship. The importance of this debate is amplified by the ongoing transition from traditional violence-based autocracies to informational autocracies. Informational autocracies rely on the regime's ability to convince the public of its competency (Guriev & Treisman, 2020), a task increasingly fulfilled with the help of online intermediaries. Under these circumstances, it is essential to define what behavior is ethically desirable for corporations working in non-democratic contexts. For instance, should Google collaborate with authoritarian states, as it seems to increasingly do in Russia? Or should it not comply, as in the case of Smart Voting, and risk fines and, potentially, withdrawal from the regional market, as in the case of China? Answering these questions can be the first step for building up pressure from the public and policymakers to enforce the desirable behavior from online information intermediaries. Similarly, it can help determine how intermediaries coming from authoritarian states and, potentially, serving as integral components of the informational autocracy shall be treated. Is there a point, for instance, at which the Yandex search engine should be viewed as another information asset of the Kremlin, similar to Russia Today, and approached accordingly?
Findings
Finding 1: Google and Yandex kept displaying results in response to the "Smart Voting" query, despite being ordered to stop doing so.
Despite being ordered to stop displaying search results in response to the "Smart Voting" query, neither Google nor Yandex stopped doing so on the eve of the Russian elections.
Both search engines kept providing the results throughout the whole period of data collection (September 10-20). This non-compliance can be primarily attributed to the technical implausibility of the initial request. Unlike the requests for Google and Apple to remove the Smart Voting app from their app stores (Lokot & Wijermars, 2021), which occurred around the same time, the technical realization of not displaying the results for a particular query is a non-trivial task. While modification of search outputs is not impossible, it usually involves filtering out individual websites from the outputs in general or for specific, but more narrowly defined, queries (e.g., as in the case of the implementation of the right to be forgotten; Mangini et al., 2020). The imprecision of the request was also noted by Yandex representatives, who remarked that "it is unclear what exactly we are requested to do [by the court] and how we can do it" (Interfax, 2021, para 2). Google did not provide any commentary on the decision, but it can be presumed that it also found the court decision difficult to implement. The fact that Yandex stopped prioritizing the links to the Team Navalny resources can be viewed as a form of compliance (albeit more with the legal requirement to stop displaying links to blacklisted websites than with the interim measure from the trademark case).
Finding 2: Google and Yandex prioritized different types of sources.
Contrary to earlier studies which did not observe substantive differences in the selection of sources by Yandex and Google (Zavadski & Toepfl, 2017), we discovered substantial discrepancies in the content prioritized by the two engines, with the exception of journalistic media, which dominated the outputs in both cases (Figure 3). Google consistently returned links to the Team Navalny resources related to Smart Voting, including the main Smart Voting website that the Russian government had blocked. Yandex did not return any outputs affiliated with Team Navalny, with the exception of September 13, when a link to the Smart Voting website appeared in the results once. We suggest that this exceptional behavior can be attributed to the output randomization used by search engines to maximize user engagement by testing different ways of ranking search results. Sometimes referred to as the "Google Dance" (Battelle, 2005), this phenomenon remains largely under-studied (for some exceptions, see Haim et al., 2016), but it can be one of the factors influencing changes in the visibility of individual resources. We also observed differences in the prevalence of blogs and social media in search outputs. On Yandex, between 20% and 40% of top links were to blogs, while on Google only a few links to blogs appeared during the selected period. For social media the situation was reversed: Google returned more links to platforms such as Facebook and YouTube than Yandex. There were also differences in the affiliation of blogs and social media accounts. Whereas the blogs, in the case of Yandex, usually belonged to anonymous individuals who often promoted pro-Kremlin narratives, the social media pages prioritized by Google usually belonged to Team Navalny. The share of links to reference websites (e.g., Wikipedia) was similar across the two engines.
Finding 3: Content prioritized by Google and Yandex was politically biased.
Figure 3. Shares of links to content of different types returned by Google (above) and Yandex (below), by date.
The numbers on the bar segments denote the exact share of links of a specific type for all rounds of data collection on a particular day.
We found major differences in the share of content with different stances towards Smart Voting (Figure 4). Both search engines returned a similar share of links to neutral content (hovering around 50% of all links), with Google having a higher share of neutral content at the beginning of the data collection period than Yandex. However, the distribution of non-neutral content was very different: in the case of Google, almost all non-neutral content had a positive stance towards Smart Voting, while on Yandex the share of pro-Smart Voting content never went above 30% and was consistently lower than the share of anti-Smart Voting content. Thus, we observe a political bias in the outputs of the two search engines. Among the positive and negative assessments of Smart Voting, a broad variety of perspectives was found. In the case of anti-Smart Voting content prioritized by Yandex, we observed claims that Smart Voting is a form of manipulation aiming to "deprive the citizens of their right to vote" (Babin, 2020, para 1) or a security threat to personal data (REN TV, 2021a), as well as arguments presenting Smart Voting as ineffective or morally questionable (e.g., because it often recommended voting for the Communist Party) (Novye Izvestiia, 2021). In the case of Google, a positive stance was often expressed in the form of independent media either hosting op-eds that endorsed Smart Voting or linking to the Smart Voting candidate lists.
Figure 5. Shares of content with conspiratorial claims about Smart Voting returned by Google (above) and Yandex (below), by date. The numbers on the bar segments denote the exact share of links of a specific type for all rounds of data collection on a particular day. For example, out of all Google results collected on September 11, 75% linked to web pages without mentions of conspiracy theories, 16% linked to pages with conspiracy-promoting content, and 9% linked to pages that mentioned conspiracies without promoting or debunking them.
In some cases, search outputs also included conspiratorial claims related to Smart Voting (Figure 5), such as that it is aiming to harm Russian voters (Lenta, 2021) or is a U.S. Pentagon project designed to interfere in the Russian elections (REN TV, 2021b). Often, these claims originated from Russian officials, such as the spokesperson of the Ministry of Foreign Affairs, Maria Zakharova, or the spokesperson of the Russian President, Dmitrii Peskov, which also resulted in their being reiterated by authoritative sources (e.g., established journalistic media). While both Google and Yandex included such claims in their top results, Google tended to prioritize content without mentions of conspiratorial information (e.g., stories about Smart Voting from the BBC or RBC). By contrast, Yandex gave more visibility to items promoting conspiratorial claims, which usually came from pro-Kremlin outlets (e.g., TASS or REN TV).
Data collection
We conducted an agent-based audit of the first page of search results of Yandex and Google for the query "умное голосование" (Smart Voting) (for further details on the chosen auditing approach, which seeks to simulate human browsing behavior, see Appendix, Section A). This specific query was chosen because it is the term to which the court order applies.
While users may have also used other search terms, examining the content retrieved for such related (e.g., "SmartVote app") or derived queries (e.g., "Smart Voting Navalny") is beyond the scope of the court order and therefore of this study. Data collection started one week before the start of the voting period (September 10) and ended on the last voting day (September 20). To increase the robustness of our results, we aimed to have four rounds of data collection each day at the same time; however, due to technical issues (e.g., a change in the underlying HTML code of Google's search page), on September 14 we had only one round of data collection, on September 19 two, and on September 16 three rounds instead of four. Notwithstanding this and a few other limitations (see Appendix, Section B), we suggest that our results are robust enough to infer how information about Smart Voting has been distributed by the two engines.
Data analysis
We extracted all hyperlinks gathered from the organic results on the first page of Yandex and Google search results. Altogether, we collected 731 hyperlinks. The odd number is due to search engines sometimes providing fewer than ten organic results on their first page because of the presence of other elements (e.g., "Knowledge Panels" or "People also ask"). Among the 731 results, there were 119 unique hyperlinks. We manually coded each hyperlinked page into the following categories:
• Whether the page contains information related to Smart Voting: a binary variable (yes/no).
Then, for pages containing information related to Smart Voting, we used two additional coding categories:
• Whether the page content is politically biased: 1) arguing in favor of Smart Voting, 2) arguing against Smart Voting, 3) neutral regarding Smart Voting.
• Whether the page contains statements about Smart Voting that can be classified as conspiratorial. For example, statements that present Smart Voting as part of a secret agenda pursued by domestic (e.g., the Kremlin) or foreign (e.g., the US) actors to interfere in the elections and, potentially, harm the Russian people were coded as conspiratorial: 1) the page promotes such statements, 2) the page debunks them, 3) the page mentions such statements without adopting a clear stance towards them, or 4) the page does not mention any conspiratorial information.
The coding was performed independently by two coders, both fluent in Russian and knowledgeable of the Russian political context (on intercoder reliability, see Appendix, Section C). Finally, we matched the links collected for each of the search engines with their classifications and computed the shares of different types of content distributed by each search engine (a minimal sketch of this aggregation step is given after the appendix sections below).
Competing interests
All of the authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Ethics
Because our research did not involve data collection from human users or any interaction with human users, it was exempt from ethical review under the guidelines of the Ethics Committee of the Faculty of Business, Economics and Informatics, University of Zurich. Further, since we ran only a few agents that did not perform actions such as clicking on specific items, we believe that our study did not interfere with the search algorithms in any potentially harmful way, affect the engagement statistics for individual results, or put a heavy load on the search engines that could potentially affect their functionality.
Limitations The conducted research has several limitations. The use of remote vantage points (e.g., VPNs) has been criticized as unreliable (e.g., agents not actually being located in the advertised countries) (see Weinberg et al., 2018). While we verified that the vantage point was, indeed, located in Saint Petersburg, future studies can benefit from alternative approaches, such as recruiting crowdworkers from the respective region to run either the search queries directly or the scripts powering the agents. An additional benefit of using crowdworkers is the possibility of examining actual search behavior (e.g., by asking crowdworkers not to use a pre-fixed set of queries but to come up with their own search suggestions) and its interactions with search censorship, and not just the behavior of engines in response to a query. Another limitation concerns the implementation of the auditing method used to conduct the study. The current study relies on a single search query and uses agents deployed via a single browser. While we assume that Google and Yandex did not censor other queries related to Smart Voting because they were not explicitly requested to do so, it would be worthwhile to empirically check the validity of this assumption. Similarly, further research can examine whether there are cross-browser differences in retrieved results, a phenomenon observed by some earlier studies (e.g., Urman et al., 2021). C. Intercoder reliability To evaluate the intercoder reliability, we compared the results produced by the two coders in the course of the original coding. There were no disagreements between the coders with regard to whether a page was related to Smart Voting or with regard to the page types. For the last two categories - political bias and conspiratorial information - the coders agreed in 80% (24 disagreements out of 119 hyperlinks) and 78% (26 disagreements out of 119 hyperlinks) of cases, respectively. These disagreements were resolved through consensus coding.
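As a rough illustration of the two computations described in the Data analysis section and in Appendix C (computing per-engine shares of content types and the percent agreement between the two coders), here is a minimal Python sketch. The record structure and field names (`engine`, `date`, `stance`) are assumed for illustration and are not the authors' actual data format.

```python
from collections import Counter, defaultdict

def percent_agreement(labels_a, labels_b):
    """Share of items on which two coders assigned the same label."""
    assert len(labels_a) == len(labels_b)
    return sum(a == b for a, b in zip(labels_a, labels_b)) / len(labels_a)

def stance_shares(records):
    """Share of each stance per (engine, date), over all rounds of collection."""
    counts = defaultdict(Counter)
    for r in records:                       # one record per collected hyperlink
        counts[(r["engine"], r["date"])][r["stance"]] += 1
    return {key: {stance: n / sum(c.values()) for stance, n in c.items()}
            for key, c in counts.items()}

# Hypothetical example: 119 unique links coded for political bias by two coders.
coder1 = ["neutral"] * 95 + ["pro"] * 24
coder2 = ["neutral"] * 95 + ["anti"] * 24   # 24 disagreements -> ~80% agreement
print(f"agreement: {percent_agreement(coder1, coder2):.0%}")

# Hypothetical coded search results (731 records in the actual study).
records = [
    {"engine": "google", "date": "2021-09-11", "stance": "neutral"},
    {"engine": "google", "date": "2021-09-11", "stance": "pro"},
    {"engine": "yandex", "date": "2021-09-11", "stance": "anti"},
]
print(stance_shares(records))
```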
5,637.8
2022-03-07T00:00:00.000
[ "Political Science", "Computer Science" ]
Thermoluminescence characteristics of different phase transitions from nanocrystalline alumina Nanocrystalline boehmite material was synthesized using the hydrothermal method. Different annealing temperatures have been used to transform boehmite into different alumina phases to study the effect of different phase transitions on the thermoluminescence properties of alumina. XRD analysis was carried out to investigate the crystal structure of the different alumina phases. The thermoluminescence glow curves for different alumina phases showed different structures; however, the sensitivity was almost constant for all the phase transitions of alumina over the applied dose ranging from 0.55 to 330 Gy. Introduction Thermoluminescence (TL) dosimetry is considered one of the most used techniques for studying the interaction of radiation with the matter for various applications [1]. Although numerous natural materials [2][3][4][5][6] are employed in thermoluminescence dosimetry, developing and synthesizing novel materials remains an ongoing research topic [7][8][9]. There is a great demand for nanomaterials in all applications, especially in luminescence applications, because of their advantages in optical and electronic properties [10][11][12]. Aluminium oxide (Al 2 O 3 ), particularly on the nanoscale, is a ceramic material utilized in various applications. It can be used as a catalyst supporting manufacturing electronic equipment, as a biological material substitute, or as a radiation dosimeter [13][14][15][16]. Nano alumina can be a good candidate for luminescence materials because of some intrinsic defects in the crystalline lattice, such as OH, non-bridging oxygen hole centers, V-type centers and oxygen vacancies. Such defects are responsible for trapping and storing charge carriers under irradiation exposure and for electron-hole recombination at thermal stimulation [17]. Several synthesis processes and precursors have been used to synthesize nano alumina, with boehmite (AlOOH) as one of the most popular applications. There are more than fifteen transition phases of alumina, such as χ, κ, γ, δ, η, and θ, as well as the most stable α-Al 2 O 3 phase produced after heat treatment of alumina by high temperature [13,18,19]. The transformation from one phase of alumina to another until it reaches α-Al 2 O 3 phase requires a reconstruction of the crystal structure of alumina [20,21] Any change in the chemical and crystal structure, particle size, shape, and morphology of luminescence material may change the thermoluminescence characteristics of this material, which would cause an error in the estimation of the radiation dose [1,22,23]. Previous research showed that a modification could occur in the glow curve structures due to phase transitions. For example, Sahare et al. investigated the TL response of various phases of K 2 Ca 2 (SO 4 ) 3 :Cu. According to their findings, variations in the TL glow curve structures may be attributed to changes in the phase transition of the nanophosphor materials [22]. Furthermore, Rani and Sahare discovered that the structures of the TL glow curves changed due to the phase transition of aluminium oxide when annealed at selected temperatures [13]. As a result, further advanced studies are required to determine the change in thermoluminescence characteristics of alumina caused by heat treatment at selected temperatures. 
The current work studied the thermoluminescence properties of some transition aluminas obtained from the thermal treatment of nanocrystalline boehmite at selected temperatures. Preparation The boehmite was synthesized as follows: amounts of 0.05 mol of AlCl 3 ·6H 2 O and 0.1 mol of CO(NH 2 ) 2 were dissolved in distilled water, then placed on a magnetic stirrer for 0.5 h to dissolve completely. After that, the solution was autoclaved for 3 h at 453 K. The resulting white precipitate was rinsed many times in distilled water before being heated in an oven at 353 K for 20 h [10]. To transform the boehmite powder into different alumina phases, it was annealed at various temperatures (773 K-973 K-1273 K-1473 K-1673 K) for 3 h, then cooled to room temperature. The expected phase transitions of alumina were obtained after annealing the boehmite at the selected annealing temperatures (Fig. 1). Characterization The different phase transitions of Al 2 O 3 were determined using X-ray powder diffraction (Shimadzu XD-DI diffractometer, Cu K α1 radiation) operated at a voltage of 40 kV and a current of 30 mA. The crystallite size was estimated using the Scherrer equation, crystallite size = kλ/(β cos θ) (1), where k is the crystallite shape factor (a typical value is 0.9), λ is the wavelength (1.54056 Å for Cu Kα), β is the full width at half maximum (FWHM) in radians, and θ is the peak position in radians. The TL-glow curves for distinct Al 2 O 3 phases were recorded using a Lexsyg Smart TL/OSL luminescence reader in the Nuclear Radiation Measurements Lab, Department of Physics, Faculty of Science, Ain Shams University. Lex Studio 2.0 operating software runs on a personal computer linked to the reader. The reader is additionally linked to a nitrogen source for cooling purposes. The reader has a 90 Sr/ 90 Y beta source with a maximum energy of 2.2 MeV and a dose rate of 110 mGy/s. Samples from the different alumina phases were irradiated with doses from 0.55 to 330 Gy from β-particles to study the dose-response range for the different alumina phases. Thermoluminescence glow curve analysis T m -T stop method The T m -T stop method was utilized to determine the number and locations of the overlapping peaks in the glow curves of the different alumina phases [26][27][28][29]. Samples from the alumina phases were annealed at 673 K for 30 min before being irradiated with 55 Gy from a β-source. After that, they were heated to a temperature of T stop (328 K) and then were cooled down to room temperature. Then, the samples corresponding to each alumina phase were heated to 623 K and the TL-glow curves were obtained. The previous process was repeated numerous times with the same dose and the same samples, which were heated to a slightly higher T stop each time, in steps of 5 K throughout T stop = 328-618 K. For each reading cycle, the temperature of the first intensity maximum recorded in the second heating of the cycle was taken as T m . At the end of all reading cycles, the different T stop and T m temperatures were obtained and plotted as T m versus T stop . Glow curve deconvolution (CGCD) After determining the number of the glow peaks composing the glow curves of the different phases of Al 2 O 3 experimentally using the T m -T stop technique, the computerized glow curve deconvolution method (CGCD) [30] was used to resolve the glow curves into the glow peaks all at once and determine the kinetic parameters.
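As an aside on the XRD characterization above, the Scherrer estimate in Eq. (1) can be applied directly to a measured diffraction peak; a minimal sketch, in which the quoted shape factor (0.9) and Cu Kα wavelength (1.54056 Å) are taken from the text and the example peak values are illustrative only:

```python
import math

def scherrer_size(fwhm_deg, two_theta_deg, k=0.9, wavelength_angstrom=1.54056):
    """Crystallite size (nm) from the Scherrer equation D = k*lambda / (beta*cos(theta)).

    fwhm_deg      -- full width at half maximum of the diffraction peak, in degrees 2-theta
    two_theta_deg -- peak position, in degrees 2-theta
    """
    beta = math.radians(fwhm_deg)            # FWHM in radians
    theta = math.radians(two_theta_deg / 2)  # Bragg angle in radians
    d_angstrom = k * wavelength_angstrom / (beta * math.cos(theta))
    return d_angstrom / 10.0                 # 1 nm = 10 angstrom

# Illustrative values (not taken from the paper):
print(f"{scherrer_size(fwhm_deg=0.25, two_theta_deg=35.1):.1f} nm")
```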
The MATLAB software employed to analyze the glow curves is based on the Nelder-Mead non-linear optimization method and employs the general order kinetics deconvolution equation given by where I m is the maximum intensity, and k is the Boltzmann constant. The functions F(T, E) and F T m , E are defined as (2) The frequency factor (S) can be found exploiting the condition at the maximum intensity and is given by where β is the heating rate. The lifetime of each trap can be determined the employing the equation given by where T is the storage temperature (about 300 K). X-ray diffraction (XRD) results The XRD analysis for the boehmite and obtained alumina phases after annealing of the boehmite at selected annealing temperatures are shown in Fig. 2. The results confirmed the complete transformation of the annealed samples at temperatures 1473 K and 1673 K into α-alumina. The crystallite Table 1 that as the annealing temperature increases, the particle size decreases and then increases again with an increase in the annealing temperature because the gaps between the chains and the crystal defects are gradually reduced and finally disappeared, resulting in complete crystallization of α-alumina, these results agree with those result obtained by Rani and Sahare [13] and Takayuki Tsukada et al. [20] Thermoluminescence glow curve structure The glow curve structure is one of the most important properties that should be studied to identify the change caused by annealing the alumina samples at selected temperatures. Thus, samples from various prepared alumina phases were irradiated at 330 Gy from β-particle, then the relationship between TL-intensities and temperatures were represented graphically (Fig. 3). The glow curves of the prepared alumina samples have almost the same shape as they contain two main glow peaks, but the location of these peaks and the area under each curve is different from one type to another (Fig. 3). The positions of the two main peaks were at 408 K and 539 K for Al 2 O 3 annealed at 773 K; 396 K and 518 K for Al 2 O 3 annealed at 973 K; 390 K and 500 K for Al 2 O 3 annealed at 1273 K and 454 K, and 583 K for Al 2 O 3 annealed at 1473 K and 1673 K. In addition, the area under the glow curves of the annealed alumina samples at different temperatures increased gradually with the increase of the annealing temperature for the annealed samples at temperatures (773 K-973 K-1273 K). After that, as the annealing temperature increased more than 1273 K, the area under the glow curves decreased. The increased annealing temperature from 773 up to 1273 K may help to improve defect and ion diffusion and eliminate intrinsic tensile stresses in the crystalline lattice. Hence, the area under glow curves increases. Then, by increasing the annealing temperature more than 1273 K, the electron traps and recombination centers may be damaged or increase the competition, leading to a decrease and e 1673 K after exposed with 330 Gy from β-particle in the area under the curves as previously observed in other works [32]. Thermoluminescence glow curve analysis The glow curve is a superposition of the glow peaks, which corresponds to the de-localized states between the valence and conduction bands. To resolve the glow curve to its composing peaks and find the associated kinetic parameters, the T m -T stop method was employed (Fig. 4). 
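The paper's exact deconvolution expression (with the functions F(T, E) and F(T_m, E)) is not reproduced in the text above, so the sketch below instead uses a widely used analytical general-order peak shape (the Kitis et al. form) together with SciPy's Nelder-Mead optimizer, only to illustrate how a single glow peak can be fitted, how the FOM is evaluated, and how a frequency factor and trap lifetime follow from the fitted parameters. All numerical values, including the assumed heating rate, are synthetic.

```python
import numpy as np
from scipy.optimize import minimize

K_B = 8.617e-5          # Boltzmann constant in eV/K
BETA_RATE = 1.0         # heating rate in K/s (assumed value for this illustration)

def go_peak(T, Im, E, Tm, b):
    """General-order TL glow peak (analytical form of Kitis et al., 1998).

    Im -- maximum intensity, E -- activation energy (eV),
    Tm -- temperature of the maximum (K), b -- kinetic order (1 < b <= 2).
    """
    x = (E / (K_B * T)) * (T - Tm) / Tm
    Zm = 1.0 + (b - 1.0) * 2.0 * K_B * Tm / E
    bracket = (b - 1.0) * (1.0 - 2.0 * K_B * T / E) * (T / Tm) ** 2 * np.exp(x) + Zm
    return Im * b ** (b / (b - 1.0)) * np.exp(x) * bracket ** (-b / (b - 1.0))

def fom(params, T, I_meas):
    """Figure of merit (%) of the fit; large penalty outside a sensible parameter region."""
    Im, E, Tm, b = params
    if Im <= 0 or not (0.3 < E < 3.0) or not (1.0 < b <= 2.5):
        return 1e9
    I_fit = go_peak(T, Im, E, Tm, b)
    return 100.0 * np.sum(np.abs(I_meas - I_fit)) / np.sum(I_fit)

# Synthetic single-peak "measurement", for illustration only.
T = np.linspace(330.0, 620.0, 300)
rng = np.random.default_rng(0)
I_meas = go_peak(T, Im=1000.0, E=1.0, Tm=410.0, b=1.6) + rng.normal(0.0, 5.0, T.size)

res = minimize(fom, x0=[900.0, 0.9, 405.0, 1.5], args=(T, I_meas), method="Nelder-Mead")
Im, E, Tm, b = res.x

# Frequency factor from the condition at the peak maximum, and trap lifetime at 300 K.
s = BETA_RATE * E / (K_B * Tm**2) * np.exp(E / (K_B * Tm)) / (1.0 + (b - 1.0) * 2.0 * K_B * Tm / E)
tau = np.exp(E / (K_B * 300.0)) / s
print(f"E = {E:.2f} eV, Tm = {Tm:.0f} K, b = {b:.2f}, FOM = {res.fun:.2f} %, tau(300 K) ~ {tau:.2e} s")
```

In a full CGCD analysis the same idea is repeated with one such peak per plateau found by the T_m-T_stop method, and all peaks are fitted simultaneously.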
The relation between T m and T stop appeared as a staircase structure, and the number and location of the plateau regions represent the number and locations of the glow peaks that make up the glow curve (Fig. 4). Therefore, the glow curves for Al 2 O 3 annealed at 773 K and 973 K have seven expected peaks, while Al 2 O 3 annealed at 1273 K has nine expected peaks and Al 2 O 3 annealed at 1473 K and 1673 K have four expected peaks. The positions of the expected glow peaks for the different alumina phases are given (Table 2). Based on the expected number of glow peaks determined by the T m -T stop method, the computerized glow curve deconvolution (CGCD) technique was used to resolve the glow curves into the glow peaks all at once and determine the kinetic parameters (Fig. 5). It is clearly shown in Fig. 5 that the annealing of the alumina at the selected annealing temperatures causes a change in the glow curve structure, which may be attributed to the rearrangement of the available electronic energy levels. This change may be due to the phase transformation or to a change in the particle size that occurs with the change in the annealing temperatures [25]. The kinetic parameters are calculated using the CGCD method (Table 2). The quality of the fitting was tested by the Figure of Merit (FOM) criterion [33]; the FOM value is less than 2%, which indicates the goodness of the fit. Dose-response It is important to find the relationship between the light emitted from any material after irradiation and the applied doses. This relationship generally involves a linear function that allows this material to be used to determine an unknown irradiation dose. The Al 2 O 3 samples were irradiated at various β-doses ranging from 0.55 to 330 Gy for this analysis. The relationship between the irradiation doses and the given TL-intensities per mass of the investigated samples is clearly revealed (Fig. 6). The changes in the glow curves according to the different applied doses for the investigated Al 2 O 3 samples are shown in Fig. 7a-e. These figures show that increasing the irradiation dose increased the area under the glow curves without changing the position of the peaks. The TL-intensity is thought to occur as follows: After absorbing high-energy radiation, the F center loses an electron and becomes the F + center. The recombination of the electron with the F + center creates an excited F center (F * ), which, by thermal stimulation, decays into its ground state (3P transition to 3S) with photon emission. So, as the β-dose increases, the number of excited F * centres increases, which leads to an increase in the photons produced by the thermal stimulation. Thus, the area under the glow curves increases with the increase in the radiation dose [32,34,35]. A way to quantify the linearity of a material is the linearity index F(D) [31], given by F(D) = [f(D)/D] / [f(D 1 )/D 1 ], where f(D) and f(D 1 ) are the TL responses at doses "D" and "D 1 ", and D 1 is the normalization dose in the linear region. The linearity index is 1 within the linear range, > 1 within the supra-linear range, and < 1 within the sub-linear range. It was found that all the Al 2 O 3 samples annealed at 773 K, 973 K, 1273 K, 1473 K, and 1673 K exhibited a linear dose-response in the range from 0.55 Gy up to 330 Gy (Fig. 6). The characteristic glow curve of Al 2 O 3 (a) at 773 K, (b) at 973 K, (c) at 1273 K, (d) at 1473 K and (e) at 1673 K after being exposed to different β-doses is presented (Fig. 7).
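A minimal numerical illustration of the linearity index defined above; the dose and response values are made up for the example:

```python
def linearity_index(f_D, D, f_D1, D1):
    """F(D) = (f(D)/D) / (f(D1)/D1); 1 = linear, >1 = supra-linear, <1 = sub-linear."""
    return (f_D / D) / (f_D1 / D1)

# A response that doubles when the dose doubles gives F(D) = 1 (linear behaviour).
print(linearity_index(f_D=2.0e5, D=110.0, f_D1=1.0e5, D1=55.0))  # 1.0
```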
Fig. 6 Dose-response curves for the different phases of Al 2 O 3 after exposure to different doses from β-particles. For all alumina phases, the shape and position of the glow peaks do not change after exposure to different β-doses (Fig. 7). However, the area under all glow curves of the studied alumina phases increases with increasing β-doses. The linearity index as a function of the applied dose is shown (Fig. 8). Fig. 8 confirms the linear dose-response behaviour, as the linearity index values for all the phases were close to 1. The linearity behaviour of the samples under consideration is summarized (Table 3). Fig. 7 The characteristic glow curves of the boehmite samples annealed at (a) 773 K, (b) 973 K, (c) 1273 K, (d) 1473 K and (e) 1673 K, after exposure to different β-doses. Sensitivity Sensitivity is defined as the TL response per unit dose per unit mass of the dosimeter. To investigate this property for the different phases of Al 2 O 3 , the samples were irradiated at different doses of beta particles from 0.55 up to 330 Gy. The sensitivity was calculated at each dose. The sensitivity behaviour as a function of the applied dose for the investigated samples is revealed (Fig. 9). The sensitivity was almost constant for all the phases of Al 2 O 3 over the applied dose range from 0.55 to 330 Gy (Fig. 9). Conclusions In the present work, nanocrystalline boehmite was synthesized and annealed at selected temperatures (773 K-973 K-1273 K-1473 K-1673 K) to transform the boehmite into different alumina phases. The XRD results confirmed the complete transformation of the annealed alumina samples at temperatures of 1473 K and 1673 K into α-alumina. The thermoluminescence glow curves for the five alumina phases showed different structures, indicating the effect of the annealing temperature on the trap distribution in the material under investigation. The investigated samples showed a long-range linear dose-response, almost constant sensitivity as a function of the applied dose, and long lifetimes indicating signal stability. This makes the samples good candidates for dosimetric applications, especially the samples annealed at temperatures of 1473 K and 1673 K, due to the formation of the α-Al 2 O 3 phase, which is the most stable phase of Al 2 O 3 . Funding Open access funding provided by The Science, Technology & Innovation Funding Authority (STDF) in cooperation with The Egyptian Knowledge Bank (EKB). Fig. 9 The sensitivity for the different phases of Al 2 O 3 , after exposure to different doses from β-particles
3,774.4
2022-08-05T00:00:00.000
[ "Materials Science" ]
On some conjectures concerning critical independent sets of a graph Let $G$ be a simple graph with vertex set $V(G)$. A set $S\subseteq V(G)$ is independent if no two vertices from $S$ are adjacent. For $X\subseteq V(G)$, the difference of $X$ is $d(X) = |X|-|N(X)|$ and an independent set $A$ is critical if $d(A) = \max \{d(X): X\subseteq V(G) \text{ is an independent set}\}$ (possibly $A=\emptyset$). Let $\text{nucleus}(G)$ and $\text{diadem}(G)$ be the intersection and union, respectively, of all maximum size critical independent sets in $G$. In this paper, we will give two new characterizations of K\"{o}nig-Egerv\'{a}ry graphs involving $\text{nucleus}(G)$ and $\text{diadem}(G)$. We also prove a related lower bound for the independence number of a graph. This work answers several conjectures posed by Jarden, Levit, and Mandrescu. Introduction In this paper G is a simple graph with vertex set V (G), |V (G)| = n, and edge set E(G). The set of neighbors of a vertex v is N G (v) or simply N (v) if there is no possibility of ambiguity. If X ⊆ V (G), then the set of neighbors of X is N (X) = ∪ u∈X N (u), G[X] is the subgraph induced by X, and X c is the complement of the subset X. For sets A, B ⊆ V (G), we use A \ B to denote the vertices belonging to A but not B. For such disjoint A and B we let (A, B) denote the set of edges such that each edge is incident to both a vertex in A and a vertex in B. A matching M is a set of pairwise non-incident edges of G. A matching of maximum cardinality is a maximum matching and µ(G) is the cardinality of such a maximum matching. For a set A ⊆ V (G) and matching M , we say A is saturated by M if every vertex of A is incident to an edge in M . For two disjoint sets A, B ⊆ V (G), we say there is a matching M of A into B if M is a matching of G such that every edge of M belongs to (A, B) and each vertex of A is saturated. An M -alternating path is a path that alternates between edges in M and those not in M . An M -augmenting path is an M -alternating path which begins and ends with an edge not in M . A set S ⊆ V (G) is independent if no two vertices from S are adjacent. An independent set of maximum cardinality is a maximum independent set and α(G) is the cardinality of such a maximum independent set. For a graph G, let Ω(G) denote the family of all its maximum independent sets, let See [1,9,14] for background and properties of core(G) and corona(G). For a graph G and a set X Zhang [16] showed that max{d(X) : The set S ⊆ V (G) a critical independent set if S is both a critical set and independent. A critical independent set of maximum cardinality is called a maximum critical independent set. Note that for some graphs the empty set is the only critical independent set, for example odd cycles or complete graphs. See [2,7,8,16] for more background and properties of critical independent sets. Finding a maximum independent set is a well-known NP-hard problem. Zhang [16] first showed that a critical independent set can be found in polynomial time. Butenko and Trukhanov [2] showed that every critical independent set is contained in a maximum independent set, thereby directly connecting the problem of finding a critical independent set to that of finding a maximum independent set. For a graph G the inequality α(G)+µ(G) ≤ n always holds. A graph G is a König-Egerváry graph if α(G) + µ(G) = n. All bipartite graphs are König-Egerváry but there are non-bipartite graphs which are König-Egerváry as well, see Figure 2 for an example. 
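For small graphs, the quantities defined above (the difference d(X) = |X| - |N(X)|, critical independent sets, and the König-Egerváry property α(G) + μ(G) = n) can be checked directly by brute force, which can help in building intuition. A short, self-contained Python sketch on an arbitrary example graph:

```python
from itertools import combinations

def neighbors(G, X):
    """N(X): union of the neighborhoods of the vertices in X."""
    return set().union(*(G[v] for v in X)) if X else set()

def is_independent(G, X):
    return all(u not in G[v] for u in X for v in X)

def independent_sets(G):
    V = list(G)
    for r in range(len(V) + 1):
        for X in combinations(V, r):
            if is_independent(G, set(X)):
                yield set(X)

def matching_number(G):
    """mu(G) by brute force over edge subsets (fine only for tiny graphs)."""
    edges = [frozenset((u, v)) for u in G for v in G[u] if u < v]
    for r in range(len(edges), 0, -1):
        for M in combinations(edges, r):
            if len(set().union(*M)) == 2 * r:   # pairwise non-incident edges
                return r
    return 0

# Example: the 4-cycle a-b-c-d-a (bipartite, hence Koenig-Egervary).
G = {"a": {"b", "d"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"a", "c"}}

d = lambda X: len(X) - len(neighbors(G, X))
crit = max(d(X) for X in independent_sets(G))     # critical difference over independent sets
alpha = max(len(X) for X in independent_sets(G))  # independence number
mu = matching_number(G)
print("critical difference:", crit, "alpha:", alpha, "mu:", mu,
      "Koenig-Egervary:", alpha + mu == len(G))
```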
We adopt the convention that the empty graph K 0 , without vertices, is a König-Egerváry graph. In [7] it was shown that König-Egerváry graphs are closely related to critical independent sets. Theorem 1.1. [7] A graph G is König-Egerváry if, and only if, every maximum independent set in G is critical. For any graph G, there is a unique set X ⊆ V (G) such that all of the following hold: is a König-Egerváry graph, (iii) for every non-empty independent set S in G[X c ], |N (S)| ≥ |S|, and (iv) for every maximum critical indendent set I of G, X = I ∪ N (I). Larson in [8] showed that a maximum critical independent set can be found in polynomial time. So the decomposition in Theorem 1.2 of a graph G into X and X c is also computable in polynomial time. Figure 1 gives an example of this decomposition, where both the sets X and X c are non-empty. Recall, for some graphs the empty set is the only critical independent set, so for such graphs the set X would be empty. If a graph G is a König-Egerváry graph, then the set X c would be empty. We adopt the convention that if K 0 is empty graph, then α(K 0 ) = 0. In [5,11] the following concepts were introduced: for a graph G, However, the following result due to Larson allows us to use a more suitable definition for diadem(G). Theorem 1.3. [8] Each critical independent set is contained in some maximum critical independent set. For the remainder of this paper we define diadem(G) = {S : S is a maximum critical independent set in G}. Note that if G is a graph where the empty set is the only critical indepedent set (including the case G = K 0 , the empty graph), then ker(G), diadem(G), and nucleus(G) are all empty. See Figure 2 for examples of the sets ker(G), diadem(G), and nucleus(G). is not a König-Egerváry graph and has ker( In [4,5], the following necessary conditions for König-Egerváry graphs were given: In [4] it was conjectured that condition (i) of Theorem 1.4 is sufficient for König-Egerváry graphs and in [5] it was conjectured the necessary condition in Theorem 1.5 is also sufficient. The purpose of this paper is to affirm these conjectures by proving the following new characterizations of König-Egerváry graphs. Theorem 1.6. For a graph G, the following are equivalent: The paper [4] gives an upper bound for α(G) in terms of unions and intersections of maximum independent sets, proving 2α(G) ≤ | core(G)| + | corona(G)| for any graph G. It is natural to ask whether a similar lower bound for α(G) can be formulated in terms of unions and intersections of critical independent sets. Jarden, Levit, and Mandrescu in [4] conjectured that for any graph G, the inequality | ker(G)| + | diadem(G)| ≤ 2α(G) always holds. We will prove a slightly stronger statement. By Theorem 1.3 we see that ker(G) ⊆ nucleus(G) holds implying that | ker(G)| + | diadem(G)| ≤ | nucleus(G)| + | diadem(G)|. In section 4 we will prove the following statement, resolving the cited conjecture: It would be interesting to know whether the sets nucleus(G) and diadem(G), or their sizes, can be computed in polynomial time. Some structural lemmas Here we prove several crucial lemmas which will be needed in our proofs. Our results hinge upon the structure of the set X as described in Theorem 1.2. Lemma 2.1. Let I be a maximum critical independent set in G and set X = I ∪ N (I). Then diadem(G) ∪ N (diadem(G)) = X. Proof. By Theorem 1.2 the set X is unique in G, that is, for any maximum critical independent set S, X = S ∪ N (S). Then diadem(G) = X follows by definition. Proof. 
Let S be a maximum critical independent set in G. Using Theorem 1.2 we see that S is a maximum independent set in G[X] and also G[X] is a König-Egerváry graph. Then Theorem 1.1 gives that S must also be critical in G[X], which implies that diadem(G) ⊆ diadem(G[X]). Now let v ∈ nucleus(G[X]). Then v belongs to every maximum critical indepedent set in G[X]. As remarked above, since every maximum critical independent set in G is also a maximum critical independent set in G[X], then v belongs to every maximum critical independent set in G. This shows that v ∈ nucleus(G) and nucleus(G[X]) ⊆ nucleus(G) follows. is an independent set in G[X] larger than S, which cannot happen. Therefore we must have |S ′ | ≥ |A ′ | as desired. Proof. First note that if the set X is empty, then by Lemma 2.1 both sides of the inequality are zero. So let us assume that X is non-empty. Now consider the set A = nucleus(G) \ nucleus(G[X]). If this independent set is empty, then nucleus(G) = nucleus(G[X]) and there is nothing to prove since diadem(G) ⊆ diadem(G[X]) holds by Lemma 2.2. If A is non-empty, for each v ∈ A there is some maximum independent set S of G[X] which doesn't contain v. Since S is a maximum independent set there exists u ∈ N (v) ∩ S. Since v ∈ nucleus(G), then u does not belong to any maximum critical independent set in G. Recall by Theorem 1.2 (ii) G[X] is a König-Egerváry graph, so Theorem 1.1 gives that S is a maximum critical independent set in G[X]. It follows that u ∈ diadem(G[X]) \ diadem(G), which shows each vertex in A is adjacent to at least one vertex in diadem(G[X]) \ diadem(G). Now we will show there is a maximum matching from A into diadem(G[X])\ diadem(G) with size |A|. For sake of contradiction, suppose such a matching M has less than |A| edges. Then there exists some vertex v ∈ A not saturated by M . By the above, v is adjacent to some vertex u ∈ diadem(G[X]) \ diadem(G). Since M is maximum, u is matched to some vertex w ∈ A under M . Now let S be a maximum independent set of G[X] containing u. We now restrict ourselves to the subgraph induced by the edges (A ∩ N (S), S ∩ N (A)), noting this subgraph is bipartite since both A ∩ N (S) and S ∩ N (A) are independent. In this subgraph, consider the set P of all M -alternating paths starting with the edge vu. Note that all such paths must start with the vertices v, u, then w. Also, such paths must end at either a matched vertex in A∩ N (S) or an unmatched vertex in S ∩ N (A). We wish to show that there is some alternating path ending at an unmatched vertex in S ∩ N (A). For sake of contradiction, suppose all alternating paths end at a matched vertex in A ∩ N (S) and let V (P) denote the union of all vertices belonging to such an alternating path. We aim to show this scenario contradicts Lemma 2.3. Now clearly we must have N (V (P) ∩ A) ∩ S ⊆ V (P) ∩ S, else we could extend an alternating path to any vertex in (N (V (P) ∩ A) ∩ S) \ (V (P) ∩ S). Also, since all paths in P end at a matched vertex in A ∩ N (S), then every vertex of V (P) ∩ S is matched under M , and such a situation should look as in Figure 3. From this it follows that |V (P) ∩ S| < |V (P) ∩ A|. The previous statements exactly contradict Lemma 2.3, so there is some alternating path P ending at an unmatched vertex x ∈ S ∩N (A). This means that P is an M -augmenting path. A well-known theorem in graph theory states that a matching is maximum in G if, and only if, there is no augmenting path [15]. 
So P being an M -augmenting path contradicts our assumption that M is a maximum matching. Therefore there is a matching M from A into diadem ( New characterizations of König-Egerváry graphs Proof (of Theorem 1.6). First we prove (ii) ⇒ (i). Suppose that diadem(G) = corona(G) holds and let I be a maximum critical independent set with X = I ∪ N (I). We will use the decomposition in Theorem 1.2 to show that X c must be empty and hence, G = G[X] is a König-Egerváry graph. By Lemma 2.1 we have corona(G) = diadem(G) ⊆ X, in other words every maximum independent set in G is contained in X. This implies that |I| = α(G[X]) = α(G). Now by Theorem 1. showing that we must have α(G[X c ]) = 0. Now clearly the result follows, since α(G[X c ]) = 0 implies that X c must be empty. To prove (iii) ⇒ (i), again we will use the decomposition in Theorem 1.2 to show that X c must be empty and hence, G is a König-Egerváry graph. So suppose that | diadem(G)| + | nucleus(G)| = 2α(G) and let I be a maximum critical independent set in G with X = I ∪ N (I). Lemma 2.4 implies that Combining Theorem 1.7 and the inequality 2α(G) ≤ | core(G)|+| corona(G)| proven in [4], the following corollary is immediate. These upper and lower bounds are quite interesting. The fact that every critical independent set is contained in a maximum independent set implies that diadem(G) ⊆ corona(G) for all graphs G. However, the graph G 2 in Figure 2 has core(G 2 ) nucleus(G 2 ) while the graph G in Figure 1 has nucleus(G) = {a, b, c} core(G) = {a, b, c, h}. Acknowledgements Many thanks to my advisor László Székely for feedback on initial versions of this manuscript. Partial support from the NSF DMS under contract 1300547 is gratefully acknowledged.
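As a small sanity check of the characterizations proved above (conditions (ii) diadem(G) = corona(G) and (iii) |diadem(G)| + |nucleus(G)| = 2α(G), as used in the proof of Theorem 1.6), the brute-force enumeration idea from the earlier sketch extends directly; the example below uses the path on four vertices, which is bipartite and hence König-Egerváry. This is only an illustration on a tiny instance, not part of the argument.

```python
from itertools import combinations

def neighbors(G, X):
    return set().union(*(G[v] for v in X)) if X else set()

def independent_sets(G):
    V = list(G)
    for r in range(len(V) + 1):
        for X in combinations(V, r):
            S = set(X)
            if all(u not in G[v] for u in S for v in S):
                yield S

def diadem_nucleus_corona(G):
    ind = list(independent_sets(G))
    d = lambda X: len(X) - len(neighbors(G, X))
    crit_diff = max(d(S) for S in ind)
    critical = [S for S in ind if d(S) == crit_diff]                      # critical independent sets
    max_crit = [S for S in critical if len(S) == max(map(len, critical))]  # maximum critical ind. sets
    alpha = max(map(len, ind))
    maximum = [S for S in ind if len(S) == alpha]                          # maximum independent sets
    diadem = set().union(*max_crit)
    nucleus = set.intersection(*max_crit)
    corona = set().union(*maximum)
    return alpha, diadem, nucleus, corona

# Path on four vertices a-b-c-d: bipartite, hence Koenig-Egervary.
G = {"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c"}}
alpha, diadem, nucleus, corona = diadem_nucleus_corona(G)
print("diadem == corona:", diadem == corona)                                        # condition (ii)
print("|nucleus| + |diadem| == 2*alpha:", len(nucleus) + len(diadem) == 2 * alpha)  # condition (iii)
```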
3,238.2
2015-09-16T00:00:00.000
[ "Mathematics" ]
Exploring the Bioactive Mycocompounds (Fungal Compounds) of Selected Medicinal Mushrooms and Their Potentials against HPV Infection and Associated Cancer in Humans Medicinal mushrooms have been used as a medicinal tool for many centuries and, nowadays, are used in the prevention and therapy of various diseases, including as an adjunct to cancer treatment. It is estimated that 14–16% of global cancer cases are caused by infectious events; one well-known infectious agent that leads to cancer is the human papillomavirus (HPV). HPV is responsible for more than 99.7% of cervical cancer cases and also may play a role in vaginal, vulvar, penile, anal, rectal, and oropharyngeal carcinogenesis. Coriolus versicolor, a basidiomycetes class mushroom, consists of glycoproteins called polysaccharide-K (PSK) and polysaccharopeptide (PSP), which are mainly responsible for its effectiveness in the fight against a variety of cancers. Its beneficial effect lies in its ability to arrest different phases of the cell cycle, immunomodulation or induction of apoptosis. Coriolus versicolor extractcan reduces BCL-2 expression or increases the expression of p53 tumour suppressor genes in breast tumour cell lines. Inhibition of proliferation was also demonstrated with HeLa cells, while cervical cytology abnormalities improved in patients who locally applied Coriolus versicolor-based vaginal gel. Coriolus versicolor extract itself, and also its combination with another medicinal mushroom, Ganoderma lucidum, leads to improved HPV clearance in HPV cervical or oral-positive patients. Medicinal mushrooms can also increase the effectiveness of vaccination. This review considers the use of medicinal mushrooms as a suitable adjunct to the treatment of many cancers or precanceroses, including those caused by the HPV virus. Introduction It is estimated that 14-16% of global cancer cases are caused by infectious events while persisting virus infections are responsible for many of them [1], for example, the hepatitis B and C viruses in hepatocellular carcinoma [2]; the Epstein-Barr virus in Burkitt's lymphoma [3]; and human papillomavirus (HPV) in cervical, vaginal, vulvar, penile, anal, rectal, and oropharyngeal cancers [4]. Cervical cancer is the fourth leading cause of cancer death in women worldwide [5], and more than 99.7% cases are caused by HPV [6]. HPV types are classified into four groups according to their carcinogenic potential, with 12 high-risk HPV (hrHPV) types [7]. After hrHPV DNA is incorporated into the DNA of the infected cell, oncogenic HPV proteins E6 and E7 are synthesized. These oncoproteins cause dysfunction of tumour suppressor proteins, leading to the dysregulation of the cell cycle, with neoplastic transformation of the affected tissue [8]. HPV types 16 and 18 cause over 70% of all cervical cancer cases worldwide. The most frequently detected oncogenic type is HPV16, followed by HPV18, HPV31, HPV52, and HPV58. In general, the highest incidence of HPV infection is in younger women, with the peak incidence occurring below the age of 25; incidence decreases with an increase in age. Such a decrease is not observed in developing countries [9]. HPV Infection in Humans Papillomaviruses are small, circular, double-stranded DNA viruses. Persistent infection with oncogenic types of papillomaviruses can lead to the development of precancerous lesions and, later, to the development of cancer. 
The Papillomaviridae family contains 39 genera, and HPV can be found in five of them: alphapapillomavirus, betapapillomavirus, gammapapillomavirus, mupapillomavirus, and nupapillomavirus. The International Agency for Research on Cancer (IARC) classified HPV into groups according to carcinogenic potential: group 1 is carcinogenic for humans, group 2A is probably carcinogenic for humans, and group 2B is possibly carcinogenic for humans [10]. Thirteen HPV types belonging to groups 1 and 2A are responsible for up to 96% of cervical cancer cases [11]. Group 3 includes low-risk HPV types. Of the more than 200 known types, the HPV types of groups 1 and 2A with oncogenic potential belong to the alphapapillomavirus genus, while HPV types from the gamma and beta genera cause skin papillomas [12]. HPV can cause non-genital (cutaneous), mucosal or anogenital infections, or epidermodysplasia verruciformis. HPV infection can lead to laryngeal, oral, lung, and anogenital cancers [13]. Worldwide, HPV is the second most common cancer-causing infectious agent after Helicobacter pylori. About 5% of cancers are associated with high-risk HPV. During their lifetimes, 80% of the population will encounter HPV infection, but the majority of those will clear the infection without clinical symptoms. On the other hand, nearly all cervical cancer cases are associated with HPV infection. The prevalence of HPV infection in tumour tissues is estimated at 90% in the case of cervical and anal cancer, 70% in the case of vulvar and vaginal malignancies, and more than 60% in penile cancer cases. Oropharyngeal cancers are associated with tobacco and alcohol use, but 70% of them may be linked to HPV [14]. Table 1 provides these details. Table 1. Percentage of HPV-associated malignancies and HPV prevalence [14,15]. Affected tissue | Percentage of HPV-associated cancers in women and men | HPV prevalence in affected tissue. Cervix uteri | 49% of female HPV-associated cancers | 90%. Types of Medicinal Mushrooms and Their Biopotentials For many years, mushrooms have been used as an effective therapeutic tool in the treatment of various diseases. For example, around 5300 years ago, the Ice Man used amadou mushrooms (Fomes fomentarius (L.) Fr.) to survive in the inhospitable conditions of the Italian Alps. Hippocrates also described this mushroom as a potent anti-inflammatory treatment. On the other side of the world, the first inhabitants of North America used puffball mushrooms (Calvatia genus) to improve the wound healing process [16]. The people of Asia have also used mushrooms as a medicinal tool for many centuries. Nowadays, medicinal mushrooms have been approved in eastern countries as an adjunct to cancer treatment. Commonly used species include Ganoderma lucidum (Curtis) P. Karst, Lentinus edodes (Berk.) Singer, and Trametes versicolor (L.) Lloyd, which is also called Coriolus versicolor or turkey tail. Medicinal mushrooms are also distributed in other parts of the world, but in the US, for example, they are distributed as dietary supplements and regulated as food, not drugs. Manufacturing consistency is not controlled for dietary supplements, so it is not possible to guarantee that a product contains the ingredients listed on the label. The US Food and Drug Administration (FDA) has not approved these dietary supplements as treatments for any medical condition [17].
Many countries fail to regulate the handling of medicinal mushrooms and their components, which can lead to a reduced content, a lack of effective components in the sold supplements, or even replacement of the effective components by others that can have an adverse effect on human health. Due to the fact that the fungal extract may contain a large spectrum of demonstrably or potentially bioactive compounds, it is difficult to monitor the effectiveness of sold supplements. Determining the exact dose of a substance whose beneficial effect on human health could be incorporated into a study is challenging. Therefore, it is difficult to prove the effectiveness of medicinal mushrooms; however, despite the lack of evidence, their beneficial effect on human health has been known for a long time [18]. Mechanism of Cell Proliferation and Immunomodulation Properties The effectiveness of C. versicolor polysaccharides is well documented. Several studies have demonstrated the effectiveness of C. versicolor in the fight against a variety of cancers, mostly using polysaccharopeptide (PSP) and polysaccharide K (PSK) called krestin, extracted from this mushroom. They have proven to be helpful in ovarian [21], cervical [22], prostate [23], colon [24], lung [22], and breast [25] cancer treatment, as well as in the fight against leukemia [26] and other cancers. The protein extract of this mushroom can cause cell cycle arrest [27]. It can also affect apoptotic pathways. Proteins BCL-2 and BCL-X L are BCL-2 family proteins, which are regulators of the mitochondria-mediated apoptotic pathway. While BH3-only proteins, BAK, and BAX are pro-apoptotic, BCL-2 and BCL-X L have anti-apoptotic function [28]. In breast cancer cells, 17β-estradiol stimulates overexpression of BCL-2, which decreases levels of mitochondrial apoptotic factors [29]. C. versicolor extract demonstrably reduces BCL-2 expression in breast cancer cells. An increased expression of genes for tumour suppressor protein p53 has also been observed in some breast tumour cell lines incubated with C. versicolor extract [30]. The cytotoxic effect of C. versicolor protein-bound polysaccharides on melanoma cells has also been confirmed via increased intracellular reactive oxygen species [31]. Caspase-3 is a death protease, one of the crucial mediators of apoptosis. Its precursor, precaspase-3, has at least 200-fold less activity than caspase-3. The overexpression of this precursor was confirmed in cancer tissue [32]. The genes of this precursor are the target of the E2F family of transcription factors. E2Fs are in an inactive form due to binding with the retinoblastoma protein (Rb) [33]. The dissociation of this bond leads to the excessive activity of the transcription factor. The dysregulation of the cell cycle based on this dissociation has been demonstrated in multiple cancers while the oncogenic potential of the E7 HPV protein also lies in this mechanism. This pRb/E2F pathway dysregulation leads to the eventual upregulation of gene transcription for precaspase-3 [32]. In promyelomonocytic leukemia cells, PSK activates caspase-3, which leads to the induction of apoptosis [34]. In the field of neurotoxicity, C. versicolor aqueous extract was found to have protective value in nitric oxide-induced brain diseases due to its effect on caspase-3 enzyme activity [35]. 
The Nuclear Factor kappaB (NF-κB) is in the transcription factor family; these affect immune response and inflammation and determine expression of p53 tumour suppressor protein genes or genes for signal transducers and activators of transcription (STAT) [36]. In interferon (IFN) signaling, after binding pathogen-associated molecular patterns (PAMP) to pathogen recognition receptors (PRR), interferon-regulatory factors drive expression of IFN genes [37]. In the next step, IFN binds to its receptors, leading to STAT activation. IFN molecules bind to cell surface receptors and initiate a signaling cascade through the Janus kinase signal transducer and activator of transcription (JAK-STAT) pathway, leading to the transcriptional regulation of hundreds of IFN-regulated genes [38]. STAT promotes expression of interferon stimulated genes (ISGs), which mediate antiviral responses [39]. STAT1-regulated genes are important targets of host gene regulation by HPV [40]. For example, HPV31 E7 can suppress STAT1 at the transcriptional level, resulting in reduced IFN-mediated gene expression [41]. HPV16 E7 inhibits IFN-induced phosphorylation, the nuclear translocation of STAT1, and the downstream expression of ISGs [42]. It has also been established that overexpressed E6 and E7 in keratinocytes repress the expression of innate immune genes [43]. Ethanolic extract of C. versicolor reduces prostate cancer cell growth. An in vitro study showed that this extract increased the levels of STAT1, a possible mechanism of its action [44]. On the other hand, C. versicolor extract showed anti-inflammatory effects in mice model inflammatory bowel disease by reducing STAT1 and STAT6 expression, leading to lower IFN-γ and interleukin-4 (IL-4) expression [45]. The immunostimulatory effects of PSP were demonstrated in animal models, through elevation of pro-inflammatory cytokines like IL-6 and tumor necrosis factor α (TNF-α) [46]. PSP in simultaneous activation with antigens, such as lipopolysaccharide bacteria wall components, leads to activation of the PRR toll-like receptor 4 (TLR4),which increases IL-6 production. Induction of the TLR4 signalling pathway also leads to the activation of NF-κB [47]. These two inducers may also activate the signalling pathway via STAT3 [48]. On the other hand, incubation of human leukemia cells with aqueous extracts of C. versicolor leads toa decrease in transcription factor NF-κB and a decrease in the expression of cyclooxygenase 2 (COX-2), whose products are responsible for higher levels of cell proliferation and angiogenesis and the reduction of apoptosis. A study of C. versicolor extract on human leukemia cells also shows STAT1 elevation [49]. PRR ligands such as N-acetyl glucosamine, beta glucans, and lipopolysaccharide activate innate and adaptive immunity by binding to receptors such as TLR4 or complement receptor 3. This leads to the secretion of inflammatory cytokines like IL-6 or TNF-α [50]. PSK through TLR4 plays a role in the activation of TNF-α secretion [51]. Another work describes two possible routes of C. versicolor extract's effect on pro-inflammatory cytokine expression. Secretion of cytokines IL-6 and TNF-α by macrophages and TLR4 expression were stimulated by the extract itself. Additionally, during treatment of cells with lower concentrations of lipopolysaccharide, the extract increased cytokine production, while higher dose of lipopolysaccharide led to their reduced synthesis. In other words, C. 
versicolor extract showed an antagonistic or additive effect according to lipopolysaccharide concentration [52]. Pleurotus ferulae [53] is another medicinal mushroom, which affects immunological response. By improving maturation and function of dendritic cells, it helps to link innate and adaptive immunity. T and B lymphocytes with antigen-specific surface receptors play an important role in adaptive immunity. Lymphocyte effector clones are formed after antigen binding to lymphocyte receptors. Cytotoxic T-lymphocytes and NK cells are the main parts of innate immunity in the immune response against viral pathogens [54]. The major histocompatibility complex (MHC) plays an important role in the process of the activation of T and B lymphocytes. By MHC, class I processes endogenous antigens as viral proteins produced by the cell. They are marked in cytoplasm by ubiquitin and are destroyed by proteasomes. Subsequently, they are moved to the endoplasmic reticulum, where α chain and β2microglobulin are synthesized, then transported to the Golgi complex, and finally transported to the cell surface, where they are recognized by CD8+ T lymphocytes. After binding CD8+ to MHC, class I CD8+ form a receptor for IL-2 and, with the help of the Th1 subpopulation of CD4+ T lymphocytes, CD8+ lymphocytes mature into mature cytotoxic Tc lymphocytes. Tc lymphocytes release perforins and granzins from cytotoxic granules-enzymes that lead to apoptosis of the target cell. Thus, after recognizing tumor cells or cells attacked by intracellular microorganisms, especially viruses, Tc lymphocytes cause their degradation. MHC class II molecules play role in the processing and presentation of external molecules that have entered the cell by endocytosis or phagocytosis. These are antigen presenting cells-dendritic cells, monocytes, macrophages, and B-lymphocytes. After processing antigens in the endolysosome fragments, they bind to the MHC II molecules. Such a complex is transported to the cell surface and is subsequently recognized by CD4+ T lymphocytes. Subsequently, Th lymphocyte precursors are produced, which further develop into the next subpopulations. If the precursors develop in the presence of cytokine IL-12, they differentiate into the Th1 subpopulation. The Th2 subpopulation arises in the presence of IL-4 and the Th17 subpopulation is formed in the presence of IL-1 or IL-6. Subsequently, formed Th1 cells mainly produce IFN-γ and IL-2, Th2 cells produce cytokines IL-4, IL-5, IL-10, IL-13, and also influence the maturation of B lymphocytes into plasma cells and memory B cells. Th17 cells produce IL-17 and influence the production of proinflammatory cytokines and chemokines. Protection against intracellular microorganisms is ensured by Th1 cells [54,55]. Mechanism of Anti-HPV Properties and Vaccination Support Patients affected by pre-cancerous changes of the cervix can also benefit from the use of C. versicolor products. A retrospective observational study evaluated the efficacy of C. versicolor-based vaginal gel in 183 high-risk HPV-positive women with normal or abnormal cytology. The patients applied vaginal gel for three months and were HPV DNA tested after six months. HPV negativity was confirmed in 67% of patients who applied the gel versus 37.2% of the control group. Furthermore, cytology improvement was observed in 78.5% of the treated patients versus 37.7% of controls [56]. Another study enrolled 91 HPVpositive women with low-grade Pap smear lesions. 
Normal Pap smears performed three months after treatment were obtained in 78% of patients in the treated group, compared to 54.8% in the control group. At their six-month visits, the high-risk HPV group showed 62.5% HPV clearance in those who applied the gel versus 40% in the control group [57]. Both studies demonstrated higher cytology improvement and HPV clearance in patients who applied C. versicolor-based vaginal gel. The effect of C. versicolor on HPV clearance was also confirmed for oral HPV infection; 61 patients underwent oral swabs for gingivitis and were positive for HPV16 or HPV18. They took capsules containing Mycelia extract from medical mushrooms Laetiporus sulphureus (Bull.) Murrill and a combination of extracts from T. versicolor and G. lucidum for two months. HPV was cleared in 87.8% of patients who took T. versicolor and G. lucidum extract while it was cleared in only 5% of the patients treated with L. sulphureus [58]. The immune system is one of the basic systems important for maintaining the homeostasis of the organism and its defense against environmental factors. Organisms, molecules or parts of molecules represent antigens that the immune system recognizes and triggers an immune response. The immune response is stimulated after the interaction between the antigen and the receptor, while the innate immune mechanisms are the first involved in defense reactions [54]. Infectious agents release PAMPs, which are recognized by receptors on the surfaces of the epithelium, that lead to the activation of the cellular and humoral mechanisms of innate immunity [59]. To recognize PAMPs, the innate immunity uses PRR, which are coded in the genome, and no further modification is required for their use. The PRRs recognize the patterns found on pathogens while such patterns are not found on the body's own cells, so innate immunity can distinguish its own structures from foreign ones. From a functional point of view, PRRs are divided into several groups while the best known are Toll-like receptors (TLRs). These are divided into 10 groups according to the ligands they can recognize [54]. Canella F. et al. [60] quantified TLR-2, 3, 4, 7, and 9 transcripts in HPV-positive and HPV-negative cervical samples from 154 women. Higher expression of TLR-9 was proved in HPV-positive samples, and extremely higher levels of this receptor were observed in patients with persistent HPV infection in this study [60]. On the other hand, oncoproteins E6 and E7 are able to block TLR-9 induced cytokine production in keratinocytes. The mechanism of this inhibition was demonstrated by in vitro infection of keratinocyte cells with HPV16 virions. After 24 h, the expressed oncoprotein E7 caused the formation of a nuclear complex consisting of estrogen receptor 1 (ESR1 also ERα) and a dimer of two members of the NF-κB family of transcription factors (NFKB1 and RELA or p50 and p65) under the influence of IκB kinase (IKK). This complex binds to the DNA region of the TLR-9 promoter, thereby preventing the initiation of gene expression for this protein. In addition, the NF-κB family member RELA (p65) together with ERα interaction with histone deacetylase 1 (Histone Deacetylase 1-HDAC1) and lysine specific demethylase (Lysine (K)-Specific Demethylase 5B-KDM5B also JARID1B) caused histone modification of the TLR-9 promoter. These processes caused the suppression of TLR-9 transcription with a subsequent impact on weakening the function of innate immunity, mainly by reducing the production of IFN1 [61]. 
Macrocybe lobayensis (R. Heim) Pegler & Lodge, from the Tricholomataceae family, has been used for centuries in traditional medicine as well. A heteroglycan protein with a strong antitumor and immunomodulatory effect was isolated from this mushroom [62]. Such extract rich on polysaccharides from this mushroom is able to upregulate the expression of TLR-2 and TLR-4 [63]. Canella at al. [60] did not demonstrate a higher expression of these two receptors in low-risk and high-risk HPV positive cervical cells collected with a cytobrush from both ectocervix and endocervix samples while a study by Daud I. et al. [64] showed in endocervical specimens 80-fold greater TLR-2 the median positive change in women who cleared HPV16 infection than women who persisted this infection [64]. Although the overexpression of specific TLRs in HPV infection is disputable, the immunomodulatory effect of the M. lobayensis, caused by augmented macrophage activity and the TLR signalled modulated expression of immunomodulation-related genes including NF-κB, COX-2, IFN-γ, TNF-α, and Iκ-βα, stimulates the immune system in a fight against pathogens causing the infection [63]. HPV vaccination is an effective method of primary prevention, but its sufficient effect on already developed HPV-associated cancer has not been confirmed. On the other hand, anticancer immunotherapies have presented great development in recent years. In gynecology cancer, the two main ways of immunotherapy are promising-monoclonal antibodies in function of immune check-point blockers and T cell-based immunotherapy [65]. Dendritic cells are used in antitumor vaccines, mainly due to their ability to activate naive CD4 and CD8 T cells [66]. The positive effect of P. ferulae polysaccharides on the antitumor therapeutic HPV dendritic cells-based vaccine was proved in an animal model. HPV dendritic cells-based vaccine supported by P. ferulae polysaccharides significantly inhibited tumor growth with the increased activation of CD4+ and CD8+ T cells. Polysaccharides of this mushroom improved the antitumor efficacy of therapeutic vaccine [67]. Roopngam et al. [68] proved higher amounts of T-lymphocytes in the group of T-lymphocytes cocultured with the dendritic cells pulsed by the HPV16-E7 proteins and treated with Pleurotus sajor-caju-β-glucan polysaccharides in comparison with T-lymphocytes without this treatment. This work suggests that P. ferulae polysaccharides is a suitable tool for the effective improvement of vaccines in cervical cancer [68]. Another work analysed P. ferulae water extract effect on the maturation and function of dendritic cells. Authors observed the induction of antigen-specific CD8+ T cell responses in HPV E6 and E7 peptides pulsed dendritic cells while cells treated with P. ferulae water extract showed higher level of CD8+ T cell responses and caused higher tumor growth inhibition [53]. Another mushroom used in traditional Chinese medicine, Flammulina velutipes (Curtis) Singer, showed immunomodulating effect in a mice model. Fungal protein isolated from this mushrrom stimulates maturation of dendritic cells and induce antigen-specific CD8+ Tcell immune responses. This study used the HPV16 E7 oncoprotein as an antigen and finally suggests F. velutipes fungal protein as a suitable adjuvant for cancer immunotherapy [69]. Mechanism of Anti-Cancer Properties The in vitro study of C. Versicolor PSK's anti-tumour activity evaluated its effect on various tumour cell lines, including human cervix adenocarcinoma (HeLa) cells. 
Tumour cell lines were cultured with PSK or in medium alone. Inhibition of proliferation was demonstrated in tumour cell lines. In the case of HeLa cells, the inhibition rate (57%), in correlation with the control, was higher at a lower concentration of PSK (50 µg/mL vs. 100 µg/mL). Cell cycle phase distribution analysis showed partial accumulation of HeLa cells in the G0/G1 phase and a decreased number of cells in the S phase and G2/M phase. In human gastric cancer cells, detectable active caspase-3 protease was present in 36% of PSK-treated cells; this effect was not found in HeLa cells [70]. Knežević et al. demonstrated the antitumour effect of C. versicolor on HeLa cells [71]. This work showed a stronger effect from mycelium extracts than basidiocarp extract on HeLa, human colon carcinoma, and human lung adenocarcinoma cell lines. The HeLa cells were the most sensitive to the extracts [71]. G. lucidum is a medicinal mushroom known as lingzhi in China and reishi in Japan. It has been used for many years in traditional Chinese medicine due to its many beneficial effects on human health [72]. Among other benefits, it has been used as an alternative adjuvant therapy for cancer [73]. G. lucidum consists of several components; polysaccharides and triterpenes are responsible for its antitumour effect [74,75]. Polysaccharides composed of α/β-glucans, glycoproteins, and water soluble heteropolysaccharides show antitumour effects by various mechanisms; these include immunomodulation and antioxidation, as well as anti-proliferative, pro-apoptotic, and anti-angiogenic functions [76,77]. G. lucidum extract also showed antitumour activity in cervical cancer cells, especially with the inhibition of proliferation and induction of apoptosis. Aqueous extracts from Chinese and Mexican G. lucidum samples were incubated with HeLA, SiHa, and C-33A cancer cells. Inhibition of proliferation was confirmed in all tested cell lines. SiHa cells treated with G. lucidum from Mexico showed the highest cytotoxic effect. An analysis of the effects of 320 µg/mL aqueous extract from this mushroom on the cell cycle showed cell cycle arrest at the G2/M phase in HeLa and C-33A cancer cells while SiHa cells arrested the cell cycle in the G0 phase. G. lucidum induced growth inhibition of cells transformed by HPV can be reached via apoptosis. HeLa, SiHa, and C-33A cells treated with this extract showed the formation of DNA laddering, so the antitumour effect of G. lucidum might also be caused by the induction of apoptosis [78]. In addition to polysaccharides, triterpenoids are also involved in the antitumour effect of G. lucidum. In one study, the separation of triterpenoid enriched extract was performed, and individual triterpenoids ganolucidic acid E, lucidumol, ganodermanontriol, 7-oxoganoderic acid Z, 15-hydroxy-ganoderic acid S, and ganoderic acid DM were obtained. The cytotoxic effects of these triterpenoids were tested on three tumour cell lines, including HeLa cells. All six isolated triterpenoids were able to reduce cell growth while 15-hydroxylganoderic acid S exhibited the most cytotoxicity in HeLa cells. All six compound treatments showed sub-G1 accumulations in HeLa cells [79]. When in the process of apoptosis, the execution pathway is initiated by caspase-3 cleavage; the degradation of chromosomal DNA occurs while fragmented DNA multimers leak out of the cell. 
This results in a DNA content reduction in cells, which can be detected with special staining; these apoptotic cells are represented by a sub-G0/G1 population [80]. This is how the induction of apoptosis by G. lucidum triterpenoids was observed in HeLa cells [79]. The tumour suppressor function of PSK in C. versicolor is similar to the effect of G. lucidum polysaccharides, as shown in cervical tumour-bearing mice. After treatment with enzymatically hydrolysed G. lucidum polysaccharide, they showed decreased expression of Bcl-2 and COX-2 and increased expression of Bax and cleaved caspase-3 [81]. Jin et al. [82] also demonstrated the antitumour effect of a G. lucidum polysaccharide on cervical cancer cells. A polysaccharide from this mushroom promoted the apoptosis of cervical cancer cells and attenuated their invasion and migration abilities. Western blot analysis of these cells showed a higher expression of the pro-apoptotic proteins Bax and caspase-3 and a lower expression of the anti-apoptotic protein Bcl-2 [82]. The phosphorylation of the STAT5 protein increases with the severity of cervical intraepithelial neoplasia (CIN), while higher levels of phosphorylated STAT5 were observed in HPV16- and HPV18-positive cancer cells than in HPV-negative cancer cells [83]. The HPV oncoprotein E6 induces the phosphorylation of JAK2, which activates STAT5 and STAT3. Increasing severity of CIN also increases the activation of both these proteins. The opposite relation has also been described, where the silencing of STAT5 and STAT3 leads to a decrease in the expression of the viral oncoproteins E6 and E7 [84]. Jin et al. [82] demonstrated decreased expression of phosphorylated JAK and phosphorylated STAT5 in cervical cancer cells treated with G. lucidum polysaccharide [82]. Another medicinal mushroom, Cordyceps sinensis (Berk.) Sacc., has been used in Chinese traditional medicine for the prevention or treatment of many diseases, including cancer. One study described its beneficial effect in uterine cervical cancer in mice [85]. In this work, selenium-enriched C. sinensis was used, as selenium administered to laboratory animals shows a protective effect against tumour formation [86]. The study showed significantly longer survival of animals receiving selenium-enriched C. sinensis in comparison with animals receiving selenium or C. sinensis alone. The shortest survival time was observed in the no-treatment group [85]. Medicinal mushrooms can also improve oncological treatment, not only through their own effects but also by increasing the effects of radiotherapy or chemotherapy itself. Pleurotus ostreatus (Jacq.) P. Kumm. is widely used in the prevention of many diseases and as a novel ingredient in meat products [87]. Ergosterol peroxide isolated from P. ostreatus caused a dose-dependent loss of viability in HeLa and CaSki cervical cell lines. This work suggests that ergosterol peroxide isolated from P. ostreatus can serve as a radiosensitiser in cervical cancer treatment [88]. Lung cancer cells pretreated with another medicinal mushroom, Lentinus squarrosulus Mont., showed amplified cisplatin-induced apoptosis. Downstream signals, which lead to changes in Bax, Bcl-2, and p53 expression, resulted in higher levels of apoptosis in lung cancer cells preincubated with a peptide from L. squarrosulus. This suggests that this medicinal mushroom may be a suitable supplement to chemotherapy with cisplatin in lung cancer treatment [89]. 
Selected Medicinal Mushrooms and Bioactive Compounds Polysaccharide-protein complex (PSPC) is a heteropolymer isolated from the culture filtrates of M. lobayensis. It is a protein-bound polysaccharide whose protein part is made up mainly of acidic amino acids, such as aspartic and glutamic acids [62]. In addition, P. ferulae water extract improves dendritic cell maturation and cytokine production. This extract enhances the proliferation of CD8+ T-cells and antigen presentation through dendritic cells [53]. Moreover, the major fruiting body protein of F. velutipes is an acetylated protein consisting of 114 amino acid residues, which is similar to G. lucidum bioactive compounds [69]. In particular, ergosterol peroxide from P. ostreatus leads to a dose-dependent loss of viability in HeLa and CaSki cervical cell lines in vitro and may be effective as a radiosensitizer in treating cervical cancer [88]. Additionally, purified peptides from aqueous extracts of L. squarrosulus increase cisplatin-induced cytotoxicity in human lung cancer [89]. The genus Cordyceps contains many compounds, such as proteins, cyclic peptides, polyamines, nucleosides, polysaccharides, and sterols, while the major bioactive compounds are the nucleoside cordycepin and its analogues, polysaccharides, and sterols [90]. Table 2 provides an overview of these compounds. Turkey tail PSK consists of a mixture of glycoproteins whose main element is β-glucan, and polysaccharopeptide (PSP) [91]. Other small-molecular-weight components such as flavonoids are present in the C. versicolor composition, but these are not the main constituents. The principal monosaccharide of PSP and PSK is D-glucose, while they show individual variability in sugar composition [98]. Mishra et al. [99] described a positive effect of Pleurotus spp. on breast, colorectal, cervical, and hepatocellular carcinoma cells. The spectrum of molecules that Pleurotus spp. contain, such as α-glucans, β-glucans, lentinan, lipopolysaccharides, resveratrol, Cibacron Blue affinity-purified protein, concanavalin A, and others, can affect various signalling cascades responsible for the inhibition of growth, proliferation, angiogenesis, and metastasis in cancer cells [99]. Anticancer effects can be provided by cell cycle arrest in the pre-G0/G1 phase, higher production of nitric oxide by macrophages, and increased cytotoxicity of NK cells [100]. Another bioactive compound, C. versicolor PSP (Figure S1) [101], is mainly composed of β-glucans responsible for immunomodulation through its effect on cytokine release, enhanced dendritic and T-cell infiltration into tumours, overexpression of cytokines and chemokines, and NK cell activation. PSP also shows antitumor, anti-inflammatory, and antiviral effects [102]. C. versicolor proteoglycan PSK (Figure S2) [103] is also composed mainly of β-glucans with similar activities to PSP. Many articles describe the benefit of these two proteins, but the exact mechanisms of their function are not fully understood [104]. Additionally, the antiproliferative effect of G. lucidum extracts containing triterpenoids is well known; however, the detailed mechanism depends on the cell type and treatment method. The chemical structure of G. lucidum triterpenoids is an oxygenated lanostane. According to structure, they can be divided into roughly ten groups. Wu et al. [105] described seven anticancer effects of G. lucidum triterpenoids (Figure S3). 
These can affect the cell cycle, downregulate the expression of genes encoding proteins responsible for proliferation signaling, deactivate telomerase and DNA topoisomerases, inhibit inflammation, induce apoptosis and autophagy, suppress cell migration and invasion, and provide anti-angiogenic activity [105]. L. edodes is another medicinal mushroom used extensively in eastern countries. Its β-1,3-D-glucan, called lentinan (Figure S4) [106], plays a role in immunomodulation. By binding to pattern recognition receptors, it affects the immune response [107]. Lentinan also shows cytotoxic effects by inducing apoptosis through intracellular reactive oxygen species [108]. The bioactive compound of C. sinensis is called cordycepin (Figure S5), also known as 3′-deoxyadenosine, and is responsible for its anticancer effect [95]. Owing to its structural similarity to adenosine, from which cordycepin differs by lacking the 3′-hydroxyl group of the ribose moiety [109], it can be used by DNA and/or RNA polymerases [110] and cause the termination of nucleic acid elongation [111]. Several studies point to the immunomodulating and antiproliferative effects of medicinal mushrooms, which suggests their use as an adjuvant treatment for some cancers and as a means to increase the effectiveness of vaccination. On the other hand, most of the published studies were performed in vitro or in animal models, while clinical studies in humans are lacking. Many of the published studies also point out the effectiveness of polysaccharide mixtures obtained from mushroom extracts, but defining a specific pure molecule responsible for the beneficial effect of medicinal mushrooms is often difficult. Conclusions Medicinal mushrooms have been used for thousands of years in the traditional medicine of many countries due to their curative and preventive effects on various diseases. Today, a number of works describe the functional components of fungi in the fight against many diseases, including cancer. In the case of cervical cancer, the beneficial effects of medicinal mushrooms on hindering the development of the disease, mainly through cell cycle arrest and the induction of apoptosis, have been demonstrated. In the case of cervical precancerous lesions, increased HPV clearance and improved cervical cytology were demonstrated in patients applying a vaginal gel during the watchful-waiting period. Medicinal mushrooms appear to be a suitable adjunct to the treatment of many types of cancer, and patients with diagnosed precancers can also benefit from their use. Data Availability Statement: The data presented in this study are available on request from the corresponding author. Conflicts of Interest: The authors declare no conflict of interest.
Process Parameter Optimization in Metal Laser-Based Powder Bed Fusion Using Image Processing and Statistical Analyses : The powder bed fusion additive manufacturing process has received widespread interest because of its capability to manufacture components with a complicated design and better surface finish compared to other additive techniques. Process optimization to obtain high quality parts is still a concern, which is impeding the full-scale production of materials. Therefore, it is of paramount importance to identify the best combination of process parameters that produces parts with the least defects and best features. This work focuses on gaining useful information about several features of the bead area, such as contact angle, porosity, voids, melt pool size and keyhole that were achieved using several combinations of laser power and scan speed to produce single scan lines. These features are identified and quantified using process learning, which is then used to conduct a comprehensive statistical analysis that allows to estimate the effect of the process parameters, such as laser power and scan speed on the output features. Both single and multi-response analyses are applied to analyze the response parameters, such as contact angle, porosity and melt pool size individually as well as in a collective manner. Laser power has been observed to have a more influential effect on all the features. A multi-response analysis showed that 150 W of laser power and 200 mm/s produced a bead with the best possible features. Introduction The powder bed fusion (PBF) additive manufacturing process uses an electron or laser beam to fuse metallic powders over a build platform to print one layer of the build dictated by a computer-aided design (CAD) software. The machine takes design instruction from the CAD software and creates the part by adding layers. After the first layer is created, another layer of metal powder is distributed over the build plate using a powder hopper. This layer is then melted and solidified based on the CAD design. The process continues until the final product is built. Additive processes have the ability to produce lightweight materials with a complicated design, as well as to reduce the tooling cost, which gives them an edge over conventional manufacturing processes, such as machining in aerospace and medical industries [1,2]. The laser powder bed fusion process has been the subject of extensive research lately [3][4][5][6]. Melt pool physics is one of the most crucial and complicated phenomena during the process. Many factors impact the final quality of the melt, including the energy balance, thermo-physical properties of materials and the types of heat source used for the process. Researchers showed that melt pool and bead features, such as contact angle, porosity, voids and melt pool size, can change depending on the intensity of input energy that is supplied to the powder bed [7][8][9][10][11]. The energy input in turn depends on the process variables, such as laser power, scanning speed, and thickness of the layers, etc. Due to the variability of these parameters, the molten region possesses distinguished features. These features, such as lower value of contact angle, are desirable as they ensure proper adhesion with previous layers [12], and some features, such as high amount of porosity that results in distortion of the part, are not very desirable. 
Researchers have tried to analyze the characteristics of different aspects of the build to optimize the parameters that can influence the process. Geometrical features such as contact angle between the present and the preceding layers, which dictates the wetting behavior of the melt pool, have been studied by several scientists. Fateri et al. [12] investigated the effect of temperature and viscosity towards the evolution of contact angle using hot stage microscopy. The study showed that contact angle decreases as the powders start to sinter at higher temperature points. Haley et al. [13] used a computational fluid dynamics (CFD) simulation technique to observe the influence of particle size, melt pool shape and surface tension on wetting dynamics. They found out that powder particle residence time, which is termed as the time between the interaction of the powder and heat source, and complete melting are dependent on particle size and surface tension, and contact angle varies inversely with residence time. Triantafyllidis et al. [14] experimentally established a relationship between power and contact angle during a laser surface treatment of Al2O3based refractory ceramics and deduced that contact angle increases with reduced power. Hu et al. [15] used a computational model to show that contact angle decreases with increasing number of tracks and decreasing scan speed during selective laser melting. Process defect in melt pool is another important feature that has been a major concern in additive manufacturing (AM) processes. It will not be feasible to move towards largescale production without addressing these defects. Extensive research has been dedicated towards the physics behind the formation of these defects. Brennan et al. [16] discussed different defects, such as porosity, voids, lack of fusion defects and how they can be reduced using a hot isostatic process (HIP). Other papers [17,18] have also tried to investigate defects from a different perspective. These papers mainly focused on the formation of defects, such as the lack of fusion, porosity, surface roughness, etc., on the build direction. An analysis of these defects along the bead cross section based on process parameters such as laser power and scan speed is largely missing from the literature. The specific features of the beads discussed above can be utilized to optimize the process parameters in the powder bed process. Traditionally, process parameter optimizations are implemented using experimental and computational methods [8,19,20]. Although computational modeling can reveal important information about melt pool, microstructure, temperature history, etc., that change with the input variables, due to the complications in the process, these models possess a lot of simplified assumptions, which result in a deviation from actual experimental results [21][22][23][24]. To solve the issue, many researchers have recently opted to use machine learning algorithm techniques to optimize the process parameters. Kwon et al. [25] used a convolutional neural network (CNN) to forecast laser power from images of the molten pool taken during the experiment and built a model with 96% accuracy. Caiazzo et al. [26] built a three-layer cascade forward propagation artificial neural network (ANN) to predict the process parameters needed to print the optimum part dimension. They produced a result with 2% error for laser power and 5.8% for scan speed. 
Although machine learning models have become increasingly popular as they can predict data with high accuracy, these techniques are still not good enough to predict process parameters with smaller datasets [27]. These techniques require a large number of experimental data set to train [25,28], which is both time-consuming and expensive. Statistical analysis techniques have also been employed to identify patterns in additive manufacturing. Sanaei et al. [29] analyzed the defects in an AM part based on specific locations, such as the narrow section and at the perimeter of the dog-bone samples. They used a K-S statistical test to show that the distribution of defects are different in the neck and perimeter region. Casalino et al. [30] investigated the impact of laser power and scan speed on mechanical properties, such as hardness and tensile strength of the final build. They found out that increasing energy density decreases surface roughness and increases hardness. Whip et al. [31] used an analysis of variance method to observe the effect of process parameters in melt pool and surface roughness. They found out that increasing laser power increases the melt pool size, which facilitates in a smoother surface due to proper wettability. The effect of process parameters on the evaluation of bead formation has been discussed in several works [32][33][34]. Although there is a handful of research discussions about the defects of AM parts, a comprehensive analysis of different features of the bead cross section is missing. Moreover, most of this research focuses on an analysis of defects on the surface and beneath. This work attempts to provide a detailed analysis of different features of the bead cross section for a nickel-based Inconel 718 sample, which has a high strength over a wide range of temperatures. Contact angle, porosity, void, melt pool area and keyhole formation are quantified. Individual significance of each parameter is analyzed using a full factorial design of experiment and analysis of variance (ANOVA). Process parameter optimization in terms of multiple response parameters is largely missing from the literature as well. A multi-response analysis is conducted in this work, including all the features as a part of process parameter optimization. Experimental Setup An EOSINT M280 machine was used to fabricate 24 base blocks of 25.4 mm × 25.4 mm × 4 mm using 285 W of laser power and a scanning speed of 960 mm/s to ensure that microstructure was uniform throughout all the samples [35]. Evenly spaced parallel lines were built on top of the base blocks while changing laser power and scan speed on each specimen, according to a factorial Design of Experiment (DoE). Inconel 718, an alloy based on nickel, was chosen due to its superior properties over a wide temperature range and high corrosion resistance. After the samples were built, they were cut into several sections to expose the bead cross sections. A Meiji Techno optical microscope equipped with a Nikon DS-Fi1 camera was used to take the images of the solidified beads. A magnification of 200× was used for beads with smaller dimensions. Magnifications were reduced to 100× to incorporate larger bead sizes. Samples were encased in resin, polished and etched to prepare it for micrography observation. A total of 8-12 images from each sample were taken for proper representation of the samples. Table 1 contains the laser power and scan speeds that were used during the experiments. Samples are numbered as T1, T2 and so on. 
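The parameter combinations in Table 1 follow a full-factorial layout. As a rough illustration of how such a design can be enumerated (the power and speed levels below are assumptions taken from values mentioned later in the text, not the actual Table 1 entries, which comprised 24 samples), a minimal Python sketch is:

```python
# Illustrative sketch of a full-factorial design of experiments (DoE) for the
# single-track scans. The levels below are assumed for demonstration; the
# actual study used the 24 parameter sets listed in Table 1.
from itertools import product

laser_powers_w = [40, 100, 150, 200, 300]     # assumed power levels, W
scan_speeds_mm_s = [200, 700, 1700, 2500]     # assumed scan-speed levels, mm/s

doe = [
    {"sample": f"T{i + 1}", "power_w": p, "speed_mm_s": v}
    for i, (p, v) in enumerate(product(laser_powers_w, scan_speeds_mm_s))
]

for run in doe:
    # Linear energy density (J/mm) is a common derived quantity for comparing runs.
    run["energy_density_j_mm"] = run["power_w"] / run["speed_mm_s"]
    print(run)
```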
ImageJ was used to quantify the features. ImageJ is a widely used open-access image processing and analysis software. As it was mostly a manual process, measurements were taken multiple times for the same sample to ensure precision. Contact Angle Contact angle is the angle between the bead section and the layer beneath it [36]. It determines the wettability of the molten powder particles with the previous layer. Proper wettability ensures good adhesion between the layers. Inadequate adhesion between layers can result in a warped build [12], as the surface tension forces become dominant over the adhesive forces. A high contact angle (>90°) can result in a balling phenomenon, which distorts the material. Thus, it is desirable to have a low contact angle to ensure proper wetting and adhesion between layers. The image analysis software ImageJ was used to measure the contact angle of the experimentally obtained beads. The bead images were obtained using optical microscopy. Pixel units were converted to micron units for convenience. Figure 1 shows two yellow straight lines making the contact angle between the substrate and the surface of the bead. Porosity Gas-entrapped pores can have both a spherical and an irregular shape. They are characterized by a size of around 5-20 microns for the powder bed fusion (PBF) process and greater than 50 microns for direct energy deposition (DED) [37]. These defects can be attributed to the manufacturing of powders using a gas atomization process that can leave some gases entrapped within the powders [17]. In addition, process parameters that create strong Marangoni flow can trap some of the pores within the melt pool. The presence of porosity can have a damaging impact on material fatigue life, as well as on mechanical properties. ImageJ was used to identify the porosities in the melt pool using a color threshold. The scale bars in the images were used as a reference to ensure accurate measurement of the pores by converting the pixel size to micron units. For ease of measurement, the color images were converted into 8-bit images. Afterwards, a fast Fourier transform (FFT) bandpass filter was used to maintain a uniform background throughout the image. A threshold was then used to separate the pores from the rest of the image. Dark regions with a circularity of 0.4-1 and a size of 5-30 microns were selected as porosities. The long grey regions were considered impurities and, as such, were ignored. The borderline thresholds were removed during analysis. Figure 2a shows the cross section of the bead obtained using the optical microscope, and Figure 2b shows the color threshold used to find the porosities. 
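The porosity measurement described above was carried out manually in ImageJ; a comparable automated pipeline can be sketched in Python with scikit-image. The file name, the pixel calibration, and the difference-of-Gaussians settings used as a stand-in for ImageJ's FFT bandpass filter are assumptions for illustration only, not the settings used in this work:

```python
# Minimal sketch of the porosity quantification described above, using
# scikit-image in place of ImageJ. File name, pixel size and filter settings
# are assumed for illustration.
import numpy as np
from skimage import io, color, util, filters, measure

MICRONS_PER_PIXEL = 0.5          # assumed calibration taken from the scale bar

image = io.imread("bead_cross_section.png")          # hypothetical micrograph
gray = util.img_as_ubyte(color.rgb2gray(image))      # 8-bit conversion

# Difference-of-Gaussians as a stand-in for ImageJ's FFT bandpass filter,
# flattening uneven background illumination.
bandpassed = filters.difference_of_gaussians(gray, low_sigma=1, high_sigma=30)

# Global threshold to separate dark pores from the surrounding melt pool.
mask = bandpassed < filters.threshold_otsu(bandpassed)
labels = measure.label(mask)

pores = []
for region in measure.regionprops(labels):
    if region.perimeter == 0:
        continue
    circularity = 4 * np.pi * region.area / region.perimeter ** 2
    diameter_um = region.equivalent_diameter * MICRONS_PER_PIXEL
    # Keep roughly circular features in the 5-30 micron range, as in the text.
    if 0.4 <= circularity <= 1.0 and 5 <= diameter_um <= 30:
        pores.append(region)

pore_area = sum(r.area for r in pores)
porosity_percent = 100 * pore_area / mask.size   # relative to the analyzed image area
print(f"{len(pores)} pores detected, porosity = {porosity_percent:.2f} %")
```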
Keyhole and Voids Although a keyhole is more dominant in welding due to high laser power and low welding speed, it can be present during additive manufacturing as well, as high laser powers have lately been used in this process. High energy density on the powder material can cause evaporation, creating recoil pressure, which depresses the melt pool and creates a narrow and deep keyhole shape [38]. The keyhole needs to be controlled, otherwise it can leave vapor-containing voids inside the melt pool. Metals that have low thermal conductivity facilitate the formation of a keyhole, as they help accumulate enough heat to start evaporation. Each keyhole image was measured 5 times to reduce measurement error as much as possible. Figure 3 shows the keyhole formation due to excessive energy input that creates a large void inside the melt pool. 
Melt Pool Melt pool shape is one of the most crucial features in additively manufactured parts, and the geometry of the melt pool is extremely important. If the area of the melt pool is too large, then it repeatedly melts and solidifies 4-5 layers beneath the current layer, which can create residual stress in those layers. Residual stress can result in distortion of part of the build. On the other hand, a shallow melt pool can cause inadequate adhesion with the previously solidified layer. Therefore, choosing the optimum process parameters is of paramount importance to create a standard melt pool. Each of the melt pool areas was measured 5 times to reduce the measurement error. Single Response Analysis As part of the statistical analysis, the features were identified and quantified. An analysis of variance (ANOVA) was used to detect whether the process variables, such as laser power and scan speed, have any significant impact on the features of the bead cross section, such as contact angle, porosity and melt pool size. Each combination of laser power and scan speed had 6-10 images of the bead, and each response parameter was measured five times for each image to make sure the measurement errors were reduced as much as possible. Groups are defined according to the design of experiments provided in Table 1. ANOVA starts with the assumption that there is no significant influence of the input variables on the output, termed the null hypothesis. The F-ratio or F statistic is calculated using the means of the different groups to observe whether the output changes significantly based on the change in the input. If the value of the F-ratio is sufficiently large, then it can be concluded that the input variable has a significant effect on the output. With $Y_{ij}$ denoting the jth observation of the ith group, $\bar{Y}_i$ the mean of group $i$, $\bar{Y}$ the grand mean, $n_i$ the number of observations in group $i$, $k$ the number of groups and $N$ the total number of observations, the required quantities are $SS_{between} = \sum_{i=1}^{k} n_i\,(\bar{Y}_i - \bar{Y})^2$, $SS_{within} = \sum_{i=1}^{k}\sum_{j=1}^{n_i} (Y_{ij} - \bar{Y}_i)^2$, $MS_{between} = SS_{between}/(k-1)$, $MS_{within} = SS_{within}/(N-k)$ and $F = MS_{between}/MS_{within}$. Here, 'between' refers to the variation of an output between the groups of a variable, and 'within' refers to the variation of an output within a specific group of a variable. For example, when we try to find the F-ratio of the contact angle in terms of laser power, the sum of squares between refers to the variation of the mean contact angle of the different laser power groups (40 W, 100 W, 150 W and so on) around the average of all contact angles. The sum of squares within, on the other hand, refers to the variation of the contact angle within a specific group. For instance, there are multiple values of contact angle for a laser power of 40 W; the sum of squares within measures the variation of the contact angle within this 40 W group, does the same for all the other groups, such as 100, 150, 200 and 300 W, and sums these contributions. The mean squares are obtained by dividing the sums of squares by their degrees of freedom. Finally, the F-ratio is calculated by dividing the mean of squares between by the mean of squares within. 
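A compact numerical sketch of this calculation is given below; the contact-angle values per laser-power group are invented for illustration and do not reproduce the measured data:

```python
# Worked sketch of the one-way ANOVA F-ratio computation described above.
# All contact-angle values are hypothetical.
import numpy as np
from scipy.stats import f

groups = {                        # hypothetical contact angles (degrees) per power level
    "40 W":  [135.2, 128.0, 131.5],
    "100 W": [110.3, 104.8, 108.1],
    "150 W": [72.4, 69.9, 75.0],
    "200 W": [48.6, 51.2, 46.9],
    "300 W": [29.0, 31.4, 33.2],
}
data = [np.asarray(v, dtype=float) for v in groups.values()]

grand_mean = np.mean(np.concatenate(data))
k = len(data)                                  # number of groups
n_total = sum(len(g) for g in data)            # total number of observations

ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in data)
ss_within = sum(((g - g.mean()) ** 2).sum() for g in data)
ms_between = ss_between / (k - 1)              # mean square between groups
ms_within = ss_within / (n_total - k)          # mean square within groups
f_ratio = ms_between / ms_within

# Decision step: compare with the critical F value at the 5% level and
# convert the observed F-ratio into a p-value.
f_critical = f.ppf(0.95, k - 1, n_total - k)
p_value = f.sf(f_ratio, k - 1, n_total - k)
print(f"F = {f_ratio:.1f}, F_crit = {f_critical:.2f}, p = {p_value:.2e}")
```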
So, if the F-ratio is larger than the critical value of F provided in the F distribution table [39], then the variability of the contact angle for the different laser powers is large enough to ascertain that laser power has a significant effect on contact angle. The p-value is another way to decide on the null hypothesis. The null hypothesis states that the mean value of the contact angle is the same for all laser powers. The p-value is the probability of observing differences at least as large as those measured if the null hypothesis were true; a smaller p-value therefore indicates a smaller chance that the null hypothesis holds. For example, if the p-value is 0.05, there is only a 5% chance of obtaining the observed group differences when the mean contact angle is actually the same for all laser powers, so the null hypothesis can be rejected. A Pareto chart is another tool that demonstrates the significance of the process parameters for a response parameter. It is a bar chart that shows the relative effect of different parameters on a specific response or output parameter. It also provides a reference line at a 5% significance level; the process parameters are considered significant if they exceed that reference line. Contact Angle Figure 4 shows the effects of laser power and scan speed on the contact angle. The values of the contact angle varied from 29 degrees at 300 W/200 mm/s to 135.2 degrees at 40 W/2500 mm/s. Although literature with similar process parameters was unavailable, the trend of the contact angle agrees well with the work of Triantafyllidis et al. [14] and Hu et al. [15]. The contact angle decreased with higher laser power and lower scan speed because of the higher energy input. At a lower energy input, the angle was more than 90 degrees; at this level, balling phenomena occurred that could create delamination and distort the part. It can be observed from both Figure 4 and the F-values of Table 2 that both laser power and scan speed played a significant role in the contact angle. The F-value for laser power was larger than that for scan speed, indicating that power had a more significant effect than scan speed on contact angle. An ANOVA for the interaction between laser power and scan speed was not possible to conduct, as we did not have contact angle data for all combinations of laser power and scan speed because some of the beads were broken due to balling and other defects. 
It can also be observed from the Pareto chart in Figure 5 that both the power and speed effects were higher than the threshold set by the 95% confidence interval (red dotted line), with laser power being the most dominant influencing factor. The values on the x axis denote the deviation from the overall mean for each process parameter; the larger the deviation, the more influential the parameter is on the output value. Porosity The pores in the melt pool varied in the range of 5 to 30 microns in diameter. This is similar to the average size found by Everton et al. (5 to 20 microns) [37]. Due to improper melting, a lack of fusion occurred at lower laser powers and higher scan speeds, resulting in a higher number of pores. Although there was less porosity at higher laser powers due to proper wetting, some pores still existed due to the larger melt pool size. Excluding the 40 W power samples, which did not have enough power to melt the particles, porosity ranged from 1 to 8 percent (Figure 6, showing the individual impact of laser power and scan speed on the mean porosity), which is similar to the 1-5 percent range found in the literature [17]. Laser power had a considerable influence on pore percentage, as is evident from the ANOVA analysis (Table 3) and Pareto chart (Figure 7). Although scanning speed had much less significance, it was still above the threshold level, and hence could not be ignored while optimizing the process to minimize porosity in the melt pool. Melt Pool Power and speed had similar, but opposite, effects on the melt pool area that spanned across the melt pool depth, width and height of the bead (Figure 8). Melt pool size can be extremely small and shallow for a lower energy input, while high energy can create a molten pool large enough to re-melt 5-6 previously solidified layers. 
An ANOVA analysis (Table 4) showed that the p-values for power and speed were 0.007 and 0.010, respectively, indicating a significant effect on melt pool area. Contrary to contact angle and porosity, scan speed and laser power had a similar impact on the size of the melt pool. Melt pool size increased sharply when the power was raised from 150 W to 200 W, and the change was rather insignificant at scan speeds beyond 1700 mm/s (Figure 9). Keyhole and Void Keyholes were found only in three sets of beads: 300 W of laser power with scan speeds of 200 and 700 mm/s, and 200 W of laser power with a 200 mm/s scan speed, due to the high energy input. The size of the keyhole was 18,036 square microns on average. The diameter of the voids within the keyholes due to gas entrapment was around 81 microns on average. Table 5 shows the regression models for the melt pool features. This is an important tool, as it can be used to predict the output values for unknown values of laser power and scan speed. The model provides the regression equation for the contact angle with an R² value of 86.2% based on the process variables. This can be explained by the strong correlation of the parameters laser power and speed with the output contact angle. Although porosity varied in a consistent manner with laser power, it was somewhat scattered with respect to scanning speed. Melt pool size had a reasonable correlation as well. These models can be used to explore the process parameters and achieve the best R² value for each individual output parameter. However, more experimental data are needed to obtain a more realistic regression model. Multi-Response Analysis and Optimization To observe the combined effect of both process parameters on all the outputs or responses simultaneously, the response optimizer of Minitab, a statistical analysis software package, was used. Response optimization enables the identification of the optimum values of the variables to achieve the desired set of output values. Table 6 shows the optimization parameters for each output. The target for the contact angle was set as 50 degrees. As a lower contact angle can produce higher surface roughness [14], a lower limit of 30 degrees was chosen. On the other hand, a higher angle of contact with the substrate can facilitate balling, which is why an upper bound of 80 degrees was selected. As porosity is not desirable in additively manufactured parts, a minimum value was set as the target value. For the melt pool area, a range was chosen which ensured proper adhesion with the previous layer and also made sure that repeated solidification and melting was prevented to avoid residual stress. The keyhole area and void were excluded from the multi-response analysis, as we did not have enough data for these features. After examining all the combinations of inputs, 150 W of laser power and a 200 mm/s scan speed were selected as the optimum values of the process parameters to provide the desired output target values. A regression model and the process parameters were used to get the best fit out of all the combination settings. 
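The response optimizer weighs the predicted responses through desirability functions, which are defined formally in the next paragraph. The sketch below illustrates the idea with hypothetical predicted responses; the melt pool window and the second candidate's values are assumptions, while the contact-angle target and bounds follow the text:

```python
# Minimal sketch of a composite-desirability comparison between two candidate
# parameter sets. Predicted responses and the melt pool window are assumed;
# contact-angle target and bounds follow the values given in the text.
import numpy as np

def desirability_target(y, low, target, high, r=1.0):
    """Derringer-Suich 'target is best' desirability."""
    if y <= low or y >= high:
        return 0.0
    if y <= target:
        return ((y - low) / (target - low)) ** r
    return ((high - y) / (high - target)) ** r

def desirability_minimize(y, target, high, r=1.0):
    """'Smaller is better' desirability: 1 at the target, 0 at the upper bound."""
    if y <= target:
        return 1.0
    if y >= high:
        return 0.0
    return ((high - y) / (high - target)) ** r

# Hypothetical predicted responses for two candidate parameter sets.
candidates = {
    "150 W / 200 mm/s": {"contact_angle": 52.0, "porosity": 1.5, "melt_pool": 30000.0},
    "300 W / 200 mm/s": {"contact_angle": 29.0, "porosity": 2.5, "melt_pool": 80000.0},
}

for name, resp in candidates.items():
    d = [
        desirability_target(resp["contact_angle"], 30, 50, 80),
        desirability_minimize(resp["porosity"], 0.0, 8.0),
        desirability_target(resp["melt_pool"], 10000, 30000, 60000),  # assumed window
    ]
    composite = np.prod(d) ** (1 / len(d))   # geometric mean of individual desirabilities
    print(f"{name}: composite desirability = {composite:.3f}")
```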
The standard error of the fit (SE fit) was used to calculate the variation from the mean value for a specific set of process variables. The smaller the standard error, the more precise the predicted mean response [40]. The standard error, together with the fit, can be used to calculate the confidence interval for the responses. The SE fit for all the responses is provided in Table 7, along with the confidence interval. One of the most important parameters in a multi-response analysis is composite desirability, which indicates how effectively the settings have reached the target values. For multiple response parameters, it is difficult to obtain the optimum of every response for a single combination of process inputs. Composite desirability combines the individual desirabilities of all the response parameters into one overall desirability. The individual desirability for a target ('nominal is best') response is defined as [41]: $d_i = ((Y_i - L)/(T - L))^{r}$ for $L \le Y_i \le T$, $d_i = ((U - Y_i)/(U - T))^{r}$ for $T \le Y_i \le U$, and $d_i = 0$ otherwise. Here, $Y_i$ is the predicted value, $L$ is the lowest acceptable value, $U$ is the highest acceptable value, $T$ is the target value and $r$ is the importance of the ith response. Composite desirability is defined as $D = (d_1 d_2 \cdots d_n)^{1/n}$, where $n$ is the number of responses or outputs. For example, it can be observed that the composite desirability of the 150 W/200 mm/s combination of laser power and scan speed is 0.7647. This combination of power and speed has produced response-optimizer values (Fit column in Table 7) that are closest to the target values compared to the other combinations of speed and power; therefore, the composite desirability of the other parameter combinations is less than 0.7647. Based on these composite desirability values, 150 W of laser power and 200 mm/s of scan speed were chosen as the optimum process parameters, that is, the ones closest to the target values set by the user. Conclusions • This paper discusses the different features of a melt pool, i.e., contact angle, porosity, melt pool size, keyhole area and void, that were found in additively manufactured samples for different combinations of laser power and scan speed using optical micrography. • ImageJ was used to measure and quantify the size of the features. The measured values were plotted against laser power and scan speed. Contact angle and porosity decreased with increasing laser power and decreasing scanning speed, while the process parameters had the opposite effect on melt pool size. • A single-response statistical analysis was conducted to assess the impact of the process variables. An ANOVA as well as a Pareto chart revealed that both process parameters have a significant impact on the measured responses because of their effect on the energy input, with laser power being the more dominant factor between them. • A multi-response analysis was performed to optimize the process using Minitab. Composite desirability was used as the performance parameter to choose which process parameters yield the best features in terms of low porosity, lower contact angle and average melt pool size. Funding: This research received no external funding.
Accessible and reproducible mass spectrometry imaging data analysis in Galaxy Abstract Background Mass spectrometry imaging is increasingly used in biological and translational research because it has the ability to determine the spatial distribution of hundreds of analytes in a sample. Being at the interface of proteomics/metabolomics and imaging, the acquired datasets are large and complex and often analyzed with proprietary software or in-house scripts, which hinders reproducibility. Open source software solutions that enable reproducible data analysis often require programming skills and are therefore not accessible to many mass spectrometry imaging (MSI) researchers. Findings We have integrated 18 dedicated mass spectrometry imaging tools into the Galaxy framework to allow accessible, reproducible, and transparent data analysis. Our tools are based on Cardinal, MALDIquant, and scikit-image and enable all major MSI analysis steps such as quality control, visualization, preprocessing, statistical analysis, and image co-registration. Furthermore, we created hands-on training material for use cases in proteomics and metabolomics. To demonstrate the utility of our tools, we re-analyzed a publicly available N-linked glycan imaging dataset. By providing the entire analysis history online, we highlight how the Galaxy framework fosters transparent and reproducible research. Conclusion The Galaxy framework has emerged as a powerful analysis platform for the analysis of MSI data with ease of use and access, together with high levels of reproducibility and transparency. The Galaxy framework for flexible and reproducible data analysis In essence, the Galaxy framework is characterized by four hallmarks: (1) usage of a graphical front-end that is web browser based, hence alleviating the need for advanced IT skills or the requirement to locally install and maintain software tools; (2) access to largescale computational resources for academic users; (3) provenance tracking and full version control, including the ability to switch between software and tool version and to publish complete analysis, thus enabling full reproducibility; (4) access to a vast array of open-source tools with the ability of seamless passing data from one tool to another, thus generating added value by interoperability. Multiple Galaxy servers on essentially every continent provide access to large computing resources, data storing capabilities, and hundreds of pre-installed tools for a broad range of data analysis applications through a web browser based graphical user interface [28][29][30]. Additionally, there are more than hundred public Galaxy servers available that offer more specific tools for niche application areas. For local usage, Galaxy can be installed on any computer ranging from private laptops to high computing clusters. So-called "containers" facilitate a fully functional one-click installation independent of the operating system. Hence, local Galaxy serves are easily deployed even in "private" network situations in which these servers remain invisible and inaccessible to outside users. This ability empowers Galaxy for the analysis of sensitive and protected data, e.g. in a clinical setting. In the Galaxy framework, data analysis information is stored alongside the results of each analysis step to ensure reproducibility and traceability of results. The information includes tool and software names and versions together with all parameters [31]. 
We propose that MSI research can greatly benefit from the possibility to privately or publicly share data analysis histories, workflows, and visualizations with collaboration partners or the entire scientific community, e.g. as online supplementary data for peer-reviewed publications. The latter step easily fulfills the criteria of the suggested MSI minimum reporting guidelines [6,16]. The Galaxy framework is predestined for the analysis of multi-omics studies as it facilitates the integration of software of different origin into one analysis [32,33]. The possibility to seamlessly link tools of different origins has outstanding potential for MSI studies that often rely on different software platforms to analyze MSI data, additional MS/MS data (from liquid chromatography coupled to tandem mass spectrometry), and (multimodal) imaging data. More than a hundred tools for proteomics and metabolomics data analysis are readily available in Galaxy due to community-driven efforts [34][35][36][37][38]. Increasing integration of MSI with other omics approaches such as genomics and transcriptomics is anticipated, and the Galaxy framework offers a powerful and future-proof platform to tackle complex, interconnected data-driven experiments. The newly available MSI toolset in the Galaxy framework We have developed 18 Galaxy tools that are based on the commonly used open-source software packages Cardinal, MALDIquant, and scikit-image and enable all steps that commonly occur in MSI data analysis (Figure 1) [20,21,24]. In order to deeply integrate those tools into the Galaxy framework, we developed bioconda packages and biocontainers as well as a so-called 'wrapper' for each tool [31,39]. The MSI tools consist of R scripts that were developed based on Cardinal and MALDIquant functionalities, extended for more analysis options and a consistent framework for input and output of metadata (Additional File 1). The image co-registration method uses scikit-image for image processing. All tools are deliberately built in a modular way to enable highly flexible analysis and to allow a multitude of additional functionalities by cleverly combining the MSI-specific tools with already available Galaxy tools. Typical MSI data analysis steps include quality control, file handling, preprocessing, ROI annotation, supervised and unsupervised statistical analysis, visualization and identification of features. Due to the variety of MSI applications, tools of all or only a few of these categories are used, and the order of usage is highly flexible. To serve a broad range of data analysis tasks, we provide 18 tools that cover all common data analysis procedures and can be arbitrarily connected to allow customized analysis. Data formats and data handling: We extended the Galaxy framework to support open and standardized MSI data files such as imzML, which is the default input format for the Galaxy MSI tools. Nowadays, the major mass spectrometer vendors directly support the imzML standard and several tools exist to convert different file formats to imzML [40]. Data can be easily uploaded to Galaxy via a web browser or via a built-in file transfer protocol (FTP) functionality. Intermediate result files can be further processed in the interactive environment that supports RStudio and Jupyter or downloaded for additional analysis outside of Galaxy [41]. To facilitate the parallel analysis of multiple files, the Galaxy framework offers so-called "file collections". 
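As an illustration of the open imzML format and of processing downloaded intermediate results outside of Galaxy, the sketch below reads an imzML file with the Python pyimzML parser and builds a total-ion-current image. The file name is a placeholder, and this is not how the Galaxy tools themselves are implemented (they are R scripts based on Cardinal and MALDIquant):

```python
# Hedged sketch: reading an imzML file with pyimzML and computing a TIC image.
# "example_dataset.imzML" is a hypothetical file name.
import numpy as np
from pyimzml.ImzMLParser import ImzMLParser

parser = ImzMLParser("example_dataset.imzML")

xs = [c[0] for c in parser.coordinates]
ys = [c[1] for c in parser.coordinates]
tic_image = np.zeros((max(ys), max(xs)))         # imzML coordinates are 1-based

for idx, (x, y, _z) in enumerate(parser.coordinates):
    mzs, intensities = parser.getspectrum(idx)
    tic_image[y - 1, x - 1] = np.sum(intensities)  # total ion current per pixel

print("pixels:", len(parser.coordinates),
      "TIC range:", tic_image.min(), "-", tic_image.max())
```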
Numerous files can be represented in a file collection, allowing simultaneous analysis of all files while the effort for the user is the same as for a single file. MSI metadata such as spectra annotations, calibrant m/z, and statistical results are stored as tab-separated values files, thus enabling processing by a plethora of tools both inside and outside the Galaxy framework. All graphical results of the MSI tools are stored as concise vector graphic PDF reports with publication-quality images. Quality control and visualization: MSI Qualitycontrol: Quality control is an essential step in data analysis and should not only be used to judge the quality of the raw data but also to control processing steps such as smoothing, peak picking, and intensity normalization. Therefore, we have developed the 'MSI Qualitycontrol' tool that automatically generates a comprehensive pdf report with more than 30 different plots that enable a global view of all aspects of the MSI data, including intensity distribution, m/z accuracy and segmentation maps. For example, spectra with bad quality, such as a low total ion current or a low number of peaks, can be directly spotted in the quality report and subsequently removed by applying the 'MSI data exporter' and 'MSI filtering' tools. MSI mz image: The 'MSI mz image' tool allows the automatic generation of a publication-quality pdf file with distribution heat maps for all m/z features provided in a tab-separated values file. Contrast enhancement and smoothing options are available, as well as the possibility to overlay several m/z features in one image. MSI plot spectra: The 'MSI plot spectra' tool displays multiple single or average mass spectra in a pdf file. Overlay of multiple single or averaged mass spectra with different colors in one plot is also possible. The Galaxy framework offers various visualization options for tab-separated values files, including heatmaps, barplots, scatterplots, and histograms. This enables a quick visualization of the properties of tab-separated values files obtained during MSI analysis. A large variety of tools that allow filtering, sorting, and manipulating of tab-separated values files is already available in Galaxy and can be integrated into the MSI data analysis. Some dedicated tools for imzML file handling were newly integrated into the Galaxy framework. MSI combine: The 'MSI combine' tool allows combining several imzML files into a merged dataset. This is especially important to enable direct visual but also statistical comparison of MSI data derived from multiple files. With the 'MSI combine' tool, individual MSI datasets are either placed next to each other in a coordinate system or can be shifted in the x or y direction in a user-defined way. The output of the tool contains a single file with the combined MSI data and an additional tab-separated values file with spectra annotations, i.e. each spectrum is annotated with its original file name (before combination) and, if applicable, with previously defined annotations such as diagnosis, disease type, and other clinical parameters. MSI filtering: The 'MSI filtering' tool provides options to filter m/z features and pixels (spectra) of interest, either by applying manual ranges (minimum and maximum m/z, spatial area as defined by x/y coordinates) or by keeping only m/z features or coordinates of pixels that are provided in a tab-separated values file. Unwanted m/z features such as pre-defined contaminant features can be removed within a preselected m/z tolerance. 
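The idea of removing or matching m/z features within a tolerance can be pictured with a small sketch; the feature and contaminant m/z values and the 50 ppm tolerance are invented for illustration and do not correspond to the tool's internal implementation:

```python
# Sketch of filtering m/z features against a contaminant list within a ppm
# tolerance, similar in spirit to the 'MSI filtering' functionality described
# above. All m/z values are hypothetical.
import numpy as np

features = np.array([1257.42, 1460.53, 1663.58, 1809.64])   # measured m/z features
contaminants = np.array([1460.55, 2010.10])                  # known contaminant m/z
tolerance_ppm = 50.0

def within_ppm(mz, reference, ppm):
    """True if mz lies within the given ppm tolerance of the reference mass."""
    return abs(mz - reference) / reference * 1e6 <= ppm

kept = [
    mz for mz in features
    if not any(within_ppm(mz, c, tolerance_ppm) for c in contaminants)
]
print("retained m/z features:", kept)
```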
MSI data exporter: The 'MSI data exporter' can export the spectra, intensity and m/z data of an imzML file together with their summarized properties into tab-separated values files. Region of interest annotation: For supervised analysis, spatial regions of interest (ROI) can be defined. However, annotation of these ROIs is infeasible on the MSI images. Therefore, the ROIs are annotated on a photograph or histological image of the sample. We extended and developed six new tools to support this annotation workflow. Statistical analysis: A multitude of statistical analysis options for tab-separated values files is already available in Galaxy; the most MSI-relevant tools are from the Workflow4Metabolomics project and consist of unsupervised and supervised statistical analysis tools [44]. For specific purposes of spatially resolved MSI data analysis, we have integrated Cardinal's powerful spatially aware statistical analysis options into the Galaxy framework. MSI segmentation: The 'MSI segmentation' tool enables spatially aware unsupervised statistical analysis with principal component analysis, spatially aware k-means clustering and spatial shrunken centroids [45,46]. MSI classification: The 'MSI classification' tool offers three options for spatially aware supervised statistical analysis: partial least squares (discriminant analysis), orthogonal partial least squares (discriminant analysis), and spatial shrunken centroids [47]. Analyte identification: m/z determination on its own often remains insufficient to identify analytes. Compound fragmentation and tandem mass spectrometry are typically employed for compound identification by mass spectrometry. In MSI, the required local confinement of the mass spectrometry analysis severely limits the compound amounts that are available for fragmentation. Hence, direct on-target fragmentation is rarely employed in MSI. A common practice for compound identification is a combinatorial approach in which LC-MS/MS data are used to identify the analytes while MSI analyzes their spatial distribution. This approach requires assigning putative analyte information to m/z values within a given accuracy range. Join two files on a column allowing a small difference: This newly developed tool allows for the matching of numeric columns of two tab-separated values files on the smallest distance, which can be absolute or in ppm. This tool can be used to identify the m/z features of a tab-separated values file by matching them to already identified m/z features of another tab-separated values file (e.g. from a database or from an analysis workflow). Community efforts such as Galaxy-M, Galaxy-P, Phenomenal, and Workflow4Metabolomics have led to a multitude of metabolomics and proteomics analysis tools available in Galaxy [34][35][36][37][38]. These tools allow the analysis of additional tandem mass spectrometry data that are often acquired to aid identification of MSI m/z features. Databases to which the results can be matched, such as UniProt and LIPID MAPS, are directly available in Galaxy [48,49]. The highly interdisciplinary and modular data analysis options in Galaxy render it a very powerful platform for MSI data analyses that are part of a multi-omics study. Accessibility & training All described tools are easily accessible and usable via the European Galaxy server [29]. Furthermore, all tools are deposited in the Galaxy Toolshed, from where they can be easily installed into any other Galaxy instance [50]. 
We have developed bioconda packages and biocontainers that allow for version control and automated installation of all tool dependencies -those packages are also useful outside Galaxy to enhance reproducibility [31,39]. For researchers that do not want to use publicly available Galaxy servers, we provide a pre-built Docker image that is easy to install independent of the operating system. For a swift introduction into the analysis of MSI data in Galaxy, we have developed training material for metabolomics and proteomic use cases and deposited it to the central repository of the Galaxy Training Network [51,52]. The training materials consist of a comprehensive collection of small example datasets, step-by-step explanations and workflows that enable any interested researcher in following the training and understanding it through active participation. The first training explains data upload in Galaxy and describes the quality control of a mouse kidney tissue section in which peptides were imaged with an old MALDI-TOF [53]. The dataset contains peptide calibrants that allow the control of the digestion efficiency and m/z accuracy. Export of MSI data into tab-separated values files and further filtering of those files is explained as well. The second training explains the examination of the spatial distribution of volatile organic compounds in a chili section. The training roughly follows the corresponding publication and explains how average mass spectra are plotted and only the relevant m/z range is kept, as well as how to automatically generate many m/z distribution maps and overlay several m/z feature maps [19]. The third training determines and identifies N-linked glycans in mouse kidney tissue sections with MALDI-TOF and additional LC-MS/MS data analysis [54,55]. The training covers combining datasets, preprocessing as well as unsupervised and supervised statistical analysis to find potential N-linked glycans that have different abundances in the PNGase F treated kidney section compared to the kidney section that was treated with buffer only. The training further covers identification of the potential N-linked glycans by matching their m/z values to a list of N-linked glycan m/z that were identified by LC-MS/MS. The full dataset is used as a case study in the following section. Case study To exemplify the utility of our MSI tools we re-analyzed the N-glycan dataset that was recently made available by Gustafsson et al. via the PRIDE repository with accession PXD009808 [55,56]. The aim of the study was to demonstrate that their automated sample preparation method for MALDI imaging of N-linked glycans successfully works on formalinfixed paraffin-embedded (FFPE) murine kidney tissue [54]. PNGase F was printed on two FFPE murine kidney sections to release N-linked glycans from proteins while in a third section one part of the kidney was covered with N-glycan calibrants and another part with buffer to serve as a control. We downloaded all four imzML files (two treated kidneys, control and calibrants) from PRIDE and uploaded them with the composite upload function into Galaxy. To obtain an overview of the files we used the 'MSI Qualitycontrol' tool. We resampled the m/z axis, combined all files and run again the 'MSI Qualitycontrol' tool to directly compare the four subfiles. Next, we performed TIC normalization, smoothing and baseline removal. Spectra were aligned to the stable peaks that are present in at least 80 % of all spectra [57]. 
Abstract: Background: Mass spectrometry imaging is increasingly used in biological and translational research as it has the ability to determine the spatial distribution of hundreds of analytes in a sample. Being at the interface of proteomics/metabolomics and imaging, the acquired data sets are large and complex and often analyzed with proprietary software or in-house scripts, which hinder reproducibility. Open source software solutions that enable reproducible data analysis often require programming skills and are therefore not accessible to many MSI researchers. Findings: We have integrated 18 dedicated mass spectrometry imaging tools into the Galaxy framework to allow accessible, reproducible, and transparent data analysis. Our tools are based on Cardinal, MALDIquant, and scikit-image and enable all major MSI analysis steps such as quality control, visualization, preprocessing, statistical analysis, and image coregistration.
Further, we created hands-on training material for use cases in proteomics and metabolomics. To demonstrate the utility of our tools, we re-analyzed a publicly available N-linked glycan imaging dataset. By providing the entire analysis history online, we highlight how the Galaxy framework fosters transparent and reproducible research. Conclusion: The Galaxy framework has emerged as a powerful platform for the analysis of MSI data, combining ease of use and access with high levels of reproducibility and transparency. Findings: Background: Mass spectrometry imaging (MSI) is increasingly used for a broad range of biological and clinical applications as it allows the simultaneous measurement of hundreds of analytes and their spatial distribution. The versatility of MSI is based on its ability to measure many different kinds of molecules such as peptides, metabolites or chemical compounds in a large variety of samples such as cells, tissues, fingerprints or human-made materials [1][2][3][4][5]. Depending on the sample, the analyte of interest and the application, different mass spectrometers are used [6]. Due to the variety of samples, analytes, and mass spectrometers, MSI is suitable for highly diverse use cases ranging from plant research to (pre-)clinical, pharmacologic studies, and forensic investigations [2,[7][8][9]. On the other hand, the variety of research fields hinders harmonization and standardization of MSI protocols. Recently, efforts were started to develop optimized sample preparation protocols and show their reproducibility in multicenter studies [10][11][12][13]. In contrast, efforts to make data analysis standardized and reproducible are in their infancy. Reproducibility of MSI data analyses is hindered by the common use of software with restricted access such as proprietary software, license-requiring software, or unpublished in-house scripts [14]. Open source software has the potential to advance accessibility and reproducibility in data analysis but requires complete reporting of software versions and parameters, which is not yet routine in MSI [15][16][17]. A variety of open-source software solutions for MSI data analysis exists [18]. Yet, many of these tools necessitate steep learning curves, in some cases even requiring programming knowledge to make use of their full range of functions [19][20][21][22][23]. To overcome problems with accessibility of software and computing resources, standardization, and reproducibility, we developed MSI data analysis tools for the Galaxy framework that are based on the open source software suites Cardinal, MALDIquant, and scikit-image [20,21,24]. Galaxy is an open source computational platform for biomedical research that was developed to support researchers without programming skills in the analysis of large data sets, e.g. in the field of next-generation sequencing. Galaxy is used by hundreds of thousands of researchers and provides thousands of different tools for many different scientific fields [25]. Aims: With the present publication, we aim to raise awareness within the MSI community for the advantages offered by the Galaxy framework with regard to standardized and reproducible data analysis pipelines. Secondly, we present newly developed Galaxy tools and offer them to the MSI community through the graphical front-end and "drag-and-drop" workflows of the Galaxy framework.
Thirdly, we apply the MSI Galaxy tools to a publicly available dataset to study N-glycan identity and distribution in murine kidney specimens in order to demonstrate usage of a Galaxy-based MSI analysis pipeline that facilitates standardization and reproducibility and is compatible with the principles of FAIR (findable, accessible, interoperable, and re-usable) data and MIAPE (minimum information about a proteomics experiment) [26,27]. The Galaxy framework for flexible and reproducible data analysis In essence, the Galaxy framework is characterized by four hallmarks: (1) usage of a graphical front-end that is web browser based, hence alleviating the need for advanced IT skills or the requirement to locally install and maintain software tools; (2) access to large-scale computational resources for academic users; (3) provenance tracking and full version control, including the ability to switch between software and tool versions and to publish complete analyses, thus enabling full reproducibility; (4) access to a vast array of open-source tools with the ability to seamlessly pass data from one tool to another, thus generating added value by interoperability. Multiple Galaxy servers on essentially every continent provide access to large computing resources, data storage capabilities, and hundreds of pre-installed tools for a broad range of data analysis applications through a web browser based graphical user interface [28][29][30]. Additionally, there are more than a hundred public Galaxy servers available that offer more specific tools for niche application areas. For local usage, Galaxy can be installed on any computer ranging from private laptops to high-performance computing clusters. So-called "containers" facilitate a fully functional one-click installation independent of the operating system. Hence, local Galaxy servers are easily deployed even in "private" network situations in which these servers remain invisible and inaccessible to outside users. This ability empowers Galaxy for the analysis of sensitive and protected data, e.g. in a clinical setting. In the Galaxy framework, data analysis information is stored alongside the results of each analysis step to ensure reproducibility and traceability of results. The information includes tool and software names and versions together with all parameters [31]. We propose that MSI research can greatly benefit from the possibility to privately or publicly share data analysis histories, workflows, and visualizations with collaboration partners or the entire scientific community, e.g. as online supplementary data for peer-reviewed publications. The latter step easily fulfills the criteria of the suggested MSI minimum reporting guidelines [6,16]. The Galaxy framework is predestined for the analysis of multi-omics studies as it facilitates the integration of software of different origin into one analysis [32,33]. The possibility to seamlessly link tools of different origins has outstanding potential for MSI studies that often rely on different software platforms to analyze MSI data, additional MS/MS data (from liquid chromatography coupled tandem mass spectrometry), and (multimodal) imaging data. More than a hundred tools for proteomics and metabolomics data analysis are readily available in Galaxy due to community-driven efforts [34][35][36][37][38]. Increasing integration of MSI with other omics approaches such as genomics and transcriptomics is anticipated, and the Galaxy framework offers a powerful and future-proof platform to tackle complex, interconnected data-driven experiments.
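Because every Galaxy server exposes a REST API, shared histories and workflows can also be retrieved programmatically. The short sketch below uses BioBlend, the community-maintained Python client for the Galaxy API; the server URL and API key are placeholders introduced here for illustration and are not part of the original publication.

```python
# Minimal sketch using BioBlend, the Python client for the Galaxy API.
# The server URL and the API key below are placeholders.
from bioblend.galaxy import GalaxyInstance

gi = GalaxyInstance(url="https://usegalaxy.eu", key="YOUR_API_KEY")

# List the user's analysis histories (each history records tools,
# versions, and parameters for every step of an analysis).
for history in gi.histories.get_histories():
    print(history["name"], history["id"])

# List stored workflows, which can be exported and shared for reuse.
for workflow in gi.workflows.get_workflows():
    print(workflow["name"], workflow["id"])
```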
The newly available MSI toolset in the Galaxy framework We have developed 18 Galaxy tools that are based on the commonly used open-source software suites Cardinal, MALDIquant, and scikit-image and enable all steps that commonly occur in MSI data analysis (Figure 1) [20,21,24]. In order to deeply integrate those tools into the Galaxy framework, we developed bioconda packages and biocontainers as well as a so-called 'wrapper' for each tool [31,39]. The MSI tools consist of R scripts that were developed based on Cardinal and MALDIquant functionalities, extended for more analysis options and a consistent framework for input and output of metadata (Additional File 1). The image coregistration method uses scikit-image for image processing. All tools are deliberately built in a modular way to enable highly flexible analysis and to allow a multitude of additional functionalities by cleverly combining the MSI-specific tools with already available Galaxy tools. Figure 1: Typical MSI data analysis steps and associated Galaxy tools. Typical MSI data analysis steps include quality control, file handling, preprocessing, ROI annotation, supervised and unsupervised statistical analysis, visualization, and identification of features. Due to the variety of MSI applications, tools of all or only a few of these categories are used and the order of usage is highly flexible. To serve a broad range of data analysis tasks, we provide 18 tools that cover all common data analysis procedures and can be arbitrarily connected to allow customized analysis. Data formats and data handling: We extended the Galaxy framework to support open and standardized MSI data files such as imzML, which is the default input format for the Galaxy MSI tools. Nowadays, the major mass spectrometer vendors directly support the imzML standard and several tools exist to convert different file formats to imzML [40]. Data can be easily uploaded to Galaxy via a web browser or via a built-in file transfer protocol (FTP) functionality. Intermediate result files can be further processed in the interactive environment that supports RStudio and Jupyter or downloaded for additional analysis outside of Galaxy [41]. To facilitate the parallel analysis of multiple files, the Galaxy framework offers so-called "file collections". Numerous files can be represented in a file collection, allowing simultaneous analysis of all files while the effort for the user is the same as for a single file. MSI metadata such as spectra annotations, calibrant m/z, and statistical results are stored as tab-separated values files, thus enabling processing by a plethora of tools both inside and outside the Galaxy framework. All graphical results of the MSI tools are stored as concise vector-graphic PDF reports with publication-quality images.
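Because imzML is an open standard, the same files processed in Galaxy can also be inspected with any compliant reader. The sketch below uses the open-source pyimzml parser to iterate over the pixels of a hypothetical file; the file name is a placeholder and the loop only illustrates the pixel/spectrum access pattern.

```python
# Reading an imzML file outside Galaxy with the open-source pyimzml parser
# (pip install pyimzml). "example.imzML" is a placeholder file name.
from pyimzml.ImzMLParser import ImzMLParser

parser = ImzMLParser("example.imzML")

# Each spectrum is addressed by its index; coordinates give pixel positions.
for idx, (x, y, z) in enumerate(parser.coordinates):
    mzs, intensities = parser.getspectrum(idx)
    tic = intensities.sum()  # total ion current of this pixel
    print(f"pixel ({x}, {y}): {len(mzs)} data points, TIC = {tic:.1f}")
    if idx == 4:  # only show the first few pixels
        break
```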
Quality control and visualization: MSI Quality control: Quality control is an essential step in data analysis and should not only be used to judge the quality of the raw data but also to control processing steps such as smoothing, peak picking, and intensity normalization. Therefore, we have developed the 'MSI Qualitycontrol' tool that automatically generates a comprehensive PDF report with more than 30 different plots that enable a global view of all aspects of the MSI data, including intensity distribution, m/z accuracy, and segmentation maps. For example, low-quality spectra, such as those with a low total ion current or a low number of peaks, can be directly spotted in the quality report and subsequently removed by applying the 'MSI data exporter' and 'MSI filtering' tools. MSI mz image: The 'MSI mz image' tool automatically generates a publication-quality PDF file with distribution heat maps for all m/z features provided in a tab-separated values file. Contrast enhancement and smoothing options are available, as well as the possibility to overlay several m/z features in one image. MSI plot spectra: The 'MSI plot spectra' tool displays multiple single or average mass spectra in a PDF file. Overlay of multiple single or averaged mass spectra with different colors in one plot is also possible. The Galaxy framework offers various visualization options for tab-separated values files, including heatmaps, barplots, scatterplots, and histograms. This enables a quick visualization of the properties of tab-separated values files obtained during MSI analysis. File handling: A large variety of tools that allow for filtering, sorting, and manipulating tab-separated values files is already available in Galaxy and can be integrated into the MSI data analysis. Some dedicated tools for imzML file handling were newly integrated into the Galaxy framework. MSI combine: The 'MSI combine' tool allows combining several imzML files into a merged dataset. This is especially important to enable direct visual as well as statistical comparison of MSI data that derive from multiple files. With the 'MSI combine' tool, individual MSI datasets are either placed next to each other in a coordinate system or can be shifted in the x or y direction in a user-defined way. The output of the tool contains a single file with the combined MSI data and an additional tab-separated values file with spectra annotations, i.e. each spectrum is annotated with its original file name (before combination) and, if applicable, with previously defined annotations such as diagnosis, disease type, and other clinical parameters. MSI filtering: The 'MSI filtering' tool provides options to filter m/z features and pixels (spectra) of interest, either by applying manual ranges (minimum and maximum m/z, spatial area as defined by x/y coordinates) or by keeping only m/z features or coordinates of pixels that are provided in a tab-separated values file. Unwanted m/z features such as pre-defined contaminant features can be removed within a preselected m/z tolerance. MSI data exporter: The 'MSI data exporter' can export the spectra, intensity, and m/z data of an imzML file together with their summarized properties into tab-separated values files. Region of interest annotation: For supervised analysis, spatial regions of interest (ROI) can be defined. However, annotation of these ROIs is infeasible on the MSI images. Therefore, the ROIs are annotated on a photograph or histological image of the sample. We extended and developed six new tools to coregister such images with the MSI data and to transfer the ROI annotations onto the corresponding spectra. Statistical analysis: A multitude of statistical analysis options for tab-separated values files is already available in Galaxy; the most relevant tools for MSI are provided by the Workflow4metabolomics project and consist of unsupervised and supervised statistical analysis tools [44]. For the specific purposes of spatially resolved MSI data analysis, we have integrated Cardinal's powerful spatially aware statistical analysis options into the Galaxy framework. MSI segmentation: The 'MSI segmentation' tool enables spatially aware unsupervised statistical analysis with principal component analysis, spatially aware k-means clustering, and spatial shrunken centroids [45,46]. MSI classification: The 'MSI classification' tool offers three options for spatially aware supervised statistical analysis: partial least squares (discriminant analysis), orthogonal partial least squares (discriminant analysis), and spatial shrunken centroids [47].
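Conceptually, these segmentation and classification methods operate on a matrix of pixels by m/z features and map per-pixel results back onto the image grid. The sketch below illustrates that data layout with a plain, non-spatial principal component analysis on random data; it is a simplified stand-in for Cardinal's spatially aware algorithms, and the grid and feature dimensions are arbitrary.

```python
# Illustration of the data layout used by the segmentation tools: a matrix
# of n_pixels x n_mz_features, here reduced by plain PCA and mapped back
# onto the image grid. This is a non-spatial stand-in for Cardinal's
# spatially aware methods, using random data instead of a real measurement.
import numpy as np

rng = np.random.default_rng(0)
nx, ny, n_features = 40, 30, 200          # hypothetical image grid and feature count
X = rng.poisson(5.0, size=(nx * ny, n_features)).astype(float)

X_centered = X - X.mean(axis=0)
# Principal components via SVD; scores of component 1 for every pixel.
U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)
scores_pc1 = U[:, 0] * S[0]

# Reshape the per-pixel scores back into the image for visual inspection.
pc1_image = scores_pc1.reshape(nx, ny)
print(pc1_image.shape)                    # (40, 30)
```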
Analyte identification: m/z determination on its own often remains insufficient to identify analytes. Compound fragmentation and tandem mass spectrometry are typically employed for compound identification by mass spectrometry. In MSI, the required local confinement of the mass spectrometry analysis severely limits the compound amounts that are available for fragmentation. Hence, direct on-target fragmentation is rarely employed in MSI. A common practice for compound identification is a combinatorial approach in which LC-MS/MS data is used to identify the analytes while MSI analyses their spatial distribution. This approach requires assigning putative analyte information to m/z values within a given accuracy range. Join two files on a column allowing a small difference: This newly developed tool allows for the matching of numeric columns of two tab-separated values files on the smallest distance, which can be absolute or in ppm. This tool can be used to identify the m/z features of a tab-separated values file by matching them to already identified m/z features of another tab-separated values file (e.g. from a database or from an analysis workflow).
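A minimal sketch of such distance-based matching is given below. The feature lists and the 300 ppm tolerance are illustrative values, and the code is a simplified stand-in for the Galaxy tool rather than its actual implementation.

```python
# Sketch of matching m/z features between two tables on the smallest
# distance within a ppm tolerance, in the spirit of the join tool above.
# The feature lists and the 300 ppm tolerance are illustrative values.
import numpy as np

msi_mz = np.array([1257.42, 1663.58, 2028.71])   # hypothetical MSI features
lib_mz = np.array([1257.37, 1663.60, 1905.63])   # hypothetical LC-MS/MS identifications

TOL_PPM = 300.0

for mz in msi_mz:
    ppm_errors = (mz - lib_mz) / lib_mz * 1e6    # signed error in ppm
    best = np.argmin(np.abs(ppm_errors))
    if abs(ppm_errors[best]) <= TOL_PPM:
        print(f"{mz:.4f} -> {lib_mz[best]:.4f} ({ppm_errors[best]:+.1f} ppm)")
    else:
        print(f"{mz:.4f} -> no match within {TOL_PPM:.0f} ppm")
```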
Community efforts such as Galaxy-M, Galaxy-P, PhenoMeNal, and Workflow4Metabolomics have led to a multitude of metabolomics and proteomics analysis tools being available in Galaxy [34][35][36][37][38]. These tools allow analyzing additional tandem mass spectrometry data that is often acquired to aid identification of MSI m/z features. Databases to which the results can be matched, such as UniProt and LIPID MAPS, are directly available in Galaxy [48,49]. The highly interdisciplinary and modular data analysis options in Galaxy render it a very powerful platform for MSI data analyses that are part of a multi-omics study. Accessibility & training All described tools are easily accessible and usable via the European Galaxy server [29]. Furthermore, all tools are deposited in the Galaxy Toolshed, from where they can be easily installed into any other Galaxy instance [50]. We have developed bioconda packages and biocontainers that allow for version control and automated installation of all tool dependencies; those packages are also useful outside Galaxy to enhance reproducibility [31,39]. For researchers who do not want to use publicly available Galaxy servers, we provide a pre-built Docker image that is easy to install independent of the operating system. For a swift introduction into the analysis of MSI data in Galaxy, we have developed training material for metabolomics and proteomics use cases and deposited it in the central repository of the Galaxy Training Network [51,52]. The training materials consist of a comprehensive collection of small example datasets, step-by-step explanations, and workflows that enable any interested researcher to follow the training and understand it through active participation. The first training explains data upload in Galaxy and describes the quality control of a mouse kidney tissue section in which peptides were imaged with an older MALDI-TOF instrument [53]. The dataset contains peptide calibrants that allow the control of the digestion efficiency and m/z accuracy. Export of MSI data into tab-separated values files and further filtering of those files is explained as well. The second training explains the examination of the spatial distribution of volatile organic compounds in a chili section. The training roughly follows the corresponding publication and explains how average mass spectra are plotted and only the relevant m/z range is kept, as well as how to automatically generate many m/z distribution maps and overlay several m/z feature maps [19]. The third training determines and identifies N-linked glycans in mouse kidney tissue sections with MALDI-TOF and additional LC-MS/MS data analysis [54,55]. The training covers combining datasets, preprocessing, as well as unsupervised and supervised statistical analysis to find potential N-linked glycans that have different abundances in the PNGase F-treated kidney section compared to the kidney section that was treated with buffer only. The training further covers identification of the potential N-linked glycans by matching their m/z values to a list of N-linked glycan m/z values that were identified by LC-MS/MS. The full dataset is used as a case study in the following section. Case study To exemplify the utility of our MSI tools, we re-analyzed the N-glycan dataset that was recently made available by Gustafsson et al. via the PRIDE repository with accession PXD009808 [55,56]. The aim of that study was to demonstrate that their automated sample preparation method for MALDI imaging of N-linked glycans successfully works on formalin-fixed paraffin-embedded (FFPE) murine kidney tissue [54]. PNGase F was printed on two FFPE murine kidney sections to release N-linked glycans from proteins, while in a third section one part of the kidney was covered with N-glycan calibrants and another part with buffer to serve as a control. We downloaded all four imzML files (two treated kidneys, control, and calibrants) from PRIDE and uploaded them with the composite upload function into Galaxy. To obtain an overview of the files, we used the 'MSI Qualitycontrol' tool. We resampled the m/z axis, combined all files, and ran the 'MSI Qualitycontrol' tool again to directly compare the four subfiles. Next, we performed TIC normalization, smoothing, and baseline removal. Spectra were aligned to stable peaks that are present in at least 80% of all spectra [57]. Spectra in which fewer than two stable peaks could be aligned were removed. This affected mainly spectra from the control file. Peak picking, detection of monoisotopic peaks, and binning were performed on the average spectra of each subfile. The obtained m/z features were extracted with Cardinal's 'peaks' algorithm from the normalized, smoothed, baseline-removed, and aligned file.
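To make the preprocessing steps just described more concrete, the sketch below shows schematic numpy versions of TIC normalization and smoothing on a single synthetic spectrum. Cardinal's actual implementations are more elaborate; this only illustrates the underlying operations, and all values are synthetic.

```python
# Schematic versions of two preprocessing steps from the case study,
# TIC normalization and moving-average smoothing, written with plain numpy.
# The spectrum is synthetic; Cardinal's implementations differ in detail.
import numpy as np

rng = np.random.default_rng(1)
intensities = rng.gamma(2.0, 50.0, size=1000)   # synthetic spectrum

# TIC normalization: rescale so every spectrum has the same total ion current.
tic = intensities.sum()
normalized = intensities / tic * 1e6            # arbitrary common target TIC

# Moving-average smoothing with a 5-point window.
window = np.ones(5) / 5.0
smoothed = np.convolve(normalized, window, mode="same")

print(normalized.sum())                          # 1e6 by construction
print(smoothed.shape)                            # (1000,)
```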
Next, principal component analysis with four components was performed (Figure 2). To find potential N-linked glycans, the two treated tissues were compared to the control tissue with the supervised spatial shrunken centroids algorithm. Spatial shrunken centroids is a multivariate classification method that was specifically developed to account for the spatial structure of the data (Figure 3a) [45]. The supervised analysis provided us with 28 m/z features that significantly discriminated between the two PNGase F-treated kidneys and the control kidney according to their spatial shrunken centroids p-values. In the terms of a recent publication on reproducibility [16], our results show that the results of Gustafsson et al. are reproducible, because we, as another group, have followed their data analysis procedure as closely as possible and arrived at similar results. The reproducibility of the results demonstrates the capability of our pipeline. To enable what has been described as "methods reproducibility", we provide the complete analysis history and the corresponding workflow. With this in hand, any other researcher can use the same tools and parameters in Galaxy to obtain the same result as we did. We could identify 16 N-linked glycans by matching the m/z features of the MSI data (column 1) to the identified m/z features of the LC-MS/MS experiment (column 5). We allowed a maximum tolerance of 300 ppm and multiple matches. Only single matches occurred, with an average m/z error of 46 ppm (column 6). Publishing histories and workflows from Galaxy requires only a few clicks and provides more information than requested by the MSI minimum reporting guidelines MIAPE (Minimum Information About a Proteomics Experiment) and MIAMSIE (Minimum Information About a Mass Spectrometry Imaging Experiment) [6,16]. The Galaxy software itself, but also the shared histories and workflows, fulfil the FAIR principles that stand for findability, accessibility, interoperability, and reusability [27]. Summary With the integration of our MSI data analysis toolset into the Galaxy framework, we provide an accessible and reproducible data analysis platform for MSI data. Our MSI tools complement the multitude of already available Galaxy tools for proteomics and metabolomics that are maintained by Galaxy-M, Galaxy-P, PhenoMeNal, and Workflow4Metabolomics [34][35][36][37][38]. We are in close contact with those communities and would like to encourage developers of the MSI community to join forces and make their tools available in the Galaxy framework.
8,871.8
2019-05-05T00:00:00.000
[ "Computer Science", "Chemistry", "Biology" ]
Multi-antigen avian influenza A (H7N9) virus-like particles: particulate characterizations and immunogenicity evaluation in murine and avian models Background Human infection with avian influenza A virus (H7N9) was first reported in China in March 2013. Since then, hundreds of cases have been confirmed showing severe symptoms with a high mortality rate. The virus was transmitted from avian species to humans and has spread to many neighboring areas, raising serious concerns over its pandemic potential. Towards containing the disease, the goal of this study is to prepare a virus-like particle (VLP) that consists of hemagglutinin (HA), neuraminidase (NA) and matrix protein 1 (M1) derived from the human isolate A/Taiwan/S02076/2013(H7N9) for potential vaccine development. Results Full-length HA, NA, and M1 protein genes were cloned and expressed using a baculoviral expression system, and the VLPs were generated by co-infecting insect cells with the three respective recombinant baculoviruses. Nanoparticle tracking analysis and transmission electron microscopy were applied to verify the VLPs' structure and antigenicity, and the multiplicity of infection of the recombinant baculoviruses was adjusted to achieve the highest hemagglutination activity. In animal experiments, BALB/c mice and specific-pathogen-free chickens receiving the VLP immunization showed elevated hemagglutination inhibition serum titers and antibodies against NA and M1 proteins. In addition, examination of cellular immunity showed that the VLP-immunized mice and chickens exhibited increased splenic antigen-specific cytokine production. Conclusions The H7N9 VLPs possess desirable immunogenicity in vivo and may serve as a candidate for vaccine development against avian influenza A (H7N9) infection. Electronic supplementary material The online version of this article (doi:10.1186/s12896-016-0321-6) contains supplementary material, which is available to authorized users. Background Human infection with avian influenza A (H7N9) was first reported in China in March 2013, and since then hundreds of human cases have been confirmed. The disease is associated with severe respiratory illness, and the high mortality rate has garnered increasing attention globally [1,2]. The ability of the virus to transmit from avian species to humans has contributed to its spread to neighboring areas of China and raises serious concerns over its pandemic potential, particularly in areas with a poultry farming industry. With the growing need for countermeasures against the infectious threat, development of effective vaccines is of increasing importance. In addition to ongoing efforts on developing human vaccines against avian influenza virus A/H7N9, formulations that can effectively mount immunity in avian models are of significant public health and economic consideration as they may contain interspecies transmissions and improve overall poultry health. With the aim of preparing a vaccine candidate towards both human and avian applications, we prepared and characterized an A/H7N9-mimicking virus-like particle (VLP). The immune-potentiating effect of the VLP was evaluated in both a mammalian (mouse) and an avian (chicken) model to examine the particles' potential as an anti-viral vaccine. Similar to other influenza A viruses, H7N9 viral capsids are comprised of three primary proteins, including hemagglutinin (HA), neuraminidase (NA), and matrix protein 1 (M1).
Upon co-expression, these three proteins self-assemble into nanoparticles, which are responsible for packaging viral genomes for disease transmission [3]. Previously, VLPs, which are non-infectious particles devoid of any viral genes, have been shown to induce protective immunity against influenza and other viral diseases [4][5][6][7][8][9]. The VLPs' morphological and antigenic semblance to native virions facilitate effective immune processing, making them a compelling alternative to free viral antigens as vaccine candidates. In the present study, VLPs consisting of hemagglutinin (HA), neuraminidase (NA) and matrix protein 1 (M1) derived from the human isolate A/Taiwan/S02076/2013 (H7N9) were prepared for vaccine development. A combinatorial baculoviral system was applied to generate VLPs coexpressing the three viral proteins, which have been recognized as major antigenic targets of the H7N9 virus [10,11]. The resulting VLPs were assayed for their hemagglutination activity. Nanoparticle tracking analysis was applied to verify the size, surface charge, and concentration of the VLPs, and transmission electron microscopy following immunogold staining was performed to validate the VLPs' multi-antigenic nature. A mouse model was first used to validate the VLPs' immunogenicity, and a particular emphasis was placed on the humoral and cellular immune responses in a chicken model following vaccination with either VLPs or free protein antigens. Results observed from the study offer hope that the formulation may find applications in both clinical and agricultural settings. Recombinant baculoviruses were prepared by using the Bac-to-Bac baculovirus expression system (Invitrogen). Briefly, three separate recombinant plasmids were constructed by inserting full HA, NA, and M1 genes of A/Taiwan/S02076/2013(H7N9) (accession no. KF018045, KF018047, and KF018048) into the pFastBac-1 vector using primers listed in Table 1. The recombinant pFastBac-1 shuttle vectors were then transposed to the bacmid in E. coli strain DH10Bac, and recombinant bacmid was purified using the HiPure Plasmid Midiprep kit (Invitrogen). Sf9 cells were used for transfection with the recombinant bacmid. Right before transfection, 8 μl of the Cellfectin II Reagent (Invitrogen) was diluted in 100 μl of Grace's medium (without antibiotics and serum), and 1 μg of bacmid DNA was diluted in 100 μl of Grace's medium (without antibiotics and serum). Subsequently, the diluted bacmid DNA and diluted Cellfectin II were combined and incubated for 30 min at room temperature. The DNA-lipid mixture was then added onto the cells dropwise and incubated at 27°C for 4 h. Following removal of transfection mixture, fresh cell medium was added to the cells. After 72 h, recombinant baculoviruses were harvested from the supernatant and designated as rBac-H7, rBac-N9, and rBac-M1, respectively. The recombinant baculoviruses were subsequently amplified in Sf9 cells, and virus titers were determined by plaque assays in Sf21 cells. Immunofluorescence assay (IFA) Sf9 cells were infected with rBac-H7, rBac-N9, and rBac-M1 at a multiplicity of infection (MOI) of 2. Three days later, the monolayer of cells was washed and fixed with 80% acetone at −20°C for 20 min. 
Rabbit polyclonal antibody against H7/HA peptide (Sino Biological, Beijing, China), rabbit polyclonal antibody against N9/NA peptide (ProSci, Poway, CA), and mouse polyclonal antibody against a recombinant M1 protein were applied at 1:10,000, 1:2,000, and 1:1,000 dilutions respectively and incubated for 1 h. After PBS wash, cells were further incubated with FITC-conjugated secondary antibodies. Production and purification of H7N9 recombinant proteins and VLPs H7, N9 and M1 recombinant proteins were harvested from lysed Sf9 cells that were infected with rBac-H7, rBac-N9, or rBac-M1 (MOI = 2). After 72 h, the cells were washed and lysed with the I-PER insect cell protein extraction reagent (Thermo Scientific). Recombinant proteins were purified using the Glycoprotein Isolation Kit, ConA (Thermo Scientific) according to the manufacturer's instructions. H7N9 VLPs were harvested and purified from the culture supernatant of Sf9 cells co-infected with rBac-H7, rBac-N9, and rBac-M1. Briefly, 72 h after co-infection, the cell culture supernatant was centrifuged at 3,000 × g for 20 min, and the VLPs were pelleted by centrifugation at 70,000 × g for 2 h at 4°C. The pellet was resuspended in TEN buffer (10 mM Tris-base, 1 mM EDTA, and 100 mM NaCl). The resultant solution was layered onto a sucrose gradient solution (20-50% in TEN buffer) and centrifuged at 100,000 × g for 2 h. Particles from each gradient fraction were separately collected for the HA test and TEM analysis (described below), and the fractions demonstrating the highest HA activity and appropriate morphology were pooled as purified VLPs. The protein concentration was quantified by the Bradford protein assay (Bio-Rad, Richmond, CA) according to the manufacturer's recommendations. H7N9 recombinant proteins and VLPs were analyzed by 10% SDS-PAGE and probed with the anti-H7/HA, anti-N9/NA, and anti-M1 antibodies described in the IFA section. To detect the protein signals, the membranes were incubated with HRP-conjugated anti-rabbit or anti-mouse IgG (Jackson ImmunoResearch Laboratories) at a 1:2,000 dilution for another hour and developed using an ECL Western blotting detection kit (Bio-Rad). Immunogold labeling of VLPs Purified VLPs were absorbed onto a plasma-discharged copper grid for 2 min and fixed with 4% paraformaldehyde for 5 min. After PBS washes and blocking with 1% BSA (Sigma, St. Louis, MO), the grid was incubated with either anti-H7/HA or anti-N9/NA antibodies (1:200) for 1 h, followed by incubation with 6 nm gold-conjugated goat anti-rabbit antibodies (1:20) (Jackson ImmunoResearch Laboratories). After PBS washes, 2% phosphotungstic acid was applied for negative staining. Particles were observed under a transmission electron microscope (TEM) (JEOL JEM-1400). Nanoparticle tracking analysis Particle concentration, size distribution, and surface zeta potential of the expressed H7N9 VLPs were measured by nanoparticle tracking analysis using a NanoSight NS-500 (Malvern Instruments Inc., UK) based on the manufacturer's instructions. The purified influenza virus A/PuertoRico/8/34(H1N1) [12] was included as a reference. Optimization of co-infection of recombinant baculoviruses Four different conditions of co-infection were tested. rBac-HA (at an MOI of 2, 3.6, 4, or 5), rBac-NA (at an MOI of 2, 5, 5, or 6), and rBac-M1 (at an MOI of 2, 5, 10, or 20) were combined respectively and used for co-infection of Sf9 cells. After 72 h, the cell culture supernatant was centrifuged at 3,000 × g for 20 min, and the VLPs were pelleted by centrifugation at 70,000 × g for 2 h at 4°C.
The pellet was resuspended in TEN buffer, and the resultant solution was layered onto a sucrose gradient solution (20-50% in TEN buffer) and centrifuged at 100,000 × g for 2 h. Particles from each gradient fraction were separately collected for a hemagglutination test (described below). Mouse and chicken immunization Female 6-week-old BALB/c mice were purchased from BioLASCO (Taipei, Taiwan). Two-week-old specific-pathogen-free (SPF) chickens were obtained from JD-SPF Biotech (Miaoli, Taiwan). Animals were randomly divided into different experimental groups (n = 6 per group), receiving H7N9 VLPs, HA/NA/M1 free proteins (only for chickens), or PBS. Briefly, 10 μg of VLPs or pre-mixed HA/NA/M1 proteins at a 1:1:1 ratio were emulsified with the complete Freund's adjuvant and used for the primary immunization via an intramuscular route. For the booster dose, the same amount of the antigen was mixed with the incomplete Freund's adjuvant. Mouse serum was collected before immunization and 14 days post-immunization (dpi), and mice were sacrificed at 28 dpi. Chicken serum was collected before immunization and at 14, 19, and 26 dpi, and all the chickens were sacrificed at 40 dpi. Mice and chickens were sacrificed by CO2 inhalation. Hemagglutination (HA) test and hemagglutination inhibition (HI) test H7N9 VLPs and inactivated H7N9 virions were used as HA antigens in this study. Inactivated H7N9 virions were prepared as previously described [13]. HA and HI tests were performed with standard protocols provided by the WHO [14]. Briefly, the HA activity of purified H7N9 VLPs or inactivated H7N9 virions was tested against red blood cells (RBCs), and HA titers were recorded as the highest dilution exhibiting complete hemagglutination. For the HI test, the receptor-destroying enzyme was used for treating animal sera, and HI titers were recorded as the highest serum dilution exhibiting complete hemagglutination inhibition. Antigen-specific cytokine expression analysis Mouse spleens were harvested at 28 dpi, and splenocytes were isolated for intracellular cytokine staining assays. Briefly, spleens were minced and passed through 70-μm cell strainers (Corning) to obtain single-cell suspensions. RBCs were lysed using an RBC lysis buffer (eBiosciences), and cells were resuspended in RPMI 1640 medium (Gibco, Grand Island, NY) containing 10% FBS. Viable cells were determined by trypan blue staining. 10^6 splenocytes were plated in 96-well U-bottom plates (Corning) and were either mock-stimulated or stimulated with 10 μg of H7N9 VLPs in the presence of brefeldin A (GolgiPlug, BD Biosciences, San Diego, CA) for 5 h at 37°C. Cells were then washed, incubated with 2.4G2 antibody, and labeled with anti-CD3e-FITC, anti-CD4-PerCP-Cy5.5, and anti-CD8a-PerCP-Cy5.5. The cells were then fixed and permeabilized using a Cytofix/Cytoperm Kit (BD Biosciences) and stained with anti-IL4-PE, anti-IFNγ-APC, and anti-TNF-PE. All antibodies for mouse experiments were purchased from BD Biosciences. Samples were read on the BD FACSCalibur and analyzed using FlowJo software (Tree Star, CA). Chicken spleens were harvested at 40 dpi, and splenocytes were isolated and stimulated with H7N9 VLPs in the presence of brefeldin A as described above. For the flow cytometric analysis, cells were washed and labeled with anti-chicken CD4 or CD8 antibodies (AbD Serotec, Raleigh, NC). The number of CD4+ or CD8+ T cells gated on 2 × 10^4 lymphocytes was determined. For the quantification of cytokine expression, the stimulated splenocytes were lysed, and total RNA was isolated with TriSolution reagent (GeneMark, Taipei, Taiwan). Real-time reverse transcription polymerase chain reaction (RT-PCR) was performed using iScript (Bio-Rad) and the iQ SYBR Green Supermix Kit (Bio-Rad) with previously described primers for chicken IL-4, IFN-γ, and GAPDH [15]. Melting curve analysis following real-time PCR was conducted to verify the specificity of each primer set. All obtained Ct values were normalized to GAPDH, and the relative expression of chicken IL-4 and IFN-γ (fold change relative to the PBS-vaccinated control) was determined by the 2^(−ΔΔCt) method [16].
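As a worked example of this calculation, the sketch below computes a fold change from made-up Ct values; the numbers and group labels are purely illustrative and are not measurements from this study.

```python
# Worked example of the 2^(-DeltaDeltaCt) calculation with made-up Ct values;
# the Ct values and group labels are illustrative, not data from the study.
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Fold change of a target gene vs. a control group, normalized to GAPDH."""
    delta_ct_sample = ct_target - ct_ref              # normalize to housekeeping gene
    delta_ct_control = ct_target_ctrl - ct_ref_ctrl
    delta_delta_ct = delta_ct_sample - delta_ct_control
    return 2.0 ** (-delta_delta_ct)

# Hypothetical values: IFN-gamma Ct 24.0 (VLP group) vs. 26.5 (PBS group),
# GAPDH Ct 18.0 in both groups.
fold = relative_expression(24.0, 18.0, 26.5, 18.0)
print(f"{fold:.1f}-fold IFN-gamma expression vs. PBS control")   # ~5.7-fold
```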
Enzyme-linked immunosorbent assay (ELISA) An amount of 100 ng of purified NA or M1 protein was coated onto flat-bottomed 96-well microplates (Nunc, Roskilde, Denmark) overnight at room temperature. After blocking with 5% skim milk (BD Difco, Sparks, MD), the test chicken sera were serially diluted and incubated for 1 h. Following washes, 100 μl of peroxidase-conjugated goat anti-chicken IgY (H + L) (Jackson ImmunoResearch Laboratories) diluted 1:2,000 in blocking buffer was dispensed into each well and incubated for another hour. After three additional washes, the wells were incubated with 100 μl of SureBlue Reserve TMB Microwell Peroxidase Substrate (KPL, Gaithersburg, MD), and color was allowed to develop in the dark for 10 min. The reaction was stopped by the addition of 100 μl of TMB stop solution (KPL). All the incubation steps were performed at room temperature. The optical density (OD) at 450 nm was read using an automated plate reader (Thermo Scientific). Statistical analyses Data were analyzed by unpaired t tests or ANOVA followed by Dunnett's multiple comparison tests using GraphPad Prism (GraphPad Software, San Diego, CA). p values smaller than 0.05 were considered significant. Production of H7N9 VLPs The full-length HA (1,683 nt), NA (1,398 nt), and M1 (759 nt) genes of A/Taiwan/S02076/2013(H7N9) were cloned into three separate recombinant baculoviruses. As indicated in Fig. 1, Sf9 cells were co-infected by the three recombinant baculoviruses, rBac-H7, rBac-N9, and rBac-M1, to generate H7N9 VLPs. The titer of each recombinant baculovirus was determined as 6 × 10^6 plaque-forming units (PFU)/mL, 3 × 10^7 PFU/mL, and 1 × 10^8 PFU/mL, respectively, by plaque assays (Fig. 2a, upper panel). The expression of HA, NA, and M1 proteins in Sf9 cells was identified by IFA using FITC-labelled antigen-specific antibodies. In Sf9 cells with nuclei counterstained by DAPI, the FITC signal was observed in the cytoplasm (Fig. 2a, lower panel), indicating successful protein expression by the respective recombinant baculoviruses. In contrast, no FITC signal was observed from the uninfected Sf9 cells following staining with any of the antigen-specific antibodies (data not shown). As HA is a primary protein that determines the antigenic signature and virulence of influenza viruses, we applied an HA activity assay using chicken red blood cells to optimize the co-infection condition and the VLP collection protocol. The three recombinant baculoviruses were combined at different multiplicities of infection to co-infect Sf9 cells. The culture supernatants of the resulting co-infected Sf9 cells were subsequently collected and purified using sucrose gradient centrifugation. Among the four different combinations of MOI, the highest HA titer (1:2^8) was obtained from the co-infection condition at an MOI of 2 for each of the recombinant baculoviruses.
This heightened HA activity was particularly pronounced in fractions containing 30-40 wt.% of sucrose (Fig. 2b). Based on this observation, all VLPs in the remainder of the study were prepared using Sf9 cells co-infected with the three recombinant baculoviruses at an MOI of 2. Plaque assays were applied to ensure an undetectable titer of residual baculovirus for every batch of the VLPs. Characterization of H7N9 VLPs Upon collection of purified VLPs, Western blotting analysis was applied to confirm the presence of all three of the HA (66 kDa), NA (68 kDa), and M1 (25 kDa) proteins. Each protein on the VLPs was observed to be largely identical in size to the protein from cells infected individually by rBac-H7, rBac-N9, and rBac-M1 (Fig. 2c). This result indicates that the three proteins self-assemble without undergoing further modifications. To examine the formation and antigen display of the H7N9 VLPs, immunogold staining was performed. Under TEM visualization, particles approximately 120 nm in diameter were observed, confirming successful preparation of VLPs that resemble the morphology of H7N9 viruses. Labeling by immunogold further confirmed the presence of HA proteins (Fig. 3a, left) and NA proteins (Fig. 3a, right) on the surface of the particles. Examination by nanoparticle tracking analysis revealed a unimodal particle size distribution for the VLPs with an average particle diameter of 113.9 ± 0.6 nm, which is similar to that of a native H1N1 virus (122.1 ± 2.9 nm) observed using the same technique (Fig. 3b). The surface charge of the VLPs was −24 ± 0.2 mV, which is also similar to that measured from the H1N1 virus (−37 ± 0.1 mV) (Fig. 3c). In addition, the nanoparticle tracking analysis provided insight into the mass of each individual VLP. Based on the observed particle concentration and protein quantification by Bradford assay, it was approximated that there are 1.11 × 10^9 particles per 1 μg of the VLPs, yielding a molecular weight of roughly 5.42 × 10^8 Da per particle. Protein quantification showed that the HA protein contributed approximately 10% of the total protein in the collected H7N9 VLPs (Additional file 1: Figure S1), translating to ~800 HA proteins per VLP. The number of HA proteins per VLP is in line with the estimated number in native virions [17].
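These per-particle figures follow from simple arithmetic, which the sketch below reproduces. The constants are taken from the numbers quoted above (particle count per microgram, ~10% HA mass fraction, 66 kDa HA monomer), and the result agrees with the ~800 HA copies estimate.

```python
# Arithmetic check of the per-particle figures quoted above: particle mass
# from the measured particle count per microgram, and HA copies per VLP
# from the ~10% HA mass fraction and a 66 kDa HA monomer.
PARTICLES_PER_UG = 1.11e9
DALTON_IN_GRAMS = 1.66054e-24
HA_MASS_FRACTION = 0.10
HA_MONOMER_DA = 66_000.0

grams_per_particle = 1e-6 / PARTICLES_PER_UG            # 1 ug split over all particles
mass_da = grams_per_particle / DALTON_IN_GRAMS
ha_copies = mass_da * HA_MASS_FRACTION / HA_MONOMER_DA

print(f"particle mass ~ {mass_da:.2e} Da")              # ~5.4e8 Da
print(f"HA copies per VLP ~ {ha_copies:.0f}")           # ~820, i.e. roughly 800
```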
The purified H7N9 VLPs agglutinate RBCs at a minimum total protein amount of 0.098 μg (Fig. 3d), indicating proper display of the receptor-binding domain of the HA protein on the VLPs. H7N9 VLP immunization elicited hemagglutination-inhibition antibody response in mice To validate the VLPs' immunogenicity, mice were immunized twice with 10 μg of the VLPs (approximately 1 μg of HA content) with a two-week interval (Fig. 4a). Blood was collected right before each vaccination for analysis. The level of serum HI antibody was evaluated by an HI test. As compared to the PBS-immunization control group, VLP-immunized mice exhibited a higher HI antibody response after the primary and the booster vaccination. On day 28, the average HI antibody endpoint dilution titer reached 1:80 and 1:160 against 4 hemagglutination units of H7N9 VLPs and inactivated H7N9 virions, respectively (Fig. 4b, c). The study validates the H7N9 VLP's ability to elicit an anti-HA humoral response in mice. H7N9 VLP immunization promotes virus-specific T cell immunity in mice To examine the splenic virus-specific T cell response in H7N9 VLP-immunized mice, intracellular cytokine staining was conducted on the harvested splenocytes on the day of sacrifice. In VLP-vaccinated mice, production of IL-4 and TNF was shown to be elevated in splenic CD4 and CD8 T cells, respectively (Fig. 4d and f). Although statistical significance was not reached, the mean percentage of IFN-γ-producing CD4 and CD8 T cells in mouse spleens was observed to be greater than in the control group following VLP vaccination (Fig. 4e and g). The results suggest that the VLP is capable of raising viral antigen-specific T cell immunity in mammals. H7N9 VLP immunization induced HI, anti-NA, and anti-M1 humoral responses in chickens To examine the immune-potentiating effect of the H7N9 VLPs in avian species, two-week-old SPF chickens were administered a primary and a booster vaccination of 10 μg of VLPs on day 0 and day 14 (Fig. 5a). To further evaluate the VLPs' effectiveness, an additional group of chickens was inoculated with 10 μg of free antigens consisting of HA, NA, and M1 at a 1:1:1 molar ratio for comparison. A group administered PBS was prepared in parallel as control. Blood was collected on days 0, 14, 19, 26, and 40 to assess the serum HI titers. In experiments using inactivated H7N9 virions as HA antigens, changes in HI serum titers could be observed after day 14 (Fig. 5b). Even though chickens receiving free proteins also showed increased HI titers as compared to the PBS control, the VLP-vaccinated group had significantly higher HI titers. A low HI activity was observed for the PBS control. Over the observation period, HI titers for both the VLP and free protein groups increased over time, reaching peaks of 173.33 and 83.33, respectively. Anti-NA and anti-M1 titers in the serum of the vaccinated chickens were also quantified by ELISA with the blood samples collected on day 26 post-immunization. For the anti-NA titer, immunization with free proteins was found to elevate the titer to a median value of 2,434, whereas immunization with the VLPs resulted in a median titer of 3,957 (Fig. 5c). Similarly, VLP immunization elicited a higher anti-M1 titer (median = 685) as compared to free protein immunization (median = 221) (Fig. 5d). The experimental results confirm the presence of HA, NA, and M1 on the VLPs. The VLPs are also demonstrated to possess superior capability in eliciting antigen-specific humoral responses as compared to free protein antigens. H7N9 VLP elicited virus-specific T cell responses in chickens To analyze antigen-specific T cell responses in chickens, chicken spleens were harvested on day 40 post-immunization. Following splenocyte isolation and stimulation with H7N9 VLPs, the splenocytes were lysed and real-time RT-PCR was performed to quantify IFN-γ and IL-4 mRNA expression levels (Fig. 6a and b). [Figure 2 caption, as preserved: The titer of recombinant baculovirus-HA (rBac-HA) was approximately 6 × 10^6 PFU/ml (left), the rBac-NA 3 × 10^7 PFU/ml (center), and the rBac-M1 1.1 × 10^8 PFU/ml (right). The protein expression of HA, NA, and M1 was identified respectively by immunofluorescence assay (lower panel). b Sf9 cells were co-infected with rBac-HA, rBac-NA, and rBac-M1 under various MOI combinations, and the VLPs were purified via sucrose gradient centrifugation. The formulation derived from co-infection with rBac-HA (MOI 2), rBac-NA (MOI 2), and rBac-M1 (MOI 2) exhibited the highest HA activity. c H7N9 VLP, H7/HA protein, N9/NA protein, and M1 proteins were detected by Western blot.]
It was observed that immunization with the free protein formulation and with the VLPs increased the IFN-γ and IL-4 levels, indicating elicitation of both Th1- and Th2-directed immune responses by both formulations. [Figure 3 caption: The formation of VLPs and proper antigen display of viral proteins were verified. a The HA (left) and NA (right) proteins were detected by antibodies labeled with 6 nm gold under a transmission electron microscope. Scale bars = 50 nm. b Size distribution and (c) surface zeta potential of the VLPs were analyzed using nanoparticle tracking analysis. Influenza virus A/PuertoRico/8/34(H1N1) was analyzed under the same settings as a reference. d Hemagglutination activity of H7N9 VLPs was assessed by a hemagglutination assay. The total protein contents of the H7N9 VLPs are indicated based on a two-fold serial dilution. PBS was included as the negative control.] Upon statistical analysis, however, only the VLPs induced significantly higher IFN-γ and IL-4 expression, whereas the enhancement by the free protein formulation did not achieve statistical significance owing to a high degree of variability. The results indicate that the immune responses induced by the free proteins can be highly variable. In contrast, immunization with the VLPs facilitated a more consistent and robust induction of both Th1 and Th2 responses. The CD4/CD8 ratio was also analyzed using flow cytometric analysis (Fig. 6c). On day 40 post-immunization, VLP immunization was shown to significantly increase the CD4/CD8 ratio from a mean value of 0.56 to 0.64 (p = 0.03). [Figure 4 caption: The immunogenicity of H7N9 VLP in a mouse model. a Mice (n = 6 per group) were immunized with VLPs (10 μg per mouse intramuscularly on day 0 and 14) or PBS, and blood samples were collected. Mice receiving H7N9 VLPs showed significantly increased serum HI titers against the H7N9 inactivated virions (b) or H7N9 VLPs (c). Mouse splenocytes were isolated on day 28 and restimulated ex vivo with the H7N9 VLPs. The antigen-specific cytokine responses were detected by intracellular cytokine staining. Compared with the PBS control, mice receiving the H7N9 VLPs showed elevated CD4 IL-4 and CD8 TNF production (d, f), whereas the production of CD4 IFN-γ or CD8 IFN-γ revealed no statistical difference (e, g). Error bars are mean ± SEM. *p < 0.05, **p < 0.01, and ***p < 0.001.] While immunization with the free protein formulation also increased the mean value of the CD4/CD8 ratio, no statistical significance was observed (p = 0.31). The results corroborate a more robust and consistent immune response upon exposure to H7N9 virions following VLP immunization. Discussion Since the identification of avian influenza A (H7N9) infection in humans in March 2013, the viral threat has prompted a global effort to develop vaccine candidates, as humans have little immunity to this reassorted virus [18][19][20]. This leap of the virus from birds to humans was attributed to multiple reassortment events, and continuing reassortment events may lead to the emergence of more infective strains. Disease management at both the human and animal levels is thus of major importance. In this study, A/Taiwan/S02076/2013(H7N9), a strain isolated in Taiwan [13], was used for the development of a vaccine candidate through the preparation of VLPs. Co-infection of recombinant baculoviruses in insect cells yielded influenza A/H7N9 VLPs that co-expressed HA, NA, and M1 proteins, which have molecular weights consistent with previously published studies [10,11].
The VLPs are monodisperse and contain approximately 800 HA proteins per particle. [Figure 5 caption: The immunogenicity of H7N9 VLP in a chicken model. a SPF chickens (n = 6 per group) were immunized with VLPs or HA/NA/M1 free proteins (10 μg per chicken intramuscularly on day 0 and 14) or PBS, and blood samples were collected. b Chickens receiving H7N9 VLPs showed significantly increased serum HI titers against the H7N9 inactivated virions. Error bars are mean ± SEM. c, d On day 26 post-immunization, chickens receiving VLPs exhibited higher serum ELISA titers against purified NA and M1 proteins as compared to the PBS control. Lines and boxes represent the upper extreme, 25th, 50th, and 75th percentiles, and the lower extreme. *p < 0.05, **p < 0.01, and ***p < 0.001.] These VLPs were found to induce proper humoral and cellular immunity against the H7N9 virus in animal models. In particular, the VLPs were found to be superior in eliciting viral-antigen-specific humoral and cellular immune responses as compared to a free protein formulation in an avian model, which has not been investigated previously. VLPs are a versatile system that presents a compelling alternative to conventional vaccine formulations as they possess morphological semblance to natural viral particles. In the present study, we applied nanoparticle tracking analysis and transmission electron microscopy to validate the structure and antigenicity of the VLPs. These analyses showed virus-like features consistent with other VLP formulations reported in previous studies [6,10,11,21]. It is worth noting that although an Sf9 insect cell system was adopted for the production of the H7N9 VLPs in this study, the VLP platform is highly versatile and may be produced using other culture systems. For instance, a previous study by Chang et al. employed an alternate High Five insect cell culture system for VLP production [22,23], demonstrating high production yield upon optimization of the High Five system. In the present study, we showed that varying the MOI of the different recombinant baculoviruses impacted the VLP production, offering information toward further optimization of the Sf9 system for VLP generation. We first validated the potency of the VLPs in a mouse model. Given that neutralizing antibodies against the globular head of the hemagglutinin protein are the primary mediators of most vaccine-induced protection against influenza, a hemagglutination inhibition assay was used to examine the VLPs as a vaccine candidate against H7N9. An HI antibody titer of 1:40 is the accepted correlate of protection for human HA split inactivated vaccines [24]. Immunization with the VLPs resulted in a mouse serum HI titer of 1:160 against the H7N9 virions. Although future animal studies with viral challenges are warranted, the present result validates the immunopotentiation effect of our VLPs, which is on par with those in previous reports. Despite previous studies that examined different formulations of H7N9 VLPs in mouse models [9-11, 25, 26], no examination of H7N9 vaccination in avian models has been reported to the best of our knowledge. Given that the management of the avian influenza virus would benefit from both human and poultry vaccinations, we investigated the VLPs' vaccination effect in SPF chickens. As compared to a free protein formulation, the VLP vaccination yielded increased HI serum titers, anti-NA titers, and anti-M1 titers. It is also worth noting that upon examination of cellular immune responses, we observed that the VLP immunization resulted in elevated splenic IFN-γ and IL-4 upon subsequent viral exposure.
The cellular immune responses elicited by VLPs in SPF chickens are consistent with previous studies examining VLP immunizations in other animal models [21]. In the present study, we purified our VLPs and made sure that no detectable baculovirus titer was present in any of our batches. The purification is critical as residual baculoviral proteins were previously shown to trigger innate immune responses through TLR9 and other pattern recognition receptors [27,28]. Towards future clinical translation, the elimination of residual baculovirus contamination is of high importance. Even though the free protein formulation, which is a mixture consisting of free HA, NA, and M1 proteins at a 1:1:1 ratio, also elicited humoral and cellular responses in our study, the responses were either weaker or more variable as compared to those elicited by the VLPs. The a b c Fig. 6 Cell-mediated immunity response induced by H7N9 VLP in chickens (n = 6 per group). Splenocytes isolated from chickens on day 40 after immunization were stimulated ex vivo with H7N9 VLPs, and the cell-mediated response was evaluated. a Splenic IFN-γ and (b) IL-4 mRNA levels were investigated using real-time PCR. By using flow cytometry, chickens receiving VLPs exhibit significantly higher ratio of splenic CD4 + / CD8 + cell count (c), indicating stronger T helper cell response induced by H7N9 VLP vaccination. Error bars are mean ± SEM. *p < 0.05, **p < 0.01, and ***p < 0.001 finding fortifies the notion that the H7N9 VLPs may serve as a compelling vaccine candidate in poultry farming, which typically employs inactivated or subunit vaccines for disease management [29][30][31]. Conclusions Our study has prepared H7N9 VLPs co-expressing HA, NA, and M1 as a vaccine candidate against avian influenza. The platform elicited more potent humoral immune responses and more consistent cellular immune responses as compared to the free protein formulation. Although future studies involving viral challenges are warranted, the platform presents a viable candidate for avian influenza management in both human and animal settings. Additional file Additional file 1: Figure S1.
7,138.6
2017-01-07T00:00:00.000
[ "Biology", "Medicine" ]
Scale factor and punch shape effects on the expansion capacities of an aluminum alloy during deep-drawing operations – The effects of punch geometry and sample size on forming limit diagrams in expansion are investigated in the case of a 2024 aluminium alloy. Four configurations were selected: flat punch (Marciniak test) or hemispheric punch and decimetric vs. centimetric tooling dimensions. Both decimetric and centi-metric deep-drawing devices are associated with an image correlation tool that allows identifying without any contact the deformation on the surface of planar or non-planar specimens. Strains on the surface of the samples are observed by means of a double numerisation in three dimensions of the sample before and after deformation by using stereoscopic vision and triangulation. Finally, deep-drawing limit of the four configurations are compared in expansion state and with literature. Results mainly show that hemispherical punch allows measuring higher strains and is less sensitive to size effect than Marciniak test. Introduction Forming of metal sheet is studied in many technologic fields especially in automotive industry. Determining the forming limit of metallic alloys is then a priority. Several methods can be used to identify such forming limits. The most widely used method is to draw a forming limit diagram from deep drawing tests. In this paper, interest is given to the establishment of such forming limits at room temperature in the case of 2024 Al alloy thanks to deepdrawing tests allowing the measurements of deep-drawing limits in expansion. The forming limit diagram is drawn from the major and minor strains observed at the surface of the sample. This diagram generally corresponds to the boundary between uniform strain and diffuse necking [1,2] but the choice of the limit criteria remains an open question. Data related to Forming Limit Diagram (FLD) of 2024 Al alloy have been reported in the literature (see for instance [3]). Ghosh et al. [4] and Hsu et al. [5] have underlined that the maximal strains are depending on the punch geometry. In the present work, to investigate formability limits of Al 2024 in expansion, major and minor strains data were generated using both Marciniak tests and hemispherical punch tests and both centimetric and decimetric punch dimensions. Moreover, since industry deals increasingly with forming of small components (i.e. mini or micro forming), it is of interest to know if results deduced from a conventional macroscopic deep-drawing device are still suitable for tiny samples formed with a mini forming device? A way to get information about this question is to investigate the influence of dimensions on the strain limit for two devices, decimetric and centimetric. Experimental devices 2.1 Mechanical devices Description of macroscopic (decimetric) tooling was detailed elsewhere [6]. The macroscopic sheet metal testing system (decimetric device) is composed of an upper die, a lower blank holder and a punch with a maximal load of 500 kN. The clamping force between the die and the blank holder is applied by eight hydraulic jacks and a manual pump. The punch is moving at constant displacement rate (1 mm.min −1 ) and automatically stopped when the load drops in order to measure the strain to draw a FLD. In the studied case, failure appears suddenly before the machine stops the test. At centimetric scale, an original deep-drawing device shown in Figure 1 test machine INSTRON (50 kN). 
This system allows testing small samples with a punch diameter between 1 mm and 10 mm. Experiments from 20 • C to 250 • C can be performed thanks to heating sticks inserted into the die. On this device, the punch is fixed; the sample is clamped between the blank holder and the die by the spring supported by a ring guided around the punch. This assembly, linked to the crosshead of the machine by a ball joint, is moved up at a constant velocity of 1 mm.min −1 . This tooling has two interesting features: first, it works in tension allowing better alignment (no buckling effect); secondly it keeps constant the distance between the punch and the fixed camera. This last point is essential if strains are measured by using only one camera (for flat punch applications) but optional for 3D digitization. The deep-drawing effort is calculated by the difference between the global effort and the spring stiffness, the frictions on the device columns being neglected (Fig. 2). For both systems, a layer of PTFE is inserted between the specimen and the punch in order to limit frictions. Figure 3 shows the two shapes and dimensions of punches that were used. The initial thickness of the sheet is 2 mm, value used for decimetric tests. For the centimetric test, the thickness was reduced by using electro-spark cutting techniques. For Marciniak tests, to impose the localisation on the flat surface of the sample, two different methods were undertaken. On the decimetric tool, classical method with a holed spacer between the sample and the PTFE sheet is used. The carrier bank must be centred on the punch in order to provide uniform straining on the flat surface. On the centimetric tool, a new centred local reduction of thickness is obtained again by using an electro-spark cutting process (Fig. 4). This technique permits to get rid of the holed spacer. Strain measurement tools Strain fields on the surface were obtained by using images correlation techniques and stereoscopic visions. With these two devices allowing access to bi-axial stretching, it will be shown in the following that various strain situations exist. 3D measurement is adapted to determine the whole range of these strain situations. The chosen comparisons were realised by exploiting these strain fields calculated between the state of the sample before deformation and after the apparition of the first crack. The set of measured points is reported on the strain diagram. To analyse deformation on a flat surface, a unique camera is enough for a 2D correlation if this camera is located at a constant distance of the sample, perpendicularly to the flat surface. Displacement field measurements and calculation of strain fields along non planar surfaces require a double 3D digitization of this part before and after the forming operation. A correlation software "7D" developed by Vacher et al. [7] was used. Acquisition of images was carried out via two synchronised numerical reflex cameras (Nikon D200, 3872 × 2592 pixels and macroscopic objective 105 mm). Beforehand, the stereoscopic system was calibrated, the relative orientation of the camera was obtained and the optical distortions were corrected [8]. In that case, the angle between the two cameras was close to 15 • . The stereoscopic bench is shown in Figure 5. 
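The force correction mentioned above, recovering the deep-drawing effort by subtracting the spring contribution from the global machine load, can be written in a few lines. The sketch below assumes a linear spring compressed by the crosshead displacement; the stiffness and load values are illustrative, not the actual characteristics of the centimetric device.

```python
import numpy as np

def punch_force(global_load_kN, displacement_mm, k_spring_kN_per_mm):
    """Deep-drawing effort = measured machine load minus the force carried by the
    clamping spring (column friction neglected, as stated in the text)."""
    global_load_kN = np.asarray(global_load_kN, dtype=float)
    displacement_mm = np.asarray(displacement_mm, dtype=float)
    return global_load_kN - k_spring_kN_per_mm * displacement_mm

# Illustrative values only (the real spring stiffness is not quoted here).
disp = np.linspace(0.0, 5.0, 6)                     # mm of crosshead travel at 1 mm/min
load = np.array([0.0, 2.1, 4.4, 6.9, 9.1, 10.8])    # kN read from the 50 kN load cell
print(punch_force(load, disp, k_spring_kN_per_mm=0.8))
```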
In this study, the extensometric dimension (grid step for correlation approach) was chosen close to the thickness of the tested sample: 2 mm (about 64 pixels) for the decimetric tests with both punches, 0.4 mm (48 pixels) for the centimetric hemispherical punch and 0.2 (32 pixels) for the centimetric flat punch. The correspondence between the pixel size and the millimetric dimensions was conditioned by the magnification used during the digitization (Table 1). Defining the displacement measurements uncertainties by this correlation approach is difficult since various parameters can disturb the results such as the maximal strain amplitude, strain gradients, intensity and type of lighting, rigid body rotations, noise of the camera, quality of the random pattern, displacements occurring out of the planar surfaces. Several authors have shown that these errors in terms of displacement are close to 0.01 pixel for small strains (<5%) if these ones are homogeneous [9]. With large strains, these errors are rather close to a tenth of a pixel. In this study, the deformation uncertainties were close to 5×10 −3 [10]. In the present investigation, the Green-Lagrange tensor was used with E min corresponding to the minimal principal strain and E max related to the maximal principal strain. Material state and characteristics The material studied in an aluminium 2024 in the T351 state (thermo-mechanically hardened) that is well known in the aeronautical industry. Its chemical composition is described in Table 2. In this alloy, copper and magnesium are the main alloying elements and are providing a significant raise in the mechanical properties of the alloy. Concerning the metallurgical state of the specimens, microscopic observations have shown that the microstructure is homogeneous in the thickness of the material and that no recrystallization phenomena are provoked by electro-spark cutting techniques. The grain size is closed to 20 μm and the anisotropy parameter called Lankford coefficient r is closed to two: r = ε witdh/ε thickness = 2. Lankford coefficient is a key factor for deep-drawing capacities. When Lankford coefficient is high (>2), the thickness reduction is lowered and rapid fracture more easily avoided. Results Measurements of 3D fields allow observations of local strains for each configuration. Two stereoscopic images couples were recorded before and after deformation by using a stereoscopic bench (Fig. 6) and the correspondences for each point between the four pictures were obtained by image correlation. Only points gathered from the four images of the specimen can be reconstructed in 3D in the initial and final configurations. Principal strains and their orientations were then calculated. In this paragraph, two tests are described: one using a flat punch with a decimetric size and the other one using a spherical punch with centimetric dimensions. Specificities related to the flat punch tests Marciniak test displays the advantage of exhibiting a flat zone that does not undergo any friction with the punch. In addition, if necking occurs on the planar surface, the analysis does not require the use of 3D approach. In that case, a 2D correlation with one camera is sufficient as shown in Figure 1. In Figure 7, a comparison of the two types of analysis (2D and 3D) on the same planar surface is presented, the first one using only the data from the camera located perpendicularly to that surface, the second one exploiting the stereoscopic reconstructions. 
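Since all comparisons that follow are expressed in terms of the principal Green-Lagrange strains E_min and E_max, the sketch below shows how they are obtained from a surface deformation gradient F such as the one delivered by an image-correlation code; the numerical F is a made-up example, not a measured field.

```python
import numpy as np

def principal_green_lagrange(F):
    """Principal Green-Lagrange strains from a 2x2 surface deformation gradient F.
    E = 0.5 * (F^T F - I); its eigenvalues are (E_min, E_max)."""
    F = np.asarray(F, dtype=float)
    E = 0.5 * (F.T @ F - np.eye(2))
    e_min, e_max = np.linalg.eigvalsh(E)   # eigenvalues returned in ascending order
    return e_min, e_max

# Illustrative deformation gradient (biaxial stretch with a little shear), not measured data.
F = np.array([[1.20, 0.02],
              [0.01, 1.10]])
print(principal_green_lagrange(F))  # -> (E_min, E_max)
```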
From Figure 7, it can be concluded that the 3D approach leads to similar results than the 2D one. For all the presented results below, 3D correlation techniques are used, this technique allowing reaching strain situations located on non planar surfaces. Marciniak tests exhibit nevertheless some drawbacks. Localising the deformation on the planar surface requires the use of a holed spacer or the machining of a local lower sheet thickness, both centred on the punch. Despite that, fracture on the punch entry radius can still occur in particular when using thick sheet or quite brittle alloys. Moreover, for materials displaying anisotropic mechanical behaviours, strain situations are not in pure expansion due to frictions occurring on the edge of the punch which promotes strain along particular directions. Uniaxial tension is occurring in the lateral zones (E min = −0.06, E max = 0.1). For the two sizes of flat punch, fracture appeared suddenly. In the case of the centimetric flat punch, fracture is fully localised in the planar zone whereas, for the decimetric punch, the fracture crosses the sample. It is then impossible to know the exact localisation of the fracture first appearance. Information extracted from these analyses is reported on Figure 9 showing E min and E max . The two clouds of points correspond to all the calculated points (black points) or only the ones located on the flat zone of the punch (grey points). Usually, analysis of Forming Limit Diagrams (FLD) obtained by using Marcinaik tests is deduced only from the observation of the flat zone. Figure 9 clearly shows that 3D measurements give access to more deformed zones and more various strain distributions in particular on the radius of the punch. Specificities related to the spherical punch tests Despite the fact that Marciniak test is a well accepted method for constructing Forming Limit Diagrams, it is not the only way to obtain results describing the behaviour of a material in deep-drawing conditions. Hemispheric punch is often used [2], to get the strain limits of a material in expansion. Figures 10 and 11 display information collected in the case of the centimetric device experiments using a spherical punch. From a mechanical point of view, this test is easy to perform since it does not need the use of a holed spacer or a located reduction of the thickness because the strain only concentrates on the spherical zone of the punch. The drawbacks of this method are mainly the influence of bending, normal pressure, and frictions that lead to results which are not very dependent on material defects [4]. During the test, three sample images pairs were recorded: the first one before the test, the second one during the test in which strains are mostly in a bi-axial stretching state, and the last one after fracture. It can be seen in Figures 11 a and 11b that strains are relatively gathered along an expansion strain situation. As expected, despite the PTFE layer between the punch and the specimen, friction remains significant and reduces the deformation amplitudes on the top part. As a consequence, the central zone is not the most deformed and the maximal strain zone is located within a crown separating the punch contact and the no-contact zone. After fracture, strains in the crack presented in Figure 10b cannot be considered since, in the crack, strains are infinite and the results depend thus hardly on the grid step. 
In the following, strains calculated in the zones crossing the crack are filtered and the corresponding graphs then only represent the safe zones of each test. Comparative analysis One test supposed in biaxial stretching leads actually to various strain situations and then it is difficult to define only one point summarizing the forming limit of the material. The objective is not to establish an answer in terms of forming limit criteria. Many criteria have been used and the choice of the one that fits best is always hazardous. Criteria concerning the thickness strain can be found, or dealing with the maximal strain observed around the necking zone. . . [11]. Image correlation techniques have shown that results around the necking are delicate to calculate in order to obtain a proper limit strain. The position of the forming limit curves is strongly dependent on the chosen criterion. In consequence, only the clouds of points will be shown in the following diagrams. The various tests are compared in Figures 12 and 13, regarding the deep-drawing conditions: geometry of the punch (flat or spherical) and scale variations (decimetric or centimetric). The information resulting from the different sizes of flat punches were taken only from the flat surface in order to compare with literature. When using the decimetric tooling, the spherical punch provides a higher strain capacity in the expansion field (Fig. 12a) and allows to reach a maximal strain E max = 0.35 with E min = 0.23. In the case of the flat punch, one obtains smaller maximal strains (E max ≈ 0.22 with E min ≈ 0.1) following a strain distribution between expansion and plane strain. These results are in agreement with the conclusions previously suggested by Ghosh et al. [4]. The observed differences between the two punches in the decimetric tooling are also relevant for the centimetric device: the spherical punch offers higher strain limits than the flat punch: E max = 0.25 (with E min = 0.23 which means that E max ≈ E min ) with the spherical punch whereas E max = 0.15 (with E min = 0.13 which means again that E max ≈ E min ) with the flat punch. It is noteworthy to see that flat punch tests lead to various strain situations depending on the working scale (Fig. 13a). Comparison of the strain distributions on the flat zones is interesting: at decimetric scale, the ratio E max /E min between the principal strains close to the crack is about 4 whereas at centimetric scale this ratio remains close to 1.5. At centimetric scale, strong strain heterogeneities are observed. Moreover, defects on the surface or edges of the sample could lead to faster cracking in the case of thinner sheets because the relative sizes of the cracks are more important when the sample is thinner. The spherical centimetric and decimetric tests comparison shows an obvious superposition of the points. It can be observed that the expansion domain is much more extended. To our knowledge, no article is dealing with Al2024 in the T351 state. Reyes et al. [3] have reported some results on a T3 Aluminium 2024. T3 and T351 treatments lead to a similar hardness of the material, one by submitting the material to mechanical hardening and the other by thermo-mechanical treatment. As a result, the observations concerning the conventional decimetric Marciniak test are reliable to compare with the results of the present work: Marciniak homogeneous strains are quite similar (close to 5%) and most points are located in this uniform zone (Fig. 14). 
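In practice the comparison amounts to scattering the (E_min, E_max) cloud of each configuration on a forming-limit diagram after discarding the grid points whose correlation window crosses the crack. A minimal sketch of that filtering and plotting step follows; the strain arrays and the crack mask are placeholders standing in for the measured fields.

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_fld(points, crosses_crack, label):
    """Scatter the safe (E_min, E_max) pairs of one test on a forming-limit diagram.
    `points` is an (N, 2) array of principal strains, `crosses_crack` a boolean mask
    marking grid points whose correlation window intersects the crack (filtered out)."""
    safe = points[~crosses_crack]
    plt.scatter(safe[:, 0], safe[:, 1], s=4, label=label)

# Placeholder clouds standing in for the measured decimetric / centimetric data.
rng = np.random.default_rng(0)
spherical = np.column_stack([rng.uniform(0.0, 0.23, 500), rng.uniform(0.10, 0.35, 500)])
flat = np.column_stack([rng.uniform(-0.05, 0.10, 500), rng.uniform(0.05, 0.22, 500)])
mask = rng.random(500) < 0.05   # pretend 5% of windows cross the crack

plot_fld(spherical, mask, "hemispherical punch")
plot_fld(flat, mask, "flat punch (Marciniak)")
plt.xlabel("E_min"); plt.ylabel("E_max"); plt.legend(); plt.show()
```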
Some differences can be observed concerning the maximal strains, probably due to the used extensometric pattern. Image correlation allows observing very local strains and then giving access to higher strains in the necking zone of the material. Nevertheless, spherical punch test offers much higher strains even in the homogeneous zone but for the spherical [3] results. It can be observed that uniform strain capacities are relatively similar for both works. punch, no comparison with literature has been found for the studied alloy. The fact that spherical punch provides higher surface strains can be explained by complementary facts. The surface observed is the external surface solicited in expansion. The use of a spherical punch provides bending with the external surface solicited in multi-axial tension that adds strain to the observed initial expansion. Moreover, friction prevents the internal surface from expanding and then allows the external surface to exhibit higher strains. These points are the main reasons for observing higher strains with the spherical punch compared to the flat one. Finally, the huge difference between centimetric and decimetric flat punch (Fig. 13) may be due to two main causes: on the centimetric test, defects have more impacts due to their relative size (towards the sample dimensions) which prevent strain to reach a high maximal level (0.2 for the decimetric punch). One of the other reasons for disparities is the fact that the holed spacer placed on the decimetric test is not solidar with the sample and then the strain situation is deviated from the expansion one. The differences between the two types of punch are also related to the sample manufacturing. The centimetric one owns a specific shape (lower thickness in the centre of the sample) that is well-controlled whereas the decimetric one needs an external holed spacer for the flat punch that influences the strain distribution. For the use of spherical punches no artefacts are added and so the test is more repetitive when changing the scale of the operation. Conclusions The aim of this study is to investigate the differences between two deep-drawing tests (Marciniak test and hemispheric punch test) and their capacities to be repetitive when changing the tooling and sample dimensions. Results show a significant influence of the punch shape for both device dimensions. Those results are in agreement with the previously reported observations, confirming that larger maximal strains in bi-axial stretching can be obtained with a spherical punch. Concerning the impact of a scale variation, the results obtained with the spherical punch seem to be more reproducible than those obtained with the flat punch. The hemispheric punch is more interesting as it provides various strain situations with only one test. So, with that kind of punch, one can do a limited number of tests at the chosen scale (staying in the range of non-influence of grain size and defect dimensions) in order to obtain a complete forming limit diagram. As a consequence, the use of spherical punch appears as a particularly attractive device to analyse the formability of metals. In future, in order to extend these conclusions to metallic alloys with relatively low ductility, magnesium alloys will be studied and in this framework, an attention will be paid to the effect of temperature on the forming behaviour.
4,683
2014-01-01T00:00:00.000
[ "Materials Science" ]
Insurance Fraud Detection using Spiking Neural Network along with NormAD Algorithm - General automobile insurance in recent years, has seen a huge escalation of fraud cases. The requirement of utilizing well organised and coherent technique to check on or determine user those are potential frauds. Thus, the deployment of the NormAD algorithm with less delay to enhance the safety and authorized in the operative process. The paper here describes attribute extrication method and Spiking Neural Network structure to resolve the issue of identification of automobile insurance fraud. The attribute second-level extrication algorithm coined in this paper can efficiently derive key attributes and enhance the identification accuracy of succeeding algorithms. So as to achieve to resolve the issue of unstable simulation allotment in the automobile insurance fraud identification scheme, an exemplary distributed method established on the plan of small unit proportion balance is presented. Formulated on the above techniques of attributes extrication and sample division, a model established on Spiking Neural Network with NormAD Algorithm is proposed. This method utilizes the complete goal of implementation of the Spiking Neural Network model algorithm that rely on Spiking Neuron, and ultimately accomplishes in enhancing the exactness of the detection of Automobile Insurance Fraud. Introduction In various domains, we encounter fraud regularly.It is found in many various moulds and models coming from yesteryears fraud or scams e.g., simpleton like tax fraud, to be explicit, in which whole lot of people in group come together to perform such scams.So, these organised groups are readily found in the automobile insurance domain.Scamster creates accidents in traffic and apply for false insurance claims to profit (injudicious) currency from their general or vehicle insurance.Sometimes it is observed that there are no accidents in actual but the vehicles are located on the road to create false claim for insurance money.However, many insurance claims are unplanned but mere opportunity to make increased claim for covering past car expense.Fake accidents have various common features.These accidents happen in near to midnight and areas related to rural where there are no one to witness the accident or staged people can be used.Usually younger males are the divers, as many passengers are present in the vehicles, excluding children or elders.To validate the whole scene police are present to create the substantial credibility for making false claim easily.The common thing in all this is that total people have several wounds (not serious), whereas this is found mostly vehicles are undamaged.Plenty other sceptical features exist, not defined here.The insurance companies are most fond of groups of scamsters that are organised are as such drivers, chiropractors, garage mechanics, lawyers, police officers, insurance workers and others.These categories are related to major leakage in financial loss.In works in the literature, different techniques are reflected for determining false claims in automobile insurance domain.However, in various domain key factor is the database, a thin line of work or research is done in the domain of fake claim in insurance fraud detection is present in the databases.This fraudulent behaviour in automobile insurance is determined by using Latent Dirichlet Allocation (LDA) based text analytics as proposed in [1].In work [2], genetic algorithm based fuzzy c-means clustering has been coined to 
standardised the automobile insurance scam in which different supervised classifier structures are used for identification.Another system of multiple classifier [3] established on principal component analysis, random forest and mighty nearest neighbourhood method has been put forward to evaluate the fraudulent tasks in the automobile insurance scam which yields good proficiency than the state-of-the-art models.Various feature selection techniques based on correlation and genetic algorithm [4] has been utilized and employed on the fraud insurance database by Decision Tree and Bayesian algorithm for identification.Other like Nearest Neighbourhood established on pruning rules [5] and association rules has been employed on automobile insurance dataset for building of training model and evaluate the efficiency [6][7].However, various methods have come in to light for developing an efficient automobile insurance fraud identification system, although, nearly all systems used earlier depicts more deflection in relation to accuracy as these systems require mostly all attributes that exists in the automobile fraud insurance database. The remaining part of the paper is arranged as follows: Section 2 comprises the research objective related insurance fraud in automobile.Section 3 explains working of Attribute Extrication using Discrete Wavelet Transform.Section 4 evaluates methods which are utilized for feature selection by Principal Component Analysis elaborately.Section 5 explains method of identification by the use of Spiking Neural Network (SNN) and Normalised Approximate Descent (NORMAD) algorithm.The proposed technique is elaborated or explained in details properly using flow diagram.The results and discussions inferred from the experiments are evaluated and then required discussion were made in section 6.Finally, section 7 reflects present work-related conclusions and opened a new direction for extension in future. 
Data preparation for the expert system Although the key motive of this paper is to coin a algorithm on standard statistical learning for fraud identification well apt mainly for any highly asymmetrical information from an insurance company for research work, we procure simple and quite a sample database from Data Preparation for Data Mining book [8] to reflect the well coined algorithm.The database comprises of 15,420 observations which indicates 32 predictor variables and a diverging reaction variable for fraud detection.The data were collected over a three-year period from 1994 to 1996.30 definite variables, one continuous variable, and detection variable exist in the database.However, the algorithm proposed is precision for scam identification (classification) other than prediction of insurance premium, neither were time-sensitive variable.Every definite variable was converted into simpleton variables.The binary reaction variable explains in case the claim was classified as false or true.Out of 15420 observations, 923 (6.4%) claims sorted as scam within the database.In the database, there is no missing value vectors.The information is generally unbalanced in insurance fraud identification and it is demanding to create a classification structure with such a unbalanced database.Nearly 3 years of cases were utilized in this database.The scarcity stage changes that effects these vectors would not be shown in tiny period of 3 years.Although, even since only 3 years of information were utilized in the selected database, the proposed algorithm does not relay on the years of data made due to proposed algorithm takes the years as an ordinal variable.Our algorithm can be utilized to current insurance data made for an escalated number of years with observations in million practically.The identifier variables include various population related variables like age, gender, marital status, etc. Various variables explain the automobile implicated in the claim such as type, make, price, age of vehicle, etc.Further variables explain the claim like time of year, filing of police report, witness present, etc.The remaining of the variables explain the kind of insurance policy such as deductible, policy type, etc.The variables are concluded in Table I The foremost variable shading of the data, Policy Number (the detection variable) was removed because it represents no meaning to the process.Collinearity in many folds within the predictor variables was evaluated by the variance inflation factor (VIF).When the VIF for a variable was more than 10, then that variable was meant to be as highly corresponding with other predictor variables and was removed from further evaluation.The succeeding variables were subsequently deducted from hypothesis based on their VIF: Base Policy, Vehicle Category, Age of Policy-Holder, Month, and Address Change Claim.Thus, there were 26 rest variables available for respective evaluations.These variables were explained as the initial 26 variables to be studied.There were nil observations removed from the database. 
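The two preparation steps described above, dropping predictors whose variance inflation factor exceeds 10 and drawing a balanced 500/500 learning set by stratified random sampling, can be reproduced along the following lines. This is a sketch, not the authors' code: it assumes the claims table has already been converted to numeric dummy variables in a pandas DataFrame with a binary fraud column, and the file name is hypothetical.

```python
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor

def drop_high_vif(X, threshold=10.0):
    """Iteratively remove the predictor with the largest VIF until all VIFs <= threshold."""
    X = X.copy()
    while True:
        vifs = pd.Series(
            [variance_inflation_factor(X.values, i) for i in range(X.shape[1])],
            index=X.columns)
        if vifs.max() <= threshold:
            return X
        X = X.drop(columns=[vifs.idxmax()])

def balanced_learning_set(df, label="fraud", n_per_class=500, seed=0):
    """Draw n_per_class rows from each class; everything else becomes the test set."""
    train = (df.groupby(label, group_keys=False)
               .apply(lambda g: g.sample(n=n_per_class, random_state=seed)))
    test = df.drop(train.index)
    return train, test

# df = pd.read_csv("claims.csv")                       # hypothetical file name
# X = drop_high_vif(df.drop(columns=["fraud"]))
# train, test = balanced_learning_set(df[X.columns.tolist() + ["fraud"]])
```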
From the original database, learning set and testing set were created.In the paper, learning set was utilized to create total systems.The test set was utilized to examine and process the final outcomes of all the system.Although the whole dataset was steadily unbalanced with 14,927 non-scam cases and 923 scam cases, the learning set was built to provide steadiness to the data for more accurate outcomes.The learning set was inconstantly chosen for 1000 observations by veneered uncertain sampling.Five hundred of 1000 observations were inconstantly chosen from the 14,497 non-fraud cases and the rest 500 observations were inconstantly chosen from the 923 scam cases. The test set included the remaining of 13,997 non-scam cases and 423 scam cases, so therefore the smaller dimension of the test set was 14,420. Discrete Wavelet Transform As the traditional method of Fourier transform is used very often in the analysis of the insurance data it proved to be less efficient due to its trade off among temporal resolution and frequency resolution.An alternative solution to this problem is wavelet transform, which has been comparatively current advancement in area of digital signal processing by [9], though it has been found to have been invented separately in diverse fields of mathematics, quantum analysis and in electrical engineering [10].Application of wavelets has been in various domains, such as time series data compression, filtering of noise from data and detection of features [11].The representation of the signal in the Fourier transform is delineated into a fundamental wave of sine and cosine.The wavelet transform also utilizes a logic, that is elements are defined scale-invariant, as is clearly understood that the basis seems to be the same at all scales, and the basis is space localized.The outcome is that in the wavelet representation, the signal at separate resolutions seen in different window sizes as it can be visualized just as a building and its windows at the same instant of time.On a large scale, the group of buildings can be seen and this can be viewed to get the global features.To look for the window of the building, closer focus is needed and to get local features.A closer look can be made to view hooks on the window.Different scales can be used to view all groups of buildings, building, window and even hook on the window.The major dissimilarity between Fourier and wavelet analysis is that flexible size of window is sufficient for wide spectrum stationary signals such as database (for low frequencies, large windows are used and for higher frequencies, small windows are used).Mother wavelets Ψm referred to as the basic feature and various choices to be obtained experimentally for the particular application.For instances of some mother wavelets include the simplest Haar wavelet, that is discontinuous step function.One of the disadvantages of discontinuity in a few domains like audio data, video data or data matrix is not suitable, whereas its advantage lies in random transitions like the failure of the machine [12].The apt wavelets are the Daubechies wavelets (dbN) reflects on fact that the evenness of the wavelets rises as N rises (db) for the database.To build other parts of the wavelet standards, the mother wavelet is increased and converted by factors i and j using: The measure of stretching or compression of the mother wavelet is based on parameter i ≠ 0 (depending on whether i is greater than or less than 1).Thus, high-frequency components that are introduced to the wavelet 
family, as i is small as a result of wavelets, can capture high frequencies of the signals.Similarly, to get low-frequency signals, the introduction of the low-frequency component to the family of the wavelet is done.The amount of shifting of the wavelet along the horizontal axis is determined by parameter j.If j>1 that makes wavelet shifts to the right then shifting it to the left with j<1.As a result, the onset of that wavelet specifies parameter j.Subsequently, the scaled wavelet is defined as daughter wavelets whereas main wavelets are called the wavelet function (mother wavelet) and function of scaling (also called the father wavelet).Wavelet Packet Decomposition Wavelet-based features are also used in fraud event detection in [13].First, wavelet packet decomposition trees of each signal are derived.Then, the features such as spectral centroid, sparsity, node energy, and spectral spread are extracted from the child nodes of the wavelet tree.The application of wavelets to the digital signal gives rise to separate the data into an approximation (high frequency) part and a detail (low frequency) part of the signal into the matrix.Due to this, wavelets can be used as low-pass and high-pass filters.Analyses of this filtered segment can be performed by wavelet again with the scale with shorter value typically half of the scale giving rise to daughter wavelet.Usually, the approximation parts of the signal contain actual information, that's why this part has to be analyzed again instead of both the detail part and approximation part(coefficients).But this cannot take place separately, so another technique can be used like the wavelet packet decomposition [13].This method produces a tree of wavelet decompositions, where there are M levels at the tree which is again starting with the head at 2M that produces a rich spectral analysis. The levels of decomposition are measured with the requirement of the application of wavelet packet decomposition.The method is applied to the data matrix for analysis again even after filtering for segmentation. The segmentation and feature extraction of the signal after filtering data can be achieved by this approach to get the approximation part at each section.The experiment or any convention through which can be estimated is by using information theory.The information contained in the approximation part of the signal as well as the detail parts contains noise that can be removed.In the area of information theory, the amount of uncertainty or disorder in a system is defined as Shannon entropy [13].The amount of information retained in the provided signal is due to this Shannon entropy.The concept of entropy for using wavelet to evaluate accuracy at an optimum level for the use of selective wavelet is high.We use this computation at each node to choose whether or not to retain a node and stopped creating the tree up to the point where all of the nodes contained noise were removed by this computation, meaning that the signal was fully described. 
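A wavelet-packet tree of the kind described here, with Shannon entropy used to decide which terminal nodes carry structured information rather than noise, can be built directly with PyWavelets. The sketch below uses a Daubechies mother wavelet and a synthetic signal; the entropy cut-off is an assumed value for illustration only.

```python
import numpy as np
import pywt

def shannon_entropy(coeffs):
    """Shannon entropy of a node's normalised squared coefficients."""
    energy = np.square(coeffs)
    p = energy / (energy.sum() + 1e-12)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def informative_nodes(signal, wavelet="db4", maxlevel=3, entropy_cutoff=4.0):
    """Decompose `signal` into a wavelet-packet tree and keep the low-entropy nodes."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, mode="symmetric",
                            maxlevel=maxlevel)
    kept = {}
    for node in wp.get_level(maxlevel, order="natural"):
        if shannon_entropy(node.data) < entropy_cutoff:   # assumed cut-off: low entropy ~ structure
            kept[node.path] = node.data
    return kept

# Synthetic stand-in for one row of the prepared claims matrix.
x = np.sin(np.linspace(0, 8 * np.pi, 256)) + 0.1 * np.random.default_rng(1).standard_normal(256)
print(sorted(informative_nodes(x).keys()))
```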
Principal Component Analysis applied to the proposed Features In order to reduce the data features, this work uses PCA as described in [14].PCA is used so that the overall Where, is the variance of the ℎ feature (i goes from 1 to 64000), is the value of the feature, and are the total number of feature values for the given feature in the training set.Once the variance for each of the 64000 features is obtained, then eigen values are obtained for these vectors.The variance is plotted on a XY axis, which X axis being the feature number, and Y axis being the variance of the feature.For simplicity, consider that there are 6 features, for which the variance is plotted as shown in the following figure, Figure 2 Sample 6 features plotted against variance The value 'A' is the mean between these points.Now, we evaluate the best fit line between these points, which aims to reduce the distance between these points, and showcase the line as follows, Figure 3 Best fit line between these points The axis is shifted on the point A, so that the point A and the origin of the axis are coinciding with each other.This helps in evaluating the Eigen values, Figure 4 Axis coinciding with the mean value Find the distances d1, d2, d3, d4, d5 and d6 using Pythagoras theorem, and mark the values d1 to d6 as the eigen values of the features.Now shift the axis making it orthogonal to d1, and evaluate the other feature vectors from d11, d12, d13, d14, d15 and d16, as shown in the following figure (the red line is the orthogonal line), Figure 5. Orthogonal axis to the feature vector d1 Similarly, the axis is made orthogonal to each of the features, and the following matrix is evaluated, 1 11 1 2 12 2 1 The matrix basically consists of all the eigen values (or principal components) of the features.A singular value decomposition (SVD) is applied to these features, and a single decomposed value is found for the given matrix. 𝑆𝑉𝐷 = ∑ 𝑑 𝑖 * 𝜎 𝑖 * 𝑓 𝑖 … (3) Where, is the eigen value, is the variance of the feature, and is the feature vector value.All the positive values from these SVD values are considered for feature evaluation, while negative or zero values are removed.For our training set, the total number of features got reduced from 64000 to approximately 8000 when using PCA, thereby improving the system speed and accuracy of classification.4. 
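The variance-ranking and singular value decomposition procedure outlined above can be condensed with scikit-learn, which performs the centring, eigen-decomposition, and component selection internally. The sketch keeps the components needed to explain a chosen fraction of the variance rather than targeting a fixed count of roughly 8000 features, which is an assumption made for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA

def reduce_features(X, variance_to_keep=0.95):
    """Project the (samples x features) matrix onto the principal components
    that together explain `variance_to_keep` of the total variance."""
    pca = PCA(n_components=variance_to_keep, svd_solver="full")
    X_reduced = pca.fit_transform(X)
    return X_reduced, pca

# Illustrative random matrix standing in for the wavelet + PPFM feature table.
X = np.random.default_rng(2).standard_normal((200, 1000))
X_red, model = reduce_features(X)
print(X.shape, "->", X_red.shape, "components kept:", model.n_components_)
```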
Spiking Neural Network A spiking neural network (SNN) is a neurocomputing recognition method that is motivated by how the human brain works with the information.In literature, SNN is established as " enormously parallel interrelated networks of basic (usually adaptive) elements and their heterarchical establishments which are deliberately to interchange with the elements of the real world in the similar way as nerve system in human brains do".Human brain is presumed to be comprised of billions of interrelated neurons of many layers.Human brain neurons have capability to learn data.Due to this reason, humans are surprisingly efficient at analyzing the world that they visualize.A simple example of the task is Handwriting recognition.Adapted by various other handwritings through many years, an average human brain is ability of apprehend the handwritings of different people promptly.For practical use, it is difficult to employ human brain activity of understanding letters and characters into a program to make model.Even though, it is easier to think then to apply and employ model.The cause is that the differences in handwritings of various people makes it really rigid to detect accurate models.This results in majorly lesser success rate for computers than humans.SNNs point of view to this issue is in a same manner with human brain.It directs each attribute (or, basically, training vectors) as input, towards an input layer, an output layer and (optional) hidden layer(s) of artificial spiking neurons and begins to adapt the network with each training vectors.A cost function is selected to determine the error between the desired output and the estimated output.The task of training is to minimize this cost function iteratively.Let's discuss the basics of spiking neural network.The key components of SNN are Spiking Neurons and the synapses that interconnect them.As we can recall from biology, the unit of nervous system is nerve cell which transmits information by passing action potential to another neuron connected to it.So, a spiking neuron in fact spiking artificial neuron does incorporate this key aspect while modelling this behaviour.figure 6 Neuronal integration and Spike Communication Consider here a system of two neuron input neuron (Presynaptic neuron) and output neuron (postsynaptic neuron) both are interconnected by synapses having strength w as shown in figure 6.This synapse is modelled with double decay exponential kernel.These are some mathematical modelling features that people incorporate.To understand this system, a presynaptic neuron issues a stream of spikes which then get translated to postsynaptic current when spikes are passed to synaptic kernel and this post synaptic neuron then integrate the incoming current that is reflected in the membrane potential which rises as per the incoming current.So as when the membrane potential of the neuron exceeds the threshold potential then it issues spikes.The neurons used in my work is simple leaky integrate and fire neurons.The LIF neuron captures the key aspect of integrating the incoming current and issuing spikes whenever the threshold is exceeded.That is well described by the differentiation equation of membrane potential. 
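For reference, the leaky integrate-and-fire dynamics referred to here are governed by a single first-order equation for the membrane potential (standard form, with C_m the membrane capacitance, g_lm the leak conductance, E_lm the rest and reset potential, and I_sy the synaptic current; the parameter values are quoted in the next paragraph). A spike is issued when V(t) reaches the threshold, after which V is reset:

$$ C_m \frac{dV(t)}{dt} = -\,g_{lm}\bigl(V(t)-E_{lm}\bigr) + I_{sy}(t), \qquad V(t)\ \rightarrow\ E_{lm}\ \ \text{once a spike is issued.} $$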
Whenever V(t) exceeds Elm , spikes is issued and V(t) =Elm it is reset.The simulation in this work uses, Cm = 300pf is the capacitance of the membrane, glm = 30ns is the leak conductance, threshold voltage VT=20mv and Isy is integrated synaptic current input to the neuron.An SNN prototype is given in Figure 7.The membrane potential is reset to its stable position Elm= -70mv after the issue of spike.When the voltage V (t) is greater than VTH, generation of spike occurs and is transmitted to downstream synapses.The current potential V (t) is equal to Elm remains in it for small period after issue of spike.The small period Trf =3ms where next spike is not issued.The leaky integrate and fire model acts like a nonlinear spatial filter with w1, w2, w3, w4…wn as synaptic weight as shown in figure 5.The model of the synaptic current kernel ki(t) is a framework of the variables such as rising time constant τ1 = 5 ms and decay time constant τ2 = 1.25 ms, respectively.The input received by neuron is from n synapses and time of spike arrival at the j th synapses is denoted as tj 1 , tj 2 , tj 3 …tj nj .Then the input at the j th synapse is converted in to post synaptic current Isy is given by following equation where k() is synaptic kernel () = × () (6) As the neuronal integration is highly nonlinear due to abrupt resetting of membrane potential.Synaptic Kernel is modelled by double decay exponential kernel and that when it is weighted with factor w give rise to post synaptic current.Let me discuss the learning algorithm that is the spike based supervised learning as called as NormAD (Normalised Approximate descent).So the meaning of supervised learning in SNN is to make the neuron issues spikes at desired instant of time at given a set of input signal.And to do so we consider a parameter that is typical synaptic weight w so that neuron issues spikes at utter desired instant of time.As part of learning Algorithm, we define this error function as the difference between desired spikes and observed spikes that is the one issued by the neuron.That is used to feedback term to update Synaptic weight.error (t) = Sd(t) − So(t) (7) In order to do that we define a cost function as w.r. t. 
to synaptic weight w as the integrated difference between the desired to the observed membrane potential.And applying gradient descent rule, that is typically used is as optimization of neural network we can find the instantaneous weight update term as derivative of the cost function which can be written in term of difference of derivative of the membrane potential.∆w(t) = ƞ r (t)∇ w J(ws, t) Where d ̂(t) = k(t) * h ̂(t), h ̂(t) = exp (−t τ L ) u s (t) ⁄ k(t) represented as synaptic kernel us (t) is the Heaviside step function τ L = leak time constant of the membrane=10ms Now the membrane potential as I discussed in my previous slide is represented by difference equation.Membrane potential equation is highly nonlinear due to spike occurring whenever V(t) exceeds Elm.So, considering the time interval between spikes we can solve that differential equation in closed form manner so that we obtained an expression we clearly see the dependency of synaptic weight on membrane potential in this manner.So, it is easier to calculate the derivative of membrane potential and also on further approximation that is by reducing the leak time constant of membrane potential.Basically, what we do to make spike issue by neuron that is spike coming to the neuron sparse so that there is no dependence of derivative term on the synaptic weight.There is further Normalization done on the voltage derivative term so that this dependence on Vdesired is completely eliminated.All we know about supervised learning task is the time instant of desired spike Sdesired term we have no knowledge of Vdesired term.So finally, our expression of weight update is the term depended on error and normalised voltage derivative term.Hence, this gives us the closed form expression of synaptic weight using the incoming desired spike trains. Simulation results In this section of the paper, there is a detailed description of how the proposed technique is simulated.For simulation, the experiments are implemented in MATLAB 2015 b software.The test platform is Intel core 3i 8th generation, 2.2-GHz CPU, 4-GB RAM processor with Windows 7 operating system.As illustrated in figure 8, the training data is taken as the input which is the first module of the system, which is then given to the feature extraction module after pre-processing.2. The features are given to the PPF generation matrix, wherein feature variation was evaluated and distinctly identifiable matrices are generated.3. Matrices are found for every instance and applied to the SNN training layer.4. The performance parameters of SNN are evaluated, and upon satisfactory performance, the SNN configuration was finalized.5.For any new data, steps 1 to 3 are repeated and the DWT matrix was applied to the trained SNN. 6.The obtained class was evaluated and system accuracy was checked.7. If the accuracy was lower than expected, then the SNN configurations are modified, and the process is repeated. 
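To make the preceding description concrete, the following Python sketch simulates a leaky integrate-and-fire neuron driven through double-decaying-exponential synapses and applies an error-driven, normalised weight update in the spirit of NormAD. It is a simplified illustration of the equations above, not the authors' MATLAB implementation; the learning rate, the reading of the quoted 20 mV threshold as "above rest", and the kernel scaling are assumptions.

```python
import numpy as np

# Neuron and kernel constants quoted in the text (SI units).
CM, G_L, E_L = 300e-12, 30e-9, -70e-3
V_T = E_L + 20e-3                     # assumption: 20 mV threshold measured above rest
TAU1, TAU2, T_REF, DT = 5e-3, 1.25e-3, 3e-3, 0.1e-3

def double_exp_kernel(t):
    """Double-decaying-exponential synaptic kernel k(t) (unnormalised sketch)."""
    return (np.exp(-t / TAU1) - np.exp(-t / TAU2)) * (t >= 0)

def filtered_inputs(spike_trains, t_grid):
    """Convolve each presynaptic spike train (rows of 0/1 at DT resolution) with k(t)."""
    k = double_exp_kernel(t_grid - t_grid[0])
    return np.array([np.convolve(s, k)[: len(t_grid)] for s in spike_trains]) * DT

def lif_output(weights, c, t_grid):
    """Leaky integrate-and-fire response to the weighted synaptic currents."""
    v, spikes, refr = E_L, np.zeros(len(t_grid)), 0.0
    for i, _ in enumerate(t_grid):
        if refr > 0:                      # absolute refractory period after a spike
            refr -= DT
            continue
        i_syn = float(weights @ c[:, i])
        v += DT / CM * (-G_L * (v - E_L) + i_syn)
        if v >= V_T:
            spikes[i], v, refr = 1.0, E_L, T_REF
    return spikes

def normad_like_update(weights, c, desired, observed, eta=1e-10):
    """Normalised, error-driven update in the spirit of NormAD:
    dw ~ eta * sum_t e(t) * c(:,t) / ||c(:,t)||  with  e(t) = S_d(t) - S_o(t)."""
    e = desired - observed
    dw = np.zeros_like(weights)
    for i in np.nonzero(e)[0]:
        norm = np.linalg.norm(c[:, i])
        if norm > 0:
            dw += e[i] * c[:, i] / norm
    return weights + eta * dw
```

In a training loop, lif_output supplies the observed spike train S_o(t) for the current weights and normad_like_update nudges the weights toward producing the desired spike train S_d(t).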
Evaluation of Proposed Algorithm In order to evaluate the proposed algorithm, we decided to test the entire dataset on the algorithm.And then evaluate the results in terms of both accuracy and computation time.The following formulas were used, Accuracy = In the case of figure 10, the computation time delay of SNN along with wavelet is between ANN along with Wavelet and SNN along with permutation pair frequency.Though in SNN along with wavelet is best in classification accuracy but the computation time delay of SNN along with Wavelet is moderate in performance, therefore it is better than all other three techniques performed here. Conclusion This paper focusses on the competitive identification issue of automobile insurance fraud were analysed and talked through.Owing to the issue situation, comprehensive and extensive analysis and extraction are executed through data mining and analysis, and two bands of attribute extrication are carried out depended on the traditional attribute extrication mode.Focussing at the issue of imbalanced category distribution in the automobile insurance fraud recognition situation, the SNN model with NormAD algorithm was coined.This algorithm was utilized to resolve the issues of inadequate sample usage, easy overfitting, and low classification rate in the class distribution problem.Ultimately, through the extensive study and experimental evaluation reflected in this article, the results demonstrated that SNN model is the best for now in comparison to other conventional methods. In the days to comes, we intend to improvise results by using the adaptive attribute ranking semantic algorithm based on natural language understanding (NLP) to enhance the problem of attribute importance screening and analysis. feature vector can be reduced, and only optimum features are available for classification.This assists in reducing the training delay, and improving the overall accuracy of classification.The following data features are evaluated in this work,• Wavelet features that represent the wavelet domain data (majorly spatial features)• Permutation Pair Frequency Matrix (PPFM) used to represent sound in terms of adjacent values of data samples The following table demonstrates the number of features evaluated for each of the feature extraction techniques, total of 64000 features are evaluated for each sound sample.Most of the feature values are repetitive, and can be reduced to a much lower number.Thereby initially a variance calculation is done for these features.The variance is not calculated within the features of the same sound, but across different sounds of the entire training set.The following formula is used to evaluate variance for each of the 64000 samples,X[n] Figure 1 Figure 1 Discrete Wavelet Transform Sub-Band Decomposition Figure 7 Figure 7 SNN prototypeThe model of the synaptic current kernel ki(t) is a framework of the variables such as rising time constant τ1 = 5 ms and decay time constant τ2 = 1.25 ms, respectively.The input received by neuron is from n synapses and time of spike arrival at the j th synapses is denoted as tj 1 , tj 2 , tj 3 …tj nj .Then the input at the j th synapse is converted in to post synaptic current Isy is given by following equation Figure 8 Figure 8 Flowchart for the design of automobile Insurance fraud detection system Extrication of Attributes: Attribute extrication techniques used are the Discrete Wavelet Transform (DWT) and permutation pair frequency matrix (PPFM) in this proposed system.Ultimately in DWT, the approximate 
coefficient of attribute vector is 40 in dimension with the usage of Daubechies 4 and decomposition level of 10 which gives a optimum result.In permutation pair frequency matrix PPFM with classifier is efficient due to the permutation window is 5 samples with a time lag as 1 just as in[15][16].Delay time taken is more in SNN with permutation pair frequency than in SNN with wavelet For classification purposes, the Spiking Neural Network (SNN) and Artificial Neural Network (ANN) Model are used to classify the system.The system is divided into two main models, first is the training model where the data are input and attributes are calculated and stored in the database which can be used later to generate prototype vectors for specific data.Then the ANN model is included after attribute extrication for clustering vectors together in the feature space.This ANN model is only included in the training phase of the proposed technique.Once the system is trained it is then tested by implementing the same techniques.The system begins by taking the insurance data as input and feature extraction was calculated by different methods like Wavelet and Permutation Pair Frequency Matrix (PPFM), which are classified by using SNN & ANN with the features available in the database.A traditional method like the attribute extrication method along with ANN is used for classification comparison.The following algorithm steps are used for the development of the classification 1.The data was applied for feature extraction to the wavelet-based and PCA algorithm.2. The features are given to the PPF generation matrix, wherein feature variation was evaluated and distinctly identifiable matrices are generated.3. Matrices are found for every instance and applied to the SNN training layer.4. The performance parameters of SNN are evaluated, and upon satisfactory performance, the SNN configuration was finalized.5.For any new data, steps 1 to 3 are repeated and the DWT matrix was applied to the trained SNN. 6.The obtained class was evaluated and system accuracy was checked.7. If the accuracy was lower than expected, then the SNN configurations are modified, and the process is repeated. Figure 10 Figure 10 Figure 10 Graphical representation of GMM and SNN Results Where, time stamp is the standard Unix epoch timestamp which is the number of seconds since 1 st January 1970 Based on these evaluations, the results were evaluated which are mentioned below.
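The truncated "Accuracy =" expression above is presumably the usual confusion-matrix definition, accuracy = (TP + TN) / (TP + TN + FP + FN). A minimal sketch of how accuracy and per-classifier computation delay can be tabulated for a comparison like the one in Figure 10 is given below; the labels and predictions are placeholders, not results from this study.

```python
import time
import numpy as np

def accuracy(y_true, y_pred):
    """Accuracy = (TP + TN) / (TP + TN + FP + FN), i.e. the fraction of correct labels."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.mean(y_true == y_pred))

def evaluate(classifier, X_test, y_test):
    """Return (accuracy, wall-clock prediction delay in seconds) for one trained model."""
    start = time.perf_counter()
    y_pred = classifier.predict(X_test)
    delay = time.perf_counter() - start
    return accuracy(y_test, y_pred), delay

# Placeholder illustration with synthetic labels (not the study's results).
rng = np.random.default_rng(3)
y_true = rng.integers(0, 2, 1000)
y_pred = np.where(rng.random(1000) < 0.9, y_true, 1 - y_true)
print(accuracy(y_true, y_pred))   # roughly 0.9
```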
6,981.8
2021-05-10T00:00:00.000
[ "Computer Science" ]
NHEG Mechanics: Laws of Near Horizon Extremal Geometry (Thermo)Dynamics Near Horizon Extremal Geometries (NHEG) are solutions to gravity theories with $ SL(2,R) \times U(1)^N $ (for some N) symmetry, are smooth geometries and have no event horizon, unlike black holes. Following the ideas by R. M. Wald, we derive laws of NHEG dynamics, the analogs of laws of black hole dynamics for the NHEG. Despite the absence of horizon in the NHEG, one may associate an entropy to the NHEG, as a Noether-Wald conserved charge. We work out entropy and entropy perturbation laws, which are respectively universal relations between conserved Noether charges corresponding to the NHEG and a system probing the NHEG. Our entropy law is closely related to Sen's entropy function. We also discuss whether the laws of NHEG dynamics can be obtained from the laws of black hole thermodynamics in the extremal limit. Introduction Constructing and analyzing solutions to theories of (Einstein) gravity with various kind of matter fields in diverse dimensions has been a very active area of research since the conception of General Relativity. Black holes, stationary solutions with a regular event horizon, has been a class of solutions of particular interest. We now have classification (not necessarily a complete one) and in some case uniqueness theorems [1] for specific gravity theories. This classification is usually based on the choice of asymptotic behavior and horizon topology, the charges like mass, angular momenta and electric or magnetic (or possibly dipole) charges and, if there are "moduli" in the theory, on the asymptotic values of these moduli scalar fields. 1 Based on the seminal works of Hawking [4] and Bekenstein [5], it was argued that black holes behave like thermodynamical systems and the four laws of black hole (thermo)dynamics was proposed [6]: black hole is a thermodynamical system at the Hawking temperature T H (the temperature of the Hawking radiation as seen by the asymptotic observer) and chemical potentials, the horizon angular velocities Ω i and horizon electric/magnetic potentials Φ p . One can then associate conjugate charges to these, the angular momenta J i , the electric/magentic charges q p and the (ADM) mass M. These parameters and charges satisfy first law of thermodynamics, if we associate an entropy S BH to the black hole, as Bekenstein and Hawking did; explicitly, 2 (1.1) The remarkable feature of thermodynamical description is its universality, that it is independent of the theory and the specific class of solutions in consideration; it stems from very deep connections between gravity and thermodynamics. The next conceptual step in the thermodynamical description of black holes appeared in a series of papers by R. Wald et al. [8,9,11]. It was argued that not only the charges J i , q p and M, but also the entropy S BH may be viewed as a Noether conserved charge, associated with the Killing vector field which becomes null (and actually vanishes) at the horizon. Within this approach the first law of black hole thermodynamics was proved. Since our analysis will be based on [8,9], we will review these works in appendix B. 
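For concreteness, the relation labelled (1.1) above is the standard first law for a stationary black hole at temperature T_H with horizon angular velocities Ω^i and electric/magnetic potentials Φ^p (summation over i and p understood; conventions may differ slightly from the original):

$$ \delta M \;=\; T_H\,\delta S_{BH} \;+\; \Omega^i\,\delta J_i \;+\; \Phi^p\,\delta q_p \,. \tag{1.1} $$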
Among many novel features, Wald's approach clarified (1) how the charges J i , q p , M and S BH depend on the theory (action), as well as the solution; (2) the significance of gravity equations of motion and dealing with "solutions" for having the thermodynamic description (recall that Noether charges are defined on-shell) and; (3) the meaning of "perturbations" δX's appearing in the first law (1.1): The first law is not only about some relations among the parameters defining the class of black hole solutions, the δX's are associated with the corresponding charges of a (non-stationary) system probing the black hole background specified by T H , Ω i and Φ p ; the black hole is seen as a thermodynamical system by the probe. In search for the micro/statistical mechanical system underlying black holes, the class of extremal black holes, those with T H = 0, proved very useful. Extremal black holes may be viewed as the ground state of a system with the same values of J i and q p and have generically non-zero entropy, while at zero temperature. It was noted in [12,13,14] and then rigorously proved in a series of papers [15,16,17] that focusing on a region close to the horizon of extremal black holes we obtain a new class of solutions to the same theory of gravity. This class of solutions, the Near Horizon Extremal Geometries (NHEG's) have the same conserved charges, J i and q p as the original black hole, while have no horizon and have a different asymptotic region. As the near horizon limit has been taken, these geometries have no horizon and no singularity. The project of classification and uniqueness theorems for NHEG has been actively pursued in the last decade or so and we have several theorems in four and five dimensions (see [17] for a recent review). We will briefly review these in section 2. In this work we focus on the NHEG and construct three laws of NHEG (thermo)dynamics. We argue one may associate an entropy to the geometry as the Noether charge associated with a (class of) Killing vector field(s) which become null at specific points of spacetime, very similar to what Wald did for black holes. We then work out universal relations among the entropy and other Noether charges of the system. We also work out what resembles first law of (thermo)dynamics for black holes, i.e. a universal relation which governs the relation between perturbations in the entropy and other charges associated with the stationary or non-stationary perturbations of the NHEG. The rest of this work is organized as follows. In section 2, we review some facts about the NHEG. In section 3, we compute all Noether charges associated with the symmetries of NHEG. In section 4, we present the three laws of NHEG mechanics. In section 4.1, we present zeroth law of NHEG mechanics. In section 4.2, work out the "entropy law" for the NHEG dynamics, i.e. a universal relation between entropy, which as we argue, itself is a Noether charge, and other Noether charges of the NHEG. The entropy law formula is closely related to Sen's entropy function [18]. In section 4.3, we construct "entropy perturbation law" for the NHEG. In section 5, we discuss whether the laws of NHEG dynamics can be constructed from those of black hole dynamics when the black hole becomes an extremal one. We end with discussions and concluding remarks. 
In the appendices we have gathered some useful relations about the sl(2, R) algebra, a review of Wald-Iyer formulation of the entropy and the first law of black hole thermodynamics, details of the computation of the symplectic form used in section 4.3, and discuss the "inner-outer horizons permutation symmetry," used in section 5. Near Horizon Extremal Geometries (NHEG) As mentioned in the introduction a generic black hole solution is determined by two class of parameters: those appearing in the thermodynamical description and those associated with the asymptotic values of moduli. There is a largely held idea that all thermodynamical black hole quantities is encoded only in the near horizon data. This viewpoint has been proved for the class of supersymmetric or BPS black holes where it has been shown that the value of the moduli fields at the horizon is independent of their asymptotic values and is completely determined by the (thermodynamical) conserved charges. This observation was called "attractor mechanism" [19]. It was then realized that [13,14,18,20] extremal black holes (which are not necessarily BPS) also exhibit attractor behavior. This means that all the information for "thermodynamical" description of black holes 3 is already included in the NHEG. This prompted the study of extremal horizons and exploring the possibility of NHEG uniqueness theorems, which we will review in this section. For further details the reader is referred to the recent comprehensive review [17]. Extremal horizons and near horizon limits Extremal black holes are solutions with vanishing surface gravity and hence they do not have a bifurcate horizon. Therefore, it is useful to describe them in a null Gaussian coordinate system [17]: where the horizon is at r = 0, andγ ab computed at r = 0 is the metric on the horizon which is taken to be a smooth, non-degenerate, compact codimension two spacelike surface. One can then readily take the near horizon limit by expanding around r = 0, setting r = ǫρ and v =ṽ/ǫ, ǫ → 0 to obtain The near-horizon limit has fixed all the ρ dependence. Metric (2.2) has translation symmetry along v coordinate, as well as scaling (v, ρ) → (v/λ, λρ). Next, one should require (2.2) to also satisfy equations of motion. Depending on the theory and its matter content we have some different possibilities for the h a and F functions and hence the symmetries of the (v, ρ) space. In particular, for "static" cases with dh a = 0 and when the matter content satisfies strong energy condition the isometry of (v, ρ) part enhances to SL(2, R). For stationary cases, with four and five dimensional Einstein-Maxwell-Dilaton (EMD) theory where metric on the space of U(1) gauge fields and dilatons is positive definite (they have non-negative kinetic term) and when the potential of the dilatons is nonpositive again we are dealing with a background with SL(2, R)×U(1) N symmetry. Here we do not intend to review in detail the extremal horizon uniqueness theorems. For more detailed and precise discussion see [17]. As we see for physically interesting cases the symmetry of the extremal black hole geometry generically enhances to SL(2, R) and some other U(1) factors. Therefore, here we only focus on the geometries with such symmetry. Explicitly, We define NHEG as the most general geometry with local SL(2, R)×U(1) N symmetry group. Here, we consider a generic diffeomorphism and gauge invariant theory without specifying the explicit form of the action. (Note that EMD is a special class of such models.) 
In general, at most d − 3 U(1) factors are associated with rotations of the d dimensional spacetime while the rest of them (up to N) is the number of gauge fields. For a generic NHEG we adopt a coordinate system which makes the SL(2, R)×U(1) N symmetry manifest: supplemented by a set of gauge fields A (p) In the above i, j = 1, · · · , n and p = n+1, · · · , N, and n ≤ d−3. Γ, Θ αβ , γ ij , f (p) i are functions of the polar coordinates θ α whose explicit form may be fixed upon imposing equations of motion. k i , e p are constants, the constancy of which is a direct consequence of SL(2, R) symmetry. A full solution may also involve a number of scalars φ A = φ A (θ α ), however, due to the attractor behavior (see [13] and references therein) the parametric dependence of the scalar fields is completely fixed by the other charges. So, while these scalars can affect the value of charges, we need not consider them separately in this paper. We take the constant r, t surfaces, denoted by H, to be compact, smooth and non-degenerate. Moreover, we take the metric on ϕ i space, γ ij , to be non-degenerate and positive definite. Relation between SL(2, R) and U(1) generators Let us define the SL(2, R) vector n a , a = 1, 2, 3 as the unit normal vector to AdS 2 in the R 2,1 embedding space, i.e. n a n a = −1. In the basis we have used for writing the metric (2.3) n a are (see appendix A for more discussions): Using n a , one has the following relation between the SL(2, R) isometries and U(1) symmetry generators: n a ξ a = k i m i . (2.10) Note that we have used SL(2, R) metric (A.3) for raising a index on n a . To show this recall that the Killing vector ξ 3 is Multiplying by r and rewriting the above equation in terms of Killing vectors yields: 12) or More detailed analysis and useful identities about the SL(2, R) structure is gathered in the appendix A. NHEG conserved charges Given a geometry which is (a part of) a solution to a diffeomorphism invariant gravity theory, in the same spirit as the Noether theorem, one may associate a conserved quantity to each Killing vector field. A given solution may also be invariant under some "internal" symmetries, like in Maxwell theory, to which one may associate the corresponding Noether charges too. This general argument implies that with the NHEG with SL(2, R)×U(1) N symmetries one can associate N + 3 conserved Noether charges. In this section we work out those charges. As reviewed in the appendix B, however, there are always ambiguities in defining Noether charge densities (specially when we are dealing with a symmetry associated with diffeomorphisms). These ambiguities are usually fixed by giving a reference point (e.g. asymptotic ADM charges). Here, we also discuss how those ambiguities may be dealt with in the NHEG case where we do not have a maximally symmetric asymptotic space. Here, following conventions of [8,9], we use boldface for spacetime forms. Noether charge density of non-Abelian symmetries Obtaining Noether charge density Q from the Noether current J associated to a diffeomorphism generator (cf. appendix B) is not generally an easy task, but when we are dealing with non-Abelian symmetry groups, this will become straightforward due to construction we discuss below. Consider a set of Killing vectors ξ a which satisfy the following Lie bracket relations where f c ab are the structural constants of the symmetry Lie algebra G. Let K ab be the metric of the algebra. 
Then, noting that where C 2 is the second rank Casimir of the algebra in the adjoint representation, we have (Note that the indices on the structure constant tensor is raised and lowered by metric K ab .) Next, recalling the definition of the Lie bracket, In the second line we have used the Killing property ∇ ν ξ ν = 0. Consequently, the Noether current J (introduced in (B.4)) may be written as In the second line we have dropped Θ ξa term because it is a linear function of δ ξa Φ and for Killing fields δ ξ Φ = L ξ Φ = 0. In our notations Φ stands for all the fields we have in our theory. One can further simplify (3.5) using the chain rule and the fact that ξ a 's are isometries of L, i.e. ξ ν a ∇ ν L = 0, to obtain in which In the presence of (internal) gauge symmetries one should revisit the above analysis: In this case δ ξ Φ is not necessarily zero, δ ξ Φ should be zero up to internal gauge transformations, i.e. generically δ ξ Φ = δ Λ Φ , for some Λ = Λ(ξ) . µ are subject to the above discussion. So, let us revisit Θ term for them: . Assuming that the action is local and invariant under the gauge A → A + dΛ, it can only be a function of F µν = ∂ [µ A ν] and the second term vanishes due to the field equations for gauge fields in the absence of source 4 . Therefore, This is the term that should be added to (3.7) in the presence of gauge fields and hence the complete form of the Noether charge density for the generator ξ a is 5 (3.12) SL(2, R) conserved charges Applying the method of previous subsection, one can compute the conserved charges corresponding to SL(2, R) isometry of NHEG spacetime. It can be seen from (2.4) that and hence Λ (p) ξ 3 is the one appearing in (3.9)). For the sl(2, R) algebra, C 2 = 2 and the Noether charge density for generator ξ a becomes (3.14) Using this we can obtain conserved charges corresponding to sl(2, R) Killing vectors by integrating it over the closed surface H, which is any of (d − 2)-dimensional t, r = const surfaces in (2.3): Replacing Q µν a from (3.14) and using (A.9) we obtain where we have used the fact that any function of r can be taken out of the integration, as the integration is on the constant r surface H. Noting (A.8) and recalling the definition of the electric charge It will be more useful to consider the SL(2, R) invariant linear combinations of charges Q a by multiplying both sides with n a , to obtain The above analysis, which is based on Noether's theorem, makes it apparent that despite explicit t, r dependence, Q a 's are conserved. Moreover, in writing SL(2, R) charges (3.18) we have already fixed the ambiguities associated with Noether-Wald charges discussed in appendix B. This point will be discussed further in section 4.2. NHEG entropy as a conserved charge Despite the fact that the NHEG does not have a (Killing) horizon as black holes do, recalling that they can be obtained as the near horizon limit of extremal black holes, one may formally associate an entropy to them. To this end, we note that instead of the horizon, the NHEG have surfaces H (i.e. surfaces of constant time and radius in the coordinates used to represent the NHEG metric (2.3)). As discussed in the appendix A, SL(2, R) invariance facilitates defining an (SL(2, R) invariant) binormal 2-form (which is dual to the volume form on H). Given these, we can readily write the analogue of Iyer-Wald entropy [9] for the NHEG: Definition. Entropy of the NHEG as a solution of the e.o.m is defined as 10), and E µναβ ≡ δL δR µναβ . 
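For orientation, the definition above is the familiar Iyer-Wald expression evaluated on the SL(2,R)-invariant surface H; schematically, and up to the paper's normalization and sign conventions,
$$ S \;=\; -2\pi \oint_{H} \sqrt{\det h}\; d^{\,d-2}\theta \;\, \mathbf{E}^{\mu\nu\alpha\beta}\,\epsilon_{\mu\nu}\,\epsilon_{\alpha\beta}\,, \qquad \mathbf{E}^{\mu\nu\alpha\beta} \equiv \frac{\delta L}{\delta R_{\mu\nu\alpha\beta}}\,, $$
where ε μν is the SL(2,R)-invariant binormal of H introduced in appendix A and h is the induced metric on H.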
One of the key steps in Wald formulation of "entropy as a Noether charge" [8] is the realization that Killing horizon is associated with a null Killing vector whose dual one-form vanishes on the horizon. In the NHEG we do not have the Killing horizon, however, recalling discussions in section 2.2, we indeed have an infinite family of such Killing vector fields: where n a H =n a (t=t H , r=r H ) and n a is given in (2.9). We will prove the following proposition: Conserved charge corresponding to Killing vector ζ H is the NHEG Entropy, defined in (3.20). Proof. We first note that ζ H is a linear combination of Killing vector fields with constant coefficients (n a H and k i are constants), and hence ζ H is a Killing vector field. Next, we note that according to the proposition 4.1 of the Iyer-Wald paper [9] (see appendix B), the Noether conserved charge corresponding to ζ H can be decomposed as where E µναβ = δL δR µναβ and W and Y and Z are covariant quantities which are locally constructed from fields and their derivatives. Y is linear in δ ζ H Φ and Z is linear in ζ H (recall (2.9) and (2.10)). As discussed in the previous section, δ ζ H Φ = 0 up to internal gauge transformations. In our case, that is, all δ ξ Φ = 0, except for δ ξ 3 A (p) which is a pure gauge. We fix the Y ambiguity requiring physical charges to be gauge independent. The W and dZ ambiguities are removed, noting that the Killing vector field ζ H has been constructed such that ζ H | t=t H ,r=r H = 0. Therefore, To determine ∇ α ζ β H , we take covariant derivative of both sides of the identity (2.10), where in the second equation we have (A.10). The LHS of the above equality may be computed at any r, t. In particular, when computed at r = r H , t = t H we obtain With the above (3.23) takes the form Laws of NHEG dynamics In this section we derive three laws of NHEG mechanics. The first two are describing the NHEG geometry itself, but the third one governs perturbations (or probes) over the NHEG background. The first and third laws resemble the laws of black hole mechanics [6], while "entropy law " has no counterpart for generic black holes. Zeroth law of NHEG dynamics Demanding (2.3) to be SL(2, R) invariant, restricts k i and e p parameters, while imposing equations of motion will determine other functions there. In particular, ξ 3 is a Killing vector (p) and L ξ 3 denotes the Lie derivative w.r.t. the Killing vector ξ 3 , leads to ∂ θ α e p = 0. That is, k i 's and e p 's should be constants with respect to the coordinates θ α . The constancy of k i and e p can be treated as the zeroth law of NHEG dynamics. In section 5, we discuss the relation between the NHEG and (near) extremal black holes and show the close connection between the NHEG zeroth law and the constancy of Hawking temperature and horizon angular velocities. This makes the analogy of NHEG zeroth law and the black hole zeroth law. NHEG entropy law In this section we prove the "NHEG entropy law": where k i and e p are constants appearing in the NHEG solution (A.15) and (2.4), J i and q p denote the corresponding N U(1) charges and Derivation: We start by taking covariant derivative from (3.21) and integrating both sides over 2 H dΣ µν E µν αβ : Next, we note that as discussed in the appendix B, there is a Noether conserved charge associated each of the Killing vector fields ζ H , ξ a and m i , but these conserved charges come with three kind of W, Y, dZ ambiguities Computed "at the horizon" where ζ H is zero, the W and dZ terms in Q ζ H vanish. 
Similarly, in the following linear combination of other charges a n a the W and dZ terms also vanish. Therefore, (4.3) becomes The RHS of the above equation is zero because δ ξ Φ is linear in ξ (or in ∇ξ) as well as in Φ (or in ∇Φ), and hence In summary, all the three W , Y and dZ type ambiguities cancel out from the two sides of the equality and we obtain With a similar reasoning one can show that the above equation holds when we replace Q ζ H by S/(2π) (cf. (3.26)), Q m i by physical angular momenta J i , and n a H Q ξa from (3.19). We hence obtain the desired entropy law expression (4.1). Before closing this section some comments are in order: 1. Eq.(4.1) is universal, meaning that it is the relation between conserved charges associated with any NHEG solution to any diffeomorphism invariant theory (of gravity). 2. In the above we have used the fact that the LHS of (3.19) is SL(2, R) invariant and hence can be computed at any arbitrary constant t, r surface. 3. The entropy law (4.1) is a manifestation of the fact that the SL(2, R) and U(1) generators mix with each other, as is manifest, e.g. from (2.5). Explicitly, the ξ 3 Killing vector also involves a k i ∂ φ i term (2.5). 4. The entropy law (and also the entropy perturbation law (4.5)) are invariant under permutation of N U(1) symmetries. 5. We stress that such a universal relation between entropy and other thermodynamical quantities/conserved charges does not exist for generic black holes. As we will discuss further in following sections, the "first law" of black hole thermodynamics deals with perturbations of these parameters and not themselves. Note also that Smarr-like formulas which may resemble our entropy law, are not universal and are solution and/or theory dependent. 6. The reason why our derivation of entropy law (or in other words, Wald's derivation) does not hold for generic black holes is presence of ambiguities we discussed in some detail, and in particular the fact that these ambiguities should be computed and compared at different locations in the black hole geometry. In our case, unlike the black hole case, we have vanishing Killing vector ζ H for any t H , r H . We will elaborate on this point further in the next sections. 7. Our derivation is based on Noether conserved charges and hence makes clear the role of being on-shell. In particular, in the last term in (4.1), the Lagrangian L should be computed on the NHEG solution. i dϕ i term). As expected, the magnetic and electric flux (denoted through e p ) appear asymmetrically in our entropy law; magnetic flux appears only through the Lagrangian term. 10. In our derivation it is clear that the terms in the RHS of the entropy law are associated with N U(1) symmetries of the system and the corresponding conserved charges. The dilaton-type scalar fields (or moduli) which are not associated with any symmetry can only appear through the Lagrangian term. This is a realization of the attractor behavior [13,14,20] in our setup. 11. Our entropy law is closely related to Sen's entropy function [13,18]. 6 However, our derivation is quite different; specifically we note that our derivation is completely based on the NHEG and not the extremal black hole. Therefore, we need not deal with the issues which may arise in the usage of Wald entropy formula which is derived for bifurcate horizons, for extremal horizons. Further discussion related to this point can be found in section 5. 
NHEG entropy perturbation law In the previous section we derived the NHEG entropy law, which is a relation among conserved Noether-Wald charges of the NHEG which is a solution to equations of motion for a given gravity theory with our desired SL(2, R)×U(1) N symmetry. As pointed out this relation has no universal analog for generic black holes. In this section we construct the analog of the first law of thermodynamics for the NHEG. To this end, let us denote the NHEG solution by the field configuration Φ 0 and consider a perturbation around it δΦ. The configuration Φ 0 + δΦ is not necessarily of the form of NHEG, however, we assume that the perturbations δΦ satisfy linearized equations of motion around the NHEG background solution Φ 0 . Therefore, δΦ can also be labeled by the same charges as the background. Let us denote these charges by δJ i , δq p and δS. Our discussions here are basically paralleling those in [9] for ordinary black hole. However, as we will see below, the case of NHEG has its own specific and novel features. Under specific conditions over field perturbations δΦ which are listed in the end of this section, we prove the "entropy perturbation law " relating different charges of the probe: δS 2π = k i δJ i + e p δq p (4.5) Derivation: Noether current corresponding to the diffeomorphism generated by ζ H is (see appendix B for notations): where ζ H is the Killing vector field defined in (3.21). We will use ξ · X to denote the contraction of the vector ξ with the first index of the form X, which is usually written as i ξ X. Let us now consider variations in (4.6) associated with Φ 0 → Φ 0 + δΦ: We assume that the variations do not alter the quantities attributed to the background. In particular, this means that δζ H , δξ a , δm i are all vanishing (as they do in the case of black holes [8,9]). In this sense these variations are considered as perturbations or probes over the NHEG. Let us start our analysis from the last term in (4.7): The first term vanishes due to the on-shell condition and the second term is simplified recalling the identity ξ · dΘ = δ ξ Θ − d(ξ · Θ) which is valid for any diffeomorphism ξ, therefore, Inserting the above into (4.7) we obtain where is the symplectic current, the (d − 1)-form associated with variations δ 1 , δ 2 , and is bilinear in its arguments [8]. This implies that for Killing vectors ξ with δ ξ Φ 0 = 0, the symplectic form vanishes. However, in presence of gauge fields δ ξ Φ 0 need not vanish for a symmetry, it may be non-zero up to gauge transformations. In particular, as we have already seen in previous section, this is the case for the third Killing vector ξ 3 and the corresponding symplectic current ω(Φ 0 , δΦ, δ ξ 3 Φ) does not vanish. This feature (which was not relevant for the discussions of black holes [8,9]) has an important role in our derivation of the entropy perturbation law. The current J ζ H is conserved on-shell, i.e dJ ζ H = 0, so one can associate a conserved charge d − 2 form Q ζ H , J ζ H = dQ ζ H , to the symmetry generated by ζ H . Moreover, when the solution is deformed by a perturbation which is a solution to the linearized equations of motion, the relation dJ ζ H = 0 still holds even if the perturbation is not symmetric under ζ H (i.e. δ ζ H (δΦ) = 0). In other words, one can take the variation of the relation J ζ H = dQ ζ H and arrive at [8] δJ ζ H = δdQ ζ H = dδQ ζ H . 
(4.12) From the above equation, we also learn that perturbations over a background can be labeled by the charges corresponding to the background symmetries, although they do not carry those symmetries. Using (4.12) in (4.10) yields We integrate the above "conservation equation" over a timelike hypersurface Σ bounded between two radii r = r H , r = ∞. The hypersurface Σ can be simply chosen as a constant time surface t = t H . The interior boundary r = r H is necessary, since AdS 2 does not have a compact interior. As discussed before, the surface H will play the role of horizon on which we define the entropy of NHEG. The r = ∞ choice for the other boundary, is a convenient choice because the extra terms appearing due to gauge transformations vanish (cf. appendix C, and in particular discussions around (C.10)). Following [8], we define the symplectic form associated with Σ as (4.14) Integrating (4.13) over Σ then yields: where in the first line we have used the Stokes theorem to convert the integral over Σ to an integral over its boundary ∂Σ and in the second line, we used the fact that ζ H = n a H ξ a − k i m i vanishes on H. Since the charge perturbation δQ ζ H is linear in the vector ζ H , one can expand the first term on RHS of (4. 15) m i is tangent to the boundary surface and hence the pullback of m i ·Θ over the surface r = ∞ vanishes, and we have where is the canonical generator of the symmetry ξ a in the covariant phase space [10]. Substituting this result into (4.17) yields where δJ i is the angular momentum corresponding to the rotational symmetry m i (Since pullback of m i ·Θ vanishes over any constant t, r surface on NHEG, one can show that in the above equation δJ i could be computed with the integral at ∞ replaced by any r = r H surface.) To show that the left side of (4.19) is actually the perturbation of entropy δS, we should discuss ambiguities of δQ ζ H . Any Noether charge can be decomposed as in (3.22) with W , Y and dZ ambiguities. The W and dZ ambiguities vanish since they are linear in ζ H , which vanishes at surface H. The δY ambiguity, which is proportional to variation of fields δ ξ Φ needs more attention. Since ζ H = 0, at surface H, δ ζ H Φ = 0. This implies that Y vanishes on background over H, and also that its perturbation is given by (4.21) In the above we have used the fact that since δζ H = 0, we can interchange δ ζ H and δ. Equation (4.21) is linear in the generator ζ H , does not contribute to the left hand side of (4.19) and therefore Analysis of [23] indicates that the NHEG background is stable for a class of field perturbation which satisfy certain boundary conditions. As we will show in our upcoming work [24], this stability condition implies δE a = 0. Dropping the last term in (4.23) by the choice of boundary conditions, we arrive at the desired entropy perturbation law (4.5). To end this section we summarize the assumptions over the field perturbations which resulted in the entropy perturbation law (4.5): • Perturbations should satisfy the linearized field equations. • Perturbations are restricted to those for which SL(2, R) charges vanish, i.e δE a = 0. This is typically done by choosing a set of boundary conditions. We also note that the variation δ does not affect the Killing vectors associated with the background, i.e δζ H = δξ a = δm i = 0. NHEG vs. extremal black hole So far we focused on NHEG as an interesting class of solutions to gravity theories and introduced and worked out three laws of NHEG dynamics. 
NHEG, as the name implies, is related to extremal black holes and one may wonder if laws of NHEG dynamics can be (directly) related to the laws of extremal black hole thermodynamics. This question has of course been discussed and studied in the literature from various different perspectives, see in particular [25,26]. This section is mainly meant to fill some gaps remaining in the literature about the connection of NHEG and extremal black holes. The most general form of the metric of a stationary and axisymmetric black hole possessing some U(1) gauge fields, can be written in the ADM form as where f, g ρρ ,g αβ , g ij , ω i and Φ (p) , µ (p) i are functions of ρ, θ α and i, j = 1, 2, · · · , n and p = n + 1, · · · , N. The horizons of black hole are at the roots of g ρρ , where we assume the function D to be analytic and nonvanishing everywhere. Due to the smoothness of metric on the horizons f can always be written in the following form: In four dimensions the black hole has at most two horizons (e.g. see [1]) and ∆ = (ρ − r + )(ρ − r − ). When there exist more than two horizons, we call the outermost two horizons as r − , r + (r + > r − ). The constants r + , r − are two parameters characterizing the black hole. We introduce r h , ǫ instead of r ± as: The above notation turns out to be useful since ǫ is a good measure of black hole temperature T H . Hawking temperature of the black hole can be found requiring the near horizon metric in the Euclidean sector to be free of conical singularity (e.g see [27]), leading to [28] where in the above C and D are computed at the horizon ρ = r + . Constancy of Hawking temperature on the horizon implies that C(r + , θ)D(r + , θ) is a constant on the horizon [28]. In the extremal limit, ǫ → 0 and ∆ in (5.2) will have a double root at ρ = r e . Near horizon limit of extremal black holes From now on we will focus on the extremal case, r + = r − = r e . To take the near horizon limit let us first make the coordinate and gauge transformations ρ = r e (1 + λr) , τ = αr e t λ (5.6) where Ω i = ω i (r e ) is the horizon angular velocity and Φ (p) | re is the horizon electric potential. In the first line we scale ρ − r e and τ inversely by a factor λ and α is a suitable constant to get the most simple form for the near horizon metric. λ is the parameter which we send to zero once we take the limit. The shift in ψ i takes us to the frame co-rotating with the black hole. In the last equation, we have used the gauge symmetry in order to remove the infinities resulting from the limit λ → 0. Upon these transformations the near horizon geometry (obtained in the λ → 0 limit) becomes where we used the fact that CD = const on the horizon and chose Recalling that Ω i = ω i | re , we arrive at the general form: The above is, as expected, the same as the NHEG ansatz (2.3) and (2.4). We first show that smoothness of black hole geometry (5.1) forces ∂ ρ ω i to be constant on the horizon, and k i are hence constants in the NHEG. A more detailed proof for this has appeared in [29] (see the appendix there). However, here we give an alternative argument. Analysis of finiteness of curvature invariants for solutions to field equations of the form (5.10) reveals that (∂ θ α ω i ) 2 ∼ (ρ − r e ) 2α , with α > 1. Therefore, ∂ ρ ∂ θ α ω i ρ=re = ∂ θ α ∂ ρ ω i ρ=re = 0. So, not only ∂ θ α ω i = 0 on the horizon (which means that angular velocity is constant on the horizon), but also ∂ ρ ∂ θ α ω i = 0 which means that ∂ ρ ω i is constant at the horizon of extremal black holes. 
Using the third equation of (5.12), we find that k i are θ independent and hence constants. This is a restatement of the zeroth law for NHEG geometries (cf. section 4.1). NHEG entropy perturbation law and near horizon limit Here we briefly review what was done in [25] (see also [26,30]): One can indeed derive "entropy variation law" of NHEG from taking the extremal limit, starting from first law of thermodynamics for near extremal black holes. To this end, we recall the first law of black holes stating how perturbation of entropy is related to the perturbations of mass and other conserved charges of any black hole: At the extremal point where T H = 0 the above reduces to δM = i Ω i δJ i + p Φ p δq p , which may in principle be integrated to get the BPS relation M = M(J i , q p ). In the near extremal case when T H ∼ ǫ, one may then make a low temperature expansion of all thermodynamics quantities in powers of ǫ. For black holes, we have the crucial relation that [25] δM − Ω i ext δJ i − Φ p ext δq p ∼ ǫ 2 , and hence to the leading order in ǫ the first law reduces to where Eq.(5.14) reduces to the NHEG entropy perturbation law (4.5), if we show that k i = − 1 2π That is what we will do next. Interpretation of k i , e p To relate Ω ′i and Φ ′p (which are constructed from thermodynamic chemical potential of black holes in the extremal limit) to the k i and e p which are parameters appearing in the NHEG, after taking the near horizon limit, we need to make a connection between process of taking the near extremal limit and the near horizon limit performed in section 5.1. Explicitly, we need to relate spatial derivatives of ω i to the derivative of Ω i (which is ω i computed at the horizon) with respect to temperature. (ω i are defined in (5.1).) Similar arguments may also be repeated for the electric charges and the corresponding potentials. To do so, we use the values of the chemical potentials at inner and outer horizons and the corresponding continuity conditions. Any function in the black hole solution (like metric components) has a spacetime and a parametric dependence. Here we choose T H and the conserved charges J i , q p as the basis for parameter space of a generic black hole; the subspace T H = 0 specifies the extremal black holes. In order to relate ∂ ρ ω and thermodynamic quantities of black hole, we use a novel symmetry of black holes pointed out in [31] based on ideas initiated in [32]. We call it horizons permutation symmetry (see appendix C for a proof) which states that under r + ↔ r − , where Ω i ± , Φ p ± , κ ± are respectively the angular velocity, gauge field potential, and surface gravity of outer/inner horizons. This symmetry takes a more convenient form in terms of r h , ǫ defined in (5.4), as Since for small ǫ temperature is proportional to ǫ, r h = r h (ǫ, J, ...), and r h → r e as we take ǫ → 0. As the first step we prove that corrections to r h as we move away from r e grow like ǫ 2 in the leading order. We should comment that in the above analysis, we started with T H ≥ 0 but extended the parameter space of black holes to the negative T H as well. The point (−T H , J) describes the inner horizon of the black hole with (T H , J) and the transformation T H → −T H reveals the inner horizon thermodynamics [31]. 
From the black hole geometry viewpoint, this is equivalent to moving from r + to r − and hence we have built the connection between moving in the radial direction in spacetime and moving in the parameter space of black holes, from which we can deduce our desired relations. We now prove that radial derivative of ω i (ρ) = g ij g tj can be related to the parametric derivative of horizon angular velocity Ω i ± w.r.t temperature, i.e Proof. The r + → r − ⇒ Ω i + → Ω i − symmetry, in the lowest order in ǫ yields where Ω i is the (outer) horizon angular velocity Ω i + . On the other hand, by definition of Ω i we have and hence Similarly one can show that This is an interesting identity because ∂ω/∂ρ is completely geometrical and concerns the change of ω by moving outside the horizon of an extremal black hole, but ∂Ω/∂ǫ is a quantity in the parameter space and measures the change of angular velocity by turning the temperature on, and has no geometrical meaning. We can now compute k i in (5.12): where we used (5.5). One may similarly work out e p , and with these in hand (5.14) takes the form That is, we have obtained NHEG entropy perturbation law as the appropriate near extremal limit of the first law of black hole thermodynamics. Concluding remarks In this work we focused on the NHEG as a well-studied and classified solution to gravity theories and worked out universal relations among the parameters defining these solutions and the corresponding conserved charges. In particular we pointed out three laws of NHEG dynamics: (1) k i and e p parameters defining the NHEG are constants. (2) We have the "entropy law" which relates entropy (as a Noether charge) associated with the NHEG to conserved charges angular momenta J i and the electric charges q p and the on-shell value of Lagrangian (integrated over H), and (3) the "entropy perturbation law," which relates entropy and other Noether charges associated with a probe (probing the NHEG background) to each other. The entropy and entropy perturbation laws, despite the similarity to laws of black hole thermodynamics do not indeed have a thermodynamical interpretation; in the NHEG case we are dealing with a system which cannot be excited (without destroying the SL(2, R) isometry) [25,23]. Among other points, we would like to stress that the entropy law does not have a correspondent in the black hole thermodynamics systems. Technically, this is due to the fact that in the Wald's derivation of the first law for black holes there are ambiguities defining the charge integrals which prevents one to draw a universal relation among the thermodynamical parameters of black holes, while such ambiguities does vanish when we consider variations of fields and the corresponding perturbations in the thermodynamical charges, as they appear in the first law of thermodynamics. It is worth also mentioning that the entropy and entropy perturbation laws are invariant under permutation of N U(1) symmetries. Under these permutations k i and e p and the corresponding charges are rotated into each other, while S and δS are only a function invariant under these permutations. It is interesting to explore this permutation symmetry further. Regarding the entropy perturbation law, as we discussed δS, δJ i and δq p are associated with a field configuration δΦ probing the NHEG background, given by the field configuration Φ 0 . As we argued, entropy perturbation law (4.5) is valid for δΦ satisfying equations of motion linearized around background Φ 0 . 
Moreover, δΦ should be such that δE a = 0. Given the discussions in [23] one may wonder if these two conditions can be satisfied. Our preliminary analysis [24] shows the answer is positive. In answering this question one may also explore if there is any relation between these δΦ and the set of perturbations and boundary conditions appearing in the Kerr/CFT proposal [33,30]. It is also desirable to understand better the connection of our derivations and the NHEG mechanics with the entropy function analysis. This is also postponed to future works. In general, especially when we deal (extremal) black holes of non-trivial horizon topology, it is possible to have solutions with non-zero "dipole charges". One such example is the neutral singly rotating dipole black ring [21]. The dipole charge in fact contributes to the energy of the system and appears both in first law or the Smarr-type relation for the dipole black ring [21]. Following Wald's derivation for the first law one can in fact prove that in general such dipole charges should appear in the first law [22]. In principle black holes/rings with dipole charges can become extremal. For example the five dimensional dipole black ring of [21] can become extremal while the dipole charge is still non-zero. One may study near horizon limit of extremal dipole rings and see that they exhibit SL(2, R)×U(1) 2 [15] and hence they fall into our definition of the NHEG. One then expects these dipole charges to appear both in our entropy law and in the entropy perturbation law [24]. One may wonder if the second law of thermodynamics has a correspondent in the NHEG case. Here we make a comment on that and postpone a more thorough analysis to the future publications. Let us for simplicity consider the NHEG ansatz (2.3). One may show that the angular momentum J i is given by the Noether integration where F (θ) is a positive definite function and γ ij is also a positive definite metric on the φ i part of the NHEG geometry. Therefore, k i J i is positive definite. Similar relation also holds for e p q p . We also discussed a derivation of NHEG mechanics laws from near extremal black holes, this latter amount to finding a relation between spatial derivatives of black hole metric functions and the parametric derivatives of the chemical potentials (horizon angular velocities or electric potentials). To this end we proved and used the inner-outer horizon exchange symmetry (see discussions in section 5 and appendix D). It is desirable to understand this symmetry better and study its further implications. A On sl(2, R) Lie algebra SL(2, R) is the group of all 2 × 2 real-valued matrices with determinant one. The sl(2, R) Lie algebra with generators ξ a , a = 1, 2, 3 is defined as where f c ab are structure constants. In this paper we have chosen the basis in a way that the commutation relations take the form In this basis, the Killing form (metric) of the algebra is and its inverse K ab = (K ab ) −1 has the same components as itself (in the chosen basis). Metric K ab can be used for lowering or raising the sl(2, R) indices, e.g. f abc = K cd f d ab . One may also show that One specific representation of the sl(2, R) algebra, which also realized the SL(2, R) isometry of (2.3), is given in (2.5). SL(2, R) which is a double cover of SO(2, 1) is also the isometry group of AdS 2 manifold, defined as the set of points with square distance −1 from the origin of a flat 1+2 dimensional Minkowski space. 
In a suitable coordinate system in which the metric is (A.3), this condition is explicitly n a n a = K ab n a n b = −1 , where x a = n a are the position of points of AdS 2 in the embedding space. coordinates. A solution for n a , parametrized with two parameters t, r is then the induced metric on the AdS 2 surface is which is the metric of AdS 2 in Poincaré patch. The n a , a = 1, 2, 3 form a vector representation under SL(2, R) and hence, where δ ξa n b is the Lie derivative of the vector n b . Using the explicit form of (2.5) and (2.9) one may show that n a δ ξa n b = 0 , The above relations also show that the constant r, t part of the NHEG metric (2.3), the codimension two surface H, is an SL(2, R) invariant space, i.e. its metric and volume form do not depend on which constant r, t the surface H is defined. Definition. The binormal tensor of the SL(2, R) invariant surfaces H is defined as: In the basis (2.5) and coordinate (2.9), this tensor can be calculated as follows: where in the last equality we used ∂ r n a = −ξ t a , ∂ t n a = ξ r a . Explicit computation for µ = r, t and with metric (2. One can also readily show that ǫ 2 ≡ ǫ µν ǫ µν = −2 (A.14) A.1 AdS 2 in global coordinates, another example As another example, let us consider NHEG in the global coordinate for AdS 2 : where Γ, Θ αβ , γ ij are some functions of θ α , specified by the equations of motion. Associated with this coordinate system, the sl(2, R) Killing vector fields are given as In this basis the sl(2, R) commutation relations and metric are The solution to (A.5) which also satisfies (A.8) is now given as It can be checked that relations ∂ r n a = −ξ t a and ∂ t n a = ξ r a also hold in the global coordinate and hence (A.9) is still true. Using the same discussion as above one can show that using the definition (A.10) leads to the same result for the binormal tensor B Symmetries and conserved charges Symmetry is a transformation which maps a set of solutions of equations of motion (with appropriate boundary conditions) to themselves and hence leaves the action invariant, or equivalently, changes the Lagrangian up to a total divergence. The symmetries could be local (gauge) or global and both of these have been argued to be a basis for deriving constants of motion or conserved charges, see [34] and references therein for a historical review. Here we will be mainly concerned with symmetries associated with spacetime coordinate transformations and diffeomorphisms and will follow Wald's papers [8,9,11]. Consider a diffeomorphism invariant theory with a Lagrangian density L and the corresponding action in d-dimensional space-time in which Φ denotes all of dynamical fields of the system and each of them will be denoted by Φ i . Associated with any infinitesimal diffeomorphism as a symmetry of the theory, one can find a Noether current and the corresponding Noether charge. Following [9] we take the Lagrangian L to be a top form, a d-form equal to √ −gLǫ d with ǫ d being the Levi-Civita tensor, and generator of diffeomorphism symmetry to be a 1-form ξ. Variation of Lagrangian under the diffeomorphism is [35] where E i = 0 is the e.o.m for Φ i . The (d − 1)-form Θ is the surface term generated by the variation. 
According to the identity δ ξ L = ξ ·dL + d(ξ ·L) and noting that dL = 0, we can replace the LHS of (B.2): Now, we can associate a Noether (d − 1)-form current J as: Therfore dJ = −E i δ ǫ Φ i so that dJ = 0 whenever e.o.m is satisfied and according to the Poincaré's lemma, since J is closed, it would be exact and can be written as: where Q is a (d − 2)-form, the Noether charge density. B.1 Ambiguities It has been shown [9,11] that the (d − 1)-form J in (B.4) has twofold ambiguities. One ambiguity comes from freedom of the definition of Lagrangian of the theory up to an exact d-form: which leads to J → J + δ ξ µ. The other ambiguity comes from the freedom in specifying J itself (for a given Lagrangian) up to an exact (d−1)-form dY (Φ, δΦ). Therefore, the Noether current J is defined up to the following ambiguities where the (d−2)-form Y (Φ, δΦ) is linear in δ ξ Φ and we used the identity δ ξ µ = ξ ·dµ+d(ξ ·µ). When we want to find the Noether charge, in addition to these ambiguities there is another one which is the freedom of choosing Q up to an exact (d − 2)-form dZ(Φ, ξ) where Z is linear in ξ. So accumulating all of the ambiguities, we have the freedom of choosing the Noether charge density as: and hence the Noether charge density Q is not unique and its most general is [9] where W µ and E µν and Y and Z are covariant quantities which are locally constructed from fields and their derivatives, Y is linear in δ ξ Φ, Z is linear in ξ and, In order to fix/remove these ambiguities, we need some physical reasoning and/or reference point for defining the charges (like requesting to coincide with the ADM charges etc.) D Inner/outer horizons permutation symmetry In this appendix we state and prove the permutation symmetry of black hole horizons. Permutation symmetry states that: 7 Let {r i } denote the position of horizons of a given black hole, a permutation in black hole parameters of the form r i → r σ i , has the following effect on black hole horizon chemical potentials: Proof. we assume that ∆ is an analytic function of r, then ∆ = n m=0 c m r m which has n roots {r i } and n constants c m , ∆(r i ; {c m }) = 0 , i = 1, 2, · · · , n .
13,042.4
2013-10-14T00:00:00.000
[ "Mathematics", "Physics" ]
Enhancing code clone detection using control flow graphs ABSTRACT INTRODUCTION Software reuse refers to the practice of using existing code to develop new software products. Software reuse improves development productivity and quality, but simple copy-and-paste reuse can produce redundant and duplicate code (also known as code clones) across a program. Code clones can be defined as syntactically or semantically equivalent fragments of source code. The presence of code clones can hinder the consistency of an application during software maintenance tasks such as bug fixes, security updates, and code refactoring. Such code changes can lead to undesirable consequences because of code clones. If only part of a code clone is modified, without covering the whole clone, problems can be introduced during the lifecycle of a software system. Therefore, for better software maintenance, locating and keeping track of code clones is crucial when modifying the code of an existing program. In this paper, a novel approach to clone detection is presented with a feature extractor and a clone classifier using deep learning. The feature extractor constructs feature vectors to characterize a given code fragment for clone detection. The clone classifier is based on supervised learning, which produces a deep learning model from training data consisting of input vectors and desired output values. Therefore, the clone classifier is trained with known true clones and false clones in a training phase. In a testing phase, the clone classifier predicts whether or not a method pair has a clone relationship. The proposed approach to clone detection extracts input features from the CFGs of a code fragment. The input features are used to compute similarity scores for comparing two code fragments. The clone classifier is trained and tested with the similarity scores, which quantify how similar two code fragments are. The proposed clone detection framework represents a code fragment as a control flow graph to capture semantically similar clones. A control flow graph is a directed graph that represents the control flow of a code fragment (e.g., a function or method). The CFG consists of a set of nodes (also known as vertices) and edges. A node represents a basic statement of the code fragment, such as an expression, if-statement, or for-statement. An edge of the CFG connects one node to another and represents a control-flow transfer between the two nodes. A node can be connected to multiple nodes if it governs more than one control path. The CFG has a single entry node where a control flow starts and a single exit node where a control flow ends. There can be multiple control flows between the entry node and the exit node. A path in a CFG is a sequence of directed edges in which all nodes are distinct. For example, a 3_Path in this paper means three distinct nodes connected in sequence in the CFG-based representation. Paths represent structural features of code fragments, and their lengths vary in the CFG-based representation. Code fragments are characterized by the lengths and occurrence counts of their CFG paths. The first step of clone detection is to determine the scope of a code fragment, which is a contiguous segment of source code. Some clone detection tools take a function or a method as the code fragment. In some cases, a portion of the function can be considered as the code fragment.
The code fragment is identified with three elements such as a source file, a starting line number, and an ending line number. If a whole method is considered as the code fragment, the starting line number and the ending line number of the code fragment will be the same as those of the method. The proposed clone detection approach takes each method in Java source files as the code fragment. Code clones are a pair of code fragments of source code that are syntactically or semantically equivalent. There are four types of code clones: Type-1 (T1), Type-2 (T2), Type-3 (T3), and Type-4 (T4). Type-1 clones are syntactically identical code fragments, except for differences in white space, layout and comments [1]. Type-2 clones are syntactically identical code fragments, except for differences in identifier names, literal values, white space, layout and comments [1]. Type-3 clones are syntactically similar code fragments that differ at the statement level. Fragments have statements added, modified and/or removed with respect to each other [1]. Finally, Type-4 clones are syntactically dissimilar code fragments that implement the same functionality [2]. The proposed approach to clone detection uses BigCloneBench [3] which is one of the popular benchmarks of code clones. The BigCloneBench data are categorized by separating Type-3 and Type4 clone pairs into four categories based on their syntactical similarity: Very-Strongly Type-3 (VST3) clones with a similarity in range 90% (inclusive) to 100%, Strongly Type-3 (ST3): 70-90%, Moderately Type-3 (MT3): 50-70%, and Weakly Type-3 or Type-4 (WT3/4): 0-50% [4]. Deep learning is a specific kind of machine learning and allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction [5]. Deep learning algorithms have been applied to solve problems of artificial intelligence fields and have produced promising results for some specific tasks such as natural language processing, computer vision, and image processing [6]. These advanced learning algorithms can also contribute to solving software engineering problems that are the analogous ones in artificial intelligence. Figure 1 shows the deep neural network architecture that consists of three layers-input layer, hidden layer, and output layer. The input features (e.g., similarity scores) are given to input nodes in the input layer. The nodes in the different layers are connected with connection weights and biases. The deep learning network model can possess multiple hidden layers so to improve the performance of the code clone classifier. In the clone classifier model, the Rectified Linear Unit (ReLU) is an activation function for the hidden layers and the softmax function is used as activation function for the output layer. As two nodes reside in the output layer, the prediction result of the clone classifier is either "Clone" or "Non-Clone" on the given method pairs. The rest of this paper is organized as follows. Section 2 overall describes the proposed approach to detect code clones using control flow graphs. Section 3 presents experimental results to demonstrate the effectiveness of the CFG-based clone detection. Section 4 summarizes related studies and Section 5 finally remarks conclusions and future research directions. Figure 2 illustrates the overall workflow of the proposed code clone detection framework. The clone detection system has the training and testing phase. 
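Before walking through the Figure 2 workflow, a minimal Keras sketch of the classifier architecture just described may help: four similarity-score inputs, ReLU hidden layers, and a two-node softmax output over {Clone, Non-Clone}. The hidden-layer width of 32 and the Adam optimizer are illustrative assumptions; only the activations, the input/output dimensions, and the eight hidden layers and 200 epochs reported later in the evaluation are taken from the text.

    from tensorflow import keras
    from tensorflow.keras import layers

    # Clone classifier sketch: 4 similarity scores in, softmax over {Clone, Non-Clone}.
    model = keras.Sequential(
        [keras.Input(shape=(4,))]
        + [layers.Dense(32, activation="relu") for _ in range(8)]  # eight hidden layers
        + [layers.Dense(2, activation="softmax")]
    )
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    # model.fit(x_train, y_train, epochs=200)  # 200 training iterations, per the evaluation setup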
The solid arrows show the flow of the training phase which produces a trained clone classifier from a list of features. The clone classifier determines if pairwise methods are clones to each other in the testing phase which is shown the dashed arrows in Figure 2. The clone detection system used BigCloneBench which is one of the popular benchmarks of code clones and has been used for clone-related studies. The BigCloneBench data are separated into the training and test datasets. The clone classifier model in the detection system is trained and optimized by the training dataset and then is evaluated by the test dataset. The first step of the training phase is to identify method modules from the given Java source files through lexical analysis and syntactic analysis. Pairwise method modules are constructed from the separate methods. The pairwise methods are used to train or test the deep learning model. The proposed clone detection system extracts syntactic and semantic features of code fragments from CFGs of the identified methods. A CFG is generated for each method. CFG features are generated from the CFG. The CFG feature sets are represented by feature vectors that could be computed effectively in the deep learning model. Similarity scores are used to quantify the degree of how similar two methods are. The similarity score of the pairwise method is calculated and is given to the input layer of the clone classifier. The range of similarity score is [0, 1] through the normalization step. The trained clone classifier can be produced after the training phase. In the testing phase, the similarity scores of the pairwise methods are created from the test dataset. The steps for generating the similarity score are the same as those of the training phase. The clone classifier can tell whether the given method pairs are clones or not using the similarity scores. APPROACH The proposed clone detection framework is based on a deep learning network model which is a clone classifier and determines the semantic similarity of the method using its corresponding control flow graph. A CFG is a directed graph and represents all possible execution paths of a method. It also encodes the behavior of the method at a higher level of abstraction. A set of features are extracted from the control flow graphs. Such a feature set includes node information and path information of the control flow graph. For the comparison of two methods, the pairwise feature sets are used to compute the similarity score. Figure 3 shows an example of Java source code and its corresponding control flow graph. In this example, the class Sample has only one method main and the control flow graph of the main method is presented on the right side of Figure 3. The node of the CFG is known as a basic block and represents expressions, if-statements, for-statements, etc. The edge of the CFG represents a possible control flow from the end of one block to the beginning of the other. As seen in Figure 3, CFGs also have a single entry and a single exit point. Figure 3. Example of control flow graph A path in a CFG is a sequence of directed edges which connect a sequence of distinct nodes. For example, 2_Path means any two distinct nodes are connected in the control flow graph. The proposed clone detection framework considers 1_Path, 2_Path, 3_Path, and 4_Path to compare pairwise methods. Table 1 lists the feature paths which are extracted from the control flow graph shown in Figure 3. 
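To make the path notions concrete, the sketch below builds a small CFG with networkx and enumerates simple paths with exactly k distinct nodes, treating a 1_Path as a single node. The node labels and the graph shape are illustrative assumptions, not the paper's Figure 3 example.

    import itertools
    import networkx as nx

    # Toy CFG: nodes are basic-block labels, edges are control-flow transfers.
    cfg = nx.DiGraph()
    cfg.add_edges_from([
        ("entry", "decl"), ("decl", "if"),
        ("if", "then"), ("if", "else"),
        ("then", "exit"), ("else", "exit"),
    ])

    def k_paths(graph, k):
        # Simple paths that visit exactly k distinct nodes (k - 1 edges).
        found = []
        for src, dst in itertools.permutations(graph.nodes, 2):
            for path in nx.all_simple_paths(graph, src, dst, cutoff=k - 1):
                if len(path) == k:
                    found.append(tuple(path))
        return found

    for k in (1, 2, 3, 4):
        paths = [(n,) for n in cfg.nodes] if k == 1 else k_paths(cfg, k)
        print(f"{k}_Path: {len(paths)} occurrences")

Counting how often each distinct instance occurs then yields per-path-type frequencies of the kind reported in Table 1.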
The first column shows the type of paths, and the second column gives the total frequency of each path type in the control flow graph. The last column shows the distinct instances of each path type and their frequencies. The summation of the frequencies of the path instances should be equal to the total number in the second column. EVALUATION The chosen feature set can seriously impact the performance of the clone classifier. Therefore, extracting meaningful features from a dataset is one of the key steps in developing an acceptable deep learning model. Figure 4 shows the CFG feature extraction algorithm that identifies a set of CFG features from a given method. The CFG features of each method are then used to compute the similarity score for the method comparison. Given a target method in a program, the proposed clone detection system extracts a set of CFG features from the corresponding control flow graph of the method. After creating the corresponding CFG of the input method, the detection system finds all paths on the control flow graph. The next step is to count the frequency of the CFG features. If the label of a CFG node on a CFG path is found in the feature set, the frequency of the CFG feature node increases by one. If the CFG node is found for the first time, it is added to the feature set with a frequency of one. The counting of the frequency of the CFG feature nodes is performed until all CFG paths have been considered. Finally, through the feature extraction algorithm, the CFG feature set is produced, holding pairs of feature node types and their frequencies. Input: Let MED be a target method in a program. Output: Let FeatureSet be a set of CFG features, FeatureSet = {(f_1, feq_1), ..., (f_n, feq_n)}. 1: aCFG ← generateCFG(MED) // Create the CFG by calling the CFG generator on MED 2: allPaths ← findAllPaths(aCFG) // Find all paths on the control flow graph 3: foreach path_i in allPaths do 4: foreach node_j in path_i do 5: if node_j ∈ FeatureSet then // Get the frequency to which node_j is mapped in FeatureSet 6: feq_j ← getFrequency(FeatureSet, node_j) 7: feq_j ← feq_j + 1 // Increase the frequency by one 8: else // Add node_j to FeatureSet if node_j is new 9: FeatureSet ← FeatureSet ∪ {(node_j, 1)} // Assign a frequency of one to node_j 10: return FeatureSet. The proposed clone detection system needs to train a binary-class classifier for code clone detection, which uses training and test datasets containing both true clone pairs and false clone pairs. For the training of the classifier, the similarity scores of each feature vector and its label are provided from the training dataset. True clones are labelled '1', while false clones are labelled '0'. Keras [7], an open-source machine learning library for deep learning, is used to train the clone classifier. Four input nodes and two output nodes are configured for the input layer and the output layer, respectively. Since the proposed clone detection approach considers four features of the control flow graph, the input layer holds four nodes which independently take in the four feature values. The output layer contains two nodes because the clone detection problem is a binary classification task that checks whether two methods are clones or not. The proposed clone detection system configures the deep learning model to include eight hidden layers and to run 200 training iterations (a minimal sketch of this configuration is given after this paragraph). Table 2 lists the node types of the control flow graph that should be considered to detect code clones.
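The following is a minimal sketch, not the authors' code, of a classifier with the configuration described above (four similarity-score inputs, eight ReLU hidden layers, a two-node softmax output, and 200 training epochs); the hidden-layer width, optimizer, loss, and batch size are illustrative assumptions.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

def build_clone_classifier(n_features=4, n_hidden_layers=8, hidden_width=32):
    model = keras.Sequential()
    model.add(keras.Input(shape=(n_features,)))        # four similarity scores in [0, 1]
    for _ in range(n_hidden_layers):                    # eight ReLU hidden layers
        model.add(layers.Dense(hidden_width, activation="relu"))
    model.add(layers.Dense(2, activation="softmax"))    # "Clone" vs. "Non-Clone"
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# X: similarity scores of method pairs, y: 1 for true clones, 0 for false clones.
X = np.random.rand(100, 4)             # placeholder data for illustration only
y = np.random.randint(0, 2, size=100)
model = build_clone_classifier()
model.fit(X, y, epochs=200, batch_size=32, verbose=0)
```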
The node type corresponds to a construct of Java programs such as expressions, control constructs, conditional constructs, and exception-handling constructs. The Java syntax of each node type is represented in the last column and follows BNF-style conventions: [expr] means zero or one occurrence of expr, {expr} means zero or more occurrences of expr, and (x | y) denotes either x or y. The proposed clone detection system extracts node types from methods and then generates the edges and paths which are made up of these nodes. Table 3 shows the training and test datasets for the clone classifier. Folder #4 in BigCloneBench is used as the training dataset since it has the largest number of true and false clone pairs. The true clone pairs in the training dataset include T1, T2, VST3, and ST3 clones, and the total number of true clones is 13,399, which is equal to the total number of false clones. MT3 and WT3/4 clones are intentionally excluded from the training dataset because they could contain noisy data which may produce false alarms in clone detection. For the testing of the clone classifier, the test dataset is made from Java source files in four folders of BigCloneBench, namely #2, #3, #7, and #10. The source files in folder #4 are excluded because they are used as the training dataset. Unlike the training dataset, the test dataset contains only true clone pairs without false clone pairs. The similarity scores of the pairwise methods are computed through the same procedures as in the training phase. These procedures include the CFG construction, the CFG path generation, the CFG feature extraction, and the similarity score computation. Figure 5 shows the experimental results on the test dataset. The proposed clone classifier has been evaluated with different similarity thresholds in order to check how the similarity threshold affects the performance of the clone detection. The classifier reports a clone when the likelihood value is greater than or equal to the similarity threshold. In these experiments, the clone classifier is configured with 8 hidden layers and 200 epochs. The proposed detection system effectively identified T1, T2, and VST3 clones: the recall results are close to 100% (except on the folder #2 dataset) for the different similarity thresholds 0.95, 0.96, 0.97, and 0.98. The proposed classifier also detects ST3 clones effectively on the datasets #2 and #10, where the recall values are greater than 90%. With the datasets #3 and #7, the performance of the clone detection degrades slightly as the detection thresholds become larger. The detection system did not show desirable detection performance on the MT3 and WT3/4 clones. It is still challenging to detect semantic clone types, even though the proposed approach is based on control flow graphs, which may represent more abstract perspectives of semantically similar code blocks than token-based approaches. Deep neural network models may be affected by hyperparameters such as the number of hidden layers and the number of epochs. Tables 4 and 5 report this sensitivity; in particular, Table 5 shows how the number of epochs impacts the performance of the clone detection. The proposed clone classifier works best when the number of epochs is 200. RELATED WORK Many studies have been conducted to overcome research challenges in detecting code clones across source code. Most existing clone detection methodologies can be categorized into five types: text-based [8], token-based [9], tree-based [10], graph-based [11], and metrics-based [12] approaches.
One of the emerging research trends is to leverage deep learning algorithms to enhance the performance of existing clone detection strategies. The promising outcomes of deep learning affect many other fields beyond the artificial intelligence community. Software engineers have been actively applying deep learning to solve typical problems in software engineering such as clone detection, bug prediction, and security prediction. Sheneamer and Kalita's clone detection approach [13] uses typical machine learning algorithms to detect semantic code clones. They use a supervised learning approach where semantic features are extracted from ASTs and PDGs and training data are labelled as clones or non-clones. Their approach is based on machine learning algorithms, not deep learning algorithms. White et al. [14] propose an approach to detect code clones by combining a recurrent neural network with a recursive neural network at the method and file levels. The experimental results show their methodology is feasible in some cases, but they still need to conduct more case studies on popular clone benchmarks. Li et al. [15] provide a token-based clone detection approach using a deep learning-based clone classifier. They extract feature vectors by tokenizing method pairs and then compute similarity scores of the feature vectors. The clone classifier is trained with eight similarity scores of known true clones and false clones. In the testing phase, the trained clone classifier predicts code clones in a codebase of unknown clones. This approach still has room for improvement in finding semantic clones like Type-3 and Type-4 clones. Saini et al. [16] propose a clone detection approach that focuses on harder-to-detect semantic clones. Their approach is based on a deep neural network with a Siamese architecture in which information retrieval and metric-based methods are combined. To detect semantic clones, their approach excludes semantically dissimilar clones using a semantic filter instead of using semantic features. Phan et al.'s work [17] shows that CFG-based deep learning can be used for software defect prediction as well as clone detection. They apply convolutional neural networks to predict software defects using control flow graphs. Control flow graphs are built from assembly files and then represented as vectors which are given to convolutional neural networks. The convolutional neural network explores the behavior of the target code using vector representations and reports software defects in unseen datasets after being trained with training datasets. CONCLUSIONS AND FUTURE WORK This paper presents a code clone detection framework that effectively finds clone types with a deep learning-based clone classifier. The proposed approach to clone detection is based on the extraction of features from CFGs of given code fragments. The CFG features are represented as feature vectors so that they can be compared to determine whether code fragments are similar or dissimilar. The clone detection classifier is trained and tested with similarity scores that are computed from the feature vectors. The proposed detection framework effectively found syntactic clone types such as T1, T2, and VST3 clones. It also identified ST3 clones with acceptable, but not excellent, recall results. In the case of semantic clone types such as MT3 and WT3/4 clones, the detection performance still needs to be improved.
Although the proposed approach to clone detection needs to be improved further, the promising experimental results suggest that deep learning-based clone detection classifiers can be effective in finding code clones. In the future, more code clones will be explored to enhance the proposed clone detection on semantic clone types. Furthermore, unsupervised deep learning algorithms can be considered to overcome the weaknesses of the clone classifier trained with supervised learning. The clone detection framework will also be applied to other programming languages to extend the generality of the proposed clone detection methods.
4,873.4
2019-10-01T00:00:00.000
[ "Computer Science" ]
PARTIAL UPDATE ALGORITHMS AND ECHO DELAY ESTIMATION In this paper, we introduce methods for extracting an echo delay between speech signals using adaptive filtering algorithms. Time delay estimation is an initial step for many speech processing applications. Conventional techniques that estimate a time difference of arrival between two signals are based on the peak determination of the generalized cross-correlation between the signals. To achieve good precision and stability in estimation, the input sequences have to be multiplied by an appropriate weighting function. Regularly, the weighting functions depend on the signals' power spectra. The spectra are generally unknown and have to be estimated in advance. An implementation of the time delay estimation via the adaptive least mean squares is analogous to estimating the Roth generalized cross-correlation weighting function. The estimated parameters using the proportionate and partial-update adaptive filters have a smaller variance, because the need for spectrum estimation is avoided. In the following, we discuss proportionate and partial-update adaptive techniques and consider their performance in terms of delay estimation. Introduction Time delay estimation (TDE) has always been and remains a popular research topic. It finds application in many areas of electrical engineering [1][2][3][4]. As technology advances and data transmission methods shift toward packet-switching concepts, the traditional echo problem remains important. An issue in echo analysis is the round-trip delay of the network. The main problem associated with IP-based networks is that the round-trip delay can never be reduced below its fundamental limit. There is always a delay of at least two to three packet durations (50 to 80 ms) [5] that can make the existing network echo more audible [6]. A number of efforts have been made in order to improve the TDE precision. Various methods based on the Generalized Cross-Correlation (GCC) were recently proposed [7][8][9][10]. The GCC algorithms essentially apply a pre-filter to obtain a modified signal spectrum for optimal time delay estimation. Specifying the filter's characteristic requires a priori knowledge of the statistics of the received signals. However, the efficiency of these algorithms decreases considerably when little or no prior knowledge about the signal statistics is available. Since B. Widrow proposed an adaptive filtering technique based on Least Mean Squares (LMS) [11][12][13], adaptive theory has also found application in delay estimation. An adaptive implementation of the time delay estimation via Widrow's LMS algorithm is usually referred to as TDLMS. Compared to the GCC algorithms, the adaptive filtering techniques do not require a priori information about the signal statistics, because the estimation of the signal spectrum is no longer needed. The adaptive filtering algorithms determine the time delay in an iterative manner. Comparative studies of the LMS versus the generalized cross-correlation are available [14], [15]. Generally, the time domain implementation of any adaptive filter is associated with high computational complexity, which directly depends on the length of the adaptive filter [16]. In order to reduce the computational load of the TDLMS, we propose using adaptive filtering algorithms with reduced computational complexity [17][18][19]. Time Domain Adaptive Techniques Traditionally, in the implementation of the echo canceller, the NLMS algorithm serves as a reference [13]. Basically, the NLMS algorithm is a simple extension of Widrow's LMS algorithm [12].
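As a rough illustration of the TDLMS idea, the sketch below (not from the paper; the filter length, step size, and synthetic signals are assumed values) runs a plain NLMS adaptation and reads the delay off as the index of the dominant filter weight, anticipating the selection step discussed next.

```python
import numpy as np

def nlms_delay_estimate(x, d, L=256, mu=0.1, eps=1e-6):
    """x: far-end (reference) signal, d: observed echo signal, L: filter length."""
    w = np.zeros(L)
    for n in range(L - 1, len(x)):
        x_vec = x[n - L + 1:n + 1][::-1]             # [x(n), x(n-1), ..., x(n-L+1)]
        e = d[n] - w @ x_vec                          # a-priori estimation error
        w += mu * e * x_vec / (x_vec @ x_vec + eps)   # normalized LMS update
    return int(np.argmax(np.abs(w)))                  # delay = index of the dominant tap

# Synthetic example: an echo attenuated by 0.5 and delayed by 40 samples.
rng = np.random.default_rng(0)
x = rng.standard_normal(8000)
d = 0.5 * np.roll(x, 40)
d[:40] = 0.0
print(nlms_delay_estimate(x, d))   # expected output close to 40
```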
Knowing adaptive filter theory, the delay estimate can be obtained simply by selecting the index of the largest value in the adaptive filter weight vector, w. There is only one issue that has to be taken into account: the adaptive filter needs some time in order to converge to its optimal performance. The existing adaptive algorithms differ from each other in their convergence properties and in their computational and memory requirements. Robust, fast-converging algorithms are primarily used in acoustic echo cancellation applications, and they take a lot of computational resources. In our case, it is not necessary to apply such complex algorithms, because the adaptive filter is not used directly for echo cancellation but for delay estimation. Therefore, reduced-complexity adaptive filtering algorithms became the subject of our interest. Proportionate Adaptive Filtering The proportionate normalized least mean squares (PNLMS) algorithm proposed in [20] has been developed especially for use in the telephone network environment. For hybrid echo cancellers, it is reasonable to assume that the echo path has a sparse character (i.e., many impulse response (IR) coefficients are close to zero). Although there are studies and research on multiple-reflection echo paths [17], a typical echo path impulse response in practical communication networks has only one reflection, which means that all the active coefficients occupy a contiguous region of the whole echo span. Proportionate approaches achieve their higher convergence rate by using the fact that the active part of the network echo path is usually much shorter (4-8 ms) than the 64-128 ms of the whole echo path that has to be covered by the adaptive filter. In the case of voice transmission over packet-switching networks, these numbers may be considerably larger [5]. In the PNLMS algorithm, adaptive step-size parameters are assigned to all the filter coefficients. They are calculated from the last estimate of the filter weights in such a way that a larger coefficient receives a larger increment. As a result, the convergence rate is increased by the fact that the active taps are adjusted faster than the non-active coefficients.
Therefore, for a sparse IR, the PNLMS algorithm converges much faster than the NLMS. This feature is an advantage especially when long echo delays have to be estimated. The PNLMS algorithm can be described using the following equations [21]: $\mathbf{w}(n) = \mathbf{w}(n-1) + \dfrac{\mu_0\,\mathbf{G}(n-1)\,\mathbf{x}(n)\,e(n)}{\mathbf{x}^{T}(n)\,\mathbf{G}(n-1)\,\mathbf{x}(n)}$, where G(n−1) is a diagonal matrix adjusting the step-size parameters and μ0 is an overall step-size parameter. The diagonal elements of G(n) are estimated as follows: $g_l(n) = \dfrac{\gamma_l(n)}{\frac{1}{L}\sum_{i=1}^{L}\gamma_i(n)}$, with $\gamma_l(n) = \max\{\rho\,\max[\delta_p, |w_1(n)|, \ldots, |w_L(n)|],\ |w_l(n)|\}$. Parameters δp and ρ are positive numbers with typical values δp = 0.01 and ρ = 5/L. The first term in (5), ρ, prevents wl(n) from stalling when it is much smaller than the largest coefficient, and δp regularizes the updating when all coefficients have zero values at initialization. Besides sparse system identification, which is a vital requirement for fast-converging adaptive filters, there is another requirement, directly concerning the adaptive filter implementation: the algorithm should have reasonable computational demands. Unfortunately, the PNLMS algorithm has several drawbacks. One of them is an increase in the computational complexity by 50% compared to the NLMS algorithm. Furthermore, the PNLMS algorithm shows a slow convergence rate after the fast initial start, because of the slow adaptation of the small coefficients [22]. The increased computational complexity can be reduced by selective partial updating. In turn, the slow convergence of the PNLMS in the steady state can be improved by switching from the PNLMS to the NLMS equations after the fast initial convergence has been achieved [23]. Partial-Update Adaptive Filtering The partial-update algorithms can be seen to exploit the sparseness of the echo path in two different ways. It is known that when the unknown system's impulse response is sparse, many of the adaptive filter's weights can be approximated by zero. Alternatively, the sparseness may be present in the weight update vector as a consequence of the distribution of the input samples in the (L×1) input vector x(n). In both these cases, exploiting the sparseness properties can reduce the complexity and improve the performance of the adaptive algorithm [24], [25]. Some of the first work on partial-update algorithms was done by Douglas [26], who presents the periodic and sequential updating schemes for the Max-NLMS algorithm. However, these partial-update algorithms show slow convergence properties compared to the full-update algorithms, the reason being their inconsistent updating schemes. More recently, the partial-updating concept was developed by Aboulnasr [27], leading to the M-Max NLMS algorithm and a supporting convergence analysis [28]. Another block-updating scheme for the NLMS algorithm was studied by Schertler [29]. A later work was published by Dogancay and Tanrikulu, who consider approaches for a more robust Affine Projection Algorithm (APA) [30], [31]. M-Max NLMS The algorithm selects a specified number of coefficients providing the largest reduction in the mean squared error per iteration [32]. Only M out of the total L filter coefficients are updated. Those M coefficients are the ones associated with the M largest values within the vector |x(n − i + 1)|, i = 1, …, L. The update equations for this algorithm follow the NLMS recursion restricted to the selected M coefficients. One of the features of the M-Max NLMS algorithm is that it reduces the complexity of the adaptive filter by selectively updating the coefficients while maintaining the closest performance to the full-update NLMS algorithm.
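A minimal sketch (not the authors' implementation; the step size and regularization constant are assumed values) of one M-Max NLMS iteration, updating only the M coefficients aligned with the largest-magnitude input samples:

```python
import numpy as np

def mmax_nlms_update(w, x_vec, d_n, M, mu=0.5, eps=1e-6):
    """One M-Max NLMS iteration: adapt only the M taps with the largest |x|."""
    e = d_n - w @ x_vec                                    # a-priori error
    idx = np.argpartition(np.abs(x_vec), -M)[-M:]          # indices of the M largest |x|
    w = w.copy()
    w[idx] += mu * e * x_vec[idx] / (x_vec @ x_vec + eps)  # partial NLMS update
    return w, e
```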
We present misalignment curves for the algorithm in the follow-up section. Selective-partial-update NLMS This algorithm, as opposed to the M-Max NLMS, has a block structure. The objective behind it is the same: it reduces the computational cost by updating a subset of the filter coefficients. But first, the vector x(n) and the coefficient vector w(n) are arranged into K blocks of length M = L/K, where L is an integer, as in (7). The coefficient vector's blocks w1(n), w2(n), …, wK(n) represent candidate subsets that can be updated during the current iteration. For a single-block updating scheme, the constrained minimization problem solved by the NLMS algorithm is restricted to the selected block. The selection of the block that has to be updated is made by determining the block with the smallest squared-Euclidean-norm update [30]. According to (9), this criterion can be expressed in terms of the block quantities x_B and w_B defined in (12). The computational and memory requirements of the selective-partial-update NLMS algorithm are almost identical to those of the selective-block-update algorithm proposed in [28]. Nevertheless, the simulation results illustrated in the next section show that this approach does not lead to a reasonable trade-off between performance and simplicity. The algorithm's efficiency is weaker than that of the M-Max NLMS algorithm. As an alternative approach, a sparse-partial-update NLMS algorithm applies a more relevant selection criterion. Sparse-partial-update NLMS This algorithm utilizes a so-called sparse-partial (SP) weight selection criterion [33]. The adaptive filter weights are updated based on the largest product of x(n) and w(n). The SP-NLMS single-block update equations, given in (14), follow the NLMS recursion applied to the selected block. Hongyang and Dyba recently suggested a generalization for updating B blocks out of K [17], given in (16)-(17). Simple-partial-update PNLMS The approach is based on the proportionate technique and partial updating of the adaptive filter coefficients. The algorithm exploits the sparseness of the communication channel to speed up the initial convergence and employs the partial updating scheme to reduce the computational complexity. A selection procedure is performed in accordance with the estimated magnitude of the channel's impulse response. The S-PNLMS algorithm for a single-block update is defined as follows. Arrange x(n) and w(n) into K blocks of length M = L/K in the same way as in (7) and (8). Then let Gi(n) denote the corresponding M × M block of the diagonal weighting matrix G(n). The recursion for updating the adaptive filter weights is given in (18), where the block selection criterion differs from the one used with the SPU-PNLMS algorithm [30]. It is apparent from the simulations that the S-PNLMS has similar performance to the SP-NLMS and outperforms the SPU-PNLMS algorithm. Its misalignment curves are presented in the next section. The S-PNLMS update equations for B blocks out of K are given in (19). Further, we provide the comparison results for the presented algorithms and demonstrate their performance while estimating the predefined echo delay. Table 2 illustrates the computational complexity of the full-update algorithms and shows the savings achieved by the partial-update schemes.
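One plausible reading (an assumption, not the paper's exact formulation) of the sparse-partial selection step described above is sketched below: the K blocks are ranked by the magnitude of the elementwise product of x(n) and w(n), and only the winning block is adapted.

```python
import numpy as np

def sp_nlms_block_update(w, x_vec, d_n, K, mu=0.5, eps=1e-6):
    """One sparse-partial (SP) NLMS iteration with a single-block update.
    Block selection: largest sum of |x * w| over the block (assumed criterion)."""
    L = len(w)
    M = L // K                                          # block length
    e = d_n - w @ x_vec                                 # a-priori error
    scores = np.abs(x_vec * w).reshape(K, M).sum(axis=1)
    b = int(np.argmax(scores))                          # index of the selected block
    sl = slice(b * M, (b + 1) * M)
    w = w.copy()
    w[sl] += mu * e * x_vec[sl] / (x_vec @ x_vec + eps)
    return w, e
```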
The only downside is that, in order to find the M largest outputs or inputs, the output or input values have to be sorted. If a fast sorting algorithm is chosen [34], only 2 log2(L) + 2 comparisons are required. For large L and small M, which is appropriate for a sparse impulse response, large computational savings are expected. Results of experiments To evaluate the performance of the algorithms, we implemented an adaptive filter in MATLAB. The filter has to estimate the predefined echo path impulse responses specified in the ITU-T Recommendation [35]. The overall step-size parameter, μ0, is chosen to be 0.1. The control parameters ρ and δp are chosen to be 0.001 and 0.01, respectively. For simplicity, a double-talk situation is not considered. In the first part of the experiment, we look at the misalignment curves of the M-Max-, SPU-, SP- and S-PNLMS algorithms. They are illustrated in Fig. 1 below. The SPU updating scheme produces the worst results. The proposed S-criterion considerably outperforms it, especially in terms of the initial convergence speed. The rest of the algorithms have nearly the same convergence and tracking performance. All the algorithms, except the M-Max-PNLMS, show poor results when the M value equals 64. This can be explained by the fact that the active part of the IR is approximately 16 ms long. This value corresponds to 128 samples at a sampling frequency of 8 kHz; therefore, 64 samples are not enough to cover the active region completely. Owing to its different selection criterion, the M-Max-PNLMS algorithm can deal relatively well with that problem. The Max-updating formula does not rely on the sparse character of the IR; it performs the selection according to the distribution of the values of the input vector. Its drawback, however, is a lower initial convergence speed compared to the SP-PNLMS algorithm. The second part of our experiment concerns the performance of the adaptive algorithms versus the ones based on the generalized cross-correlation function. They are compared in the context of the time delay estimation. Table 2. Comparison of computational complexity. Conclusion The presented paper is a comparative study of the partial-update algorithms and their application to time delay estimation. When delivering a VoIP service in a packet-switching network, it is important to keep the value of the echo delay under control. The increasing transmission delay associated with packet data transmission can make a negligible echo more annoying. Therefore, we suggest using an echo assessment algorithm based on the reduced-complexity partial-update adaptive filters. If the estimated echo is considerably delayed, it can be audible to the user; in that case, additional attenuation has to be applied to the particular channel, or an echo canceller has to be activated to remove the echo. The experiments show a reliable performance of these algorithms. Their precision only suffers at the initial stage, when the adaptive filter's coefficients have not yet converged to the optimum values. According to the ITU-T Recommendation G.168, this period should not last more than one second. Taking into account that the generalized cross-correlation algorithms operate in the frequency domain and exploit the fast Fourier transform, further computational savings for the adaptive filters can be achieved. This can be done through multi-delay filters, which outperform their time-domain counterparts in terms of convergence rate and complexity.
Therefore, the multi-delay filters and their implementation aspects are the next subject of our research in adaptive filtering theory. Acknowledgement Research described in the paper was supervised by Prof. Ing. B. Simak, CSc., FEL CTU in Prague and supported by Czech Technical University grant SGS10/275/OHK3/3T/13 and by the Ministry of Education, Youth and Sports of the Czech Republic under the research program MSM 6840770014. Table 3. Mean values of the estimated echo delays.
3,900.4
2011-06-30T00:00:00.000
[ "Computer Science" ]
N-Best ASR Transformer: Enhancing SLU Performance using Multiple ASR Hypotheses Spoken Language Understanding (SLU) systems parse speech into semantic structures like dialog acts and slots. This involves the use of an Automatic Speech Recognizer (ASR) to transcribe speech into multiple text alternatives (hypotheses). Transcription errors, common in ASRs, impact downstream SLU performance negatively. Approaches to mitigate such errors involve using richer information from the ASR, either in form of N-best hypotheses or word-lattices. We hypothesize that transformer models learn better with a simpler utterance representation using the concatenation of the N-best ASR alternatives, where each alternative is separated by a special delimiter [SEP]. In our work, we test our hypothesis by using concatenated N-best ASR alternatives as the input to transformer encoder models, namely BERT and XLM-RoBERTa, and achieve performance equivalent to the prior state-of-the-art model on DSTC2 dataset. We also show that our approach significantly outperforms the prior state-of-the-art when subjected to the low data regime. Additionally, this methodology is accessible to users of third-party ASR APIs which do not provide word-lattice information. Introduction Spoken Language Understanding (SLU) systems are an integral part of Spoken Dialog Systems. They parse spoken utterances into corresponding semantic structures e.g. dialog acts. For this, a spoken utterance is usually first transcribed into text via an Automated Speech Recognition (ASR) module. Often these ASR transcriptions are noisy and erroneous. This can heavily impact the performance of downstream tasks performed by the SLU systems. * The first three authors have equal contribution. To counter the effects of ASR errors, SLU systems can utilise additional feature inputs from ASR. A common approach is to use N-best hypotheses where multiple ranked ASR hypotheses are used, instead of only 1 ASR hypothesis. A few ASR systems also provide additional information like wordlattices and word confusion networks. Word-lattice information represents alternative word-sequences that are likely for a particular utterance, while word confusion networks are an alternative topology for representing a lattice where the lattice has been transformed into a linear graph. Additionally, dialog context can help in resolving ambiguities in parses and reducing impact of ASR noise. N-best hypotheses: Li et al. (2019) work with 1-best ASR hypothesis and exploits unsupervised ASR error adaption method to map ASR hypotheses and transcripts to a similar feature space. On the other hand, Khan et al. (2015) uses multiple ASR hypotheses to predict multiple semantic frames per ASR choice and determine the true spoken dialog system's output using additional context. Wordlattices: Ladhak et al. (2016) propose using recurrent neural networks (RNNs) to process weighted lattices as input to SLU.Švec et al. (2015) presents a method for converting word-based ASR lattices into word-semantic (W-SE) which reduces the sparsity of the training data. Huang and Chen (2019) provides an approach for adapting lattices with pretrained transformers. Word confusion networks (WCN): Jagfeld and Vu (2017) proposes a technique to exploit word confusion networks (WCNs) as training or testing units for slot filling. Masumura et al. 
(2018) models WCN as sequence of bag-of-weighted-arcs and introduce a mechanism that converts the bag-of-weighted-arcs into a continuous representation to build a neural network based spoken utterance classification. Liu et al. (2020) proposes a BERT based SLU model to encode WCNs and the dialog context jointly to reduce ambiguity from ASR errors and improve SLU performance with pre-trained models. The motivation of this paper is to improve performance on downstream SLU tasks by exploiting transfer learning capabilities of the pre-trained transformer models. Richer information representations like word-lattices (Huang and Chen (2019)) and word confusion networks (Liu et al. (2020)) have been used with GPT and BERT respectively. These representations are non-native to Transformer models, that are pre-trained on plain text sequences. We hypothesize that transformer models will learn better with a simpler utterance representation using concatenation of the N-best ASR hypotheses, where each hypothesis is separated by a special delimiter [SEP]. We test the effectiveness of our approach on a dialog state tracking dataset -DSTC2 (Henderson et al., 2014), which is a standard benchmark for SLU. Contributions: (i) Our proposed approach, trained with a simple input representation, exceeds the competitive baselines in terms of accuracy and shows equivalent performance on the F1-score to the prior state-of-the-art model. (ii) We significantly outperform the prior state-of-the-art model in the low data regime. We attribute this to the effective transfer learning from the pre-trained Transformer model. (iii) This approach is accessible to users of third party ASR APIs unlike the methods that use word-lattices and word confusion networks which need deeper access to the ASR system. N-Best ASR Transformer N-Best ASR Transformer 1 works with a simple input representation achieved by concatenating the N-Best ASR hypotheses together with the dialog context (system utterance). Pre-trained transformer models, specifically BERT and XLMRoBERTa, are used to encode the input representation. For output layer, we use a semantic tuple classifier (STC) to predict act-slot-value triplets. The following sub-sections describe our approach in detail. Input Representation For representing the input we concatenate the last system utterance S (dialog context), and the user utterance U . U is represented as concatenation of the N-best 2 ASR hypotheses, separated by a special delimiter, [SEP]. The final representation is shown in equation 1 below: As represented in figure 2, we also pass segment IDs along with the input to differentiate between segment a (last system utterance) and segment b (user utterance). Transformer Encoder The above mentioned input representation can be easily used with any pre-trained transformer model. For our experiments, we select BERT (Devlin et al., 2019) and XLM-RoBERTa 3 (Conneau et al., 2020) for their recent popularity in NLP research community. Output Representation The final hidden state of the transformer encoder corresponding to the special classification token [CLS] is used as an aggregated input representation for the downstream classification task by a semantic tuple classifier (STC) (Mairesse et al., 2009). STC uses two classifiers to predict the actslot-value for a user utterance. A binary classifier is used to predict the presence of each act-slot pair, and a multi-class classifier is used to predict the value corresponding to the predicted act-slot pairs. 
We omit the latter classifier for the act-slot pairs with no value (like goodbye, thankyou, request food etc.). The input representation is encoded by a transformer model which forms an input for a Semantic Tuple Classifier (STC). STC uses binary classifiers to predict the presence of act-slot pairs, followed by a multi-class classifier that predicts the value for each act-slot pair. Dataset We perform our experiments on data released by the Dialog State Tracking Challenge (DSTC2) (Henderson et al., 2014). It includes pairs of utterances and the corresponding set of act-slot-value triplets for training (11,677 samples), development (3,934 samples), and testing (9,890 samples). The task in the dataset is to parse the user utterances like "I want a moderately priced restaurant." into a corresponding semantic representation in the form of "inform(pricerange=moderate)" triplet. For each utterance, both the manual transcription and a maximum of 10-best ASR hypotheses are provided. The utterances are annotated with multiple actslot-value triplets. For transcribing the utterances DSTC2 uses two ASRs -one with an artificially degraded statistical acoustic model, and one which is fully optimized for the domain. Training and development sets include transcriptions from both the ASRs. To utilise this dataset we first transform it into the input format as discussed in section 2.1. Baselines We compare our approach with the following baselines: • SLU2 (Williams, 2014): Two binary classifiers (decision trees) are used with word ngrams from the ASR N-best list and the word confusion network. One predicts the presence of that slot-value pair in the utterance and the other estimate for each user dialog act. 2016): A convolution neural network (CNN) is trained with the N-best ASR hypotheses to output the utterance representation. A longshort term memory network (LSTM) with a context window size of 4 outputs a context representation. The models are jointly trained to predict for the act-slot pair. Another model with the same architecture is trained to predict for the value corresponding to the predicted act-slot pair. • CNN (Zhao and Feng, 2018): Proposes CNN based models for dialog act and slot-type prediction using 1-best ASR hypothesis. • Hierarchical Decoding (Zhao et al., 2019): A neural-network based binary classifier is used to predict the act and slot type. A hybrid of sequence-to-sequence model with attention and pointer network is used to predict the value corresponding to the detected actslot pair.1-Best ASR hypothesis was used for both training and evaluation tasks. • WCN-BERT + STC (Liu et al., 2020): Input utterance is encoded using the Word Confusion Network (WCN) using BERT by having the same position ids for all words in the bin of a lattice and modifying self-attention to work with word probabilities. A semantic tuple classifier uses a binary classifier to predict the act-slot value, followed by a multi-class classifier that predicts the value corresponding to the act-slot tuple. Experimental Settings We perform hyper-parameter tuning on the validation set to get optimal values for dropout rate δ, learning rate lr, and the batch size b. Based on the best F1-score, the final selected parameters were δ = 0.3, lr = 3e-5 and b = 16. We set the warm-up rate wr = 0.1, and L2 weight decay L2 = 0.01. We make use of Huggingface's Transformers library (Wolf et al., 2020) to fine-tune the bert-base-uncased and xlm-roberta-base, which is optimized over Huggingface's BertAdam optimizer. 
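A minimal sketch (not the authors' code; the utterances and sequence length are made-up examples) of how the concatenated N-best input described in section 2.1 can be built and encoded with the Huggingface tokenizer, with the last system utterance as segment A and the [SEP]-joined hypotheses as segment B:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

system_utt = "what part of town do you have in mind"          # hypothetical dialog context
n_best = ["i want a moderately priced restaurant",
          "i want a moderate price restaurant",
          "i want the moderately priced restaurant"]

user_utt = " [SEP] ".join(n_best)                              # N-best hypotheses joined by [SEP]
enc = tokenizer(system_utt, user_utt,
                truncation=True, padding="max_length", max_length=128,
                return_token_type_ids=True, return_tensors="pt")
# enc["token_type_ids"] carries the segment IDs separating segment A from segment B.
```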
We trained the model on Nvidia T4 single GPU on AWS EC2 g4dn.2xlarge instance for 50 epochs. We apply early stopping and save the best-performing model based on its performance on the validation set. Results In this section, we compare the performance of our approach with the baselines on the DSTC2 dataset. To compare the transfer learning effectiveness of pre-trained transformers with N-Best ASR BERT (our approach) and the previous state-of-the-art model WCN-BERT STC, we perform comparative analysis in the low data regime. Additionally, we perform an ablation study on N-Best ASR BERT to see the impact of modeling dialog context (last system utterance) with the user utterances. Since the task is a multi-label classification of actslot-value triplets, we report utterance level accuracy and F1-score. A prediction is correct if the set of labels predicted for a sample exactly matches the corresponding set of labels in the ground truth. As shown in Table 1, we compare our models, N-Best ASR BERT and N-Best ASR XLM-R, with baselines mentioned in section . Both of our proposed models, trained with concatenated N-Best ASR hypotheses, outperform the competitive baselines in terms of accuracy and show comparable performance on F1-score with WCN-BERT STC. To study the performance of model in the low data regime, we randomly select p percentage of samples from the training set in a stratified fashion, where p ∈ {5, 10, 20, 50}. We pick our model N-Best ASR BERT and WCN-BERT STC for this study because both use BERT as the encoder model. For both models, we perform experiments using the same training, development, and testing splits. From Table 2, we find that N-Best ASR BERT outperforms WCN-BERT STC model significantly for low data regime, especially when trained on 5% and 10% of the training data. It shows that our approach effectively transfer learns from pre-trained transformer's knowledge. We believe this is due to the structural similarity between our input representation and the input BERT was pre-trained on. Significance of Dialog Context Model Variation F1-score Accuracy N-Best ASR BERT without system utterance 86.5 80.2 with system utterance 87.8 81.8 Table 3: F1-scores (%) and utterance-level accuracy (%) of our model N-Best ASR BERT on the test set when trained with and without system utterances. Through this ablation study, we try to understand the impact of dialog context on model's performance. For this, we train N-Best ASR BERT in the following two settings: • When input representation consists of only the user utterance. • When input representation consists of both the last system utterance (dialog context) and the user utterance as shown in figure 3. As presented in Table 3, we observe that modeling the last system utterance helps in achieving better F1 and utterance-level accuracy by the difference of 1.3% and 1.6% respectively. It proves that dialog context helps in improving the performance of downstream SLU tasks. Figure 3 represents one such example where having dialog context in form of the last system utterance helps disambiguate between the two similar user utterances. Conclusion In this work, building on a simple input representation, we propose N-Best ASR Transformer, which outperforms all the competitive baselines on utterance-level accuracy for the DSTC2 dataset. However, the highlight of our work is in achieving significantly higher performance in an extremely low data regime. 
This approach is accessible to users of third-party ASR APIs, unlike the methods that use word-lattices and word confusion networks. As future extensions to this work, we plan to: • Enable our proposed model to generalize to out-of-vocabulary (OOV) slot values. • Evaluate our approach in a multilingual setting. • Evaluate different values of N in the N-best ASR input. • Compare the performance of our approach on ASRs with different word error rates (WERs).
3,061.8
2021-06-11T00:00:00.000
[ "Computer Science" ]
Multimode fibre probe calibration Multimode fibres (MMF) used in endoscopy have the advantages of small diameter and flexibility, thus causing less damage to living animals. However, the imaging requires wavefront shaping techniques to obtain a sharp image despite the mode dispersion in the waveguide. We suggest a version of transmission matrix calibration which uses internal modes of the waveguide and thus lessens the requirements on the endoscopy apparatus by removing the external reference path. Endoscopy imaging The prevalence of neurodegenerative diseases connected with the increasing age of the population requires preclinical studies of animal models to develop a knowledge base of neuronal processes and blood flow deep in the brain. Especially experiments in vivo are of immense importance for the future of the research of Alzheimer's, Parkinson's and other diseases in order to translate the results to human medicine [1]. Measurements in brain tissue require access to the deep layers, but the penetration of microscopy techniques does not go deeper than approximately 1.5 mm. Endoscopy probes come in different sizes and imaging qualities, from several millimetres in diameter (GRIN lenses) [2], over fibre bundles [3], to multimode optical fibres (MMF) [4]. Especially the latter are of increasing popularity for their narrow footprint and flexibility. Obtaining an image using an MMF probe is not straightforward, though. Due to the different group velocities of the fibre modes, the phases are mixed and the resulting image is an apparently chaotic distribution of speckles. To characterize the fibre and achieve successful imaging, it is necessary to map the system response described by a transmission matrix (TM) [5,6]. The calibration process requires a highly homogeneous reference beam in order to extract the modes' phases using phase-shifting interferometry. Although a high-quality matrix (as well as the resulting endoscopic images) is obtained, the experimental setup is somewhat complicated by the necessity of introducing a reference beam in the calibration part of the apparatus and of the overall phase stabilization of the optical paths, so as not to introduce artefacts into the measurement. A suggested alternative calibration technique uses internal modes of the endoscopic probe as a reference for TM calibration. However, the intensity distribution of the reference beam in the sample plane is a speckle pattern, which results in blind spots in the measured TM due to the lack of interference signal in the reference mode intensity minima. One of the previous ideas was to run the calibration procedure a second time with another mode set as the reference field. Since the speckle patterns are different, it is possible to use the second measurement to cover the holes in the TM [7]. Our approach is slightly different. Running several procedures, each with a different single internal reference mode, we obtain N "holey" TMs. Every output mode is defined as a superposition of all the input modes with phase and amplitude measured with reference R1 in the first measurement. The same output mode measured with reference Rn is reconstructed from input modes with a constant phase shift Δφ = φR1 − φRn; the value of Δφ is revealed by the dot product of the input mode vectors. The combined TM is then created by adding all the particular TMs to TM1, where the added columns are shifted by Δφ and negated. Although the calibration procedure is time-consuming due to the repetition of the calibration task with different references, it results in a high-quality homogeneous image.
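The merging step described above can be sketched roughly as follows (a schematic illustration only; the thresholding of reliable entries, the sign convention of Δφ, and the fill strategy are assumptions, not the authors' exact procedure):

```python
import numpy as np

def combine_tms(tms, masks):
    """tms: list of complex transmission matrices (outputs x inputs) measured with
    different internal reference modes; masks: boolean arrays, True where an entry
    was measured reliably (reference intensity above a chosen threshold)."""
    combined = tms[0].copy()
    filled = masks[0].copy()
    for tm, mask in zip(tms[1:], masks[1:]):
        overlap = filled & mask
        # Global phase offset of this measurement relative to the running combination,
        # estimated from the dot product over the commonly well-measured entries.
        dphi = np.angle(np.vdot(tm[overlap], combined[overlap]))
        corrected = tm * np.exp(1j * dphi)
        fill = mask & ~filled                 # blind spots of the current combination
        combined[fill] = corrected[fill]
        filled |= fill
    return combined
```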
Experimental results The experimental setup requires a device for wavefront shaping (a spatial light modulator SLM in our case), transformation optics to project the Fourier plane of SLM to the MMF probe input facet and an imaging path to observe the output modes of the probe. We have used circularly polarized light (one polarization state only), as it is less sensitive to crosstalk with the other polarization state. An infrared laser (λ=1070 nm) and an optical fibre with NA=0.22, d=50 μm was used. The external reference beam is introduced to setup for comparison and it is not used during calibration with internal references. To estimate the performance of the internal calibration technique, we have measured the intensity of all the output modes addressed one by one. The values were put together to form an image of output fibre facet for quick visual check and their uniformity was expressed using the interference contrast formula. We have characterized the results from calibration with randomly chosen references. Obviously, the more internal references we used, the less blind spots or fluctuations in the output mode intensity we observed. Surprisingly, it was saturated quickly, as more than five references rarely had any effect on the uniformity. We have used the calibrated endoscope to record images of 1951 USAF resolution test prepared by electron beam lithography in a chromium layer. The target was placed approximately 10 μm behind the output facet, the output modes were addressed one by one and the detector further down the optical path collected light passing through. This way, we have simulated the endoscopy imaging, where the fluorescence signal is collected by the probe itself and passed to the detector in the setup. Conclusion We have demonstrated calibration and imaging with multimode fibre probe. The calibration itself used internal modes of the fibre and removed the necessity to stabilize the external reference path. The price for it is longer time of the procedure, as the technique requires several runs of measurements of the transmission matrix. The resulting calibration is highly uniform, especially when calibrated with 5 or more references, as we have proved by imaging the USAF target in an endoscopic configuration.
1,275.4
2020-01-01T00:00:00.000
[ "Physics" ]
Association of Methylenetetrahydrofolate Reductase Gene Polymorphism in Mothers With Adverse Clinical Outcomes in Neonates Background: The presence of polymorphic methylenetetrahydrofolate reductase (MTHFR) in mothers poses a risk for numerous detrimental outcomes in neonates. The present study investigated the association of maternal MTHFR A1298C and C677T single nucleotide polymorphisms (SNPs) with the clinical outcomes in their neonates. Materials and methods: The cross-sectional study included 60 mothers and their neonates. Blood samples from mothers were analyzed for MTHFR A1298C and C677T SNP genotyping by real-time polymerase chain reaction. Clinical details of mothers and neonates were documented. Study groups were stratified based on wild, heterozygous, and mutant genotypes for the respective polymorphisms observed in mothers. Multinomial regression was applied for the association, followed by gene model formulation to estimate the impact of the genetic variants on the outcomes. Results: The frequency percentages of mutant CC1298 and TT677 genotypes were 25% and 8.06%, respectively, and the mutant allele frequencies (MAF) were 42.5% and 22.5%. Percentages of adverse outcomes such as intrauterine growth restriction, sepsis, anomalies, and mortality were higher in neonates born to mothers with homozygous mutant genotypes. Maternal C677T MTHFR SNPs revealed a significant association with neonatal anomalies (p = 0.001). The multiplicative risk model depicted OR (95% CI) for CT vs. CC+TT as 3.0 (95% CI: 0.66-13.7), and for TT vs. CT+CC was 15 (95% CI: 2.01-112.12). The C677T SNP in mothers predicted a dominant model for neonatal death (OR (95% CI): 5.84 (0.57-60.03), p = 0.15), whereas the A1298C reported recessive model for 1298CC mothers (OR (95% CI): 11 (1.05-115.5), p = 0.02). Both the genotypes assumed a recessive model for adverse neonatal outcomes: OR (95%CI) for CC vs. AA+AC was 3.2 (0.79-12.9, p = 0.1), and for TT vs. CC+CT was 5.48 (0.57-175.7, p = 0.2). The risk for sepsis in neonates was nearly six times higher in those born from mothers with homozygous CC1298 and TT677 than in the wild and heterozygous variants. Conclusion: Mothers with C677T and A1298C SNPs are highly susceptible to adverse outcomes in their neonates. Hence, screening the SNPs during the antenatal period can purposefully serve as a better predictive marker, following which proper clinical management could be planned. Introduction The enzyme 5,10-methylenetetrahydrofolate reductase (5,10-MTHFR) catalyzes the reduction of 5,10methylene tetrahydrofolate to 5-methyltetrahydrofolate (5-MTHF). The essential role of 5,10-MTHFR in the homocysteine-methionine cycle and S-adenosyl methionine (SAM) formation is well known. SAM is a donor of a methyl group vital for various metabolically demanding trans-methylation reactions, including the methylation of deoxyribonucleic acid (DNA) [1]. These reactions are essential for modulating the functionalities of protein and nucleic acid involved in regulating gene expressions like DNA hyper-and hypo-methylation, which have been extensively studied for various gene expressions and genome imprinting. Therefore, reduced activity of the enzyme creates a state of folate deficiency, which is crucial as it results in altered gene expression consequent to impaired DNA methylation and oxidative stress owing to homocysteinemia [2]. A1298C (adenine by cytosine at position 1298 of MTHFR gene, glutamic acid to alanine at position 429 of MTHFR protein) [3]. 
The frequency of these variants in the Indian population is not uncommon and is reported to vary from nearly 2% to 24% for 677T and 19% to 44% for 298C [1,4]. The variant form of the enzymes demonstrates reduced activity [5]. The enzyme, a key determinant for one-carbon transfer reactions involving active folate and vitamin B12, is vital for DNA synthesis and repair mechanisms, especially during implantation, fetal organogenesis, and in-utero development. Hence, women taking folic acid supplements are said to be protected from neural tube defects (NTDs). Yet, few mothers fail to be benefitted from folic acid supplementation and have offspring with NTDs [6,7]. In addition, these variants have been implicated in raised plasma homocysteine, leading to endothelial damage resulting in thromboembolic risk [8]. MTHFR polymorphisms and altered homocysteine metabolism are thus considered potential risk factors for impaired fetal perfusion. This pathophysiology in pregnant women often leads to obstruction in placental blood vessels, recurrent abortions, intrauterine growth restriction (IUGR), and fetal anomalies [8][9][10]. Studies have reported a significant association of MTHFR C677T and A1298C variants with NTDs, congenital heart disease (CHD), congenital anomalies such as Down syndrome, preterm birth, low birth weight (LBW), IUGR, and various other adverse birth outcomes [3,7,[11][12][13]. Therefore, the candidate genes involved in these metabolic pathways should be explored to identify the associated genetic risks. It is speculated that MTHFR polymorphism might be a potential genetic risk for adverse neonatal outcomes but with inconsistent findings [3]. Therefore, the present study aimed to investigate the association of MTHFR gene variants, A1298C and C677T, in pregnant women with adverse outcomes of their neonates. Materials And Methods The cross-sectional study involved 60 adult women and their neonates. Mothers and their neonates admitted within one month of delivery were included in the study. Mothers with any history of smoking, high body mass index, gestational diabetes, preeclampsia, hypertension, sickle cell disease, other hemoglobinopathies, any other acute or chronic diseases or infections such as toxoplasmosis, rubella, cytomegalovirus, herpes simples (TORCH), tuberculosis, human immunodeficiency virus (HIV), human papillomavirus (HPV), or any other infection at any time of antenatal period were excluded from the study. The Institute Ethics Committee approved the study, and the participants were enrolled following written informed consent. Blood samples from mothers only were collected in the ethylenediaminetetraacetic acid (EDTA) vial. As per the case record form, all clinical details of the mother and the neonate were entered. The DNA extraction and Taqman-based SNP genotype assay by polymerase chain reaction (PCR) were processed per the manufacturer's instructions using the MTHFR Genotyping Kit from Mylab Solutions, Pune, India [14]. The study group was stratified based on wild, heterozygous, and mutant genotypes for the respective polymorphisms observed in the maternal population. The wild, heterozygous, and mutant genotypes for A1298C were AA1298, AC1298, and CC1298, respectively. For C677T, the respective genotypes considered were CC677, CT677, and TT677, respectively. A1298 and C677 alleles are denoted as the wild (major) alleles. Similarly, the respective mutant (minor) alleles were C1298 and T677. 
The different genotypic categorizations of the MTHFR SNPs used in this study are delineated in Table 1. Statistical analysis We performed the statistical analysis in IBM SPSS version 20 (IBM Corp., Armonk, NY). The frequency percentage distribution of genotypes was computed in the study population. For the percentage calculation of genotype frequency, the total study population considered was 60; for allelic frequency, the total allele population considered was 120 (60 x 2). Multinomial logistic regression was performed to assess the association of the maternal MTHFR C677T and A1298C genotypes with the clinical outcomes of their neonates and was interpreted with an odds ratio (OR) and a 95% confidence interval (95% CI). The variables showing an OR greater than two were further analyzed with gene models. The neonates without any altered outcome were considered normal, and accordingly, the gene model strategy was applied to estimate the impact of the wild, heterozygous, and mutant genotypes of A1298C and C677T in mothers on the outcome variables of their neonates [15]. The risk of the wild (major) and mutant (minor) alleles of mothers for the outcome variables of the neonates was evaluated by binary logistic regression. Statistical significance was considered at p < 0.05. FIGURE 1: Percentage distribution of the MTHFR A1298C and C677T genotypes and the alleles in enrolled mothers. Image A denotes A1298C genotypes, image B denotes C677T genotypes, and image C denotes wild and mutant alleles of both SNPs. In the neonatal population, the overall prevalence of LBW was 31.7%, preterm birth was 26.7%, IUGR was diagnosed in 13.3%, congenital/chromosomal anomalies were observed in 13.3%, prolonged hyperbilirubinemia in 11.7%, neonatal death in 6.7%, and neonatal sepsis in 5% (Figure 2). The distribution of maternal MTHFR variants and their association with the outcome variables in neonates is presented in Table 2. It was noted that the neonates of 80% of mothers with CC1298 and 100% of mothers with TT677 genotypes were diagnosed with one or another of the complications listed in Figure 2 (A1298C: p = 0.24, C677T: p = 0.18) (Table 2). The maternal allelic frequency distribution and its association with the outcome variables in neonates are presented in Table 3. The presence of either of the mutant alleles in mothers increased the risk of neonatal complications by nearly two times (ORs with 95% CIs for A1298C and C677T are reported in Table 3). In Table 3, "N" denotes the total allelic population (2 x 60), "n" denotes the number of neonates with or without the outcome variable, and "n (%)" denotes the column percentage. P < 0.05 is considered significant. C1298 denotes the variant allele and A1298 the wild allele of the A1298C MTHFR SNP; T677 denotes the variant allele and C677 the wild allele of the C677T MTHFR SNP. As delineated in Figure 3 and Table 4, a higher percentage of mothers with mutant genotypes documented altered outcomes in their neonates, such as IUGR, neonatal sepsis, and congenital/chromosomal anomalies. Mothers with the CC1298 genotype showed a significant association with neonatal death (p = 0.049). Neonatal death was reported in 20% of mothers with the homozygous mutant genotype as compared to only 4.2% with the wild genotype (Figure 3A). Unlike A1298C, the maternal C677T MTHFR genotype revealed a significant association with congenital/chromosomal anomalies in neonates (p = 0.001). A total of 60% of TT677 and 23.5% of heterozygous CT677 mothers had babies delivered with anomalies (Figure 3B).
Similarly, neonates of 26.7% of mothers with CC1298 and 14.3% of mothers with AC1298 were diagnosed with anomalies (p = 0.13; Figure 3A). Neonates born to 13.3% and 20% of mothers with homozygous mutant genotypes developed sepsis within a month of delivery, as against those born to 4.2% and 2.6% of mothers with wild genotypes (p = 0.19 for A1298C and p = 0.24 for C677T, respectively). "N" denotes the study population, "n" denotes the number of neonates with or without the outcome variable, and "n (%)" denotes column percentage. * p < 0.05 is considered significant. CC1298 denotes the variant form, AC1298 the heterozygous form, and AA1298 the wild form of the A1298C MTHFR SNP. TT677 denotes the variant form, CT677 the heterozygous form, and CC677 the wild form of the C677T MTHFR SNP. C677T genotypes for the outcome variables in their neonates (N = 60). MTHFR: methylenetetrahydrofolate reductase; SNP: single nucleotide polymorphism; IUGR: intrauterine growth restriction. Figure 4 and Table 5 illustrate that the frequency percentages of sepsis, congenital/chromosomal anomalies, and mortality were higher in neonates born to mothers with a mutant allele than in those born to mothers with the wild allele. The risk for sepsis in neonates was nearly six times higher in those born to mothers with homozygous CC1298 (95% CI: 0.57-80.74, p = 0.09) and TT677 (95% CI: 0.48-89.8, p = 0.11) than in those born to mothers with the wild and heterozygous variants. Maternal C1298 and T677 allelic distributions were significantly associated with congenital/chromosomal anomalies in neonates (p = 0.023 and p < 0.001, respectively). The risk of being diagnosed with anomalies was nearly 3.5 times (95% CI: 1.14-10.88) for neonates of C1298 allelic mothers and more than 8.5 times for those of T677 allelic mothers (95% CI: 2.73-26.61). Nearly 11% of the maternal mutant allelic population recorded neonatal death (p = 0.05 for C1298 and p = 0.29 for T677). The presence of mutant alleles in mothers raised the probability of neonatal mortality by 4.5 times for C1298 (95% CI: 0.86-23.1) and 2.2 times for T677 (95% CI: 0.49-9.87), as illustrated in Table 5. The graphs of the gene models depicting the risk of the maternal C677T and A1298C genotypes for the outcome variables in the neonates born to the enrolled mothers are shown in Figure 5. Both genotypes assumed a recessive model for the presence of any of the neonatal complications (Figures 5A, 5B). The OR (95% CI) for CC vs. AA+AC was 3.2 (0.79-12.9), and for TT vs. CC+CT it was 5.48 (0.57-175.7). This signified that mothers with mutant genotypes (CC1298 or TT677) were three to five times more likely to have neonates who developed an altered outcome. For neonatal sepsis, the C677T genotypes of mothers revealed an additive model (OR (95% CI) for CT+TT vs. CC = 3.7 (0.32-43.37)), whereas for A1298C, the OR (95% CI) for CC vs. AC+AA (6.7 (0.57-80.74)) reflected a recessive model for the risk (Figures 5C, 5D). The SNPs' models were of the multiplicative type for congenital/chromosomal anomalies (Figures 5E, 5F). Neonatal outcomes such as LBW, preterm birth, IUGR, and hyperbilirubinemia documented odds of less than one, implying that the maternal mutant alleles had a negligible influence on these variables. However, the genotypic distribution showed a weak, non-significant recessive model for IUGR for mothers homozygous for CC vs. AA+AC (OR (95% CI): 1.9 (0.42-9.6)) and TT vs. CC+CT (OR (95% CI): 1.64 (0.05-15.02)).
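The gene-model odds ratios quoted above can be reproduced from simple 2x2 genotype-by-outcome tables. The snippet below is a minimal sketch in Python; the counts are hypothetical placeholders rather than the study's data, and the Woolf (log-based) confidence interval is one standard choice, not necessarily the exact method used in SPSS.

```python
import numpy as np
from scipy import stats

def odds_ratio_ci(a, b, c, d, alpha=0.05):
    """Odds ratio with a Woolf (log-based) CI for a 2x2 table:
    a = exposed cases, b = exposed controls, c = unexposed cases, d = unexposed controls.
    A 0.5 continuity correction is applied if any cell is zero."""
    cells = np.array([a, b, c, d], dtype=float)
    if (cells == 0).any():
        cells = cells + 0.5
    a, b, c, d = cells
    or_ = (a * d) / (b * c)
    se_log = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    z = stats.norm.ppf(1 - alpha / 2)
    lo, hi = np.exp(np.log(or_) - z * se_log), np.exp(np.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical recessive-model table (CC1298 vs. AA1298+AC1298) against
# "any adverse neonatal outcome"; the numbers below are illustrative only.
or_rec, lo, hi = odds_ratio_ci(a=12, b=3, c=25, d=20)
print(f"Recessive-model OR = {or_rec:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```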
Discussion The enzyme MTHFR plays a central catalytic role in DNA synthesis and repair during the cell cycle and in cellular differentiation during fetal development. MTHFR SNPs are not uncommon in our population. Hence, the study was undertaken to delineate the genetic susceptibility to altered outcomes in neonates. Polymorphisms in the gene coding for MTHFR, such as the C677T and A1298C SNPs, appear to make mothers highly susceptible to adverse neonatal outcomes. The SNPs significantly increased the risk of congenital/chromosomal anomalies, development of sepsis, and mortality in their neonates. The gene models predicted that neonates born to mothers homozygous for the mutant alleles were more prone to adverse outcomes. The frequency percentages of the homozygous variant genotypes CC1298 and TT677 were 25% and 8.06%, respectively, which are broadly in line with the frequencies of 19.7% and 2%, respectively, previously reported by Patel et al. in this area [4]. Angeline et al. recorded frequencies of 15.3% for CC1298 and 1.38% for TT677, with minor allele frequencies (MAFs) of 38.9% and 10.4%, respectively [16]. Similarly, the MAFs in Kumar et al.'s study were 44% and 15% [17]. The respective MAFs recorded in the present study were 42.5% and 22.5%. The mutant allele percentage distribution among various regions in India varied from nearly 2% to 24% for T677 and 19% to 44% for C1298 [1,4]. The difference in frequency distribution might reflect the distribution pattern across different geographical regions [17,18]. MTHFR is the critical enzyme for maintaining the active folate level and the one-carbon transfer crucial for DNA methylation, synthesis, and repair. Reduced activity due to polymorphism minimizes the synthesis of 5-MTHF, which is essentially required for the remethylation of homocysteine to methionine and for DNA methylation [7,11]. Polymorphism begets conformational changes in the binding site of S-adenosylmethionine (SAM) that affect the methylation reactions. DNA methylation is crucial for regulating gene expression, which is imperative in implantation, apoptosis during organogenesis, and overall in-utero fetal development [6,11,12]. We observed that the presence of either mutant variant of A1298C and C677T in mothers was associated with higher percentages of altered outcomes in their neonates. Of all the variables, anomalies such as NTD, CHD, patent ductus arteriosus (PDA), atrial septal defect (ASD), and Down syndrome were remarkably higher in neonates born to these mothers (Figures 3A, 3B). Recent studies demonstrated the association of MTHFR SNPs with congenital anomalies. Yan et al. stated that MTHFR C677T is a genetic risk factor for NTDs; their meta-analysis found the risk to be roughly twice as high (OR = 2.022) in the presence of TT677 compared with CC677 (95% CI: 1.508-2.712) [7]. Similarly, Zhang et al. observed a significant association of the C677T SNP with CHD. C677T variants showed higher odds for CHD in both the recessive (1.69) and dominant (1.35) models as well as in the homozygous and heterozygous models. However, A1298C MTHFR failed to significantly impact CHD except in the recessive model (OR = 1.42) [11]. Similarly, Yadav et al. reported that mothers with the C677T polymorphism (OR (95% CI): 1.20 (1.13-1.28)) were potentially susceptible to giving birth to offspring with NTD, while mothers with the A1298C polymorphism did not exhibit a significant distribution [19]. Zhu et al. also corroborated a strong association of the TT677 genotype and T677 alleles with ASD and PDA [20]. The present study's findings agree with the above for both SNPs.
A strong association was evidenced between mothers carrying the mutant C677T genotypes and congenital/chromosomal anomalies in neonates, such as CHD, meningomyelocele, spina bifida, cleft palate, and Down syndrome (p = 0.001, Table 4 and Figure 2). Further, the study established a significant multiplicative risk of delivering babies with such anomalies if the mother carried either of the mutant alleles (Figures 5E, 5F). In line with previous studies, the results also imply that MTHFR polymorphisms affect the methylation-related enzymes primarily involved in DNA synthesis and repair, and eventually fetal organogenesis and in-utero development. Thus, the MTHFR C677T and A1298C SNPs play a potential role in decreased fetal viability and might be crucial for the in-utero survival of the fetus [11]. Folate and vitamin B12 are indispensable for genome stability. Animal model and cell culture studies have demonstrated DNA hypomethylation, chromosome breakage, and aneuploidy, as seen in Down syndrome and recurrent abortions, in the folate-depleted state [20,21]. At the same time, Saraswathy et al. observed higher odds for TT677 for recurrent miscarriages (OR (95% CI): 7.33 (0.48-111.2)), highlighting that hypermethylation of MTHFR C677T at specific promoter regions might have a vital role in implantation or proper fetal development [10]. Hyperhomocysteinemia, the metabolic consequence of insufficient active folate, is also related to MTHFR mutations and congenital anomalies [8,22]. Homocysteinemia, in turn, increases oxidative stress and the inflammatory cascade in endothelial cells, leading to vascular thrombosis, including placental thrombosis [9]. Thrombotic events during in-utero development lead to abnormal materno-fetal perfusion, eventually resulting in intrauterine death (IUD), IUGR, LBW, or preterm delivery [3]. Conversely, a few studies reported a lower risk or a protective influence for LBW or preterm delivery [23][24][25]. No association was observed for either maternal MTHFR SNP with LBW or preterm deliveries (Figure 3 and Table 4). Instead, the data revealed a decreased risk of LBW and preterm delivery in the presence of the mutant alleles T677 and C1298 (Figure 4 and Table 5), as reported by Nurk et al. and Resch et al. [24,26]. On the contrary, Tiwari et al.'s study comprising 209 cases of preterm deliveries concluded that the distribution of MTHFR mutant genotypes was higher in preterm cases and increased the risk of preterm delivery [27]. However, mothers with homozygous mutant genotypes for A1298C and C677T did show some risk (nearly 1.5 times) of IUGR in their neonates. This finding is comparable to the elevated risk of IUGR reported in mothers with mutant T alleles (OR (95% CI): 1.2 (1.0-1.4); p = 0.04) by Nurk et al. [24]. This could be attributed to compromised materno-fetal circulation and placental vaso-occlusion following thrombosis as a result of reduced active folate and homocysteinemia. Various studies have attributed maternal MTHFR SNPs as potential genetic risk factors for adverse outcomes in neonates, but with inconsistent conclusions. The varied observations might be due to differences in sample size, inclusion criteria, and study design. Further analysis estimated that the maternal mutant MTHFR forms magnified the risk of neonatal sepsis in their infants (Figures 5C, 5D), contrary to Zeeshan et al.'s study, which showed no risk [28].
The associated nutritional deficiency of folate and vitamin B12 in mothers due to polymorphic MTHFR may also be reflected in their neonates [28]. These water-soluble vitamins are essentially required for the adequate production and maturation of blood cells, including immune cells [27,29]. The higher risk of mortality observed in this study among neonates born to mothers with mutant alleles could be multifactorial and is largely ascribed to a deficiency of micronutrients in neonates secondary to maternal MTHFR SNPs. We did not evaluate the nutritional status of the mothers and neonates; thus, dedicated research needs to be conducted to establish the link between mutant MTHFR genotypes and sepsis in infants. Studies have suggested that supplementation with L-methylfolate rather than conventional folic acid during the antenatal period could prevent adverse outcomes in fetuses and neonates born to mothers with MTHFR SNPs, since methylfolate is the active form that becomes readily available to the mother and fetus to maintain folic acid and homocysteine levels [30]. Limitation The study's primary limitation was that it was a hospital-based cross-sectional study with a small sample size. Secondly, biochemical parameters like folic acid, vitamin B12, and homocysteine levels were not estimated in either mothers or neonates, which could have added more insight into the biochemical changes associated with the genotypes and their impacts on neonates. Therefore, a well-designed study on large cohorts would enable a more accurate assessment of the association of the genotypes with the outcome variables. Screening for MTHFR SNPs and biochemical parameters in a cohort of antenatal mothers at each trimester would provide more accurate analytical results regarding the association of these genotypes with fetal and neonatal health. Further, randomized clinical trials with active folate supplementation in one arm and conventional folic acid supplementation in the other would provide more substantial evidence from which to derive an appropriate treatment protocol. Conclusions The current study depicts the susceptibility of pregnant women with the variant MTHFR A1298C and C677T genotypes to altered outcomes in their neonates, owing to a folate-deficient state consequent to the polymorphic MTHFR enzyme. Thus, the studied MTHFR SNPs may be considered a genetic risk factor for congenital/chromosomal anomalies, IUGR, sepsis, and mortality in neonates, and antenatal screening for MTHFR SNPs could help identify high-risk mothers and initiate appropriate clinical interventions. Highlights The MTHFR A1298C and C677T gene polymorphisms in mothers could be ascribed as a genetic risk predictor for adverse outcomes in neonates. The present study recorded frequency percentages of 25% and 8.06% for the mutant CC1298 and TT677 genotypes, respectively. Neonates born to mothers with mutant genotypes were more prone to complications like congenital/chromosomal anomalies, IUGR, sepsis, and mortality. Polymorphic MTHFR might result in a deficiency of active folate in mothers, eventually creating a lack of the micronutrient in their neonates. Mothers with the studied MTHFR SNPs might require dose modification of folic acid or active folate. Additional Information Disclosures Human subjects: Consent was obtained or waived by all participants in this study. Institute Ethics Committee, All India Institute of Medical Sciences, Raipur issued approval AIIMSRPR/IEC/2022/1155.
Animal subjects: All authors have confirmed that this study did not involve animal subjects or tissue. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
5,275.2
2023-04-01T00:00:00.000
[ "Medicine", "Biology" ]
Drivers of Employment Elasticities in Kenya The relationship between output growth and employment elasticities has been the subject of intense debate among many economists. Though there is no conflict between the two objectives, the question that arises is the rate at which employment growth responds to economic growth. The policy focus on employment in Kenya is manifested by the sheer number of employment-targeted development plans and Sessional papers that have been formulated. Essentially, all the policy documents developed by the government have premised employment creation on economic growth. The purpose of the study was therefore to determine the drivers of employment elasticities in Kenya. Empirical findings indicated that the first lag of employment elasticity, average wage, inflation rate, labour force participation rate, the first and second lags of labour force participation rate, population density, and the first and second lags of foreign direct investment were the short run drivers of employment elasticity. Empirical findings also indicated that exchange rate, foreign direct investment and population density were the long run drivers of employment elasticity in Kenya. The study recommends that policy measures to control inflation should be tightened and that more efforts to attract foreign direct investment be undertaken. The study further recommends that a stable exchange rate should be maintained. Lastly, the government should harmonize the salary scale framework to regulate wages in the country. This could be realized through salary adjustments based on a periodic and systematic evaluation of wage parameters in the public sector, taking cognizance of the prevailing economic dynamics. During the first Medium Term Plan (MTP I, 2008-2012), the government targeted employment growth of 6 per cent, yielding a total of 3.7 million new jobs. However, an annual average of 511,000 new jobs were created in 2008-2012 against a target of 740,000 jobs per year (Republic of Kenya, 2013). During MTP II (2013-2017), the government targeted the economy to grow at 10 per cent and create an average of one million new jobs; however, an average of 826,600 jobs were created annually. The MTP III (2018-2022) targets to increase real GDP annual growth from an average of 5.5 per cent achieved over the 2013-2017 period to 7 per cent in order to support higher economic growth. It also aims to create over 6.5 million jobs over the Plan period. There is a need, therefore, to determine the drivers of employment elasticities in Kenya given that the government has intensively attempted to use fiscal policy measures as instruments of employment creation. Literature Review 2.1 Labour Demand Theory The theory of labour demand has its roots in Marshall (1890) and Hicks (1930). The labour demand theory states that the demand for labour is a derived demand, since workers are hired for their contribution to the production of goods and services. A key feature of the theory is that price flexibility plays an important role in the correction of labour market disequilibrium and market clearing. Assuming that there are only two factors of production, the number of employee-hours hired by the firm ($L$) and capital ($K$), the aggregate stock of land, machines, and other physical inputs, the production function is written as $q = f(L, K)$ (1), where $q$ is the firm's output. The theory assumes that production exhibits constant returns to scale, with well-behaved marginal products such that $f_L > 0$, $f_{LL} < 0$ and $f_K > 0$, where $f_L$ and $f_K$ represent the marginal products of labour and capital.
The elasticity of substitution is the rate of change in the relative use of $K$ to $L$ resulting from a change in the relative price of $L$ to $K$, holding output constant. The elasticity of substitution is given by $\sigma = \partial \ln(K/L) / \partial \ln(w/r)$ (2), where $\sigma$ measures the ease of substituting one input for the other when the firm can only respond to a change in one or both of the input prices by changing the relative use of the two factors without changing output. The constant-output labour demand elasticity, which is the change in the demand for labour from a change in its wage, is given by $\eta_{LL} = \partial \ln L / \partial \ln w$ (3), where $\eta_{LL}$ is the labour demand elasticity. Equation (3) implies that $\eta_{LL} = -(1 - s)\sigma < 0$ (4), where $s$ ($s = wL/pq$) is the share of labour in total revenue. Equation (4) implies that when output requires substantial amounts of labour for production, the constant-output labour demand elasticity will be smaller, because the possible change in spending on other factors is small relative to the amount of labour being used. Consequently, the constant-output cross-elasticity of demand for labour, which describes the response of labour demand to a change in the price of capital, is given as $\eta_{LK} = \partial \ln L / \partial \ln r$ (5), or $\eta_{LK} = (1 - s)\sigma > 0$ (6). The scale effect is the factor's share times the product demand elasticity. The scale effect takes into account the possibility that output will change in response to a change in the price of labour, and that in turn may affect the overall demand for labour. The total response of labour demand to a change in the wage, combining substitution and scale effects, is given as $\eta_{LL}^{total} = -(1 - s)\sigma - s\eta$ (7), where $\eta$ is the product demand elasticity. Equation (7) is the fundamental law of factor demand. It divides the labour demand elasticity into substitution and scale effects. Empirical Literature Crivelli, Furceri & Bernate (2012) assessed the effect of structural and macroeconomic policies on the employment-intensity of growth for 167 countries using an unbalanced panel over the period 1991-2009. The objective of the study was to provide estimates of employment-output elasticities and assess the effect of structural and macroeconomic policies on the employment-intensity of growth. The study employed two approaches. The first approach consisted of estimating elasticities using time-series regressions for each country using the equation $\ln(E_t) = \alpha + \beta_1 \ln(E_{t-1}) + \gamma_1 \ln(Y_t) + \varepsilon_t$ (8), where $E_t$ was the level of employment at time $t$, $Y_t$ was the level of GDP at time $t$, $\alpha$ was the intercept coefficient, $\beta_1$ and $\gamma_1$ were the regression coefficients and $\varepsilon_t$ was the error term. The main advantage of the time-series regressions was to directly provide country-specific employment elasticity estimates. The second approach relied on a panel framework in which long-term elasticities were estimated using country-specific estimates for GDP slopes and employment persistence using the equation $\ln(E_{it}) = \alpha + \beta_1 \ln(E_{i,t-1}) + \beta_2 D_i \ln(E_{i,t-1}) + \gamma_1 \ln(Y_{it}) + \gamma_2 D_i \ln(Y_{it}) + \varepsilon_{it}$ (9), where $E_{it}$ was the level of employment for country $i$ at time $t$, $Y_{it}$ was the level of GDP for country $i$ at time $t$, $D_i$ was a country-specific dummy, $\alpha$ was the intercept coefficient, the $\beta$ and $\gamma$ terms were the regression coefficients and $\varepsilon$ was the error term. The study found that point estimates of elasticities fell between 0 and 1, with the majority ranging between 0.3 and 0.8. The elasticities varied considerably across regions, income groups, and production sectors, with the highest estimates recorded for the most economically developed regions and for the industry and service sectors.
The study also found that structural policies aimed at increasing labour and product market flexibility and reducing government size had a significant and positive impact on employment elasticities. Macroeconomic policies aimed at reducing macroeconomic volatility also had a positive and statistically significant impact on employment elasticities. The study recommended that in order to maximize the positive impact on the responsiveness of employment to economic activity, structural policies have to be complemented with macroeconomic policies aimed at increasing macroeconomic stability. Slimane (2015) assessed the determinants of cross-country variations in employment elasticities, mainly focusing on the role of demographic and macroeconomic variables. The study used an unbalanced panel of 90 developing countries from 1991 to 2011. The equation for a country's specific elasticities was given by $\ln(E_t) = \alpha + \beta \ln(E_{t-1}) + \gamma \ln(Y_t) + \varepsilon_t$ (10), where $E_t$ was the level of employment at time $t$, $Y_t$ was the level of GDP at time $t$, $\beta$ and $\gamma$ were estimation coefficients, $\alpha$ was the intercept coefficient and $\varepsilon_t$ was the error term. The study also estimated the long-term employment-to-GDP elasticities for each country through the equation $\gamma_i = \alpha + \delta_1 M_i + \delta_2 S_i + \epsilon_i$ (11), where $\gamma_i$ was the employment elasticity for country $i$, $M$ denoted macroeconomic variables, $S$ denoted structural variables, $\delta_1$ and $\delta_2$ were the estimation coefficients, $\alpha$ was the intercept coefficient and $\epsilon_i$ was the residual term. The results of the study indicated that the elasticity estimates varied considerably across countries, while employment elasticities were higher in more advanced and closed countries. Employment elasticity comparisons across countries revealed wide variation, with the highest estimates found in Comoros, Gabon, Cote d'Ivoire, Niger, Algeria, Madagascar and Togo. In contrast, employment elasticities were modest in other countries such as Bosnia (0.05), Ukraine (0.09) and China (0.10). Negative estimates were found for Serbia (-0.101), Belorussia (-0.112) and Romania (-0.238). The study also revealed that macroeconomic policies aimed at reducing macroeconomic volatility had a statistically significant effect in increasing employment elasticities. The employment intensity of growth was found to be higher in countries with a larger service sector and in countries with a higher share of urban population. Methodology 3.1 Theoretical framework To meet the objective of the study, labour demand theory was used. The labour demand theory is attributed to Marshall (1890) and Hicks (1930). The firm's production function describes the technology that the firm uses to produce goods and services. The firm's objective is to maximize profits, which are given by $\pi = pq - wL - rK$ (12), where $p$ is the output price, $w$ is the price of labour (wage rate) and $r$ is the price of capital. The first order conditions are given by $w = \lambda f_L$ (13) and $r = \lambda f_K$ (14), where $\lambda$ is a Lagrangian multiplier. The ratio of the two first order conditions shows that the marginal rate of technical substitution, $f_L/f_K$, equals the factor-price ratio, $w/r$, for a profit maximizing firm. Assuming a Cobb-Douglas production function of the form $q = A L^{\alpha} K^{\beta}$ (15), where $q$ is the level of output, $A$ is technical efficiency, $L$ is the labour input, $K$ is the capital input and $\alpha$ and $\beta$ are elasticity parameters, the firm's profit function becomes $\pi = p A L^{\alpha} K^{\beta} - wL - rK$. Maximizing this with respect to labour and solving yields equation 23, the derived labour demand function, expressed as a function of output, the real wage and the price level.
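For concreteness, one standard way to obtain such a derived labour demand function from the Cobb-Douglas technology above is sketched here; the intermediate steps are illustrative and use the notation defined in this section rather than the paper's own equation numbering.

```latex
\begin{aligned}
\pi &= p\,A L^{\alpha} K^{\beta} - wL - rK, \\
\frac{\partial \pi}{\partial L} &= \alpha\, p\, A L^{\alpha-1} K^{\beta} - w
   \;=\; \alpha \frac{p\,q}{L} - w \;=\; 0, \\
\Rightarrow\; L^{*} &= \alpha\,\frac{q}{w/p},
\end{aligned}
```

so that labour demand rises with output and falls with the real wage, which is the structure carried over into the empirical model below.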
Empirical Model Following the labour demand theory (Marshall, 1890 and Hicks, 1930), labour demand is a function of output, the real wage and the price level, as shown in equation 23. Relating labour demand to employment elasticity, the study improved on the model by Mouelhi and Ghazali (2014) by incorporating demographic factors. The demographic factors were included to assess the effect of agglomeration factors on employment elasticity. The estimable equation was expressed as $EMP_t = f(INFLA_t, EXCH_t, OPENNESS_t, WAGE_t, POPDEN_t, LABFOC_t, FDI_t)$ (24), where $EMP_t$ was the aggregate employment elasticity for all the sectors in period $t$, $INFLA$ was the annual inflation rate, $EXCH$ was the nominal exchange rate (Kenyan shillings/US dollar), $OPENNESS$ was a proxy for trade openness, $WAGE$ was the average annual real wage, $POPDEN$ was the population density, $LABFOC$ was the labour force participation rate and $FDI$ was foreign direct investment. Population density and labour force participation rate also captured the effect of labour supply on aggregate employment elasticity. Definition and Measurement of Variables Employment elasticity (EMP): It is the responsiveness of employment growth to economic growth, measured as the ratio of the relative change in employment to the relative change in output. Inflation (INFLA): It is the sustained increase in the general price level of goods and services in an economy over a period of time. It is measured by the change in the consumer price index (CPI). Real exchange rate (EXCH): It is the nominal exchange rate adjusted for differences in price levels between two countries. It is measured as the product of the nominal exchange rate (Kenya Shilling against the US dollar) and the ratio of the consumer price indices, with the year 2010 taken as the base year in both countries. Wage rate (WAGE): It is a measure of the price of labour. It is measured as the average monthly earnings per employee for each of the sectors, over time. Population Density (POPDEN): It is a measure of the intensity of land use. It is measured as the average population per square kilometre. Labour Force Participation rate (LABFOC): It is defined as the section of the working population in the age group of 16-64 in the economy currently employed or seeking employment. It is calculated as the labour force divided by the total working-age population. It is used as a proxy for individuals aged between 16 and 64 years. Foreign Direct Investment (FDI): It is direct investment in productive assets by an entity established in a foreign country and is measured as a percentage of investment inflows to GDP. Trade Openness (OPENNESS): It is the extent to which an economy is open to trade. It is measured as the ratio of total foreign trade (exports + imports) to GDP. Data Type and Source The study used annual time series data for the period 1970 to 2016. The choice of the period was primarily informed by the availability of data. Data were obtained from different sources as indicated in Table 3.1. Testing for Stationarity of Data To detect the presence of unit roots in the series, the study employed the Clemente-Montanes-Reyes (CMR) test. The CMR test is based on an approach that allows for the possibility of structural breaks in the mean of the series. Clemente, Montanes and Reyes (1998) extended the Perron and Vogelsang (1992) model to take care of two structural changes in the mean.
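As an illustration of how the variables in equation 24 could be constructed from annual series, the following is a minimal sketch in Python; the file name and column names are hypothetical and the snippet is not the study's actual data pipeline.

```python
import pandas as pd

# Hypothetical annual series for 1970-2016; column names are illustrative only.
df = pd.read_csv("kenya_annual.csv", index_col="year")
# expected columns: employment, real_gdp, gdp, cpi, us_cpi, nominal_exch, exports, imports

# Annual employment elasticity: relative change in employment over relative change in output.
df["emp_elasticity"] = df["employment"].pct_change() / df["real_gdp"].pct_change()

# Inflation measured as the percentage change in the consumer price index.
df["inflation"] = df["cpi"].pct_change() * 100

# Real exchange rate: nominal KES/USD adjusted by relative price levels (2010 = 100 assumed).
df["real_exch"] = df["nominal_exch"] * (df["us_cpi"] / df["cpi"])

# Trade openness as the ratio of total trade to GDP.
df["openness"] = (df["exports"] + df["imports"]) / df["gdp"]
```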
The CMR test is based on a modification of the tests by Perron and Vogelsang (1992) known as the additive outlier (AO) and innovative outlier (IO) models. The AO model describes the break as occurring suddenly through changes in the mean, while the IO model views the break as evolving slowly over time. The CMR test has a null hypothesis that the series has a unit root with structural break(s) against the alternative hypothesis that the series is stationary with break(s). The CMR test hypotheses are specified as $H_0: y_t = y_{t-1} + \delta_1 DTB_{1t} + \delta_2 DTB_{2t} + u_t$ (25) and $H_1: y_t = \mu + d_1 DU_{1t} + d_2 DU_{2t} + e_t$ (26), where $DTB_{it}$ is a pulse variable equal to one if $t = TB_i + 1$ and zero otherwise, $DU_{it} = 1$ if $t > TB_i$ ($i = 1, 2$) and zero otherwise, and $TB_1$ and $TB_2$ represent the time periods when the mean is modified. Auto Regressive Distributed Lag Model To achieve the objective of this study, equation 24 was estimated using an Auto Regressive Distributed Lag (ARDL) model. An ARDL model is a standard least squares regression that includes lags of both the dependent variable and the explanatory variables as regressors (Greene, 2008). A general ARDL model for employment elasticities with the regressors identified in the functional relationship 24 was expressed as $EMP_t = \beta_0 + \sum_{i=1}^{p}\beta_i EMP_{t-i} + \sum_{j=0}^{q}\gamma_j X_{t-j} + \varepsilon_t$ (27), where $X$ collects the explanatory variables. This general ARDL model was then rewritten in its error correction form, whose distributed lag representation defined the long run relationship: $\Delta EMP_t = \alpha_0 + \sum \alpha_i \Delta EMP_{t-i} + \sum \delta_j \Delta X_{t-j} + \lambda ECT_{t-1} + \varepsilon_t$, where the $\alpha$ and $\delta$ coefficients give the short run effects, $\lambda$ is the speed of adjustment parameter and $ECT$ denotes the residuals obtained from the estimated cointegration model of equation 30. The ARDL model was employed so as to capture the partial adjustments and adaptive expectations in employment elasticity in Kenya, as used by Nakata and Takehiro (2003) to estimate both partial adjustments and the elasticities of employment with respect to output and the relative wage in Japan. The partial employment adjustment model helped to explain the employment elasticity adjustment behaviour and the observed employment fluctuations. The adaptive expectations model explained how the economy compensates for the long-run adjustment of employment elasticity to changes in relative output by speeding up short-run employment adjustments. Empirical Findings 4.1 Descriptive Statistics The study analyzed the data for all the variables in order to discern their characteristics prior to estimation. This involved the determination of the mean, median, maxima, minima and standard deviation of the variables. According to Table 4.1, the average aggregate wage employment for the period under consideration was 1,472,466 persons. Total wage employment had a maximum of 2,553,500 workers and a minimum of 644,500 workers. The growth in wage employment over the study period was a result of the Kenyan government's efforts to create employment opportunities to absorb the country's growing labour force. This was achieved through various short, medium and long-term employment creation measures like the Kenyanization Programme, tripartite agreements, active labour market policies, public works programs, foreign employment, and rural development over the plan period (Omolo, 2013). Trade openness was used to reveal the impact of international trade on employment intensity. Trade openness was given as the ratio of the sum of exports and imports to GDP and averaged 57 per cent. Foreign Direct Investment was measured as a percentage of investment inflows to GDP.
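To make the two-stage short run estimation concrete, the sketch below follows a simplified Engle-Granger-style error-correction construction in Python with statsmodels, together with the serial correlation and heteroscedasticity checks reported later; it is illustrative rather than the exact ARDL bounds procedure, and it assumes the hypothetical data frame above also carries wage, labfoc, popden and fdi columns.

```python
import statsmodels.api as sm
from statsmodels.stats.diagnostic import acorr_breusch_godfrey, het_breuschpagan

# Stage 1: levels (cointegrating) regression; its lagged residuals form the error-correction term.
levels_X = sm.add_constant(df[["wage", "inflation", "labfoc", "popden",
                               "fdi", "openness", "real_exch"]])
long_run = sm.OLS(df["emp_elasticity"], levels_X, missing="drop").fit()
df["ect"] = long_run.resid

# Stage 2: ECM in first differences with the lagged error-correction term.
d = df.diff().add_prefix("d_")
d["ect_lag1"] = df["ect"].shift(1)
d["d_emp_lag1"] = d["d_emp_elasticity"].shift(1)
ecm_X = sm.add_constant(d[["d_emp_lag1", "d_wage", "d_inflation", "d_labfoc",
                           "d_popden", "d_fdi", "ect_lag1"]])
ecm = sm.OLS(d["d_emp_elasticity"], ecm_X, missing="drop").fit()
print(ecm.summary())

# Residual diagnostics analogous to the LM serial correlation and Breusch-Pagan tests.
lm_stat, lm_pval, _, _ = acorr_breusch_godfrey(ecm, nlags=2)
bp_stat, bp_pval, _, _ = het_breuschpagan(ecm.resid, ecm.model.exog)
print(f"Breusch-Godfrey p-value: {lm_pval:.3f}; Breusch-Pagan p-value: {bp_pval:.3f}")
```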
The estimation results presented in Table 4.1 show that FDI ranged from 0.04 per cent to 2.73 per cent with an average of 0.86 per cent. The estimation results reveal that FDI inflows to Kenya have been highly volatile. This volatility could be attributed to low investor confidence as a result of insecurity and political instability during the periods when FDI inflows were quite low, as shown by the minimum value of 0.04 per cent. On the other hand, high FDI inflows could be attributed to an improved investment environment resulting from the implementation of various macroeconomic reforms. Population density in the country ranged between 20 and 83 people per square kilometre of land, with a mean of 47 people per square kilometre. The sample population density mean was low compared to the country's projected population density of 73.9 people per square kilometre in 2014 (Republic of Kenya, 2014). The exchange rate was used to reveal the impact of external shocks on employment elasticity. On average, the exchange rate was 43.19 Kshs/US$ with maximum and minimum values of 101.05 Kshs/US$ and 7.02 Kshs/US$, respectively. The exchange rate over the study period was therefore volatile. This implies that the country experienced low, mild, and high exchange rates at different points in the study period. The rate of inflation for the period under study ranged from 1.55 per cent to 45.98 per cent with an average of 12.43 per cent and a standard deviation of 12.15. Overall, therefore, Kenya experienced mild, rapid and galloping inflation rates. The average rate of inflation was more than a single digit, which was higher than the rate envisaged in the EAC Monetary Union Protocol. According to this protocol, the headline inflation rate should be about 8 per cent under the macroeconomic convergence criteria (EAC, 2013). Kenya's output level was captured by real GDP. Over the period 1970-2016, real GDP averaged Kshs. 1,638,972 million. Real GDP had a maximum of Kshs. 4,300,302 million and a minimum of Kshs. 527,290 million. The growth in real GDP over the study period could be attributed to the various development plans and strategies implemented by the government. Unit Root Test Results Unit root tests for all the variables were conducted so as to establish their order of integration. The test results for all the variables are reported in Table 4.2. The test statistic for the CMR unit root test is the minimum t-statistic. The estimation results of the CMR unit root test shown in Table 4.2 indicate that the variables inflation rate, FDI, wage rate, and employment elasticity were statistically significant at the 5 per cent level. This is because the minimum t-values for these variables were smaller than the critical value of -5.490 at the 5 per cent significance level. Thus, according to the CMR unit root test, the null hypothesis of the presence of a unit root with structural break(s) for the variables GDP growth rate, inflation rate, FDI, wage rate and employment elasticity was rejected and the alternative hypothesis that the series are stationary was not rejected. This implies that these variables were stationary at levels, suggesting that they are integrated of order zero, I(0). According to Table 4.2, the test statistics for the variables population density, exchange rate, labour force participation rate and trade openness were not statistically significant at the 5 per cent significance level. This means that these variables were not stationary at levels.
Thus, according to the CMR unit root test, these variables had at least one unit root and required differencing to become stationary. The series were, however, stationary at first difference and therefore integrated of order one, I(1), as shown in Table 4.3. The structural breakpoints for key macroeconomic variables in the series coincided with key economic developments in the country. Empirical Results The unit root test results in Table 4.2 indicated that the variables used in the study had mixed orders of integration, that is, I(0) and I(1), suggesting that the ARDL was the appropriate model for estimation. These variables included employment elasticity, average wage, exchange rate, inflation, labour force participation rate, population density, trade openness and FDI. The ARDL model was preferred due to its ability to estimate the long- and short-run parameters of the model simultaneously, thereby avoiding the problems posed by non-stationary time series data. Pesaran and Shin (1999) showed that cointegrating systems can be estimated as ARDL models, with the advantage that the variables in the cointegrating relationship can be either I(0) or I(1), without needing to prespecify which are I(0) or I(1). Pesaran and Shin (1999) also note that, unlike other methods of estimating cointegrating relationships, the ARDL representation does not require symmetry of lag lengths; each variable can have a different number of lag terms. Before estimation of the ARDL, a bounds test was conducted to determine whether the independent variables had a long-run relationship with the dependent variable. The bounds tests were estimated using the approach proposed by Pesaran and Shin (1999) for testing the long run relationship among the variables expressed in equation 24. The F-statistic tests for the joint significance of the lagged variables. If the F-statistic falls below the lower critical value (lower bound), the null hypothesis of no long-run relationship is accepted irrespective of the orders of integration, but if the F-statistic falls above the upper critical value (upper bound), the null hypothesis of no long-run relationship is rejected (Pesaran and Shin, 1999). However, if the F-statistic falls between the lower and upper critical values, any inference would be inconclusive and knowledge of the order of integration of the variables would be needed before conclusive inferences are made. The bounds test results are shown in Table 4.4. Source: Derived from collected data The estimation results in Table 4.4 show an F-statistic value of 10.12, which is more than the upper bound value of 4.43 at the one (1) per cent significance level, thus rejecting the null hypothesis and concluding that there exists a long run relationship among the model variables. The results, therefore, justify the use of the ARDL and the ECM version of the ARDL to derive the long run and short run relationships of the variables. Before estimating the short and long run relationships, diagnostic tests were conducted to establish the statistical appropriateness of the ARDL. Equation 24 was employed to establish the optimal lag length and the goodness of fit. The optimal lags adopted by the Akaike Information Criterion (AIC) automatic lag selection were (1, 2, 2, 0, 2, 0, 0, 2). The R-squared value was 95 per cent. The estimated model had an F-statistic of 11.349 with a corresponding p-value of 0.0023.
Since this p-value (0.0023) was less than 0.05 at the 5 per cent significance level, the null hypothesis that the explanatory variables were jointly equal to zero was rejected. The LM serial correlation test and the Breusch-Pagan-Godfrey test were used to test for the presence of serial correlation and heteroscedasticity, respectively. The estimated results gave an observed R-squared (chi-square) value of 1.0542 with a corresponding p-value of 0.28. The null hypothesis of no serial correlation was thus not rejected at the five per cent level. The p-value (0.4332) for the observed R-squared in the Breusch-Pagan-Godfrey test also led to the acceptance of the null hypothesis of no heteroscedasticity at the 5 per cent level. The objective of the study was therefore realized by estimating the Auto Regressive Distributed Lag (ARDL) model given in equation 24 for the long run and an Error Correction Model (ECM) version of the ARDL for the short run sources of employment elasticities. The short run estimation output is given in Table 4.5. Short Run Sources of Employment Elasticities in Kenya The short run estimation was done in two stages. The first stage involved estimating the reduced cointegrating ARDL equation 24. The residuals obtained from the estimation were then lagged once (ECT-1) and used in the second stage to estimate the ECM version of the ARDL model. The short-run results represent the coefficients of the differenced explanatory variables, and they give short-run growth effects. The coefficients describe short-term growth in the dependent variable resulting from the previous period's growth in the independent variables. The results are presented in Table 4.5. Table 4.5 indicates that only the short-run coefficients for the first lag of employment elasticity, average wage, inflation rate, labour force participation rate, the first and second lags of labour force participation rate, population density, and the first and second lags of FDI were statistically significant. This reflects the presence of a short-run relationship between employment elasticity and these variables. Employment elasticity was the dependent variable and denoted the overall employment elasticity at time $t$. The annual employment elasticity was calculated by dividing the percentage change in employment by the corresponding percentage change in GDP during a given period to provide a time variability that is not possible with OLS estimates. The estimation results presented in Table 4.5 show that the coefficient of the first lag of the change in employment elasticity is positive and statistically significant at the one per cent level of significance. The change in employment elasticity in the current period positively impacts the change in employment elasticity after one period. The inflation rate is viewed as a proxy for the level of economic stability in an economy, and it is theoretically expected to have a negative effect on the employment yield of economic growth. The coefficient of inflation was -0.11 and statistically significant at the 1 per cent level. This implies that a one per cent increase in the inflation rate will reduce employment elasticity by 0.11 per cent. The study results concur with Kapsos (2005), Crivelli et al. (2012) and Ghazali (2014), who found the rate of inflation to be negatively related to employment elasticity. Inflation can be expected to have a negative effect on the employment yield of output growth because high rates of inflation mean high production costs in terms of high prices for inputs, raw materials and even labour.
Instability in macroeconomic variables would thus imply limited investment opportunities that would otherwise create employment opportunities. The population density variable had a coefficient of 0.031. The coefficient was positive and statistically significant at the 1 per cent level. Therefore, a one per cent increase in population density will cause a 0.031 increase in employment elasticity. This implies that an increase in the population density growth rate enhanced the employment yield of output growth. The study results concur with Adegboye, Egharevba & Edafe (2017), but contradict Crivelli et al. (2012), who found population density to be negatively correlated with employment elasticity. The implication of the results is that in areas with high population density, the level of unemployment is also very high. This could mean that any change in employment growth would have a bigger impact there than in areas which are sparsely populated. The annual average wage had a negative coefficient in the current period that was statistically significant at the 5 per cent level. The magnitude of the annual average wage coefficient was -0.0012; this indicates that a one per cent increase in the annual average wage will lead to a decrease in employment elasticity of 0.0012 per cent. The implication is that average wages have an inverse relationship with employment elasticity. The results concur with Ghazali (2014) in that higher average annual real wages reduce employment elasticity. However, as the period moves into the future, the coefficient becomes insignificant. A one-unit growth in the average wage affects employment elasticity by 0.004 one period later and 0.001 two periods later. The study findings imply that higher average annual real wages reduce employment-growth elasticity within the current year. This is because higher wages and non-wage benefits increase the cost of production and could constrain growth-induced employment opportunities. The labour force participation rate was included in the model to assess the effects of labour supply on employment elasticity. The coefficient of the labour force participation variable was found to be positive with a magnitude of 0.045 and statistically significant at the 5 per cent level in the current period. The coefficients of the first and second lags of the labour force participation variable were also positive and statistically significant at the 5 and 1 per cent levels, respectively. The magnitude of the effect increases as the variable moves into the future, as indicated by the higher value of the coefficient compared to the previous value. This implies that an expanding supply of labour leads to more employment-intensive growth. This concurs with the classical theoretical prediction that a higher labour supply will lead to lower average wages and ultimately to an increase in the demand for labour input (Ambrosi, 1986). The results are consistent with Chan (2001), who found a positive and significant relationship between the labour force participation rate and employment elasticity. The estimation results, however, contradict Crivelli (2012), who found the labour force participation rate to be negatively correlated with employment-output elasticities for advanced countries. A probable explanation for the relationship between the labour force participation rate and employment elasticity could be the existence of a dualistic economy in Kenya, where there is a big difference between formal sector and informal sector wages due to the existence of labour market imperfections.
The rate of rural-to-urban migration has tended to exceed the absorptive capacity of the modern sector, leading to growth in the informal sector. This means that employment growth would be responsive to any slight change in the growth of the labour force. Another probable explanation could be that population growth stimulates technological progress and makes possible the realization of economies of scale, which provide incentives for the adoption of more efficient techniques and institutional arrangements (Kawagoe, Hayami and Ruttan, 1985). This would thus imply that population growth promotes development by increasing the economy's productive capacity, which in turn raises employment elasticities. The coefficient of the foreign direct investment variable was found to be negative with a magnitude of -0.718 and was statistically insignificant at all three levels in the current period. The magnitude of the first lag of FDI was 1.179 and statistically significant at the 5 per cent level. The magnitude of the second lag was 0.896 and statistically significant at the 10 per cent level. This implied that a unit increase in foreign direct investment increases employment elasticity by 0.1179 in the first year and 0.896 in the second year, respectively. The increase in FDI in the current period affects employment elasticity after one period and also two periods later. The study results concur with Adegboye, Egharevba, & Edafe (2017), who found that lagged values of FDI had a positive impact on employment elasticity in Sub-Saharan Africa. A probable explanation could be the bureaucratic procedures involved in the release of funds from abroad. Even if the funds were released and invested immediately, the production in the affected sectors of the economy would only be realized with a lag, since it takes time to establish the investments. Foreign direct investment also generates employment through forward and backward linkages with domestic firms. Table 4.5 reveals that the coefficient of the trade openness variable, though positive with a magnitude of 0.0009, was not statistically significant at any of the three levels of significance. The results concur with Bruno, Falzoni, & Helg (2003), who did not find any statistically significant relationship between trade openness and labour demand elasticity. The results, however, contradict Crivelli (2012), who found that employment elasticities tend to be higher in more advanced and closed economies. Despite the insignificant relationship between trade openness and employment elasticity, the positive relationship between the two variables is consistent with trade theory. Wood (1999) asserts that trade openness can lead to an increase in labour demand in labour-abundant countries due to comparative advantage. This, in turn, is expected to increase labour-demand elasticities as labour comes under pressure due to stiffer competition in the goods and labour markets. Just like inflation, the exchange rate was also used in the empirical analysis as an indicator for assessing macroeconomic stability. The coefficient of the exchange rate was found to be negative and statistically insignificant at all levels in the current period. The coefficients of the first and second lags of the exchange rate were, however, positive but also statistically insignificant at the three levels of significance. The study results contradict Ghazali (2014), who found a negative and highly statistically significant relationship between the exchange rate and employment elasticities in Tunisia.
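Before turning to the long run estimates, note that long run coefficients such as those reported in Table 4.6 are conventionally recovered from the estimated ARDL levels equation by dividing the cumulated distributed-lag coefficients by one minus the cumulated autoregressive coefficients; the expression below states this standard relation in generic notation and is offered as a reading aid rather than a reproduction of the paper's own formula.

```latex
\theta_{j} \;=\; \frac{\sum_{i=0}^{q_{j}} \gamma_{j,i}}{\,1 - \sum_{i=1}^{p} \beta_{i}\,},
```

where the $\beta_i$ are the coefficients on the lags of employment elasticity, the $\gamma_{j,i}$ are the coefficients on the lags of regressor $j$, and $\theta_j$ is the implied long run effect of regressor $j$.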
The coefficient of the error correction term, or the adjustment factor, had a magnitude of -1.921 and was statistically significant at the 5 per cent level. The fact that the error correction term was negative provides evidence of the existence of a long-run association among the variables. The coefficient of the error term showed the proportion of the current changes in employment elasticity that were explained by the disequilibrium error in the previous period. Long Run Sources of Employment Elasticities in Kenya The long run coefficients of the determinants of employment elasticities showed how employment elasticities reacted to a permanent change in the independent variables. The long run results are given in Table 4.6. Table 4.6 indicates that only the long run coefficients for the exchange rate, FDI, and population density were statistically significant. This reveals the presence of a long run relationship between employment elasticity and these variables. The coefficient of the exchange rate variable was found to be negative with a magnitude of 0.143 and statistically significant at the 5 per cent level. Therefore, a one per cent increase in the exchange rate will cause a 0.143 decrease in employment elasticity in the long run. This implies that an increase in the exchange rate (depreciation of the domestic currency) deterred the employment yield of economic growth. A probable explanation is that depreciation of the domestic currency contracts the growth of real output due to the high dependence on imports of inputs and capital goods. Alexandre, Bacao, Cerejeira & Portela (2010) find similar results: a negative relationship between the exchange rate and employment elasticity for 23 OECD countries. The results are also consistent with Mouelhi and Ghazali (2014), who found that the nominal exchange rate was negative and highly statistically significant for employment elasticity across all specifications in Tunisia. The FDI variable had a statistically significant coefficient of 0.061 at the one per cent level of significance. Therefore, a one per cent increase in FDI led to a 0.061 per cent increase in employment elasticity. This implies that an increase in FDI enhanced employment elasticity in the long run. Foreign direct investment brings investable financial resources, provides new technologies and improves the efficiency of existing technologies, and therefore acts as a stimulant for employment growth. The results contradict Akinkugbe (2015), who found that the surge in FDI inflows has not manifested in significant formal sector job creation or a reduction in unemployment levels in the long run. The coefficient of the population density variable was found to positively influence employment elasticity. The coefficient was also statistically significant at the 5 per cent level with a magnitude of 0.053. Therefore, a one unit increase in population density increases employment elasticity by 0.053 per cent. This means that an increase in population density also led to higher employment elasticity in the long run. The estimation results contradict Crivelli et al. (2012), who found population density to be negatively correlated with employment-output elasticities for 167 countries. Conclusions and Policy Implications This study concludes that the first lag of employment elasticity, average wage, inflation rate, labour force participation rate, the first and second lags of labour force participation rate, population density, and the first and second lags of FDI were the short run drivers of employment elasticity.
Empirical findings also indicate that the exchange rate, FDI and population density are the long run drivers of employment elasticity in Kenya. The study recommends that policy measures to control inflation should be tightened. This could be realized by devising strategies to increase the tax base and improve tax compliance in the country. The study also recommends that efforts to attract more foreign direct investment should be undertaken. The government, through relevant agencies, should increase the ease of doing business in the country. This could be achieved by enhancing infrastructural development, which is a key driver of FDI. Another measure to enhance FDI in the country could be the enhancement of foreign direct investment incentives. These measures can be directed by the National Treasury. A stable exchange rate should be maintained. This could be achieved by growing and diversifying exports. In addition, tourism, which is a major source of foreign currency, can be promoted by investing in product diversification in the tourism sector. Lastly, the government should harmonize the salary scale framework to regulate wages in the country. This could be realized through salary adjustments based on a periodic and systematic evaluation of wage parameters in the public sector, taking cognizance of the prevailing economic dynamics.
8,842.4
2019-01-01T00:00:00.000
[ "Economics" ]
Dissecting Jets and Missing Energy Searches Using $n$-body Extended Simplified Models Simplified Models are a useful way to characterize new physics scenarios for the LHC. Particle decays are often represented using non-renormalizable operators that involve the minimal number of fields required by symmetries. Generalizing to a wider class of decay operators allows one to model a variety of final states. This approach, which we dub the $n$-body extension of Simplified Models, provides a unifying treatment of the signal phase space resulting from a variety of signals. In this paper, we present the first application of this framework in the context of multijet plus missing energy searches. The main result of this work is a global performance study with the goal of identifying which set of observables yields the best discriminating power against the largest Standard Model backgrounds for a wide range of signal jet multiplicities. Our analysis compares combinations of one, two and three variables, placing emphasis on the enhanced sensitivity gain resulting from non-trivial correlations. Utilizing boosted decision trees, we compare and classify the performance of missing energy, energy scale and energy structure observables. We demonstrate that including an observable from each of these three classes is required to achieve optimal performance. This work additionally serves to establish the utility of $n$-body extended Simplified Models as a diagnostic for unpacking the relative merits of different search strategies, thereby motivating their application to new physics signatures beyond jets and missing energy. I. Introduction Hadron colliders provide some of the most important experimental inputs in high energy physics. At the microscopic level the colliding particles are quarks and gluons, implying that the production cross section is highest for states that either carry color or have a large interaction strength with quarks. In many beyond the Standard Model scenarios these new physics states then decay back to colored Standard Model particles, along with some dark sector objects that escape detection. The resulting experimental signature is multiple high $p_T$ jets and missing transverse energy, $\slashed{H}_T$. Searches for new physics that are characterized by this final state have very high priority at the Large Hadron Collider (LHC) [1][2][3][4]. Hence, many ideas for distinguishing signal from background have been proposed [5][6][7][8]. The framework introduced in this article has been developed in order to quantitatively compare and contrast these different approaches. It is particularly interesting to understand how the observables respond as a function of the final state parton multiplicity, which can vary between new physics models. The canonical jets + $\slashed{H}_T$ searches at the LHC are currently framed in terms of Simplified Models [9], the majority of which have been extracted from the Minimal Supersymmetric Standard Model (MSSM) [10]. In particular, these searches have been optimized for signals that are motivated by supersymmetric (SUSY) models, involving both gluinos $\tilde{g}$ and squarks $\tilde{q}$ (fermionic color octets and scalar color triplets, respectively) which decay to a stable neutral particle, the neutralino $\chi$, some number of light flavor $q$, bottom $b$, and top $t$ quarks, along with the option of additional weak gauge bosons $W^{\pm}$ and $Z^0$. The simplest and best studied decay modes are $\tilde{g} \to q \bar{q} \chi$ and $\tilde{q} \to q \chi$ [11,12].
In typical R-parity conserving models, these particles are produced in pairs and the parton level final state involves some number of colored objects and H T ; representative examples are collected in Table I (tildes denote the superpartners).

Table I. Representative production modes, decay channels, and the resulting final states.
  Production   Decay Channel                   Final State
  q̃ q̃          q̃ → q χ                         2 partons + H T
  q̃ g̃          g̃ → q q̄ χ ; q̃ → q χ             3 partons + H T
  g̃ g̃          g̃ → q q̄ χ                       4 partons + H T
  q̃ g̃          g̃ → q q̄ Z0 χ ; q̃ → q χ          5 partons + H T
  t̃ t̃          t̃ → t χ                         6 partons + H T
  q̃ g̃          g̃ → t t̄ χ ; q̃ → q χ             7 partons + H T
  g̃ g̃          g̃ → q q̄ Z0 χ                    8 partons + H T

While this suite of signal topologies covers a wide range of possible final states (not all of which have associated public results from the LHC collaborations), the relative optimizations are complicated by the fact that the different production modes do not yield the same cross section, and the presence of intermediate on-shell states can lead to additional features in the signal distributions. It is therefore difficult to contrast the variety of approaches for digging new physics out of jets + H T . In order to minimize the differences between the ways of generating these various final states, we are introducing a novel variation of the Simplified Models paradigm which we will refer to as the "n-body extension." As outlined in detail in the next section, we will be performing our analysis as a function of the final state parton multiplicity, which we achieve by varying the number of partons that result from the direct decays of "gluinos." This allows us to compare observables in the same regions of phase space, without needing to correct for the relative effects inherent to different Simplified Models, and yields a concrete and transparent assessment of the performance of a wide variety of variables and their combinations. In order to achieve a fair comparison between the observables considered below, we use boosted decision trees (BDTs) to optimize the different search cuts applied, thereby achieving maximum signal-to-background discrimination. While BDTs are by now a widely used technique experimentally, they are not as familiar to theorists - we provide a brief technical introduction to them in Appendix A. We are advocating for the use of multivariate tools here, not as an in-practice analysis strategy, but instead as a guiding principle to evaluate the relative importance of different multivariable approaches. In particular, BDTs permit the straightforward analysis of both correlated and uncorrelated variables, which in turn allows for the identification of powerful combinations. In this study, we focus on the observables that have been used by ATLAS and CMS in their multijet plus missing energy analyses. Since our multivariate approach is not meant to be taken as a new search strategy, we neglect the possible signal and background uncertainties. Therefore, our results represent how the different variables would perform in an ideal optimistic case. We compare the discrimination power of different sets of observables to a nearly optimal benchmark analysis, in which we combine all considered variables using a single BDT. This 'aggregate' result is of course unrealistic and should be interpreted as an approximate upper limit on the possible performance. Studying single variables alone leads to a classification scheme into three classes - missing energy-, energy scale-, or energy structure-type - along with a few hybrids which exhibit characteristics of more than one of the relevant behaviors. In general, we find that most of the information about the final states studied can be captured using multivariable analyses including at least one of each type. 
For most of the parameter space, combinations of standard variables such as the number of jets, the sum of jet mass (or H T ), and H T lead to near optimal performance. For the simple topologies studied here, more sophisticated variables are usually strongly correlated with one of these well-known variables. This provides justification for the canonical approaches already in place, and helps guide modifications that can be used when designing future searches. The rest of this paper is organized as follows. Section II provides a detailed definition of n-body extended Simplified Models, along with some comments on their theoretical consistency. Section III details our approach, the variables that we consider, and the signal and background simulations. Finally, Section IV shows the results of our analysis for both compressed and non-compressed gluino-neutralino topologies. We conclude in Section V. A number of appendices are given which provide some additional details and justify some of the approximations taken in the main text. II. The n-body Extended Gluino-Neutralino Simplified Model Simplified Models are a convenient way to organize signals relevant for new physics searches at the LHC. The philosophy is to identify models involving the minimal number of new particles and couplings in order to populate regions of signature space. 1 Altering the masses of the relevant states leads to kinematic differences which motivate multiple signal regions that can be designed in order to provide sensitivity across parameter space. This approach has taken hold at both ATLAS and CMS, and most of the new physics results are now cast in terms of Simplified Models. Given a Simplified Model, there are many ways one can extend it. For example, one can add additional states and couplings which could lead to new production modes, new branching ratios, and/or new kinematic features. One of the key ideas in this paper is a new augmentation of the Simplified Model theory space, which we will refer to as the "n-body" extension. The starting point for n-body extended Simplified Models is a set of states and a Lagrangian. Take the now canonical "Gluino-Neutralino" Simplified Model, which will be the example used throughout this work. The full beyond the Standard Model new particle content is a color octet Majorana fermion g (the gluino) and a singlet Majorana fermion χ (the neutralino). The Lagrangian is given by where L decay is the Lagrangian that specifies the decays of the gluino, and D µ is the appropriate covariant derivative. 2 We will assume R-parity is conserved in this study. This parity along with gauge invariance implies that the gluino must decay via a non-renormalizable operator to some Standard Model states and a neutralino. For example, the gluino could decay via an off-shell squark, see Fig. 1. This yields the standard g → q q χ decay channel, and the decay Lagrangian is given by the four-fermion operator where 1/Λ is the suppression scale for this operator, y is a dimensionless coupling and the superscript refers to the number of final state colored partons that will result from the decay. It is straightforward to complete this operator in the ultraviolet (UV) by introducing a squark, as illustrated in Fig. 
1 The simple extension proposed in this paper, which will be of use below when exploring the variables utilized for jets + H T searches, is to introduce a larger set of possibilities that allow us to vary the number n of partons in the final state: where G µν is the SU (3) field strength with associated gauge coupling g s , Λ is a dimensionful scale, and y is a dimensionless coupling, see Appendix B for a detailed discussion of these operators. Note that in practice we will ignore the angular correlations predicted by the Lorentz structure of these specific operators by assuming a flat matrix element and allowing relativistic phase space to predict the final state distribution in the parent rest frame. Note that here we have chosen the n-body extension at each higher point which adds the maximum number of quarks. It is entirely plausible that all the partons could be gluons, and the operator would just include more powers of G µν . However, such operators would either require the existence of lower order operators -these would provide the dominant gluino decay modes -or involve loop interactions. In the latter case, the additional loop suppression factors would severely restrict the parameter space available for prompt decays. This issue is discussed in more detail in Appendix B. It is also possible that the n-body extension could involve Standard Model states besides quarks and gluons. We will leave exploring the implications of these additional directions in model space to future work. For the sake of specificity, it is worth pausing to define some crucial notation. A "parton" is either a quark or gluon that is produced at the matrix element level before the parton shower has been applied to a given event. We will distinguish between partons that result from the direct decays of the gluino, the "decay partons," from those that come from higher order initial state radiation (ISR) of gluons and/or quarks off the hard collision process as implemented using the MLM merging procedure [15], the "matrix element partons:" n ≡ number of total decay partons. m ≡ maximum number of matrix element partons. This terminology will be used extensively in the discussion below. Finally, we conclude this section by contrasting n-body extended Simplified Models with On-Shell Effective Theories (OSETs) [16]. OSETs are a class of non-Lagrangian based parameterizations for new physics scenarios. They are characterized by the masses of the onshell particles that are involved in the process of interest, along with additional parameters that determine the size and shape of the production cross section. This framework was introduced as a suggested shift in the approach to interpreting new physics searches at the then-upcoming LHC program. The goal was to move away from relying on "full" models as the jumping off point for designing searches, the classic example being scans in the M 0 versus M 1/2 plane of the Constrained MSSM. Instead, OSETs were invented to provide a signature based signal injection. These ideas ultimately lead to the invention of Simplified Models and their subsequent adoption by both CMS and ATLAS. Additionally, the MARMOSET Monte Carlo program was developed as an approach to solve the "LHC Inverse" problem in a systematic way [16]. In a similar spirit, many related tools have been released that facilitate the straightforward reinterpretation of LHC results [17][18][19]. 
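The phase-space-only treatment of the decays described above can be made concrete with a small numerical sketch. The snippet below is a minimal implementation of the standard RAMBO algorithm, which distributes a parent's rest-frame energy over n massless bodies according to flat relativistic phase space; it is only an illustration of the flat-matrix-element assumption (the study itself performs the decays in Pythia), and it treats all decay products, including the neutralino, as effectively massless.

    import numpy as np

    def flat_phase_space(sqrt_s, n, rng=None):
        # RAMBO-style generation of n massless four-momenta (E, px, py, pz)
        # distributed according to flat phase space in the parent rest frame.
        if rng is None:
            rng = np.random.default_rng()
        u = rng.random((n, 4))
        cos_t = 2.0 * u[:, 0] - 1.0
        sin_t = np.sqrt(1.0 - cos_t**2)
        phi = 2.0 * np.pi * u[:, 1]
        e = -np.log(u[:, 2] * u[:, 3])            # energies drawn from E*exp(-E)
        q = np.column_stack([e,
                             e * sin_t * np.cos(phi),
                             e * sin_t * np.sin(phi),
                             e * cos_t])
        # Boost and rescale so the momenta sum to (sqrt_s, 0, 0, 0)
        Q = q.sum(axis=0)
        M = np.sqrt(Q[0]**2 - Q[1:] @ Q[1:])
        b = -Q[1:] / M                            # boost vector
        gamma = Q[0] / M
        a = 1.0 / (1.0 + gamma)
        x = sqrt_s / M                            # overall conformal rescaling
        bq = q[:, 1:] @ b
        p = np.empty_like(q)
        p[:, 0] = x * (gamma * q[:, 0] + bq)
        p[:, 1:] = x * (q[:, 1:] + np.outer(q[:, 0], b) + np.outer(a * bq, b))
        return p

    # Example: a 1 TeV parent decaying to n = 4 partons plus an (approximately
    # massless) neutralino, i.e. five massless bodies in total.
    momenta = flat_phase_space(1000.0, 5)
    print(momenta.sum(axis=0))   # ~ [1000, 0, 0, 0]: momentum conservation check

For massless final states the RAMBO weights are constant, so the generated configurations sample flat phase space directly without reweighting.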
One of the key points of this approach was to recognize that the leading kinematic aspects of the production cross section are simply due to the behavior of the parton distribution functions in most cases of threshold production. This implies that one could reproduce the majority of observable features across a wide variety of models using only a few parameters. Since the parton luminosities fall with a very high power of the momentum fraction, a Taylor expansion of the cross section is essentially truncated at leading order -a few exceptions were identified, e.g. p-wave production, and the required modifications for their inclusion into the OSET paradigm was provided [16]. For our purposes, their work demonstrates that even though we have chosen to use "gluinos" as our parent particles, the implications of our results are expected to hold in a much wider variety of theories that are dominated by non-zero s-wave production. This is part of the justification for the comparison between the distributions provided for our 2-parton results with stop pair production (that subsequently decay yielding t t χ χ) given in Appendix C. In our view, one advantage of using Simplified Models is that they are well-defined Lagrangian based theories, which implies that one can analyze the feasibility of UV completions (see Appendix B), and an investigation of higher order perturbative corrections can be performed straightforwardly. Additionally, modern simulation tools can be utilized which allows us to include the impact of merging matrix elements involving different numbers of ISR partons, which are important for the modeling of signals from compressed spectra as discussed below in Sec. IV B. The OSET framework obviously must additionally involve some mechanism that decays the parent particles. The approach taken in [16] was to ignore spin correlations and decay all unstable new physics states using a flat matrix element integrated against the standard phase space distribution. As already discussed, this is the approach taken in this work as well. While this assumption does not provide a good approximation for all possible models of interest, it is broadly applicable to a wide variety of scenarios and is an obvious choice for the kind of study we are performing here. Certainly exceptions can be found, e.g. if there is an on-shell intermediate state and the invariant mass of the resulting decay products is used as a discriminator, see [16] for additional examples along with simple modifications to move beyond the assumption of phase space only decays. Furthermore, there are cases where the angular dependence of the decay products becomes important, see e.g. [20,21] for a discussion. However, much of this information is washed out once one boosts the decay products to the lab frame and integrates over the possible orientations of the intermediate particles. This accounts for the lack of sensitivity to these effects and further justifies the assumptions made in this work. Clearly OSETs and our n-body extended Simplified Models are complementary approaches, and the work of [16] gives many of the detailed arguments for the broad applicability of the choices made here. III. Dissection Toolkit The n-body extended Gluino-Neutralino Simplified Models can be used to systematically explore a range of jets + H T final states. 
Search strategies for gluinos at CMS and ATLAS predominantly employ inclusive cuts in a phase space of some number of observables, which vary from analysis to analysis [1,3,[22][23][24][25][26][27][28][29][30]. Our goal is to understand the performance sensitivity of these observables for various injected signals, including the impact of correlations that are taken advantage of through different variable combinations. Developing quantitative intuition for which observables can be best used to distinguish between a given signal and background will lead to a better understanding of how to maximize coverage for a given space of signals. There is an additional practical matter due to the fact that systematic errors are present for every aspect of an analysis, be it from theoretical uncertainties, e.g. due to working at finite order in perturbation theory, or from experimental issues that arise from a variety of sources as one goes from hits in a detector to reconstructed physics objects, e.g. jet energy scale uncertainties, isolation requirements, and so forth. When working with real data, any observable that one wants to include in an analysis must be validated, requiring suitable control regions along with the ability to make reliable computations in order to extrapolate into the signal region. This implies there will always be a trade off between including more information, and maintaining a reasonable level of systematic control. Hence, in practice the number of observables is limited. It is outside the scope of this study to quantify such systematic effects. However, when performing a multi-variable analysis it will always be desirable to optimize the sensitivity of those variables. Given that there are several analyses which use different sets of observables, if/when a putative signal is discovered, understanding the correlations between given observables will be necessary to properly characterize the new physics. In this article we will generally consider combinations of up to three variables; the reason for this choice will become apparent as we go through the results. To develop intuition for a broad set of variables, we will characterize sensitivity using curves which detail the signal efficiency versus background rejection power for a given cut in observable space, often referred to as receiver operating characteristic (ROC) curves. We first start by comparing single observables against each other; however, it is important to also consider multiple observables together since in practice a multi-dimensional space must be probed to achieve maximum sensitivity. Our quantitative analysis relies on BDTs, which are preferred for their convenience and flexibility. It is expected that while the absolute performance of the BDTs is better than that of the coarsely binned multidimensional templates often used in experimental searches. The ROC curves are then an interesting metric for comparison of variables. The effect of binning is not explored in greater detail as it is luminosity and background estimation method dependent and our goal is to derive results independent of these effects. The details of the BDT implementation are given in Appendix A. A. Observables Many searches have been designed to access new physics in jets + H T . We choose to study the following suite of observables, which incorporates the predominant variables used on the 8 TeV LHC data. • H T is defined as the scalar sum of the p T of all jets in the event whose p T > 30 GeV and |η| < 2.5. 
This variable is particularly powerful for topologies like ours, where a new heavy particle decays to multiple objects. • H T is defined as the negative vector sum of the transverse momenta of all jets in the event whose p T > 30 GeV and |η| < 5.0. Then H T = H T is the scalar missing energy. For signal events, non-zero H T dominantly results from neutralinos in the final state. • N j is defined as the number of jets in the event whose p T > 30 GeV and |η| < 2.5. Jets are clustered using the anti-k T algorithm with a cone size R = 0.4. [31,32], where H T and H T are defined above. This variable discriminates against events where the H T comes from jet mismeasurement. • M J ≡ m J [33], where m J is the mass of a given anti-k T (R = 1.0) jet with m J > 50 GeV. This variable is predicted to be particularly useful for large multiplicity signals where multiple objects with moderate p T can be clustered into hard fat jets [33][34][35][36][37][38]. • m eff ≡ p j T + H T is the effective mass variable often utilized in jets + H T searches performed by ATLAS, see [28] for a recent example. • Razor [7], is a two-dimensional variable m R , R 2 , used to identify final states with two visible objects j 1 and j 2 and H T . In a so-called "Razor frame", obtained by where H T and H T are defined above. A di-pseudojet system is constructed using the Razor definitions above. 3 Then ∆H T is the scalar p T difference between the two jets. The number of b jets, n b , is also often used which greatly changes the background composition and suppresses light flavor jet backgrounds. As our study focuses primarily on kinematic properties of the event, we do not consider n b directly; however, we do consider observable performance for different backgrounds separately which effectively separates performance in n b bins, e.g. searches with multiple b-tags will be dominated by t t backgrounds. B. Simulation We simulate both signal and background events using MadGraph5 v1.5.14 [39] using CTEQ6L1 parton distribution functions [40], interfaced with Pythia v6.4 [41] for parton showering and hadronization. Basic detector simulation is performed in Delphes [42], with the default implementation of the CMS detector. For the matched signal and background samples, MadGraph and Pythia are interfaced for MLM matching [43] with the k T shower scheme [44]. All our samples are generated at a center-of-mass energy √ s = 13 TeV. Signal Simulation We generate the following samples: for n = 2, . . . 8 denotes the total number of "decay partons" while m gives the number of "matrix element partons." When n is even, we require each gluino to decay to n 2 partons + χ with 100% branching ratio. For n odd, we require each gluino to decay to n±1 2 partons + H T with 50% branching ratio for each decay mode and keep only the events where the gluinos decay asymmetrically. Note that while this procedure is artificial in the case of identical particle production, it is possible to have odd numbers of final state partons in associated production, i.e, if the production channel involves multiple states, see Table I above for examples. The quark-gluon content of the final states is determined using the operators shown in Eqs. (4) - (7). In practice, we decay the gluinos in Pythia by simply specifying the final states, allowing the program to choose an appropriate color connection and decay the parent using phase space integrated against a flat decay matrix element. 
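Stepping back to the observable definitions of Sec. III A, the sketch below evaluates a few of them from a list of jet momenta. It assumes the jets have already been clustered (anti-kT, R = 0.4) and that leptons and unclustered energy can be ignored, so the missing H T is built from the jets themselves as in the text; the thresholds follow the quoted cuts and the toy event is made up.

    import numpy as np

    def observables(jets):
        # jets: array of shape (N, 3) with columns (pT, eta, phi).
        pt, eta, phi = jets[:, 0], jets[:, 1], jets[:, 2]
        central = (pt > 30.0) & (np.abs(eta) < 2.5)   # for HT and Nj
        wide    = (pt > 30.0) & (np.abs(eta) < 5.0)   # for the missing-HT sum

        HT = pt[central].sum()                        # scalar sum of jet pT
        Nj = int(central.sum())                       # jet multiplicity

        # Negative vector sum of jet pT, and its magnitude (missing HT)
        px = -(pt[wide] * np.cos(phi[wide])).sum()
        py = -(pt[wide] * np.sin(phi[wide])).sum()
        MHT = np.hypot(px, py)

        m_eff = HT + MHT                              # effective mass (jets only)

        # min Delta-phi between the missing HT and the two hardest central jets
        # (centrality of the two leading jets is an assumption of this sketch)
        lead = np.argsort(pt[central])[::-1][:2]
        dphi = np.abs(np.angle(np.exp(1j * (phi[central][lead] - np.arctan2(py, px)))))
        min_dphi = dphi.min() if len(dphi) else np.nan

        return {"HT": HT, "MHT": MHT, "Nj": Nj, "m_eff": m_eff, "min_dphi": min_dphi}

    # Toy event: columns are (pT [GeV], eta, phi)
    event = np.array([[350.0,  0.3,  0.1],
                      [220.0, -1.2,  2.9],
                      [ 45.0,  3.1, -1.0]])
    print(observables(event))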
In the first part of our study, we set the neutralino mass to be m χ = 1 GeV in order 3 We also studied an alternative algorithm for defining the di-pseudojet system in which the H T of the di-jet system was minimized. It was found that the different algorithms had little effect on performance. for the gluino decay not to be constrained by phase space; this will be referred to as the "uncompressed" signal phase space. We do not require any jets from ISR (m = 0) and consider the following gluino masses The second part of our study considers "compressed" spectra where the LSP mass is 5% less than the gluino mass. Due to the limited phase space for the decays, the gluino decay products are now expected to be soft. We therefore consider topologies where the gluinos are boosted against one or two ISR jets by generating matched events with m = 0, 1, 2 with the matching scale set to 100 GeV. Specifically, we consider the following gluino and neutralino masses: mg, m χ = 500, 475 , 1000, 950 , 1500, 1425 GeV (15) We will only show results for mg, m χ = 1000, 950 TeV below, but will comment on the behavior of other masses. Background Simulation We generated matched samples of Z(→ ν ν) + jets, t t + jets, and QCD multijet events. We accommodate up to four partons in the final state, which determines the maximum number of jets for a given process. In order to efficiently populate the tails of the background distributions, we split each background into bins of the variable S * T , which is defined as the scalar sum of the p T of all generator level particles, i.e., at parton level. Following the procedure detailed in [45], we modify MadGraph to implement a cut on S * T at generator level and require each bin to satisfy where htmax i , htmin i are the edges of the i-th bin. The final overflow bin has to satisfy N overflow /10 > σ overflow × L, where N overflow is the total number of events to be generated in the overflow bin, σ overflow is the cross section in this bin, and L is the luminosity. Table II shows the various background categories, the number of events generated per S * T bin and how many S * T bins were generated. These events are then showered and passed through Delphes independently, before being weighted by the cross section in each bin and combined. IV. Results of Gluino-Neutralino Study Motivated primarily by trigger thresholds from LHC Run 1 analyses, we preselect our signal and background samples requiring Preselection: • H T > 500 GeV, • two or more jets above 30 GeV, • no isolated leptons above 20 GeV, • min(∆φ) > 0.4, where min(∆φ) is the minimum azimuthal angle between the missing energy and the two highest p T jets. This is a standard cut which is used to reduce the fake missing energy from mismeasured jets. 4 The motivation for these preselection thresholds are driven primarily by the current trigger thresholds of CMS and ATLAS analyses which are restricted to have H T in the trigger. Note this is not always the case and alternate triggers are sometimes utilized to reduce the QCD rate, although we leave this for future studies to explore. We then evaluate the performance of the variables described in Sec. III A by computing the background rejection rate for different values of the signal acceptance. In the following, we consider the signal topologies described in Sec. II as well as the Z(→ ν ν) + jets, t t, and QCD multijet backgrounds. 
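The preselection quoted above can be expressed as a simple event filter. In this sketch the 500 GeV requirement is assumed to act on the scalar H T, and the lepton veto is represented by a count computed upstream; both are simplifying assumptions for illustration only.

    def passes_preselection(ht, n_jets_30, n_iso_leptons, min_dphi):
        # The four requirements quoted in the text: HT > 500 GeV (assumed to be
        # the scalar sum), at least two jets above 30 GeV, no isolated leptons
        # above 20 GeV, and min Delta-phi(MET, two leading jets) > 0.4.
        return (ht > 500.0
                and n_jets_30 >= 2
                and n_iso_leptons == 0
                and min_dphi > 0.4)

    # Example: HT = 640 GeV, 3 jets above 30 GeV, no isolated leptons,
    # min Delta-phi = 0.9 -> the event passes.
    print(passes_preselection(640.0, 3, 0, 0.9))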
Although not shown explicitly, we find similar behavior for the W + jets background. We validate these plots and also perform additional comparisons that have a tighter phase space requirement against public material from CMS [25]. We generally find good agreement in the relative normalization of the background contributions and overall agreement within a factor of 2 using the Delphes simulation. The largest discrepancy with ATLAS and CMS public results is the QCD H T distribution resulting from fake missing energy, which has a noticeably longer tail in our Delphes samples. This can be understood as resulting from jet resolution mismodeling in our simulation. This implies that the results regarding missing energy variables in QCD could be quantitatively different than what is shown below, although the qualitative conclusions are robust.

[Figure caption: Additional 1D distributions of selected observables. The legend is given in Fig. 2. Backgrounds are normalized to 10 fb−1. Signals are normalized to the same yields as the sum of all backgrounds for shape comparison. The solid (dashed) signal histograms are for uncompressed (compressed) spectra.]

The gluino mass for all of the signal models is 1 TeV. Since the relative importance of each of these backgrounds will depend heavily on the selection cuts, we will study the performance of our variables against each background separately. This helps accommodate the application of our results to searches where cuts on variables other than the ones we consider lead to an alternate background composition. For example, in a search that requires two or more b-tagged jets, the dominant background would be t t while for a compressed spectrum search without b-tags, the dominant background would be QCD. Note that we will not provide results for a "total" background for this reason. Furthermore, since we have not included K-factors the relative cross sections are not robust, and additionally the mismatch found when validating the H T QCD tails could be exacerbated by such a naive combination. Finally, notice that Fig. 2 does not include the α T distribution. We find that this variable is highly correlated with min(∆φ) and H T , and thus loses much of its discriminatory power after applying the pre-selection cuts. This effect can be inferred from Fig. 4, where after applying the H T and min(∆φ) cuts, the QCD distribution in α T looks much more signal-like. By considering all three variables, the contribution to the signal region of QCD can be greatly reduced as designed. Since our interest here is to compare observables in the same region of phase space, we have chosen to use a pre-selection which roughly conforms to the ATLAS and CMS H T /H T triggers. We leave the study of α T outside of the preselection region to future work. In the following, we consider the background rejection power for a given set of observables as a function of the number of matrix element partons in the final state and for each of the different background processes. We fix the signal efficiency with respect to the pre-selection cuts to ε sig = 10%, and compute the background rejection power 1/ε bkg , again where the background efficiency is computed with respect to the pre-selection cuts. A signal efficiency of 10% is typical of most searches - we checked additional signal efficiency points up to ε sig = 25% and find that the results do not change qualitatively. For completeness, Fig. 
5 shows the absolute selection efficiency after preselection so that it is possible to infer the implications of our results in terms of limits on signal production cross section × branching ratio. To get the absolute signal efficiency of the final cuts, multiply the pre-selection efficiency by 10%. A. Massless Neutralino Limit This section applies our methodology to the study of n-body signatures with a massless neutralino. Backgrounds are considered separately to isolate the essential kinematic features of each signal and background. We begin with a study of the individual variables of interest, followed by judiciously chosen combinations of two and three observables. One variable at a time We first evaluate the performance of each observable as a function of the total number of decay partons in the final state. The results for a 1.5 TeV gluino and for the different backgrounds are shown in Fig. 6. Note that we consider both Razor variables M R and R 2 separately. We also define an aggregate analysis which feeds all the variables given above to the BDT. We regard this as the "optimal" background rejection rate that is possible, and show it in each plot as a reference. Based on the behavior of these variables versus number of partons, we can already learn many valuable lessons and define the following variable categories: 5 • H T -type: The missing energy variables H T , M CMS • E struc-type : The energy structure variable N j : is sensitive to the structure of the visible energy, e.g. how many partons are generated in the decay; • Hybrid-type : The hybrid variables Razor R 2 , tics from multiple types depending on the number of decay partons in the event. 6 The performance of some of the variables obey trends that are independent of the background. H T -type variables perform best at low number of partons since, for low multiplicity final states, each individual final state jet or particle is expected to have a large p T . As the number of visible objects and visible energy increases, the total energy has to be split between more and more final states. H T -type variables therefore become progressively supplanted by E scale-type and E struc-type variables. The relative importance of these different types becomes apparent when we consider performance across different backgrounds. H T -type variables are more important in t t and QCD events, while in Z(→ ν ν) + jets they are less effective due to the large recoil of invisible energy already present in the background. Therefore, in Z(→ ν ν) + jets, E scale-type and E struc-type variables perform better. E struc-type variables tend to be more powerful against Z(→ ν ν) + jets compared to t t. This can be understood since in t t events, energy structure is a natural consequence of the multiple scales in the problem. It is interesting to note that for low n partons, the H T -type variables perform very well, and in the particular case of QCD and t t are near optimal. We also provide results as a function of the gluino mass as shown in Fig. 7 for the different backgrounds under consideration. We find that for a low number of partons, the performance of H T -type variables improves quickly with mass, though this trend is mitigated as n partons increases. E struc-type variables do not have a strong dependence on the gluino mass as they are more sensitive to the structure of the energy. Focusing on the individual variables we can infer the following lessons: • For uncompressed spectra, M CMS T 2 tends to perform better than H T . 
This is unsurprising since the variable is optimized for a massless neutralino. • H T and M T 2 perform very similarly while M R tends to perform worse in terms of background rejection; this is expected as M R is highly complementary to R 2 and will be examined further below. Meanwhile, m eff tends to do the same or slightly better than H T at low n partons where performance gains are largest for H T . However, m eff is not superior to H T itself. • R 2 tends to perform like a H T -type variable at low n partons and like a E scaletype variable a high n partons although the performance overall is worse; again this is expected since the real power of Razor comes from the exploiting R 2 and M R together. • H T / √ H T tends to perform like a H T -type variable at low n partons and like a E scaletype variable at high n partons; though it never performs better than both types. 6 The hybrid variables can be categorized as Razor • M J tends to perform like a E scale-type and E struc-type variable becoming more E struc-type -like at high n partons. It typically does better than both H T and N j except at the highest n partons for lower gluino masses. Studying each variable separately shows that unsurprisingly no single variable maximizes the performance throughout all of the phase space considered here. Although the performance of observables like H T and M CMS T 2 has a weak dependence on n, variables such as H T or N j exhibit much better discriminating power, but only for some categories of signals. Inclusive searches aimed at a large variety of signatures therefore consider a minimal set of discriminating variables that cover complementary regions of the parameter space. Building such a set requires understanding the correlations between the different variables, which cannot be captured by the previous study. In the following sections, we study the discriminating power of algorithms that take into account more than one variable. Correlations between variables At this point, we have explored the performance of the variables individually, and used the results to classify them into three basic categories (plus hybrid). Yet, no single variable was a clear winner for the full phase space explored for all values of n. Therefore, it is interesting to explore how complicated of an approach is required to asymptote to the "ideal" aggregate result for all signals and backgrounds. This section is devoted to exploring combinations of variables that will lead to an improved discrimination power. In order to organizing the huge number of possibilities, we start by taking one variable from each category to generate two-or three-variable combinations. By analyzing the pairwise discriminating power, we can understand which variables are least correlated thereby leading to the best complementarity when designing a search. The results of these explorations are given in Fig. 8 aggregate, denoting a BDT combination of all variables, is shown in black long-dashed. 7 7 We note that for some comparisons, particularly against QCD, there are slight inconsistencies, e.g. adding a variable slightly decreases discrimination power. This is the result of non-zero statistical errors (which can be exacerbated by the presence of rare high weight background events) along with systematic errors associated with slight over/under training of the BDT. We do not attempt to quantify these effects, but caution the reader to keep these issues in mind so as to not over-interpret these results. 
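The pairwise and triplet comparisons described above can be reproduced in outline with a short script: train a gradient-boosted classifier on a chosen subset of variables and quote the background rejection at a fixed 10% signal efficiency. Everything below (feature content, sample sizes, BDT settings) is an illustrative placeholder, not the TMVA configuration actually used for the figures.

    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier

    def rejection_at_fixed_eff(s_sig, s_bkg, eps_sig=0.10):
        # Cut on the score so that a fraction eps_sig of signal is kept,
        # then report the background rejection 1/eps_bkg at that threshold.
        cut = np.quantile(s_sig, 1.0 - eps_sig)
        eps_bkg = max(np.mean(s_bkg >= cut), 1e-9)
        return 1.0 / eps_bkg

    def rejection_for_combination(X_sig, X_bkg, columns, eps_sig=0.10):
        # Train a BDT on the chosen variable combination and evaluate the
        # working point.  (No train/test split here; a real study would
        # evaluate on statistically independent events.)
        X = np.vstack([X_sig[:, columns], X_bkg[:, columns]])
        y = np.concatenate([np.ones(len(X_sig)), np.zeros(len(X_bkg))])
        bdt = GradientBoostingClassifier(n_estimators=200, learning_rate=0.1, max_depth=3)
        bdt.fit(X, y)
        s_sig = bdt.predict_proba(X_sig[:, columns])[:, 1]
        s_bkg = bdt.predict_proba(X_bkg[:, columns])[:, 1]
        return rejection_at_fixed_eff(s_sig, s_bkg, eps_sig)

    # Toy three-variable dataset standing in for one missing-energy-type,
    # one energy-scale-type, and one energy-structure-type observable.
    rng = np.random.default_rng(1)
    X_sig = rng.normal([2.5, 2.0, 1.0], 1.0, size=(5_000, 3))
    X_bkg = rng.normal([0.0, 0.0, 0.0], 1.0, size=(50_000, 3))
    for combo in [[0], [0, 1], [0, 1, 2]]:
        print(combo, rejection_for_combination(X_sig, X_bkg, combo))

Looping such a helper over all singles, pairs, and triplets is the kind of scan summarized in the figures; the correlations between variables show up as combinations whose rejection gains little over the best single member.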
The qualitative behavior shown in these plots is robust. From the figures, it is clear that no two-variable combination is optimal across all regions of phase space and all backgrounds. However, the doublet M CMS T 2 , M J does out perform all other two-variable combinations essentially across the full range -this can be understood by realizing that M J is a hybrid of E scale-type and E struc-type variables. This combination is only deficient at the very highest n partons and perhaps in the Z(→ ν ν) + jets case where the interplay between H T and N j is not fully captured in M J . For the other two-variable combinations we find the following general tends: • H T -/ E struc-type : deficient at high n partons where N j is more important; • H T -/ E scale-type: deficient at medium to high n partons where visible energy becomes more important; • E scale-/E struc-type : deficient at low n partons where missing energy variables are most dominant. Moving on to the three-variable combinations, we see that adding N j to M CMS T 2 , M J provides nearly optimal performance for the full range of n-partons shown here. There are additional three-variable combinations which are near optimal over the full uncompressed phase space in n partons and for different backgrounds. The near optimal three-variable combinations all involve one of each type of variable. Because M CMS T 2 is the best performing of the H T -type variables for the uncompressed spectra (see Fig. 6), we find that triplets which include it do the best: M CMS T 2 , H T or M J , N j . Another conclusion one can draw from Fig. 8 is a compelling confirmation that M J outperforms H T , especially for high multiplicity final states. Ignoring correlations with other observables, these two variables are approximately proportional to each other [33]: where κ √ α s when the jet mass is the result of the QCD parton shower, as compared [37,38], and is also amenable to analytic study, e.g. [48][49][50][51]. Ideally, M J would replace H T for the wide class of jets + H T searches whose phase space is covered by n-body Simplified Models, and can be done so while maintaining the core strategy implemented by many existing approaches for these beyond the Standard Model searches. However, it is worth acknowledging that once one goes to multi-dimensional variables the improvement in replacing H T by M J is not as dramatic and would require a careful job of finding a new class of control regions, along with a reassessment of systematic errors. Next, we turn our focus to the m eff results. Recall in Fig. 7, we found that m eff turned out to be slightly better performing than H T , particularly at low n partons. However, when we add additional variables to these E scale-type variables in order to find the optimal triplet, the final performance is extremely similar. For example, the triplets m eff , H T , N j and H T , H T , N j have essentially the same discriminating power. To end this section, we will comment on the Razor variables. The combination M R , R 2 does significantly better than each variable individually. This implies that they are highly complementary, as expected by the design of the variable and now seen explicitly. This is also illustrated in Fig. 9 where B. Degenerate Gluino-Neutralino Limit We now consider compressed topologies with a 5% splitting between the gluino and neutralino masses. 
With such a small splitting, if the gluinos are not boosted against additional objects, the final state jets and H T are expected to be soft and difficult to distinguish from the SM background. We therefore include topologies where the pairproduced gluinos are boosted against at least one ISR jet by producing matched signal samples. We consider the background rejection rate as a function of the number of final state partons n, for a signal efficiency of 10%, a 1 TeV gluino and a 950 GeV LSP. Figure 10 shows is quite powerful against t t and R 2 distinguishes itself from the other E scale-type and E struc-type variables. It is also important to note that the H T -type variables are not very close to optimal in the case of the QCD and Z(→ ν ν) + jets backgrounds. This means that additional information from the visible energy and its structure can be utilized to additionally distinguish signal from background. We have also checked the gluino mass dependence of the single variables and the performance is relatively independent of these variations, in contrast with the uncompressed spectra case. Specifically, the discrimination power changes by a factor of ∼ 2 when changing the mass from 500-1500 GeV while in the uncompressed spectra case, the discrimination power changes by more than 2 orders of magnitude (see Fig. 7 above). This is somewhat expected as the visible energy resulting from the decay of the compressed system does not drastically change as a function of mass. As we begin to look at multi-variable combinations as shown in Fig. 11, we come to the same general conclusions as in the uncompressed spectra case. No two-variable combination achieves near optimal performance across the range n partons and various backgrounds. However, it is possible to essentially maximize sensitivity using three-variable combinations. Among these, H T , H T , N j and m eff , H T , N j stand out, and in particular do better than combinations involving M CMS T 2 . It is interesting that in the case of QCD, m eff yields additional discrimination power over H T within the top-performing triplets. Combinations including Razor also achieve a good performance. However, the best performing triplet involving Razor depends slightly on the background under consideration: The main lessons for the compressed study is similar to those learned in the uncompressed case. Optimal performance for all three backgrounds considered is achieved by combining variables from each of the three classes. For some backgrounds, two-variable combinations are nearly optimal but if one is interested in near-ideal discrimination across all backgrounds three variables are required. V. Conclusions The n-body extension of Simplified Models provides a class of signal injections with which one can model a wide range of possible final-state phase-space within a unified phenomenological framework. This has many applications in collider searches for beyond the Standard Model physics, and is particularly well suited for seeking out final-state topologies which require additional optimization beyond the searches that are currently being performed. The focus in this work was to utilize this tool in order to assess the discriminating power for many of the ever-growing number of variables used for searches in the classic jets + missing energy final state. This was an ideal forum to explore the utility of the n-body extended approach since the only observables in these searches stem from a single class of object: jets of visible hadronic energy. 
A large number of variables were considered: H T , M T 2 , and M CMS T 2 . As was expected and shown in Fig. 6, no variable can do the job alone. A winning strategy derives from placing cuts on maximally uncorrelated observables in order to generate signal regions where background events are very rare. Boosted Decision Trees were used in order to access the strength of correlations between observables. Once a choice of variables was made, the BDT was trained to distinguish signal from background using only these inputs. Then the resulting machine takes events and generates an output which yields the optimal background rejection efficiency as a function of a target signal efficiency. The result is a quantitative assessment of performance. Analyzing single variables alone led us to a classification scheme based on their trends as a function of the number of final state partons, assuming an uncompressed spectrum: "missing energy"-type, "energy scale"-type, and "energy-structure"-type. This scheme grouped the variables based on the region of phase space where each provides the best discrimination against the backgrounds. Not all of the observed behaviors were intuitive. For example, behaves like missing energy, although it is slightly better optimized for uncompressed signals as compared to compressed ones. For uncompressed signals, M J and m eff tend to be slightly better performing than H T . However, the general lesson is that an ideal search strategy requires at least one variable from each class. Differences in both the power and behavior of combinations as a function of n partons or mg are reduced when analyzing triplets that include variables of each type. As expected, the combination of classic variable types H T , H T , N j performs very well in most cases. However in some instances it is not fully optimal, and other triplets should be considered when performing searches in the future, e.g. M CMS T 2 , H T or M J , N j . While this study reveals many properties of the search variables and their correlations for a large range of jets + H T signals, we have not attempted to realistically include sources of error. In particular, the use of BDTs obscures the exact nature of the "signal region," thereby making it difficult to assess the quality of agreement between the Monte Carlo predictions and the measured backgrounds in a control region. This is a standard issue with using machine learning tools, and we are not advocating to replace the traditional cut-and-count approach. Instead, our point of view is that one can use this technology to quantitatively evaluate the performance of variables with a particular emphasis on their correlations. There are a variety of future directions which will be interesting to explore. The n-body framework could be extended to other searches for supersymmetry, such as those for R-parity violation, as well as more directed searches involving heavy flavor tags and also electroweak production. This framework is also clearly useful for non-SUSY new physics searches as well. Along with extending the framework in theory space, it would be interesting to realistically quantify the effects of systematic and other errors on our conclusions. With the LHC on the cusp of delivering up to 100 fb −1 of data in the next few years, understanding the optimal ways to search for new physics has never been more important. 
We have provided a new framework for organizing and studying the collider phenomenology of a variety of beyond the Standard Model scenarios, which can be utilized to more deeply understand the breadth of results from the LHC, whatever they may be. And once a hint of new physics begins to emerge, n-body extended Simplified Models will be very useful as a signal injection. This will allow us to quantitatively unravel the properties of whatever A. Review of Boosted Decision Trees Decision trees are a method of separating a parameter space into signal and background (or noise-like) regions. They have been used in particle physics for over a decade (early examples include particle identification at MiniBOONE [52,53] and the search for single top-quark production at the Tevatron [54]). Decision trees operate through a recursive partitioning of parameter space into signal and background-like regions which are determined through the use of training datasets. In that sense they represent the optimal cut-and-count discrimination between signal and background that can be performed. Informally, a single decision tree can be imagined as a cut-flow through a series of nodes. Each node corresponds to a cut in a particular variable, with events being partitioned into different bins as they progress further down the tree. The end (or terminal) nodes of the tree correspond to signal or background-like regions, depending on whether they contain a majority of signal or background events from the training data-set used. However, single trees can be unstable, in the sense that the cuts chosen at each node are sensitive to the details of the training dataset. A more powerful a approach derives from the use of a multiplicity of trees -effectively a vote by committee. Such a collection is a called a boosted decision tree. This also has the advantage that events which were misclassified by the original single tree can now be up weighted, leading to greater attention from succeeding trees. Essentially, it this series of trees collectively act to minimize a predetermined loss function. An example is a least-squares loss function for fitting an unknown multidimensional function, although often other functions can be chosen which lead to greater stability against outlying points. We now describe this more formally, before providing an illustrative example using the razor variable. A good introduction to machine learning techniques is [55] (which we have adapted the following from), while the original ideas of boosted decision trees can be found in [56,57]. Formalism A tree is defined as a series of nodes, each of which corresponds to a cut on an observable calculated from the input data -in our case these correspond to the event observables such as M J , H T , M T 2 and so forth. More formally, a tree partitions the parameter space into a set of disjoint rectangular regions R j , which are represented by the final (terminal) nodes of the tree. Each region is associated with a constant γ j , which indicates whether that node or region of parameter space is considered signal-like or background-like. For classification into two classes these are usually taken to be {−1, 1}. Then any event which falls into the region R j is assigned value γ j . If we define the indicator function I x ∈ R j to be 1 if x ∈ R j and 0 otherwise, we can represent the a decision tree T by where the parameters of the tree are Θ = R j , γ j , and the number of regions J is a metaparameter which is usually 4 ≤ J ≤ 8. 
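In the standard notation of [55], which this appendix follows, such a tree can be written as (the conventional form is reproduced here; the equation label is inferred)

    T(x; Θ) = Σ_{j=1}^{J} γ_j I(x ∈ R_j),    Θ = {R_j, γ_j}_{j=1}^{J},    (A1)

so that an event with observables x falling in region R_j is assigned the value γ_j.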
The numerical optimization problem which must then be solved is to find the regions R_j and constants γ_j. These parameters are set by requiring that they minimize a loss function L over a large set of training data whose properties are already known (that is to say, whether a given event is signal or background), so that the chosen parameters are

    Θ̂ = argmin_Θ Σ_{j=1}^{J} Σ_{x_i ∈ R_j} L(y_i, γ_j).    (A2)

This is a difficult problem in numerical optimization, and so approximate solutions are usually used to find the regions R_j and γ_j, which we will describe below. The output of a single decision tree can be quite sensitive to minor changes in the training sample. Furthermore, since the decisions at each node are only locally optimal, there is no guarantee that the globally optimal decision tree is obtained in this way. It is also possible to overfit the training data using complex trees. To avoid these issues we use boosted decision trees in our study. Boosting starts with a group of individual 'weak learners' such as single trees whose output may be only slightly better than random guessing. Then by weighting their outputs, a much better 'strong learner' can be constructed, whose output is very well correlated with the true classification of any unknown event. In our work we use the gradient boosting algorithm as implemented in the TMVA class within ROOT [58]. A boosted tree model can thus be represented as a sum of trees,

    f_M(x) = Σ_{m=1}^{M} T(x; Θ_m).    (A3)

We do not attempt to solve for all trees simultaneously, but rather do so in a forward stage-wise manner: i.e., we solve for one tree at a time, where each tree is fit to the residual of the training data and the sum of all previous trees. In other words, the parameters R_jm and γ_jm of the m-th tree are determined by minimizing

    Θ̂_m = argmin_{Θ_m} Σ_i L(y_i, f_{m−1}(x_i) + T(x_i; Θ_m)),    (A4)

where the sum is over the elements in the training dataset and f_{m−1} is the (m − 1)-th boosted model. For example, if we wished to fit a sum of trees to a function using a squared-error loss function, the m-th tree would be the tree that best predicts the residuals y_i − f_{m−1}(x_i), and the constant γ_jm would be given by the mean of the residuals in each region R_jm. Such trees can be constructed relatively quickly. For more general differentiable loss functions, such simple fast algorithms do not exist for solving Eq. (A4). The forward stage-wise boosting strategy outlined above is very computationally greedy: it seeks to maximally minimize Eq. (A4) at each step of the process. To do this in practice, we calculate the coefficients of the negative of the gradient of the loss function L at each stage m:

    r_im = −[∂L(y_i, f(x_i)) / ∂f(x_i)]_{f = f_{m−1}}.    (A5)

The approximate solution to Eq. (A4) is then given by fitting a tree to the negative gradients using a squared-error loss function,

    Θ̃_m = argmin_Θ Σ_i (r_im − T(x_i; Θ))².    (A6)

As noted above, these trees can be constructed quickly. On the other hand, the regions R̃_jm which result from the above process are not necessarily the same as those R_jm which solve the exact problem in Eq. (A4). Of course, the forward stage-wise boosting procedure we employ is itself also an approximation to the exact result (if it could be constructed). For our classification problem of discriminating between signal from the n-body extended Simplified Models and SM backgrounds we use the binomial log-likelihood loss, which is also implemented in the TMVA package in ROOT. This is known to be more robust than the common exponential loss function L(y, f(x)) = exp(−y f(x)), since misclassified points and outliers effectively are assigned a linear penalty, as opposed to an exponential one. 
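The stage-wise procedure of Eqs. (A4)-(A6) can be illustrated with a bare-bones implementation that fits each new small tree to the negative gradient of the binomial log-likelihood loss. This is a pedagogical sketch only: the per-leaf re-optimization of the γ_jm is omitted, and it is not the TMVA gradient-boosting code used in the study.

    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    def gradient_boost_logistic(X, y, n_trees=100, learning_rate=0.1, max_leaf_nodes=8):
        # y must be 0 (background) or 1 (signal).  For the logistic loss the
        # negative gradient at the current model is simply y - p, with p the
        # current probability estimate.
        f = np.zeros(len(y))                       # running model f_{m-1}(x_i)
        trees = []
        for _ in range(n_trees):
            p = 1.0 / (1.0 + np.exp(-f))
            residual = y - p                       # negative gradient, cf. Eq. (A5)
            tree = DecisionTreeRegressor(max_leaf_nodes=max_leaf_nodes)
            tree.fit(X, residual)                  # squared-error fit, cf. Eq. (A6)
            f += learning_rate * tree.predict(X)   # shrunken stage-wise update
            trees.append(tree)
        return trees, f

    # Placeholder two-feature dataset (e.g. standing in for two observables)
    rng = np.random.default_rng(3)
    X = np.vstack([rng.normal(1.0, 1.0, (2_000, 2)), rng.normal(0.0, 1.0, (20_000, 2))])
    y = np.concatenate([np.ones(2_000), np.zeros(20_000)])
    trees, scores = gradient_boost_logistic(X, y)
    print("mean score (signal)    :", scores[y == 1].mean())
    print("mean score (background):", scores[y == 0].mean())

The learning_rate plays the role of the regularizing shrinkage mentioned in the razor example below, and the limit on leaf nodes corresponds to the metaparameter J of the formalism above.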
Example: Razor Variables We now show a simple application of the BDTs to differentiate gluino events from QCD backgrounds using the razor variables M R and R 2 only. As shown in Fig. 9 and mentioned in Sec. IV A, combining these two variables together provides a much higher discriminating power than using either of them individually. Additionally, simple rectangular cuts in the two-dimensional parameter space would overlook some subtle features of the signal and background distributions such as the increase in the typical value of R 2 at low M R . For this simple study, we use the scikit-learn module from Python 2.7.9. We consider a training sample composed of an event mix of 10 5 unweighted QCD and g → j + χ events. We use gradient boosting to generate different numbers of decision trees with maximal depth J = 4. In order to limit overfitting, we multiply T (x i ; Θ m ) in Eq. (A4) by a "learning rate" coefficient α = 0.1. Introducing a small learning rate is a standard regularization procedure when using gradient boosting. Here, we use the Huber loss function, that is a combination of a squared-error and a least absolute deviation loss function. When applied to a given (M R , R 2 ) doublet, the final classifier will output a number 0 ≤ r ≤ 1 that is close to 0 if the event is background-like and close to 1 if the event is signal-like. This is a concrete demonstration of the issues involved in optimizing the BDT parameters discussed previously. B. Consistency of n-body Decay Operators It is reasonable to wonder if the n-body operators in Eqs. (4)- (7) can be realized in any complete new physics scenario. There are two issues that will be discussed: are there any regions of parameter space where a given L i would model the dominant decay mode of the gluino, and what would be the corresponding lifetime of the gluino. We begin with the L 1 and L 2 operators which can yield the dominant decay modes in models that include one extra heavy scalar particle, as shown in Fig. 13. A well-known example of such model is splitSUSY [59][60][61], where the squarks are decoupled. The relative importance of L 1 ∼ 1/Λ compared to L 2 ∼ 1/Λ 2 depends on the mass scale of the heavy particle. For splitSUSY with a 1 TeV gluino, L 1 starts dominating over L 2 only for squark masses heavier than 10 9 GeV [62]. Note that in this region of parameter space, the gluino tends to be sufficiently long lived to warrant a different class of search strategy. 8 This example already demonstrates the point of this appendix -the L i operator should not be interpreted as a physical gluino decay mode, but rather as a topology representative of (i + 1)-body decays in general. For example, our results for this operator can be used as a qualitative proxy with which to analyze signals of the form q → q χ as shown in Appendix C. Similar reasoning can be applied to the L 3 , L 4 and L 5 operators. However, it is important to note that concrete UV completions might involve taking couplings that do not obey the SUSY relations, even though we continue to call them gluons and squarks for convenience. These operators have a relatively high mass dimension, and most of the time will either give subdominant contributions to the gluino decays or will be associated with long-lived gluinos. Operators involving gluons, in particular, will either be loop-suppressed, as shown in Fig. 14, or will imply the existence of lower order operators without the presence of the gluon. 
For example, if the gluon in the L 3 operator is generated at tree-level, the gluino decays will be dominated by L 2 . Although operators other than L 2 will not describe dominant SUSY processes, they can be used to study either exotic processes with similar topologies or long decay chains involving intermediate on-shell states. As shown in Fig. 14, for example, the L 4 operator can have UV completions analogous to gluino cascade decays through one or more heavy squarks and/or electroweakinos. The operators studied here therefore do not apply exclusively to direct gluino decays but also to the general classes of models targeted by multijet plus / E T searches such as [3,24,25,27,28,[63][64][65][66]. Our study can also be applied to cascade decays involving top quarks or gauge bosons, such as the ones investigated in [1,67]. A more in-depth discussion of the effects of intermediate on-shell states is shown in Appendix C. FIG. 14. Top: Illustrative Feynman diagrams corresponding to a UV completion for the four-body decay of the gluino at loop level (the L 3 operator). Gluino decay modes involving gluons in the final state can be generated at tree-level, however the corresponding operators imply the existence of lower-order effective operators (here L 2 ). Center and Bottom: Illustrative Feynman diagrams corresponding to the UV completions for the five-body decay of the gluino. The process for a gluino decay to four quarks and a neutralino (L 4 operator, center) can be used to study gluino cascade decays through new particles. The operator involving two gluons at loop-level (bottom) will be highly suppressed. Note that, although we use the SUSY notation for the names of the new particles, the relevant couplings are not necessarily described in standard SUSY models. Lifetime Estimates High dimensional operators such as L 4 or L 5 or operators involving gluons in the final state -and therefore loop interactions -will be associated with highly suppressed gluino decay rates. If no other decay channel is open, the gluinos could therefore be long-lived, and the search strategies studied here will be less relevant. Note however that most of our results for these operators will remain applicable to signals involving long decay chains through on-shell particles. We provide a naive estimate of the lifetime associated with each L i . The effective operator corresponding tog → k (q + q) +χ (where k counts the number of quark-anti-quark pairs), takes the form with a corresponding decay width . (B2) Again, we emphasize that the y couplings might not be related to Standard Model couplings by SUSY. For operators that involve an additional gluon in the decay, the width is of the form where we have assumed that the gluino interacts with the gluon at one-loop; as discussed above the tree-level operators with gluons will never dominate over the lower point decay without this extra state. Note that the lifetime of a particle in its rest frame is given by Even with order one couplings and for heavy gluinos, L 5 will lead to macroscopic decay lengths due to the large number of final states along with the loop suppression associated with the gluon emission. Again, due to the gluon loop factor, L 3 will be more suppressed than L 4 and will be associated with gluino decay lengths longer than a millimeter for couplings less than one or gluinos lighter than a PeV. 
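As a quick numerical aid for the decay-length statements in this subsection, a partial width can be converted to a proper decay length as sketched below; the width itself is left as an input, since the operator-by-operator width formulas are not reproduced here.

    HBAR_C_GEV_M = 1.973269804e-16   # hbar * c in GeV * m

    def proper_decay_length_m(width_gev):
        # Proper decay length c*tau in meters for a total width Gamma in GeV;
        # decays with c*tau above roughly a millimeter start to look displaced.
        return HBAR_C_GEV_M / width_gev

    # Illustration: Gamma = 1e-12 GeV gives c*tau of about 0.2 mm (still
    # roughly prompt), while Gamma = 1e-16 GeV gives about 2 m (clearly displaced).
    for gamma in (1e-12, 1e-16):
        print(gamma, "GeV ->", proper_decay_length_m(gamma), "m")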
For gluino masses of a TeV or larger, L 4 will lead to tracks shorter than a millimeter for couplings of order one, but the lifetime of the gluino will strongly increase for smaller couplings. Finally, the operators L 1 and L 2 will lead to a short-lived gluino even for weak couplings. C. Mapping Onto n-body Decays In this appendix we quantify the effects of intermediate on-shell states on the kinematic distributions of the variables studied throughout this paper. We first consider a two-body decay of the form where j is either a quark or a gluon and assumed to be massless. We denote the fourmomentum of A as (E, p). There are two configurations in which the momentum of X in the lab frame can be simply estimated: • The Wide (W) scenario, with M A M X : in this case, the energy of A is evenly split between the jet and X. • The Compressed (C) scenario, with M A − M X M A : in this case, the jet p T will be negligible and the momentum of X will be similar to the momentum of A. Consider the decay g → q q χ, through the diagram shown on the right-hand side of Fig. 13. If the squark is on-shell, this decay can be decomposed into two two-body decays, with (A, X) being (g,q) in the first step and (q,χ) in the second. Studying each of these successive two-body decays within both the C and W scenarios described above, we can estimate the values of H T and H T for the CC, CW, WC and WW mass hierarchies. The results are shown in Table III, where we compare the estimates derived in these four scenarios to the typical H T and H T values obtained when the squark is off-shell. As shown in this table, the n-body formalism that we have adopted throughout this paper accurately describes the kinematics when one set of particles in the cascade is compressed. For contrast, when the neutralino is much lighter than the gluino, some discrepancies can appear from the presence of on-shell intermediate particles when comparing these decays to the n = 4 case. However, the typical values of H T and H T for the WC and CW scenariosfor which the highest disagreement is observed -are similar to the values associated with the gluino effective two-body decay operator L 1 when the neutralino is light. For these WC and CW mass hierarchies, the process dominating the kinematics is indeed the gluino two-body decay to a quark and a squark. Figure 15 illustrates this effect using the decay of a heavy stop to a top quark and a neutralino as an example. Although the top quark decays to A similar reasoning applies to more complicated processes such as g → t t χ or g → q q Z χ. For a TeV-scale gluino, the masses of the top quark and the Z can be neglected to good approximation and the two processes can be compared to g → j j χ and to g → j j j χ respectively. The comparisons are shown in Fig. 16 for g → t t χ and in Fig. 17 for g → q q Z χ. In both cases, the H T and H T distributions for the on-shell processes look similar to the ones corresponding to the appropriate choice of n-body operators. This demonstrates that although some of the higher-dimensional operators considered in this study will not lead to prompt gluino decays, most of our results can still be applied to models involving on-shell intermediate states. In particular, as shown in this section, a given cascade decay scenario can be mapped to a given effective decay operator with similar kinematic distributions for the "missing energy" and "energy scale"-type variables. 
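The Wide and Compressed limits used in this appendix follow directly from the rest-frame kinematics of the two-body decay A -> j + X with a massless jet, for which E_j = (M_A^2 - M_X^2)/(2 M_A). The short check below makes the two limits explicit; it is only a kinematic illustration, not part of the analysis chain, and the mass values are arbitrary.

```python
def jet_energy_in_parent_frame(m_A, m_X):
    """Energy of the massless jet in the rest frame of A for the decay A -> j + X."""
    return (m_A**2 - m_X**2) / (2.0 * m_A)

# Wide limit (M_A >> M_X): the available energy is split evenly, E_j -> M_A / 2
print(jet_energy_in_parent_frame(1000.0, 1.0))     # ~ 500 GeV

# Compressed limit (M_A - M_X << M_A): the jet is soft, E_j -> 0
print(jet_energy_in_parent_frame(1000.0, 999.0))   # ~ 1 GeV
```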
This mapping, however, cannot be solely determined by the number of final state objects and will depend on the mass hierarchy between the parent particle and the intermediate states. It is also important to note that, as shown in Figs 15, 16 and 17, "energy structure" variables such as N j will remain by definition highly sensitive to the number of final states and to the quark or gluon nature of the jets, see Appendix D. However, this difference will not impact the conclusions drawn in the main body of the text. H T ( top right) and N j (bottom) distributions forg → 3j +χ (dashed red) andg → q +q + Z +χ (solid blue). We have considered only hadronic Z decays. D. Comparing Quark And Gluon Final States This appendix addresses the differences from having quarks versus gluons as the final state decay partons. Figure 18 shows the H T , H T and N j distributions for gluino threeand five-body decays with either the mix of quarks and gluons used in the n-body operators given in Eqs. (4)- (7) or final states with only gluons. As can be seen in this figure, the only variable that discriminates between quarks and gluons is the number of jets in the event, since gluons are associated with higher rates for hard splittings due to the larger Casimir. Since the angles between these jets due to showering and the initial parton tend to be collinear, the discrepancy between the N j distributions should decrease for larger jet radius. The other kinematic variables that we consider only minimally distinguish between quarks and gluons. E. Comparing Shapes of Various Signals In this appendix we compare kinematics of the n-body Simplified Models to show trends as a function of n-partons and the gluino mass. We show only three variables, one of each type presented in Sec. IV A, H T , H T , and N j . We have checked that the trends for variables within the same classification are qualitatively similar. Figures 19 and 20 compare kinematic distribution for a 1 TeV gluino decaying to different numbers of partons for an uncompressed and compressed mass spectrum respectively. We find that the similarity observed in the signal distributions for the compressed mass spectrum, see in Figure 20, is a generic feature of compressed signals. Figure 21 compares kinematic distributions for various gluino mass hypotheses with the gluino restricted to decay to either one parton and a massless neutralino or four partons and a massless neutralino. H T (left), H T (middle), and N j (right) distributions for various gluino mass hypotheses. The gluinos are always restricted to decay to either one parton and a massless neutralino (solid) or four partons and a massless neutralino (dashed). Three different mass hypotheses are shown: mg=500 GeV (red), 1000 GeV (green), and 1500 GeV (blue).
16,263.8
2016-05-04T00:00:00.000
[ "Physics" ]
Analysis and comparison between rough channel and pipe flows Direct numerical simulations of turbulent channel and pipe flows are presented to highlight the effect of roughness at low Reynolds number (Reτ = 180 − 360). Several surfaces are reproduced with the immersed boundaries method, allowing a one-to-one comparison of the two canonical flows. In general, all rough surfaces produce the same effect on the flow in pipes and channels, with small differences in the roughness function, RMS velocities and spectral energy density of pipes and channels. The only exception is for the rough surfaces made of longitudinal bars. In particular, the triangular bars (riblets) show drag reduction in the channel and drag increase in the pipe. This behaviour is linked to the development of spanwise rollers and wide u-structures near the plane of the crest of the pipe. Introduction According to the classical theory [1], the inner region of wall-bounded turbulent flows is essentially indistinguishable for the three canonical cases, namely boundary layers, pipes and channels. The differences between these three flows are expected to be limited to the outer region, where mean velocities and standard deviations are expected to scale with the friction velocity and the outer region length scale (boundary layer thickness, pipe radius, or the channel half-height). However, experimental and numerical studies show differences between these flows in the inner region. For instance, Patel and Head [2] found that the mean velocity in the pipe flow does not follow the logarithmic profile, in contrast to the channel flow. Closer to the wall (i.e., in the viscous sublayer and in the buffer region), the mean velocities of channel and pipe flows collapse, consistent with the notion that sufficiently close to the wall (i.e., at a sufficiently large Reynolds number), the pipe surface appears to be flat. Other studies have confirmed these differences in the log-layer between channels and pipes, concluding that the von Kármán constant κ might not be universal [3] and that the thickness of the logarithmic region might be different for internal and external flows [4]. In terms of the turbulent stresses, the literature also shows differences between the three canonical flows. One of the first studies comparing pipes and channels with Direct Numerical Simulation is Eggels et al. [5], who reported no significant differences of the velocity RMS but a meaningful difference in the skewness of the wall-normal velocity. They claimed that this difference is due to the impingement or splatting process, explained by Mansour et al. [6]. Monty et al. [7], on the other hand, stated that the wall normal and spanwise turbulence intensities differ in the core region of the flows, due to the outer structures that are larger in the pipe than in the channel. Other scholars have found differences in the turbulent kinetic energy between pipes, channels and boundary layers [8], and in the Reynolds stress budget of pipes and channels [9]. Several works [4,10,11] connect this differences to the fundamental differences in the large scale motions of the outer region of pipes, channels and boundary layers. These differences in the outer region of pipes, channels and boundary layers are probably partly responsible for the disparity of results obtained for k-type and d-type roughness in pipes, channels and boundary layers. 
Based on the data summarized by Jiménez [12], it appears that the turbulent velocity fluctuations between channels and pipes are comparable to each other, while those obtained in turbulent boundary layers seem to scale differently. Borrell [13] argues that it is the entrainment on the turbulent/non-turbulent interface (outer region, large scales) which causes the differences between internal and external flows over rough walls: Kármán's integral equations links the boundary layer growth (i.e., the entrainment) and the friction at the wall (the roughness), resulting in a core region more dominated by entrainment in the case of rough walls. It is important to note that, beside this work, the number of studies analysing the differences between channels, pipes and boundary layers over rough-walls is scarce. To the best of our knowledge, there are no numerical studies comparing one-to-one turbulent flows in channels and pipes with rough walls. It should be noted that, besides these differences between internal and external flows, there is a wide agreement in the community in that the main effect of roughness in pipes, channels and boundary layers is to shift downwards the mean velocity profile over a smooth wall by a constant amount in the log region, the so-called roughness function. In this paper, we will refer to the roughness function commonly found in literature as ΔU + c , and to our definition of it as ΔU + = ΔU + c + U + R where U + R is the slip velocity. The most pressing issue in this regard has been to connect the geometric parameters of the surface to the roughness function. A scaling in this sense has been found recently by Orlandi [14], who derived the relation ΔU + = Bκ −1ṽ+ R , with B and κ being the log law constants for the smooth wall andṽ + R the RMS of the vertical velocity fluctuations at the plane of the crests. The vertical stressṽ + R is then the fundamental quantity to account for the effects of roughness: if it exceeds a threshold value, the transition to turbulence occurs (Orlandi [15]). A chart of the friction coefficient similar to that by Moody [16], can be calculated by replacing the equivalent sand grain roughness proposed by Nikuradse [17] withṽ + R . The objective of the present work is to establish a one-to-one comparison between channels and pipes for different wall-roughness, in order to understand whether the differences between channels and pipes with smooth walls become more pronounced in presence of the wall corrugation. The paper is structured as follows. In Section 2 and 3, the details of the numerics and of the DNS database are briefly exposed. In Section 4 the results are shown and discussed. These primarily focus on the average velocity and turbulent intensities, the stress and the premultiplied spectra. Finally, in Section 5 the conclusions are presented. Physical and numerical model The incompressible Navier Stokes equations written in non dimensional cartesian form are: where Π is the pressure gradient required to maintain a constant mass flow rate, u i is the component of the velocity vector in the i direction, and p is the pressure. The reference values for the non dimensional quantities are either the channel half-height or the pipe radius, both termed h in the hereinafter, and the laminar parabolic Poiseuille velocity at the centreline, u P . An equivalent set can be written in cylindrical coordinates for the pipe flow, given in Chapter 10 of Orlandi [18]. 
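As a side note on the roughness-function scaling quoted above, the relation ΔU+ = Bκ⁻¹ṽ+_R can be evaluated in a couple of lines. The log-law constants B = 5.5 and κ = 0.4 used below are the smooth-wall values adopted later in the paper, while the numerical value of ṽ+_R in the example is purely illustrative and is not taken from the simulations.

```python
# Roughness function from the RMS wall-normal velocity at the plane of the crests,
# following the relation quoted in the text (Orlandi [14]): dU+ = (B / kappa) * v_rms+.
B_SMOOTH = 5.5   # smooth-wall log-law intercept
KAPPA = 0.4      # von Karman constant used in the paper

def roughness_function(v_rms_plus):
    """Delta U+ predicted from the RMS vertical velocity at the plane of the crests."""
    return (B_SMOOTH / KAPPA) * v_rms_plus

# Illustrative input only (not a value from Table 1 of the paper):
print(roughness_function(0.5))
```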
The equations have been discretized in an orthogonal coordinate system, with a central staggered second-order finite difference method. The integration in time has been made with a third-order low storage Runge-Kutta algorithm, coupled with a second-order Crank-Nicolson scheme. To correct the non divergence-free field, the fractional step procedure is used. Further details are found in [18] and are not repeated here. Noslip boundary conditions are imposed in the non-homogeneous direction. For the cylindrical coordinates a particular treatment at the axis has been used. More details on the numerics for the pipe are found in Verzicco and Orlandi [19]. The periodicity in the homogeneous directions through a Fourier expansion allows to solve the elliptical equation for the pressure directly. The interaction between the flow and the roughness is reproduced by the immersed boundary technique, imposing zero velocity inside the body, and applying a correction to the viscous term at the first grid point near the solid surface. This method is described in detail in Orlandi and Leonardi [20]. DNS database Several simulations of pipes and channels with different wall-roughness are performed at Re = u P h/ν = 4900. This Re corresponds to a friction Reynolds number Re τ = u τ h/ν ≈ 200 for the smooth channel, where u τ is defined based on the total stresses at the plane of the crests of the roughness. Note that since the bulk velocity is u b = 2/3u P for the channel and u b = 1/2u P for the pipe, the bulk Reynolds numbers are different for the two flows: Re b = 3266 for the channel and Re b = 2450 for the pipe. In all the simulations (see Table 1) the size of the computational domain is L x = 8h in the streamwise direction. The spanwise size of the computational domain of the channel is L z = 2πh, in order to make the channel flow comparable to the pipe flow. The coordinates y and z will be used for both the channel and the pipe, where for the latter the wall-normal coordinate is y = h − r and the spanwise coordinate is z = θr, where r is the radial coordinate. The database consists of eight surfaces including the smooth case, that will be referred to as SM. The geometry of the rough surfaces is shown in Figure 1 for the channel, where all rough cases have the same roughness height k/h = 0.2. For the channel cases, both the upper and lower walls are rough. Note that the roughness is carved into the smooth wall geometry, so that the distance from the plane of the crests to the centre of the channel/pipe is always h. The square and triangular bars placed transversely to the flow (Figures 1a and 1b respectively) will be referred to as ST and TT. For these cases, the cavity width (ST) and the tip spacing (TT) are equal to the height: s = k. In these cases, the disposition of the bars makes it necessary to have a large number of nodes in the streamwise direction (801), to accurately reproduce the flow inside Figure 1c). A three dimensional geometry made of staggered rows of cubes (SC) is reproduced in Figure 1e. In this case, the number of grid nodes is increased in both directions (801 and 513). Table 1 gives the values of the friction Reynolds numbers for the different runs. Due to the difference in the bulk Reynolds number between the channel and the pipe, the values for SM differ significantly; however, the difference decreases in the rough cases. 
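The time-integration scheme described above (a three-stage low-storage Runge-Kutta for the explicit terms) can be sketched as follows. The coefficients shown are one widely used set for this family of schemes and are an assumption here, since the paper does not list them; the implicit Crank-Nicolson treatment of the viscous terms and the fractional-step pressure correction are omitted for brevity.

```python
import numpy as np

# One common set of coefficients for a low-storage third-order Runge-Kutta scheme
# (assumed here; the paper does not state which coefficients its solver uses).
GAMMA = (8.0 / 15.0, 5.0 / 12.0, 3.0 / 4.0)
ZETA = (0.0, -17.0 / 60.0, -5.0 / 12.0)

def rk3_step(u, rhs, dt):
    """Advance du/dt = rhs(u) by one full time step using three sub-steps.

    Only the explicit part is shown; in the actual solver the viscous terms are
    treated implicitly (Crank-Nicolson) and a fractional step enforces
    incompressibility after each sub-step.
    """
    f_old = np.zeros_like(u)
    for g, z in zip(GAMMA, ZETA):
        f_new = rhs(u)
        u = u + dt * (g * f_new + z * f_old)
        f_old = f_new
    return u

# Usage example on a toy ODE du/dt = -u (exact answer exp(-0.1) ~ 0.9048):
u0 = np.array([1.0])
print(rk3_step(u0, lambda v: -v, 0.1))
```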
The transverse bars (ST, TT) and the three-dimensional roughness (SC) increase the value of the Re τ with respect to the smooth wall, both for the channel and the pipe. However, the behaviour of the longitudinal bars is more variable. The longitudinal square bars (SL and SLL) cause a moderate increase of Re τ with respect to SM, similar to ST. On the other hand, TL shows an increase of Re τ in the pipe and a decrease in the channel. For TLS, there is a small decrease in Re τ in both configurations. We expect therefore that the the riblets will achieve drag reduction. The largest increase in Re τ is observed for TT and SC, consistently with previous results [14]. Results The mean velocity profiles of the 16 simulations are presented in Figure 2, where the solid lines correspond to channels and the lines with circles to pipes. The wall normal coordinate is y + = 0 at the plane of the crests, which corresponds to the position of the wall in SM. The average value of the velocity at this plane is the slip velocity U + R (reported in Table 1), which has been subtracted to the mean velocity profiles plotted in Figure 2a. It can be observed how the velocity profile of SM agrees with the log-law with B = 5.5, while the pipe has a slightly higher value of B = 6, consistent with Wu and Moin [21]. For transverse square bars ST, a small decrease of the average velocity appears, without an appreciable difference between the channel and the pipe. The transverse triangular bars TT and the staggered cubes SC also show the same velocity defect for the channel and the pipe, with the same increase of the roughness function ΔU + . The largest differences in ΔU + between pipe and channel are found among the longitudinal bars cases, TL and SL. The latter also exhibits differences in the slope in the log region. The square bars with wide spacing SLL produce a large velocity defect, but similar for both pipe and channel. It is important to note that the roughness function computed here (ΔU + ) is not the one found commonly in literature: the one evaluated from Figure 2b, that will be referred to as ΔU + c can be rather different. In fact, ΔU + c is the sum of two effects: the slip velocity at the wall U + R , representative of the viscous drag, and the roughness function ΔU + , linked to the form drag. Orlandi [14] gave the relationship between this roughness function and the vertical stress at the plane of the crestsṽ + R (the ∼ denotes an RMS value): where B = 5.5 and κ = 0.4 are the constants in the expression of the log law for SM. Figure 3 demonstrates that this relationship holds for both channels and pipes. To understand the difference in the behaviour of the friction, and the drag increase/decrease effect of the roughness on the flow, it is worth looking at the profiles of the total stress τ = ν∂U/∂y − uv , shown in Figure 4a. In this plot, only the square bars SLL, the triangular bars TL and the staggered cubes SC are reported, since those are the cases with the greater differences between the channel and the pipe. As for the velocity profiles, both SC flows show large differences with respect to SM flows, leading to the conclusion that the wall curvature does not affect the total stress, and in particular the turbulent stress, which is much larger than the viscous stress. The square bars SLL, despite the large ΔU + in Figure 2a, show drag reduction for both flows. In this case, the reduction is due to the large decrease of the viscous stress, due to the large U + R at the plane of the crests. 
The turbulent stress is large, but does not compensate the reduction of νdU/dy. The large ΔU + and the low friction coefficient are not contradictory. In fact, due to the large slip velocity, ΔU + c is small, and the relationship between the roughness function and the friction coefficient (Hama [22]) can be satisfied. The TL flows are the more surprising, showing drag reduction in the channel and drag increase in the pipe. The profiles of the RMS of the streamwise velocity component are reported in Figure 4b, showing no substantial differences between channel and pipe runs for a given roughness. The SLL case exhibits the largestũ + at the wall, consistent with a relatively large value of the slip velocity (see U + R in Table 1). This behaviour is consistent with the recent results of Vanderwel and Ganapathisubramani [23], who investigated the effect of the spanwise spacing of longitudinal square bars on boundary layers. They found that a spacing of s/δ = 0.88 results in the largest secondary flow, very similar to the spanwise spacing of case SLL, s/h = 0.85. They also reported that the profile ofũ is maximum in the vicinity of the roughness, which is confirmed by our results. The vertical and spanwise stresses show similar trends as those forũ + and therefore are not presented here. The RMS profiles suggest that the distribution of velocity fluctuations with y is very similar in pipes and channels with the same roughness. It is however interesting to check if the same can be said about the characteristic wavelengths (sizes) of the structures containing that energy. That information can be obtained from the spectral energy density of the velocity fluctuations, presented in Figures 5-7. Only planes above the plane of the crests of the roughness elements are plotted. Figure 5 shows the one dimensional premultiplied spectrum of the streamwise velocity fluctuation, k x E uu (λ x /h, y/h), normalised with u τ and h. Coloured contours are for the channel flows, and the chain lines are for the pipe. The differences between the smooth channel and pipe (Figure 5a) near the wall (below y/h ≈ 0.5) are mostly due to the difference in Re τ . Both cases show a peak peak located at y/h ≈ 0.07 (y + ≈ 15), at a wavelength λ x /h ≈ 8, that corresponds to the position and length of the near wall streaks. However, the differences in the outer region also (y/h = 0.7 − 1, λ x ≈ h) do not seem to be a Re τ effect, since similar differences have been reported between smooth-walled channels and pipes at higher Reynolds numbers [9,24]. The spectra for the longitudinal bars (triangular in Figure 5b, square in Figure 5d) show the presence of long energetic structures near the plane of the crests, characterized by a broad peak around λ x /h ≈ 2 − 3. This peak seems to be shifted towards longer wavelengths for the pipes. It is interesting to note that, for case SLL (Figure 5d) the spectrum of the pipe flow is less intense than the channel's, despite the agreement of the profiles of the RMS of the streamwise velocity perturbation in Figure 4b. This apparent contradiction between the RMS profiles and the spectra is explained by the energy in the k x = 0 modes, which is not represented in Figure 5: for SLL, the pipe flow has more energy in infinitely large structures than the channel flow, an effect that apparently persists through the whole flow thickness. 
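For reference, the premultiplied streamwise spectra discussed here can be obtained from a periodic velocity signal with a short post-processing routine of the kind sketched below. The normalization convention, and the averaging over the homogeneous spanwise direction and time that is omitted here, are assumptions rather than the exact definition used by the authors.

```python
import numpy as np

def premultiplied_spectrum(u, Lx):
    """Premultiplied spectrum k_x * E_uu from a 1-D periodic signal u(x) of length Lx.

    Returns the wavelengths lambda_x and k_x*E_uu for the resolved modes (mean removed).
    In practice this would be averaged over the homogeneous direction(s) and time.
    """
    n = u.size
    uhat = np.fft.rfft(u - u.mean()) / n
    E = 2.0 * np.abs(uhat[1:]) ** 2               # one-sided modal energy, k_x > 0
    kx = 2.0 * np.pi * np.arange(1, uhat.size) / Lx
    return 2.0 * np.pi / kx, kx * E

# Toy usage: a single sine wave puts all the energy at its own wavelength.
x = np.linspace(0.0, 8.0, 512, endpoint=False)
lam, kE = premultiplied_spectrum(np.sin(2.0 * np.pi * x / 2.0), 8.0)
print(lam[np.argmax(kE)])   # ~ 2.0
```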
Comparing TL and SLL for pipes and channels, there is no apparent correlation between the streamwise length of the u-structures near the plane of the crests and the drag reduction/increase behaviour of the roughness. Comparing the longitudinal bars with the staggered cubes (Figure 5c), the latter show short isotropic structures near the plane of the crest. The sharp vertical peaks in the spectrum correspond to the streamwise periodicity of the cubes and its harmonics. The peak about y/h ≈ 0.05 is damped in case SC compared to SM, consistent with the damping of the buffer region cycle by the roughness [25]. The information of the streamwise/wall-normal spectra is complemented with the spanwise/wall-normal spectra shown in Figure 6. Again, the one dimensional premultiplied spectrum for the streamwise velocity is shown, k z E uu (λ z /h, y/h). Note that for the pipes, the spanwise wavelength is defined in terms of the arc length in the circumferential direction. Hence, as y tends to h (i.e., as we move from the wall to the center of the pipe), the available spanwise wavelengths shrink. This is clearly observed in SM (Figure 6a), where the channel spectrum is roughly aligned with λ z ∝ y, while the pipe spectrum turns towards smaller wavelengths near the centre of the pipe. It is possible that this effect is responsible for the shorter streamwise wavelengths for y/h 0.7 discussed before (Figure 5a). Near the wall, both the pipe and channel over smooth walls show a peak at λ z /h ≈ 0.5, which roughly corresponds to the spanwise spacing of the near wall streaks, λ + z ≈ 100. The spectrum for TL (Figure 6b) shows the direct effect of the surface roughness in a sharp energy peak at λ z ≈ 0.2h, which corresponds to the spanwise spacing of the bars. This peak extends up to y/h ≈ 0.07 (y + ≈ 12). Note that the peak for the pipe appears at a slightly smaller wavelength, which is probably a consequence of the curvature of the wall. For the channel, we can observe several harmonics at smaller wavelengths. For the pipe, these harmonics are less intense, and do not show in the spectrum. Moreover, the spectrum for the pipe is wider near the plane of the crest than that of the channel. For the SLL case (Figure 6d), we can also observe a dominant peak at λ z ≈ h (close to the spacing of the bars), with stronger harmonics than in TL. However, in contrast to TL, for SLL the u-structures of the channel near the plane of the crest are wider in the channel than in the pipe. In general, the flow structures generated by SLL are stronger than those generated by TL, and have a stronger influence in the outer flow. For SC, (Figure 6c), it is possible to observe an intense peak near the plane of the crests at λ z = 0.4h (λ + z ≈ 140), a wavelength equal to that of the spanwise spacing of the staggered cubes. In this case, the peak is localized closer to the plane of the crests than in TL. The effect of SC on the near-wall peak is stronger than TL, consistent with the previous discussion of Figure 5. Although it is not visible in Figure 6c because of the levels chosen for the contour, the nearwall peak is still present around y + ≈ 15 and λ + z ≈ 100, although with a lower energy content than in SM and TL. Finally, it is interesting to note that the SC case shows a very similar behaviour for pipes and channels near the plane of the crests. 
However, the spectra of TL and SLL for pipes and channels show differences near the plane of the crests: TL in pipes develops wider u-structures at the plane of the crest than in channels, while the opposite is true for SLL (at all heights). It is interesting to try to link these differences to the drag increase/decrease behaviour of these cases. According to García-Mayoral and Jiménez [26], the break-up of the drag reducing regime of riblets can be tracked to the appearance of spanwise rollers on the crest of the riblets, due to an inflectional instability of the mean velocity profile. This instability is given by the permeability condition of the plane of the crests, which is why these rollers are not present over a smooth wall. The analysis of García-Mayoral and Jiménez shows that these rollers are more apparent in the two-dimensional premultiplied spectra of the wall normal velocity, which is plotted for cases TL and SLL in figure 7. The plot shows that for SLL (Figure 7b), the spectra is essentially limited by the spanwise wavelength of the bars (indicated by the horizontal dashed lines in the figure). The most apparent difference between the pipe and the channel is the overall intensity of the spectra, consistent with the differences in the RMS observed in Figure 4b just above the plane of the crests. For case TL, the spectrum is mostly on the sub-harmonics of the spanwise spacing of the bars, and it shows the presence of rollers: very wide structures, with streamwise wavelengths of the order of λ + x ≈ 150, or λ x ≈ 0.8h. The rollers are more intense in the pipes than in the channel, which might explain why the former are drag increasing while the latter are drag reducing. Stronger rollers might be also responsible for wider u-structures at the plane of the crest for TL in the pipe as compared to the channel. Conclusions Direct numerical simulations of the flow in turbulent channels and pipes with several surface roughnesses have been performed, allowing a one-to-one comparison between pipes and channels for different wall-roughness at Re τ = 180−360. The comparison has been established in terms of global indicators of the flow (Re τ , τ ), mean velocity and RMS profiles, and the spectral energy density of the velocity fluctuations. In terms of global indicators, our results show that the scaling of the roughness function with the vertical velocity fluctuations at the plane of the crest (proposed by Orlandi [14]) works also for pipes. Of the roughness geometries explored here, the largest roughness functions are obtained for staggered cubes and longitudinal square bars with large spacing (s ≈ 5k ≈ h). These roughness geometries produce the largest perturbations in the overlying flow. While for the cubes the perturbations seem not to be coherent at large distances from the plane of the crests, for the longitudinal bars with large spacing the footprint of the roughness is visible in the spectra up to y ≈ 0.7h, well into the core region of the flow. In general, the spectral energy density of the velocity fluctuations of pipes and channels are similar, at least for the wall roughness analysed here. The largest qualitative differences between channels and pipes are found for the longitudinal triangular bars (riblets), which are drag reducing for the channel and drag increasing for the pipe. The streamwise velocity spectrum shows that the u-structures at the plane of the crest are wider in the pipe than in the channel. 
The analysis of the two-dimensional spectrum for the wall-normal velocity shows the presence of spanwise rollers in both cases, but the rollers in the pipe (drag increasing) are stronger than in the channel (drag reducing), in agreement with García-Mayoral and Jiménez [26].
5,898.2
2016-04-29T00:00:00.000
[ "Physics", "Engineering" ]
The Lump Solutions of the (1 + 1)-Dimensional Ito-Equation In this paper, several kinds of lump solutions for the (1 + 1)-dimensional Ito-equation are introduced. The proposed method is based on a Hirota bilinear differential equation. The form of the solutions is constructed and the solutions are refined through analysis and symbolic computations with Maple. Finally, figures are given for specific examples of the lump solutions. Introduction In recent years, the study of exact solutions of nonlinear equations has become one of the most active topics in nonlinear science. A variety of complex nonlinear physical phenomena appear in many fields, such as chemistry, biology, engineering and the social sciences, so seeking exact solutions of nonlinear partial differential equations (NLPDEs) has become more and more attractive. To obtain exact solutions of NLPDEs, researchers have developed many methods, for example the Bäcklund transformation [1] [2], the Darboux transformation [3] and Hirota bilinear methods [4]. Among these, the Hirota bilinear method plays an important role in producing lump solutions owing to its simplicity and directness. Lump solutions are a class of regular, rational function solutions that are localized in all directions in space [5]. They are important in fluid dynamics, in the propagation of surface waves, and in many other fields of physics and engineering. Lump solutions have been found for many integrable equations, for example the Benjamin-Ono equation [6], the KP equation [7] and the (2 + 1)-dimensional Ito-equation [8]. The lump solution of the Ito-equation, also known as the vortex and anti-vortex solution, was first put forward by Zakharov [9] and later by Craik [10]. The aim of this study is to use the bilinear equations to search for solutions of the (1 + 1)-dimensional Ito-equation. The Ito-equation, written as Equation (1.1), is an extension of the KdV (mKdV) type to higher orders. The Ito-equation is usually used to predict the rolling behavior of ships in a regular sea. It can also be used to describe the interaction of two internal long waves, where u(x, t) is an analytic function. In this paper, we present the lump solutions of the (1 + 1)-dimensional Ito-equation. In Section 2, we introduce the bilinear form of the Ito-equation through a tedious transformation. In Section 3, based on the bilinear form, we obtain the lump solutions of Equation (1.1) via analysis and symbolic computations, and plots with different parameters are made to show how the solutions change. In Section 4, we give the conclusions. The Bilinear Equation for the Ito-Equation Hirota bilinear forms are one of the integrability characteristics of nonlinear partial differential equations, and the bilinear equation can be solved by the Wronskian technique [11]. The Hirota bilinear equation plays an important role in generating the lump solutions. By Painlevé analysis [12], we assume the lump solution of Equation (1.1) takes the form u = 2(ln f)_xx, where f(x, t) is an unknown real function. Through the transformation Equation (2.1), the bilinear form of Equation (1.1) can be written in terms of the bilinear derivative operators D_t and D_x, where the D-operator [13] is defined by D_x^m D_t^n a·b = (∂_x − ∂_x′)^m (∂_t − ∂_t′)^n a(x, t) b(x′, t′), evaluated at x′ = x and t′ = t, with m and n positive integers, a(x, t) a function of x and t, and b(x′, t′) a function of the formal variables x′ and t′.
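The bilinear derivative operator defined above can be implemented symbolically in a few lines. The sketch below uses Python's sympy rather than the Maple environment employed in the paper, and the printed example, D_x^2 f·f = 2(f f_xx − f_x^2), is a standard identity included only as a check.

```python
import sympy as sp

x, t = sp.symbols('x t')
xp, tp = sp.symbols('xp tp')   # the "formal" primed variables x', t'

def hirota_D(a, b, m=0, n=0):
    """Hirota bilinear derivative D_x^m D_t^n (a . b) for expressions a(x,t), b(x,t)."""
    expr = a * b.subs({x: xp, t: tp})
    for _ in range(m):                       # apply (d/dx - d/dx')
        expr = sp.diff(expr, x) - sp.diff(expr, xp)
    for _ in range(n):                       # apply (d/dt - d/dt')
        expr = sp.diff(expr, t) - sp.diff(expr, tp)
    return sp.expand(expr.subs({xp: x, tp: t}))

f = sp.Function('f')(x, t)
print(hirota_D(f, f, m=2))   # 2*f*f_xx - 2*f_x**2
```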
Lump Solutions for the Ito-Equation Based on analysis and symbolic computations with Maple, we can show that the (1 + 1)-dimensional bilinear Ito-equation has a class of solutions determined by f(x, t). In this section, we take f(x, t) to be a combination of positive quadratic functions. Equating the coefficients of the different powers of x and t in Equation (3.3) to zero, we obtain the following relations between the parameters, where a1, a2, a4, a5 and a7 are free real numbers. Substituting these relations back into the assumed form of f(x, t) yields a lump solution. Similarly, the solution in Equation (3.5) is also a lump solution, containing the five free parameters a1, a2, a4, a5 and a7. When the values of these parameters are changed, the structure of the lump solutions changes accordingly. Here, we give two plots for particular choices of the parameters (Figure 1). Conclusions Recently, much work has been done on the lump solutions of the Ito-equation and Ito-like equations, and it is natural and interesting to search for lump solutions of nonlinear partial differential equations. Exploiting the advantages of Hirota bilinear forms, lump solutions of the Ito-equation have been obtained with symbolic computations in this paper. At the beginning of the paper, we derived the Hirota bilinear form of the Ito-equation through somewhat tedious calculation and analysis. We then verified the two lump solutions of the equation. We also found that the lump solutions contain many parameters, and the relations between these parameters affect the structure of the solutions. Finally, plots for particular choices of the parameters were made to show the lump solutions and their energy distributions. A natural next question is whether the Ito-equation admits any other solutions; furthermore, this method can be extended to other equations to study their lump solutions. These problems may also be interesting and worthy of study. Figure 1. Lump solutions of Equation (3.5), shown in panels (a) and (b) for two particular choices of the parameters.
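To make the solution-construction workflow of this section concrete, the sketch below builds a generic sum-of-squares ansatz f and applies the transformation u = 2(ln f)_xx. The particular quadratic forms and parameter names here are illustrative placeholders, not the ansatz or the parameter relations obtained in the paper; those would follow from substituting f into the bilinear equation and equating coefficients.

```python
import sympy as sp

x, t = sp.symbols('x t')
a1, a2, a3, a5, a6, a7, a9 = sp.symbols('a1 a2 a3 a5 a6 a7 a9', real=True)

# Generic positive-quadratic ansatz (illustrative form only, not the paper's equation)
g = a1 * x + a2 * t + a3
h = a5 * x + a6 * t + a7
f = g**2 + h**2 + a9

# The transformation u = 2 (ln f)_xx gives a rational expression,
# localized in all directions provided a9 > 0.
u = sp.simplify(2 * sp.diff(sp.log(f), x, 2))
print(u)
```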
1,304.8
2019-03-29T00:00:00.000
[ "Mathematics" ]
Multiple X-line Reconnection Observed in Mercury’s Magnetotail Driven by an Interplanetary Coronal Mass Ejection How magnetic reconnection drives Mercury’s magnetospheric dynamics under extreme solar wind conditions is not well understood. Here we report MESSENGER observations of an active reconnection event in Mercury’s magnetotail driven by an interplanetary coronal mass ejection on 2011 November 23. The primary Hall magnetic field, sequential passage of X-lines with Hall field perturbations, and flux ropes (FRs) provide unambiguous evidence of multiple X-line reconnection in an unstable ion diffusion region. In addition, large FRs consisting of multiple successive small-scale FRs are ejected tailward at quasi-periodic intervals of ∼1 minute, which is comparable to the Dungey cycle time. We propose that these large FRs are generated by the interaction and coalescence of multiple ion-scale FRs. This is distinct from the commonly accepted Earth-like substorm process where plasmoids are created by widely separated X-lines in the magnetotail. These observations suggest that during extreme solar wind conditions multiple X-line reconnection may dominate the tail reconnection process and control the global dynamics of Mercury’s magnetosphere. Introduction Magnetic reconnection is a primary process that drives planetary magnetospheric dynamics. Mercury's magnetosphere is one of the most extreme planetary environments in our solar system due to the intense solar wind driving in the inner heliosphere. Like Earth, reconnection at Mercury occurs at the dayside magnetopause, where the solar wind energy is transferred into the magnetosphere, and in the magnetotail current sheet, where some energy is released back to the solar wind. This drives the circulation of magnetic flux and plasma that constitutes the Dungey cycle (Dungey 1961). However, the timescale of Dungey cycle at Mercury is estimated to be 2-3 minutes (Imber & Slavin 2017), which is much shorter than ∼1 hr at Earth (Baker et al. 1996). Magnetic reconnection is expected to play a much more important role in driving Mercury's magnetospheric activity than at Earth or other magnetized planets (e.g., Slavin et al. 2009). MESSENGER frequently observed lobe field loading and unloading in Mercury's magnetotail (Slavin et al. 2010;Imber & Slavin 2017). It is believed that magnetic energy is stored in Mercury's magnetotail and eventually released by tail reconnection; this drives global magnetospheric dynamics in a manner analogous to substorms at Earth (Baker et al. 1996;Hones 1977). One substorm-associated magnetotail phenomenon is the formation of a large-scale plasmoid between the near-Earth and the distant X-lines and the subsequent injection tailward into the solar wind (e.g., Slavin et al. 1989). As for Mercury, large plasmoids are occasionally observed in the near magnetotail (Zhong et al. 2019). Moreover, Zhong et al. (2018) reported the first observation of a rapidly evolving magnetic reconnection process in Mercury's magnetotail. They showed that the tail energy was released rapidly and impulsively as repeated tailward ejections of the reconnection fronts and instantaneously in response to the enhanced solar wind driving. However, direct observations of active magnetic reconnection sites are rare. The pattern of reconnection driving the global dynamics of Mercury's magnetosphere remains poorly understood. On 2011 November 23, an interplanetary coronal mass ejection (ICME) impacted Mercury's magnetosphere. 
As the ICME passed, the solar wind dynamic pressure was inferred to be ∼51 nPa, which is 4 to 9 times greater than normal (Slavin et al. 2014). The dayside magnetospheric dynamics were analyzed by Slavin et al. (2014). Here we report observations of active reconnection sites in Mercury's magnetotail under this extreme solar wind condition. We provide unambiguous evidence of multiple X-line reconnection in an unstable ion diffusion region. The observations suggest that the multiple X-line reconnection may dominate the reconnection process in Mercury's highly compressed magnetotail. Hence, it may be responsible for the global dynamics of the magnetosphere. The pattern of the magnetic energy released into the solar wind is distinct from Earth-like substorms or the convection of plasmoids created by widely separated X-lines. Observations An overview of the MESSENGER observations of Mercury's magnetotail crossing on 2011 November 23 is shown in Figure 1. The high-resolution (20 vectors s −1 ) magnetic field data from the Magnetometer (MAG; Anderson et al. 2007) and ion plasma data from the Fast Imaging Plasma Spectrometer (FIPS; one energy scan per 10 s; Andrews et al. 2007) were used. Between 08:17:00 UT and 08:31:37 UT the spacecraft traveled from the magnetosheath into the magnetotail near the noon-midnight meridian and crossed the far downstream southern magnetopause multiple times. The mean location of the nightside magnetopause was ∼0.4 R M inward from the normal magnetopause (Zhong et al. 2015). At ∼09:25:00 UT, the spacecraft traversed the current sheet from the southern to the northern lobe. The strength of the lobe field reached ∼100 nT, which is ∼100% stronger than it was for the orbits before and after the ICME. The substorm-related tail loading and unloading were not obvious. Instead, the lobe magnetic field was relatively constant throughout the tail crossing. These indicate that Mercury's magnetotail was highly compressed and under a driving force that remained approximately constant throughout the passage of the ICME. A remarkable signature is the occurrence of a large number of flux ropes (FRs) and their associated TCRs. The FRs and TCRs are characterized by north-to-south reversals or perturbations in the B Z component, coincident with enhancements of B Y and B | | . The positive-to-negative polarities in B Z are indicative of tailward-moving FR structures. Thirty-three FRs or TCRs with durations longer than 10 s and B Z peak-to-peak amplitudes greater than 10 nT were identified between 08:58:00 UT and 09:34:00 UT as the spacecraft moved from X MSM =−3.33 to −2.24 R M and Z MSM =−1.11 to 0.22 R M (red arrows in Figure 1(d)). The mean duration of these FRs is 16.8 s and they occurred at ∼1 minute intervals. These long-duration FRs have rarely been seen in previous observations (e.g., DiBraccio et al. 2015;Smith et al. 2017). A close-up view of the magnetic field data across the current sheet is shown in Figure 2. The magnetic field data were rotated to a local current sheet coordinate system, LMN, determined by the minimum variance analysis of the magnetic field. Here, N is the current sheet normal, L is directed along the reconnecting component of the magnetic field, and M = N×L points in the out-of-plane direction. This coordinate system is only slightly different from the crossing (09:22:40-09:26:40 UT), the negative-to-positive reversal in B L indicates a south-north crossing of the current sheet. 
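The LMN frame used here comes from a standard minimum variance analysis of the magnetic field time series; a compact version of that construction is sketched below. The eigenvector signs (and hence whether M must be flipped to satisfy M = N × L) still have to be fixed by hand, and this is a generic illustration rather than the MESSENGER processing chain.

```python
import numpy as np

def minimum_variance_frame(B):
    """LMN unit vectors from minimum variance analysis of an (N, 3) field series.

    L = maximum-variance direction, N = minimum-variance direction (current sheet
    normal), M = intermediate direction; signs and handedness must still be fixed
    afterwards so that M = N x L.
    """
    cov = np.cov(np.asarray(B).T)            # 3x3 magnetic variance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    N_hat = eigvecs[:, 0]
    M_hat = eigvecs[:, 1]
    L_hat = eigvecs[:, 2]
    return L_hat, M_hat, N_hat
```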
The magnetic shear across the local current sheet is found to be ∼145°, the guide field B g ∼30 nT, and the reconnecting field B L0 ∼100 nT, consistent with a normalized guide field B g /B L0 of ∼0.3. The magnetic Hall field, B H =(B M − B g ), shows an overall positive-to-negative bipolar signature, with peak amplitudes reached ∼50% and ∼−30% of B L0 . In addition, the reversal of B H (∼09:26:06 UT) is located in the positive B L region or above the current sheet center. The average B N is negative throughout the crossing. These magnetic field signatures suggest that the spacecraft crossed a distorted quadrupole Hall magnetic field structure tailward of the primary guide field reconnection site (Eastwood et al. 2010;Wang et al. 2012), as illustrated in Figure 2(e). The quadrupole Hall magnetic field, produced by the ion-electron decoupling (Sonnerup 1979), is a key observational signature of the ion diffusion region (e.g., Nagai et al. 2001;Øieroset et al. 2001). The large perturbations of the Hall magnetic field and the formation of FRs suggest that the evolution of the reconnection was highly unstable. In the diffusion region there are clear negative-to-positive reversals of B N just before FRs FR4-8. This indicates that the spacecraft passed multiple tailwardmoving X-lines, as denoted by the purple arrows in Figure 2(d). Each X-line can produce a quadrupole structure of the Hall magnetic field (e.g., Deng et al. 2004), as illustrated in Figure 2(e). Before it crossed FR4-6, the spacecraft was immersed in the Hall region with positive (B M − B g ). The passage of the X-lines results in negative perturbations of the Hall magnetic field (blue arrows in Figure 2(c)). This can be interpreted as the passage of the southward and planetward quadrupole Hall magnetic field of the X-line, as shown by trajectory T1 in Figure 2(e). In contrast, before crossing FR 7-8, the spacecraft was immersed in the negative (B M − B g ) region tailward of the X-line. The observed positive perturbation of the Hall magnetic field just after the X-line passed indicates that the spacecraft encountered the northward and The magnetic field magnitude, and its three components in local current sheet coordinates (LMN). Eight long-duration FRs, labeled FR1-8, were identified from the 5 s smoothed magnetic field data (gray traces). In the diffusion region, the sequential crossing of X-lines, Hall magnetic field perturbations, and FR core fields are marked by the purple, blue, and black arrows, respectively. (e) Schematic diagram of tailward-moving X-line and FR structures. The black arrow denotes the overall trajectory of MESSENGER relative to the current sheet. The dashed arrows indicate the trajectories of the tailward-moving magnetic structure relative to the spacecraft at two locations. planetward quadrupole Hall magnetic field of the X-line, as shown by trajectory T2 in Figure 2(e). Following the X-lines are FRs. The presence of a strong core field suggests that they have a helical magnetic field topology. For the large FRs (FR4-8) observed near the current center, the maximum core field (black arrows in Figure 2(c)) exceeds 40% of the lobe magnetic field. This strong core field resulted from the compression of the guide field and Hall magnetic field, as well as the radial inward pinch of the FR due to the presence of the helical fields (Ma et al. 1994). Enhancements of the 1-10 keV thermal ion flux were observed within the FRs (green arrows above Figure 1(e)). 
This is consistent with the plasma density compression and pile-up between two X-lines (e.g., Chen et al. 2008;Liu et al. 2009). The ions would be accelerated along the core field due to force imbalance in the M direction, which yields further increases in the core field (Ma et al. 1994). Moreover, the magnitude of B M in the center of the FR reached approximately 100% of B | |, suggesting that the FRs are approximately orientated in the M direction, or parallel to the X-line. The FRs being parallel to the X-line, as well as the strong core field, further support the theory of multiple X-lines reconnection process (Lee & Fu 1985). From high-resolution magnetic field data sampled at 20 s −1 , a large number of small-scale FRs with timescales of ∼1 s can be identified. Close-up view of 1 minute interval data between FR3 and FR4 are shown in Figure 3(a). Eight short-duration FRs were identified by the positive-to-negative polarity in B N and coincident with the increase in B M and the peak in B | |. Negative perturbations of (B M − B g ) were also observed just before these short-duration FRs appeared, indicating the passage of the X-lines (T1 in Figure 2(e)). Based on assumed tailward propagation speed comparable to the local Alfvén speed, ∼1600 km s −1 from derived ion density (Figure 1(f)), the mean diameter of the short-duration FRs in the L direction can be estimated to be ∼10 d i , where d i =c/ω pi ≈160 km is the ion inertial length. This indicates that these are ion-scale structures. The ion-scale structures were also observed in the large FRs where they were identified from smoothed data. The close-up views of FR1 and FR5 are shown in Figures 3(b) and (c), respectively. Their whole bipolar B N (smoothed B N ) actually consisted of multiple successive short-duration bipolar variations. Each short-duration positive-to-negative reversal in B N is coincident with the sub-peak in B M and B | |. These are key observational features for the interaction and coalescence of FRs (e.g., Wang et al. 2015;Zhou et al. 2017) and confirmed by the simulations (e.g., Markidis et al. 2012), as illustrated in Figure 3(d). The spacecraft crossed FR1 away from the current sheet, and the observed short-duration B N bipolar are superimposed in the whole B N bipolar. The spacecraft crossed FR5 in the vicinity of the current sheet center, whereas more sequential reversals in B N were observed in the middle part of the large FR. These observations are consistent for the trajectories T1 and T2 of the tailward-moving merged large FRs relative to the spacecraft (Figure 3(d)). These suggest that the large FRs were likely formed repeatedly through the interaction and coalescence of many ion-scale FRs. Summary and Discussion The observations of the primary Hall magnetic field, sequential passage of X-lines with Hall field perturbations, and FRs provide unambiguous evidence of multiple X-line reconnection process in an unstable ion diffusion region. During extreme solar wind conditions, Mercury's magnetotail forms a long, thin, and compressed current sheet. Assuming the current sheet is stable in the N direction, the width of the diffusion region (2δ) is estimated to be 2d i . When the halflength of the current sheet L10δ∼1600 km or 0.65 R M , the diffusion region becomes unstable due to tearing instability, and hence multiple X-lines appear in the diffusion region ). Consequently, a chain of FRs would form between neighboring X-lines, as shown in Figure 4(a). 
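The ion inertial length and Alfvén speed quoted above follow from the usual plasma definitions, d_i = c/ω_pi and v_A = B/√(μ0 n m_i). The small helper below reproduces values of the right magnitude; the density and field strength in the example are illustrative inputs of roughly the order inferred from FIPS and MAG, not the exact measured values.

```python
import numpy as np

MU0 = 4.0e-7 * np.pi     # vacuum permeability [H/m]
EPS0 = 8.854e-12         # vacuum permittivity [F/m]
QE = 1.602e-19           # elementary charge [C]
MP = 1.673e-27           # proton mass [kg]
C_LIGHT = 2.998e8        # speed of light [m/s]

def ion_inertial_length_km(n_cm3):
    """d_i = c / omega_pi for a proton plasma, density in cm^-3, result in km."""
    omega_pi = np.sqrt(n_cm3 * 1e6 * QE**2 / (EPS0 * MP))
    return C_LIGHT / omega_pi / 1e3

def alfven_speed_kms(b_nT, n_cm3):
    """v_A = B / sqrt(mu0 * n * m_p), with B in nT and n in cm^-3, result in km/s."""
    return b_nT * 1e-9 / np.sqrt(MU0 * n_cm3 * 1e6 * MP) / 1e3

# Illustrative inputs of roughly the observed magnitude:
print(ion_inertial_length_km(2.0))      # ~160 km
print(alfven_speed_kms(100.0, 2.0))     # ~1500 km/s
```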
From theory , the formation and convection of magnetic FRs should have a recurrence time τ ∼10t A /R, where t A is the Alfvén transit time and R is the normalized reconnection rate. We estimated that t A =δ/v A ∼0.1 s and the average =R Thus, the theoretical timescale τ=10 s is consistent with the average interval observed between repeated ion-scale FRs. These ion-scale FRs are expected to interact, coalesce, and grow into larger FRs until they are ejected tailward, as shown in Figure 4(b). These large FRs are estimated to be ∼2 R M in the north-south direction and a few R M in the L direction. Convection of such large FRs is expected to release a large amount of magnetic flux into the solar wind, which is supplemented by the dayside reconnection. The magnetic flux transport at the nightside and dayside is balanced in a quasisteady state, as indicated by the observed quasi-steady lobe field. The time needed to cycle the magnetic flux in the tail from the dayside, or the Dungey cycle time, can be estimated using T C ≈2π·B tail ·R tail /(B SW · v SW ) (Siscoe et al. 1975). Here, B tail is the field strength in the lobe region, R tail is the cross-sectional radius of the tail, and B SW and v SW are the solar wind magnetic field and speed, respectively. The multiple magnetopause crossings indicate that R tail is ∼2.12-2.60 R M at X MSM =−3.77 to −3.91 R M , and that B tail approximately equal to B SW . Given an inferred solar wind speed of ∼450 km s −1 (Slavin et al. 2014), we computed T C is ∼72-88 s. The observed interval between large FRs is comparable to the estimated Dungey cycle time. From the magnetotail observations, the repeated ejection of large FRs lasts at least 35 minutes. Thus, multiple X-line reconnection in the tail current sheet drives Mercury's magnetospheric dynamics globally and continuously. The response mode of Mercury's magnetosphere to the solar wind driving under extreme conditions is distinct from the previously accepted Earth-like substorm process or the formation of large-scale plasmoids created by widely separated X-lines in the magnetotail.
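As a quick arithmetic check of the Dungey-cycle estimate above, T_C ≈ 2π·B_tail·R_tail/(B_SW·v_SW) with B_tail ≈ B_SW, R_tail = 2.12-2.60 R_M, and v_SW ≈ 450 km s⁻¹ indeed gives values in the quoted 72-88 s range:

```python
import math

R_M_KM = 2440.0   # Mercury radius in km

def dungey_cycle_time_s(b_tail_nT, r_tail_RM, b_sw_nT, v_sw_kms):
    """T_C ~ 2*pi*B_tail*R_tail / (B_sw*v_sw), after Siscoe et al. (1975)."""
    return 2.0 * math.pi * b_tail_nT * (r_tail_RM * R_M_KM) / (b_sw_nT * v_sw_kms)

# With B_tail ~ B_SW the field strengths cancel:
for r_tail in (2.12, 2.60):
    print(dungey_cycle_time_s(1.0, r_tail, 1.0, 450.0))   # ~72 s and ~89 s
```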
3,635.8
2020-04-09T00:00:00.000
[ "Physics" ]
On the sparsity of linear systems of equations for a new stress basis applied to three-dimensional Hybrid-Trefftz stress finite elements Hybrid-Trefftz finite elements have been applied to the analysis of several types of structures successfully. It is based on two different sets of approximations applied simultaneously: stresses in the domain and displacements on its boundary. This method presents very large linear systems of equations to be solved. To overcome this issue, most authors have been careful in the choice of the approximation fields in order to have highly sparse linear systems. The natural choice for the stress basis has been linearly independent, hierarchical and orthogonal polynomials which typically result in more than 90% of sparsity in 3-D finite elements. Functions derived from associated Legendre and Chebyshev orthogonal polynomials have been used with success for this purpose. In this work the non-orthogonal polynomials available in the Pascal pyramid are proposed to derive a harmonic and complete set of polynomial basis as an alternative to the above-cited functions. Numerical tests show this basis produces accurate results. No significant differences were found when comparing the sparsity of the linear system of equations for both functions. INTRODUCTION The hybrid-Trefftz stress element formulation presents itself as an alternative for the dominant conforming singlefield based displacement element in computational analysis after the pioneering work of Pian (1964), Pian and Tong (1969) and de Veubeke (1980), which has been thoroughly compiled along with other non-conventional methods by Freitas et al. (1999).This formulation, considering the linear isotropic case for simplicity, consists on the independent approximation of the stress field in the domain of the element and the displacement field on its boundary.The Papkovitch-Neuber solution of Navier equation is used to derive the stress approximation fields to satisfy the Trefftz constraint, i.e., the displacement in the domain must satisfy locally all field equations.This technique imposes the use of harmonic potential functions for generating stress solutions. The hybrid-Trefftz method, in particular when applied to 3-D problems, generates very large linear systems when high order polynomial approximations are used.For instance, a 50-element mesh with a hierarchical polynomial stress approximation of order 10 can create a matrix size larger than 20,000 × 20,000, making the sparsity feature paramount in the computation of the solution of the resulting linear system.The natural choice for the stress basis has been orthogonal polynomials to display high sparsity indices (Freitas, 1998), which typically results in high level of sparsity, say, more than 90% in three-dimensional finite elements. In this work the non-orthogonal polynomials available in the Pascal pyramid are proposed as an alternative to orthogonal polynomial bases to be used as stress approximation functions in the context of 3-D hybrid-Trefftz elements.Homogeneous Harmonic Polynomial (HHP) functions derived from the Pascal's pyramid of polynomials proposed by Wang (2002) are applied to the Papkovitch-Neuber solution of the Navier equation to derive a complete set of 3-D stress and displacement bases.This procedure was applied with success to the analysis of plates and shells with hybrid-Trefftz elements by Martins et al. (2018). 
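Since sparsity is the quantity being compared throughout this work, it is worth being explicit about how it can be measured: a simple sparsity index is the fraction of (numerically) zero entries in the assembled matrix. The helper below is a generic illustration with an arbitrary tolerance, not the specific post-processing used by the authors.

```python
import numpy as np

def sparsity_index(matrix, tol=1e-12):
    """Fraction of entries whose magnitude is below `tol` (1.0 = fully sparse)."""
    m = np.asarray(matrix)
    return float(np.count_nonzero(np.abs(m) < tol)) / m.size

# Example: a tridiagonal 100x100 matrix is ~97% sparse.
A = np.eye(100) + np.eye(100, k=1) + np.eye(100, k=-1)
print(sparsity_index(A))
```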
The sparsity levels of the finite element matrices produced by this approximation functions are compared to those produced by one set of orthogonal and harmonic functions.Without loss of generality, in this work the basis derived from associated Legendre and Chebyshev (LC) polynomials is the chosen one.Both Legendre and Chebyshev stress approximation bases are orthogonal in [-1, 1] domain.It has been used with success by Freitas and Bussamra (2000), whereas the drawback of this choice is that in 3-D formulation the completeness of this stress basis is limited to the sixth degree.In addition, the accuracy of the stress predictions of the proposed element is analyzed through numerical tests. Hybrid-Trefftz elements with both nodal and generalized variables framework have shown good performance in linear elastic (Freitas and Bussamra, 2000) and elastoplastic (Bussamra et al., 2001) analysis of solids with LC functions.In crack analysis, singular stress fields and stress concentration problems were analyzed using LC and Airy functions (Bussamra et al., 2014) and Kaczmarczyk and Pearce (2009), respectively.In multisite cracked solids, Chebyshev functions were applied in a nodal framework (Argôlo and Proença, 2016).Some authors applied the hybrid-Trefftz formulation to problems other than linear elastic mechanics.Fu et al. (2011) analyzed heat conduction in functionally graded nonlinear anisotropic materials using a nodal hybrid-Trefftz element.Cao et al. (2013), Lee et al. (2010) and Souza and Proença (2009) used complex variables derived from works from Muskhelishvili (1953) and Qin and Wang (2008) to approximate the domain fields when analyzing micromechanics of heterogeneous composites, crack singularities and the effect of selective enriching approximation functions, respectively.Wang et al. (2014) applied the dual reciprocity method to orthotropic potentials modeled with hybrid-Trefftz finite elements, dividing the solution into homogeneous and particular parts.Petrolito (2004) used bi-harmonic polynomials as approximation functions implemented in triangular elements with hybrid-Trefftz formulation to analyze stability and buckling of thick and thin plates in a 2-D approach and later studied vibration and stability on thick orthotropic plates with complex conjugate harmonic polynomials (Petrolito, 2014).Karkon and Rezaiee-Pajand (2016) studied thick orthotropic plates, with the difference of using orthotropic Timoshenko beam interpolation functions for approximation at the boundary fields and using both triangular and rectangular hybrid-Trefftz elements in various benchmark tests from the literature.Karkon (2015) also proposed triangular and rectangular hybrid-Trefftz elements to analyze anisotropic laminated plates. No study was found, to the best of the authors knowledge, that compares not only numerical results but also the proposed function's numerical applicability measured in terms of the sparsity of their linear system of equations.Therefore, this work proposes to compare the aforementioned harmonic function sets in terms of sparsity and completeness.and Structures, 2020, 17(7), e307 3/17 FORMULATION The hybrid-Trefftz stress element formulation derived in this work is based on the linear elastic fundamental governing equations, applied to a system with domain V and enclosed by a boundary Ć, referred to a Cartesian coordinate system: where vector ó and å gather the independent components of the stress and strain tensors in the equilibrium and compatibility equations, Eqs. 
( 1) and ( 2), respectively; b̄ represents the prescribed body forces vector and u the displacements vector. The constitutive equation, Eq. (3), is represented in the flexibility format, with f being the flexibility matrix, symmetric and with constant entries when a linear, reciprocal elastic law is assumed. The residual stress and strain vectors also appear in Eq. (3); for simplicity, they and the body forces b̄ are set to zero. Equation (4) stands for the Neumann boundary condition, applied on the static portion of the boundary (Γσ), where t̄Γ are the prescribed tractions. Equation (5) stands for the Dirichlet boundary condition, applied on the kinematic portion of the boundary (Γu), where the displacements ūΓ are prescribed. D is the differential equilibrium operator and D* is its Hermitian transpose. Both are linear and adjoint in the context of geometrically linear models. Matrix N contains the unit outward normal vector components associated with the operator D. Approximation fields The element formulation used in this work is the hybrid-Trefftz. The term hybrid means that two independent field approximations are made: one field is approximated in the domain, and the other on its boundary. Since the element formulated is of the stress model, the generalized stresses are directly approximated in the domain, σ = S X (6), while the boundary displacements are approximated as uΓ = Z q (7), where S and Z contain the stress and displacement approximation functions and X and q their unknown weights, respectively. Trefftz constraint The Trefftz constraint is enforced on the domain stress approximation, Eq. (6), by requiring it to satisfy locally the system of differential equations. This requirement renders the condition D S = 0, which means that S must represent a self-equilibrated stress field. Equation (1) can also be written in terms of the domain displacement, generating the well-known Navier equation. It is obtained by substituting the compatibility equation, Eq. (2), and the constitutive relation, Eq. (3), written in terms of rigidity, into the equilibrium equation, Eq. (1); in this process the stress is expressed in terms of the domain displacement as σ = k D* u in V (9), where k is the rigidity matrix. The Trefftz constraint is based on a self-equilibrated approximation field S directly associated with the domain displacements u. The displacements u in the domain are approximated by Eq. (11), u = U X plus a term collecting the rigid-body motion, where U collects the functions associated with the displacement basis and X is the vector of generalized weights. Substituting Eq. (11) into Eq. (9) results in the stress basis S, based on the domain displacement field. ON THE CHOICE OF U According to Fu et al. (2012), some 3-D isotropic elasticity fundamental analytical solutions are available in the literature, for instance the Boussinesq-Galerkin (Wang, 2002), Papkovitch-Neuber and quasi Hu Hai-Chang (Hu, 2008) solutions. However, Fu et al. (2012) reached the conclusion that, of these solutions, only a modified version of Papkovitch-Neuber's proposed by Wang et al. (2012) is able to directly formulate a linearly independent and complete set of displacement approximation functions. Papkovitch (1932) and Neuber (1934) independently proposed a three-dimensional solution to the Navier equation Eq.
( 10) for isotropic materials, which has the form given in Eq. (13), where ϕ and Ψ are a scalar and a vector harmonic displacement potential, respectively, r is the position vector, ∇ is the gradient operator, ν is the Poisson ratio and G is the shear modulus. Naghdi and Hsu (1961) and Mindlin (1936) showed that the Papkovitch-Neuber solution provides a complete solution of the Navier equation. This means the solutions are capable of representing every elastic displacement field possible in a three-dimensional problem. However, Eq. (13) can provide redundant, and therefore non-unique, solutions, so it is necessary to check for linear dependencies. The choice resides in which harmonic potential is used to generate the displacement field to be substituted into Eq. (12). Four independent sets of functions exist in the proposed solution (Ψ1, Ψ2, Ψ3 and ϕ), each of them to be substituted by the desired harmonic function set. Moreover, Papkovitch (1932) claimed the displacement potential ϕ could be set to zero without compromising the generality of the solution. Neuber (1934) claimed that any of the four harmonics could be set to zero with the same effect described by Papkovitch, but both statements were shown to be unsupported and inconclusive (Eubanks and Sternberg, 1956; Sokolnikoff, 1956), generating great discussion over the exact conditions under which the four harmonic potentials could be reduced to three. According to the investigation performed by Eubanks and Sternberg (1956), and followed by Naghdi and Hsu (1961) and Cong and Steven (1979), on the generality of the Papkovitch-Neuber potential, some of the conclusions found were:
• if the domain is convex, any of the harmonic functions in Ψ can be set equal to zero (regardless of the value of ν) without loss of completeness;
• the scalar function ϕ can be dropped if the domain is finite and star-shaped with respect to the origin; Eubanks and Sternberg (1956) demonstrated through a counterexample that if 4ν is an integer, ϕ cannot be dropped;
• if 4ν is non-integer and ϕ is a harmonic polynomial in x, y and z, then ϕ can be dropped regardless of the form of the domain.
The finite element geometry used in this work is a hexahedron, a convex three-dimensional domain. According to the conclusions presented above, the potential ϕ can be dropped without any loss. Therefore, if potential ϕ is dropped, the Papkovitch-Neuber solution Eq. (13) applied to Eq. (9) can be expressed in terms of the vector potential Ψ alone, with (x1, x2, x3) the point coordinates and ∂1, ∂2, ∂3 the partial differential operators in the three Cartesian directions.
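For reference, the displacement representation referred to above as Eq. (13) is commonly written in the following form; the 2G normalization shown here is an assumption, since the exact scaling used by the authors is not reproduced in the text, but the structure (harmonic potentials ϕ and Ψ, position vector r, Poisson ratio ν) matches the definitions given above.

```latex
% Papkovitch-Neuber representation of the displacement field (cf. Eq. (13)).
% The 2G normalization is an assumption; \phi and \boldsymbol{\Psi} are harmonic potentials.
\[
  2G\,\mathbf{u} \;=\; 4(1-\nu)\,\boldsymbol{\Psi}
  \;-\; \nabla\!\left(\phi + \mathbf{r}\cdot\boldsymbol{\Psi}\right),
  \qquad
  \nabla^{2}\phi = 0, \qquad \nabla^{2}\boldsymbol{\Psi} = \mathbf{0}.
\]
```

Dropping ϕ, as justified above for a convex hexahedral domain, leaves the three components of Ψ as the only potentials to be populated with the chosen harmonic function sets.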
Legendre and Chebyshev harmonic potentials Freitas and Bussamra (2000) assigned a polynomial stress basis derived from Legendre and Chebyshev hierarchical and orthogonal polynomials. The sparsity levels they are able to generate are paramount to the hybrid-Trefftz element, due to the high number of degrees of freedom each element has, which leads to large linear systems of equations. Their affinity with p-refinement is also an important and desired feature in hybrid formulations, as it can exploit hierarchical function sets. Legendre and Chebyshev potentials generate complete sets of approximation functions, but only up to the sixth degree, as shown by Freitas and Bussamra (2000). The potentials are generated from Legendre polynomials Pn, and it can be proven that each such potential is harmonic when Pn is a Legendre polynomial of degree n. The Chebyshev harmonic potentials were proposed by Freitas and Bussamra (2000), and can be separated into two subdivisions, Chebyshev φ and Chebyshev ϕ potentials, with members such as φ_x = r^n sin(nθ), φ_y = r^n sin(nθ), ϕ_y = y r^n sin(nθ) and ϕ_z = z r^n sin(nθ). Legendre polynomials generate 3 different harmonic sets of functions, producing a maximum of 9 independent fields. Chebyshev's generate 12 harmonic sets, which produce a maximum of 36 independent fields. Together, they produce a maximum of 45 possible fields for each degree of approximation, but linearly dependent modes may exist and these must be eliminated. This information is shown later in Tables 1 and 2. Homogeneous Harmonic Polynomials Aiming to build a harmonic polynomial set of independent functions derived from Pascal's pyramid trinomial distribution, Wang et al. (2012) applied the Laplace operator (the condition for a function to be harmonic) to the complete homogeneous polynomial of a given degree of approximation n. As an example, consider n = 2, where ai, i = 1, …, 6, are constant coefficients; then f = a1 x^2 + a2 xy + a3 xz + a4 y^2 + a5 yz + a6 z^2 (23). Applying the 3-D Laplace operator gives 2a1 + 2a4 + 2a6 = 0 (24). The result in Eq. (24) implies there is only one restriction, or dependency, among the coefficients ai. Substituting Eq. (24) into Eq. (23), five independent terms arise; factoring them according to five new coefficients results in the five independent terms of this polynomial set, namely x^2 - z^2, xy, xz, y^2 - z^2 and yz. The main advantage of using this harmonic set of polynomials as approximation functions lies in its full completeness for every desired degree of approximation n, as shown by Wang et al. (2012) and Martins et al. (2018). FINITE ELEMENT MATRICES As stated by Freitas (1998) and Bussamra et al. (2001), there are different approaches to establish the finite element equations from the fundamental relations Eqs. (1-5) and the basic field approximations Eqs. (6-7), namely duality, the principle of virtual work and well-established variational statements. Here the virtual work approach is followed. The element is based on the virtual work equation, which requires the stress field approximation σ to be in point-wise equilibrium within the element and the boundary displacement field approximation uΓ to be the same along adjacent elements' common boundaries. As mentioned before, the generalized body forces are considered absent in this work. Substituting the Neumann Eq. (4) and Dirichlet Eq. (5) conditions into the virtual work equation yields Eq. (26). Taking the first variation in terms of the generalized stresses of Eq.
( 26) leads to Substituting approximations Eqs. ( 6) and ( 7) into Eq.( 27) follows As the solution is not trivial, δX ≠ 0. Substituting the constitutive relation Eq. (3) into Eq.( 28) the following system of equations rises: Alternatively, the element equation Eq. ( 29) can be represented in the following form: where, Analyzing the equilibrium in the static boundary, given by the Neumann condition Eq. ( 4), and considering that the virtual work of the internal forces must be equal to the virtual work of the external forces = the following equation is obtained: Applying the domain stress approximation Eq. ( 6) and the boundary displacement approximation Eq. ( 7) in the following form Latin American Journal of Solids and Structures, 2020, 17(7), e307 8/17 into Eq.( 34) results in with Equations ( 30) and (35) render the system of equations in matrix form shown below: Matrix N is the normal operator related to the differential operator D. Vectors v and Q depend on the geometry of the element due to the prescribed displacements and prescribed tractions on its surface, respectively.Matrix A, vectors v and Q must be calculated for each unconstrained face.Matrix F of a given element is a square matrix of size equal to the number of accumulated stress fields generated by Papkovitch-Neuber solution.For each degree n Eq. 14 generates an approximation field that composes the stress approximation matrix S as shown in Eq. 38.The only exception is S 0 , which is a 6x6 identity matrix. S = [S 𝟎𝟎 , S 1 , S 2 , …, S n ] (38) Matrix Z contains the boundary displacement approximation basis defined in Eq. ( 7).Its functions are built from simple binomial distribution, defined in each face's local coordinates (ī 1 , ī 2 ).Considering n as the boundary displacement approximation degree, the number of fields obtained is given by Eq. ( 39), where each row represents the displacement approximation in one specific direction. For n = 2: Equation ( 40) represents an unconstrained face.As an example, to simulate a simply supported case in a given face one of the rows representing the desired constraint direction should be removed, maintaining the remaining fields.In case of a clamped face, where movement in the three directions are restricted, the whole face is left out of the calculations and it is not accounted for in Eq. ( 31) nor in Eq. ( 36).This approximation does not satisfy face-to-face and edge continuity between elements.This effect is lessened as the exact solution is approached.An advantage of having this non-conformity is the higher continuity obtained in the stresses, which is desired in a stress element Freitas (1998). In general, matrices F, A and vectors v, Q are defined for each element, and all of them depend either on the stress approximation S or displacement approximation Z. COMPLETENESS OF THE STRESS BASIS Observing Eq. (37), F is the only part of the linear system strictly dependent of the stress approximation S. Since the face integral A has influence from the boundary displacement Z, which is commonly constructed with simple independent polynomials, the sparsity analysis of integral A can be inconclusive in terms of the effects each proposed stress approximation functions have. As h-and p-refinements are applied to increasingly complex problems, the linear system given by Eq. 
( 37) can become very large and cumbersome to solve.Therefore, a common base for comparison must be established in order to Solids and Structures, 2020, 17(7), e307 9/17 obtain meaningful results regarding the sparsity levels of the LC and HHP functions.In order to do that, each of this function sets are evaluated in its completeness and in the independency of the generated approximation fields. Completeness and independency of the stress fields Considering that a three-dimensional complete field approximation of degree n, which has been defined by trinomial distribution in Eq. ( 22), has its size given by equation it is necessary to subtract from this group of possible solutions the ones that do not fulfill the imposed restrictions in Navier's equation The resulting number of accumulated independent fields after eliminating six rigid-body terms is given by Since a differentiation is applied at the Papkovitch-Neuber solution, the degree of S relates to the degree of n in the following manner Table 1 shows, for each degree n, the number of independent approximation fields that are obtainable from Eq. 14 and the accumulated amount of approximation fields carried over from the previous degrees, which is given by Eq. 41.The Papkovitch-Neuber solution provides 45 approximation fields for each degree of S. Out of the 45 obtained fields there exists linear dependencies among them which once eliminated will form the subset of linear independent fields that are used in Eq. ( 38). From Tables 2 and 3, it is possible to note that the approximation basis formed by: • Legendre + Chebyshev ϕ is complete up to 1 st degree; • Legendre + Chebyshev φ is complete up to 3 rd degree; • Chebyshev φ + Chebyshev ϕ is complete up to 4 th degree; • Legendre + Chebyshev φ and ϕ is complete up to 6 th degree. Table 2 shows how each orthogonal harmonic function set behaves when observed separately, in terms of independent approximation fields.In Table 3 these function sets are combined and have its completeness analyzed.It is possible to notice that the LC associated functions is able to generate complete stress approximation fields, according to Eq. ( 41), only up to the sixth degree.In the other hand, the HHP functions are able to generate complete sets of independent fields for any desired degree (Wang et al., 2012). NUMERICAL IMPLEMENTATIONS Each element's local coordinates are mapped on a master element, an 8-node hexahedron element, through a set of trilinear isoparametric functions.Its natural coordinates axes defined as [q, r, t], ranging from -1 to 1 from face to face and with its origin situated in the center of the element.The transformation functions are defined as where , and are the i th node coordinates of the master element.These functions are applied as shown below, where x local are the element's coordinates in each element's local system, and x master are the master's hexahedron element coordinates All surface and volume integrals are exactly calculated in the domain [1, -1] by using the Gauss-Legendre quadrature, as suggested by Zienkiewicz et al (2013). 
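Before moving to the numerical results, the completeness counting discussed above can be checked directly. The sketch below reproduces the Homogeneous Harmonic Polynomial construction in Python with SymPy (it is not the authors' MATLAB implementation, and the function name is illustrative): it builds the complete Pascal-pyramid polynomial of degree n, imposes the Laplacian constraint, and returns the independent harmonic terms. For n = 2 it recovers the five terms worked out in the text, and in general it returns 2n + 1 independent harmonic polynomials per degree, in line with the claim that the HHP basis is complete for every degree.

```python
# Sketch: generate the independent Homogeneous Harmonic Polynomial (HHP) terms of
# degree n by imposing the Laplacian constraint on the full Pascal-pyramid polynomial.
# Illustrative code (SymPy), not the MATLAB implementation used in the paper.
import sympy as sp

def hhp_basis(n):
    x, y, z = sp.symbols('x y z')
    # All monomials of total degree n: (n+1)(n+2)/2 of them (trinomial distribution).
    monomials = [x**i * y**j * z**(n - i - j)
                 for i in range(n + 1) for j in range(n + 1 - i)]
    coeffs = sp.symbols(f'a0:{len(monomials)}')
    f = sum(c * m for c, m in zip(coeffs, monomials))

    # Harmonicity: the 3-D Laplacian must vanish identically in x, y, z.
    lap = sp.expand(sp.diff(f, x, 2) + sp.diff(f, y, 2) + sp.diff(f, z, 2))
    constraints = sp.Poly(lap, x, y, z).coeffs() if lap != 0 else []
    sol = sp.solve(constraints, coeffs, dict=True)
    f_harm = sp.expand(f.subs(sol[0])) if sol else f

    # One independent harmonic polynomial per coefficient that remains free.
    free = [c for c in coeffs if c in f_harm.free_symbols]
    return [sp.expand(sp.diff(f_harm, c)) for c in free]

if __name__ == '__main__':
    for n in (2, 3, 4, 5):
        basis = hhp_basis(n)
        print(f'degree {n}: {len(basis)} independent harmonic terms (2n+1 = {2 * n + 1})')
```

Running it confirms, for instance, 5 independent terms for n = 2 and 7 for n = 3, which is the per-degree completeness exploited by the HHP stress basis.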
SPARSITY RESULTS Both LC and HHP potentials are compared in terms of the sparsity of the resulting linear systems of equations. As previously mentioned, hybrid-Trefftz finite element analysis may generate very large linear systems, where the sparsity feature is more than desired. In this work, sparsity is defined as the number of null terms in relation to the total number of terms present in the linear system. A 99% sparsity level implies that only 1% of the linear system terms are non-zeros. The following numerical examples were programmed using the software MATLAB 2019b. The finite element matrices are transformed to sparse form through the sparse function, and the mldivide function is used as the solver for the linear system of equations. Bi-clamped beam under distributed bending load A bi-clamped beam under a distributed bending load q = 1 applied on the top face is used to verify the accuracy of the results obtained by both proposed approximation bases, and to observe how the sparsity levels change when more elements are added. The beam has a length of L and a square section of 0.2L, as shown in Fig. 1. The material is considered linear and isotropic, with Young's modulus E = 1 and Poisson's ratio ν = 0.2. Three sets of degrees of approximation are used: [5, 2], [7, 3] and [9, 4], where the first number is the domain stress approximation (S) degree and the second is the boundary displacement approximation (Z) degree. The displacement at the middle-bottom of the beam is evaluated and compared to the result obtained by a commercial finite element software analysis with 22,500 quadratic hexahedron displacement elements, v = 5.509qL/E. Table 4 displays the size and sparsity of matrix F (Eq. 31) of each element. This matrix is calculated only through the stress approximation function S, which results in a direct assessment of the sparsity level these functions can generate. In addition, the size of F is important information, since considerable computing time can be saved in the assembly of the matrices F by exploiting the fact that elements with the same material and size have the same F. Moreover, the assembly of the linear system (Eq. 37) is well suited to parallel processing. These two properties were not exploited in this work. The size and sparsity of the resulting linear system of equations (Eq. 37) are shown in Table 5. It is possible to verify that LC stress functions generated a slightly higher level of sparsity when compared to the HHP (the greatest difference, 1.42%, is found in the 1-element mesh of [9, 4] degrees). Table 5 also presents the computation time spent to solve the linear system of equations. Increasing the approximation degree (p-refinement) has more impact than h-refinement. It is also possible to note that HHP has, in most cases, a higher processing time. The two main reasons for this are the higher number of non-zero terms (lower sparsity) and the fact that, since HHP is a complete basis for every approximation degree, there are more equations in the resulting linear systems. This analysis ran on an Intel® Core™ i5-8400 CPU @ 2.80 GHz processor.
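The sparsity index reported in these tables is simply the percentage of zero entries of the assembled system, and the solve step mirrors MATLAB's sparse/mldivide workflow mentioned above. A minimal Python/SciPy sketch of the same post-processing is given below; the matrix generated here is a random stand-in, not the actual hybrid-Trefftz system of Eq. (37).

```python
# Sketch: sparsity index and sparse direct solve, analogous to the MATLAB
# sparse/mldivide workflow described in the text. The random matrix is only a
# placeholder for the assembled hybrid-Trefftz system (Eq. (37)).
import numpy as np
import scipy.sparse as sps
import scipy.sparse.linalg as spla

def sparsity_percent(A):
    """Percentage of exactly-zero entries (99% sparsity => 1% non-zeros)."""
    A = sps.csr_matrix(A)
    return 100.0 * (1.0 - A.count_nonzero() / (A.shape[0] * A.shape[1]))

n = 2000
K = sps.random(n, n, density=0.02, format='csr', random_state=0)
K = K + sps.identity(n, format='csr')     # keep the stand-in system non-singular
rhs = np.ones(n)

print(f'sparsity = {sparsity_percent(K):.2f} %')
u = spla.spsolve(K.tocsc(), rhs)          # direct solve, like MATLAB's mldivide
print('residual norm =', np.linalg.norm(K @ u - rhs))
```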
By analyzing Figure 2, it is possible to observe that p-refinement had a higher impact when fewer elements were present. When h-refinement was performed, very good results were obtained with few elements for the approximation sets used. Another remark is that both potentials reached very similar results. Taking the LC approximation results as a benchmark, Table 6 shows the relative difference compared with the HHP functions. Cracked flat plate under traction load In the subject of crack analysis, stress concentration and singular fields, hybrid-Trefftz elements were applied by Bussamra et al. (2014, 2016) in a generalized framework with associated Legendre and Chebyshev polynomials, and by Kaczmarczyk and Pearce (2009) in a nodal framework with Airy functions. To verify the accuracy of the stress predictions of the proposed finite element, a structure with a high level of stress gradient is analyzed. In this section, p- and h-refinement analyses of the stress intensity factor (K) of a cracked flat plate are shown. The plate is under a uniform far-field tension σ = 1, with a crack oriented 90° from the stress application direction. The material is considered homogeneous and isotropic, with E = 1 and ν = 0.3, and with dimensions according to Fig. 3. The behavior of a cracked plate can be described by the stress intensity factor (K), which defines the crack tip stress state with the help of the energy release rate (ΔG), which in turn is related to the variation of deformation energy (ΔU) in the process of crack growth (Δa) (Tada et al., 1973). Therefore, the stress intensity factor can be calculated through Eqs. (44) and (45) by increasing the crack length and calculating the variation of the energy release rate. The result can be compared with the reference solution of Tada et al. (1973), in which K is expressed in terms of a geometry factor F(a/H). The analysis of the proposed plate was made using only the HHP functions. Approximation degrees of [5, 2], [7, 3], [9, 4], [10, 3] and [10, 4] were used, where the first term is the domain stress approximation degree and the latter is the boundary displacement approximation degree. Five coarse meshes were analyzed, with 4, 12, 24, 48 and 72 elements (Fig. 3). For the given geometry and crack length the solution provided by Tada et al. (1973) is K = 5.0812. Results are displayed in Figure 4 and Table 7. CONCLUSIONS Orthogonal sets of polynomials have been used to derive stress bases in hybrid-Trefftz finite elements by many authors. Legendre and Chebyshev (LC) potentials generate complete sets of orthogonal approximation functions, but only up to the sixth degree. In this work, a new stress basis (HHP), derived from the non-orthogonal polynomials of Pascal's pyramid, is proposed for generating 3-D hybrid-Trefftz finite elements. The numerical results obtained are compared to the results from the Legendre and Chebyshev orthogonal and hierarchical stress approximation bases in terms of accuracy and linear system of equations sparsity.
The results showed that the HHP stress basis produces accurate stress and displacement approximations. When compared with the results produced by the LC functions in the first example, the vertical displacement at the center of the bottom face of the beam varied by less than 1.5% at most relative to the benchmark solution obtained from displacement finite elements in commercial software, and most results were around 0% when considering 4 decimal places. In terms of sparsity, HHP produced a very sparse linear system. It stayed slightly below the levels of the LC functions, but the sparsity levels generated are close enough to justify its use, along with the very good accuracy of the results. The second numerical test showed that coarse meshes can produce very good stress intensity predictions. Approximation sets [5, 2], [7, 3] and [9, 4] produced good results, with less than 3% error relative to the analytical solution, and approximations [10, 3] and [10, 4] displayed the best results, with less than 1% error when 24 elements or more are used. In conclusion, HHP functions are a valid option to derive the stress approximation basis for the hybrid-Trefftz formulation, as they produce very sparse linear systems, show good results, are complete for every approximation degree, and their polynomial terms are hierarchical and easily generated.
Figure 4: Convergence analysis of the cracked plate with HHP functions.
Table 1. Expected number of independent fields of functions under the Trefftz constraint. Gray areas represent incomplete degrees.
Table 2. Number of independent fields out of a total of 45. Each set of functions evaluated separately. Gray areas represent incomplete degrees.
Table 3. Number of independent fields out of a total of 45. Functions evaluated in pairs and in trio. Gray areas represent incomplete degrees. *Incomplete stress basis.
Table 6. Relative difference between LC and HHP approximation function results.
Table 7. Stress intensity factor K for the cracked plate using HHP stress approximation functions.
6,934.2
2020-01-01T00:00:00.000
[ "Mathematics" ]
Apolipoprotein E-Mimetic Peptide COG1410 Promotes Autophagy by Phosphorylating GSK-3β in Early Brain Injury Following Experimental Subarachnoid Hemorrhage COG1410, a mimetic peptide derived from the apolipoprotein E (apoE) receptor binding region, exerts positive effect on neurological deficits in early brain injury (EBI) after experimental subarachnoid hemorrhage (SAH). Currently the neuroprotective effect of COG1410 includes inhibiting BBB disruption, reducing neuronal apoptosis, and neuroinflammation. However, the effect and mechanism of COG1410 to subcellular organelles disorder have not been fully investigated. As the main pathway for recycling long-lived proteins and damaged organelles, neuronal autophagy is activated in SAH and exhibits neuroprotective effects by reducing the insults of EBI. Pharmacologically elevated autophagy usually contributes to alleviated brain injury, while few of the agents achieved clinical transformation. In this study, we explored the activation of autophagy during EBI by measuring the Beclin-1 and LC3B-II protein levels. Administration of COG1410 notably elevated the autophagic markers expression in neurons, simultaneously reversed the neurological deficits. Furthermore, the up-regulated autophagy by COG1410 was further promoted by p-GSK-3β agonist, whereas decreased by p-GSK-3β inhibitor. Taken together, these data suggest that the COG1410 might be a promising therapeutic strategy for EBI via promoting autophagy in SAH. INTRODUCTION Subarachnoid hemorrhage (SAH) constitutes ∼5% of all strokes with high mortality and morbidity (van Gijn and Rinkel, 2001;Bederson et al., 2009). The pathophysiological mechanism of SAH is not fully understood and recent studies mainly focus on early brain injury (EBI). EBI describes the brain injury typified by brain edema, elevated intracranial hypertension (ICP), and microcirculation compromise, within the first 72 h after SAH, which offers new approaches for the treatment of SAH (Kusaka et al., 2004). During EBI, subcellular organelle functions become disordered, and the autophagy-lysosomal system is activated (Chen et al., 2015). As the main intracellular recycling system during conditions of starvation and various stressors, autophagy is a critical for neuronal survival (Mizushima and Komatsu, 2011). It is well-documented that a properly enhanced autophagy serves as neuroprotection, and mainly achieved through anti-inflammatory processes in acute brain injury (Galluzzi et al., 2016), pharmacologic upregulation of autophagy could slow the progression of Huntington disease and tauopathy models (Mizushima and Komatsu, 2011). Furthermore, several pharmacological activators of autophagy could effectively decrease acute brain injury in the SAH model (Galluzzi et al., 2016). However, few of those agents are feasible for clinical application. The regulation of autophagy is a very complex process. As the direct downstream target of Akt, p-GSK-3β strongly induces the cytoplasmic autophagy (Park et al., 2013), and closely correlates with the alleviation of acute brain injury in SAH (Endo et al., 2006). Previous studies have shown that p-GSK-3β can be up-regulated by exogenous administration of apoE, thereby promoting neuronal survival (Han, 2004;Hayashi et al., 2007). However, apoE holoprotein is greatly restricted due to a molecular weight of 34 kDa, which makes it hard to cross the blood-brain barrier (BBB), thereby limiting its translational study. 
Recently, a modified apolipoprotein E (apoE) mimetic peptide called COG1410 has been widely investigated. COG1410 is an apoE-mimetic peptide derived from the receptor-binding region, which could activate downstream receptors and produce beneficial effects. With a composition of apoE residues 138-149, and the modification of two amino residues, COG1410 could effectively cross the blood brain barrier (BBB), and potentially have a long duration for the treatment of SAH . We recently showed that COG1410 could exert neuroprotective effects in EBI after experimental SAH, and activate downstream effectors including Akt . Based on these observations, we aimed to investigate the effect of COG1410 on autophagy and the potential mechanism, during EBI after experimental SAH. Animals A total of 130 adult male C57BL/6J mice were obtained from the Laboratory Animal Center of Chongqing Medical University, in which 98 mice were included in the experiment, and the other 32 mice died unexpectedly, as shown in the Supplementary Table 1. All the mice were aged 6-8 weeks, with a mean (±SD) weight of 20 ± 2 g. We followed the Guide for the Care and Use of Laboratory Animals of China, the number of mice was minimized as shown in the figures, and all the experimental mice were euthanized under deep anesthesia. Induction of SAH To better mimic EBI process, we chose the endovascular perforation SAH model, as previously described (Peng et al., 2017). Briefly, mice were anesthetized with sodium pentobarbital and placed in a supine position. In the condition of mechanical ventilation, the right carotid artery was exposed, and a 5-0 filament was inserted into the isolated right external carotid artery to rupture the bifurcation of the right internal carotid artery (ICA). For the sham-operated mice, the filament entered the ICA but did not pierce the bifurcation. A sharp increase in ICP was a reliable parameter to detect the induction of SAH in the mouse model, and we mainly judged the success of SAH by monitoring the occurrence of the typical Cushing response. Animal Groups To evaluate the time course of autophagy in EBI, typical SAH models were induced, and after 6, 24, 48, and 72 h, the brain specimens were removed under deep anesthesia and randomly divided into five groups; the sham group was included, and each group comprised 5 mice based on the sample size calculation method (Charan and Kantharia, 2013). To observe the effect of the apoE-mimetic peptide, 53 mice were assigned to four groups: sham-operated, SAH operated for 24 h, saline-treated and COG1410-treated. In each group, 6 mice were used for western blotting measurement, 6 mice were used for immunofluorescence staining analysis, and the other mice was used for transmission electron microscope. For the mice in the COG1410 group, the peptide was injected via the tail vein immediately after SAH (2 mg/kg; Laskowitz et al., 2012). To explore the role of p-GSK-3β, 50 mice were divided into five groups: SAH+saline, SAH+COG1410, SAH+COG1410+DMSO, SAH+COG1410+OA, and SAH+COG1410+LY. In each group, 5 mice were used for western blotting measurement, and the other 5 mice were used for immunofluorescence staining analysis. 
In the last three groups, dimethyl sulfoxide (DMSO), okadaic acid (OA), and LY294002 (LY) were injected through the lateral ventricle, the injections were performed 15 min before the SAH operation, with a microsyringe fixed to the stereotaxic frame, the coordinates were −0.9 mm anteroposterior, ±1.5 mm mediolateral, and −3.5 mm dorsoventral from the bregma (Yang et al., 2014;Baker and Götz, 2016). We acquired the DMSO (sc-358801), OA (sc-3513A), and LY294002 (CAS 936487-67-1) from Santa Cruz Biotechnology. Measurements of Neurological Deficit and SAH Severity Garcia score (Garcia et al., 1995) consisted of six programs, the scores of the first three programs were 0-3, and the scores of the last three programs were 1-3. Each score was an integer, with the minimum score equal to 3 and the maximum score equal to 18. A new grading system was utilized to grade the SAH model (Sugawara et al., 2008). Basal cisterns were divided into six segments, and each segment was graded from 0 to 3; grades 0, 1, 2, and 3 indicate no obvious subarachnoid blood clot, a minor blood clot, a moderate blood clot, and a large subarachnoid blood clot with an invisible cerebral arterial circle, respectively. Additionally, total grades of 0-7, 8-12, and 13-18 indicated mild, moderate, and severe SAH, respectively. Western Blot According to the previous reports, autophagy is more pronounced in the injured hemisphere, especially in the fronto-basal cortex (Lee et al., 2009), so the right hemisphere was quickly removed from the deeply anesthetized mouse and cut into pieces on ice. The pieces were subsequently homogenized in RIPA buffer that was previously mixed with a protease inhibitor (PMSF, 0.1 mM) and phosphatase inhibitor (sodium orthovanadate, 1 mM), and the mixture was centrifuged to obtain supernatant extracts. After the extracts were diluted with loading buffer, the protein extracts were transferred to polyvinylidene difluoride membranes after SDS-polyacrylamide gel electrophoresis and incubated with primary antibodies, which included Beclin-1 (1:500, Proteintech, 11306-1-AP), LC3B-II (1:200, Bioss Antibody, bs-4843R), p-GSK-3β (Serine 9) (1:200, Santa Cruz Biotechnology, sc-11757-R). Horseradish peroxidase-conjugated secondary antibodies were incubated with the primary antibodies, and the signal was detected by an ECL reagent (Thermo Scientific, #15168) and quantified using Image Lab software (Bio-Rad, ChemiDoc XRS+). Transmission Electron Microscopy Mice were deeply anesthetized and sacrificed. Samples from the glutaraldehyde prefixed right fronto-basal cortex were then quickly acquired, cut into small samples, placed in 2.5% phosphate-buffered glutaraldehyde and stored overnight at 4 • C. After dehydration in a graded ethanol series and subsequent fixation with osmium tetroxide, the samples were impregnated with epoxy resin and placed in an embedding medium. Sections that were 60-nm thick were sliced and stained with a saturated uranyl acetate solution, and the ultrastructure of neurons was observed using an electron microscope (JEOL, JEM-1400). Statistics The statistics were analyzed by SPSS software 17.0 (SPSS Inc., Chicago, IL, USA), significance was assessed by one-way analysis of variance (ANOVA) followed by the LSD, and the data are presented as mean ± SD. 
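As a small illustration of the grading scheme just described (six basal-cistern segments each scored 0-3, with totals of 0-7, 8-12, and 13-18 classed as mild, moderate, and severe SAH), the following Python sketch computes the total grade and its category. The function name and interface are illustrative, not taken from the study.

```python
# Sketch of the SAH grading scheme described above (Sugawara et al., 2008):
# six basal-cistern segments, each scored 0-3; the total (0-18) maps to a category.
def sah_grade(segment_scores):
    """segment_scores: iterable of six integers, each between 0 and 3."""
    scores = list(segment_scores)
    if len(scores) != 6 or any(s not in (0, 1, 2, 3) for s in scores):
        raise ValueError("expected six segment scores, each 0-3")
    total = sum(scores)
    if total <= 7:
        category = "mild"
    elif total <= 12:
        category = "moderate"
    else:
        category = "severe"
    return total, category

# Example: a clot pattern scoring 3, 3, 2, 3, 2, 3 totals 16 ("severe"), in the same
# range as the mean grades of roughly 13-15 reported for the SAH groups below.
print(sah_grade([3, 3, 2, 3, 2, 3]))
```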
Autophagy Is Activated and Peaked at 24 H After Experimental SAH To investigate the alteration of autophagy ( Figure 1A) in EBI after Experimental SAH, we measured a time course of Beclin-1 and LC3B-II, two commonly used autophagic markers (Liang et al., 2008;Mizushima and Yoshimori, 2014). According to protein quantification results, both the Beclin-1 and LC3B-II ( Figure 1B) levels were elevated at 6 h and peaked at 24 h compared with the sham group. The results indicate that the autophagy level is enriched at 24-h, and thus, the time point is appropriate for the subsequent intervention study. The Specimen, SAH Grade, and Neurological Score of SAH Models In Figure 2A, blood clots were evident in the ventral subarachnoid space of SAH mice, but not in the control animals. The boxed region represents the right fronto-basal cortex areas for immunofluorescence staining and transmission electron microscopy. In Figure 2B, the SAH grade of mice in SAH group, SAH+Saline group, and SAH+COG1410 group were higher than that in Sham group, but were not statistically different when they compared with each other. In Figure 2C, neurological scores of the SAH group were strongly decreased relative to the sham group, but the neurological dysfunction can be significantly attenuated by the administration of COG1410. These data suggest that the SAH models are typical and steady, and the neurologic defect of mice models could be improved by COG1410 treatment. Transmission Electron Microscopy of COG1410 Promoted Neuron Autophagy Through analysis of electron microscope images captured by a blinded observer, we found that subcellular structures, including the nuclear membrane, mitochondria and endoplasmic reticulum, were relatively intact in the sham group ( Figure 3A). After experimental SAH, the mitochondria began to swell , but for the typical SAH models, the blood clots distributed into the basal cistern and nearly covered the circle of Willis, as indicated by black arrow heads (The right picture). The boxed region represents the right fronto-basal cortex area, which is the region of interest for immunofluorescence microscopy and transmission electron microscopy. (B) The total SAH grade in the SAH, SAH+saline and SAH+COG1410 groups showed no significant difference, the grades were 15.33 ± 1.21, 14.33 ± 1.63, and 13.83 ± 1.83, respectively (n = 6 animals per group). (C) Mice subjected to SAH for 24 h had significantly low Garcia scores compared with the sham-operated mice (SAH: 11.20 ± 2.05, vs. sham: 16.40 ± 1.14); after administration of COG1410, the neurological score significantly increased to 15.20 ± 0.84 (n = 6 animals per group). **P < 0.01 vs. sham group; ## P < 0.01 vs. SAH group. in the neurons, the nuclear membrane became shrunken, and the early autolysosome was evident ( Figure 3B). In the saline group, the multiple-membrane autophagosome became evident and contained cytoplasmic disintegration ( Figure 3C). After administration of COG1410, autophagy was increased, and neurons showed both autophagosomes and autolysosomes in the cytoplasm (Figure 3D). These results suggest that the administration of COG1410 could enhance neuronal autophagy, which may contribute to the breakdown of injured cellular components after SAH. COG1410 Improves Autophagy Level Beclin-1 and LC3B-II levels in right brain hemispheres, as quantified by western blot, were prominently elevated after SAH, and then picked up at 24 h. 
COG1410 treatment significantly increased Beclin-1 and LC3B-II levels when compared with the saline-treated group 24 h after SAH (Figures 4A,B). To confirm the major cell type of Beclin-1 and LC3B-II, immunofluorescence analysis that involved double staining with NeuN was performed, and the results were consistent with the quantitative data from the western blot analysis (Figures 4C-E). These data indicate that the administration of COG1410 elevates autophagy in the injured brain and in neuronal cells of an SAH model. The ApoE-Mimetic Peptide Elevates p-GSK-3β Level Given that the phosphorylation of GSK-3β is an important regulator for autophagy (Zhou et al., 2011;Park et al., 2013), we conducted western blot ( Figure 5A) and immunofluorescence ( Figure 5D) analyses for p-GSK-3β in the right cerebral hemisphere. According to the quantification results (Figure 5B), the total p-GSK-3β level increased sharply in the SAH group and administration of COG1410 further increased the p-GSK-3β level of ipsilateral hemisphere compared with SAH+Saline group. The number of co-staining neurons was also augmented in the SAH group and further increased by the administration of COG1410 (Figure 5C). This shows that p-GSK-3β expression FIGURE 4 | Autophagy of mice brain and neuron with the administration of COG1410. (A,B) The band intensities of Beclin-1 and LC3B-II were conspicuously increased in the SAH group compared with the Sham group, and further enhanced in the COG1410 group compared to the SAH group. No significant differences were observed between the SAH+Saline group and the SAH group, the relevant results are shown in the quantification (n = 6 animals per group). (C-E) The co-staining of NeuN with Beclin-1 and LC3B-II is indicated by white arrow heads and amplified in the upper-right corners of the merged images, and the relevant bio-markers are shown in the two smaller pictures below. The number of co-stained cells was also prominently increased in the SAH group and increased further in the SAH+COG1410 group (n = 6 animals per group). **P < 0.01 vs. the sham group; # P < 0.05 vs. the SAH group. FIGURE 5 | p-GSK-3β level of mice brain and neuron after administrating COG1410. (A,B) In the western blot analysis, p-GSK-3β level was increased substantially in the SAH group compared to the Sham group, and enhanced further in the SAH+COG1410 group compared to the SAH+Saline group, while there were no major differences between the SAH+Saline group and SAH group (n = 6 animals per group). (C,D) The co-staining of NeuN and p-GSK-3β indicates that the expression of p-GSK-3β was further elevated by COG1410, as the analysis result shown (n = 6 animals per group). **P < 0.01 vs. the sham group; # P < 0.05 vs. the SAH group. in the brain and neurons is also elevated after treatment with COG1410, which is closely related to the altered autophagy by administration of COG1410. The Phosphorylation and Dephosphorylation of GSK-3β Were Effectively Achieved As the main inactive state of GSK-3β, p-GSK-3β could be effectively achieved by a serine inhibitor OA, and greatly depressed by the PI3K inhibitor LY (Park et al., 2013). In western blot, the p-GSK-3β level was increased after administrating COG1410. Furthermore, the p-GSK-3β level was significantly elevated after the OA injection but apparently reduced after the LY injection (Figures 6A,B). Immunofluorescence further supported the results of western blot (Figures 6C,D). 
COG1410 Promotes Neuronal Autophagy by Promoting the Phosphorylation of GSK-3β COG1410 increased the protein levels of autophagy, and the band densities of Beclin-1 and LC3B-II were both further promoted in the SAH+COG1410+OA group. However, the protein level showed a striking decrease in the SAH+COG1410+LY group (Figures 7A,B). Additionally, the immunofluorescence staining for the co-staining analysis was also consistent with the western blot analysis (Figures 7C-E). DISCUSSION In this present study, we assessed the neuroprotective effects of COG1410 and explored the possible underlying mechanism of COG1410 in neuronal autophagy during EBI after SAH. With the administration of apoE-mimetic peptide COG1410, the autophagy was further promoted, accompanied by the improvement of neurological score. Additionally, we confirmed that the autophagy-promoting effect of COG1410 can be achieved by the phosphorylation of GSK-3β, both in the injured brain tissue and neuronal cell. As an essential pathway to recycle long-lived proteins and damaged organelles (Levine and Klionsky, 2004;Wu H. et al., 2016), autophagy is ubiquitously present in various diseases, and the current studies consider brain autophagy as important for neuronal survival in SAH (Galluzzi et al., 2016). The pharmacological promotion of autophagy significantly alleviates the acute brain injury of SAH (Jing et al., 2012;Zhao et al., 2013;Chen et al., 2014;Shao et al., 2014), whereas inhibition of autophagy using either 3-MA or wortmannincan worsen the brain injury (Jing et al., 2012;Zhao et al., 2013) or eliminate the neuroprotective effects of enhanced The band density of p-GSK-3β was noticeably increased in the SAH+COG1410 group compared to the SAH+Saline group. The OA injection increased the p-GSK-3β level sharply, whereas the level decreased linearly in LY-injected group compared with the DMSO injecting mice (n = 5 animals per group). (C,D) The positive double staining neurons were counted and quantified; the cell number increased noticeably in the SAH+COG1410 group. Furthermore, the co-staining-positive neurons increased drastically in the SAH+COG1410+OA group, but decreased noticeably in the SAH+COG1410+LY group (n = 5 animals per group). *P < 0.05 vs. the SAH+saline group; ## P < 0.01 vs. the SAH+COG1410 group; && P < 0.01 vs. the SAH+COG1410 group. autophagy (Shao et al., 2014). These observations suggest that properly augmented autophagy levels benefit neuronal survival and may affect the prognosis of EBI. In the current study, administration of COG1410 increased autophagic markers and p-GSK-3β levels, and the neurological scores of SAH models were improved. This may suggest that COG1410-promoted autophagy is a protective intervention for SAH models, and points to the potential involvement of p-GSK-3β in the process. As the downstream effector kinase of Akt, p-GSK-3β could be increased by the phosphorylation of Akt (Wang et al., 2017), and promote the survival of damaged neurons (Hetman et al., 2000;Liang and Chuang, 2007;Collino et al., 2008). The Akt/GSK-3β signaling could also alleviate acute brain injury following experimental SAH (Endo et al., 2006). Our previous study demonstrated that COG1410 could effectively promote the phosphorylation of Akt . Furthermore, phosphorylated GSK-3β profoundly induces the occurrence of neuronal autophagy (Zhou et al., 2011). Thus, we hypothesized that COG1410 promotes autophagy by indirectly facilitating the phosphorylation of GSK-3β. 
In this study, the apoE analog could promote autophagy when GSK-3β was phosphorylated by OA, but depress autophagy when GSK-3β was dephosphorylated by LY. In fact, the exogenous administrated apoE effectively phosphorylates GSK-3β through LRP1/GSK-3β pathway, and significantly activates downstream pathways by binding to 34]. Therefore, COG1410 may also promote the phosphorylation of GSK-3β through LRP1/GSK-3β pathway, but it remains further confirmation. According to our previous study, among the three subtypes of apoE protein-coding gene (APOE), the APOE4 isoform is FIGURE 7 | Effect of p-GSK-3β to the COG1410 promoted neuron autophagy. (A,B) Western blot analysis of Beclin-1 and LC3B-II indicates increased band densities for the SAH+COG1410 group compared to the SAH+Saline group, and the densities increased further in the SAH+COG1410+OA group but decreased dramatically in the SAH+COG1410+LY group compared with the SAH+COG1410+DMSO group, as the quantitative results shown (n = 5 animals per group). (C-E) The double staining of NeuN with Beclin-1 and LC3B-II yielded similar results as the analysis results shown (n = 5 animals per group). *P < 0.05 vs. the SAH+saline group; ##P < 0.01 vs. the SAH+COG1410 group; &&P < 0.01 vs. the SAH+COG1410 group. unfavorable for neuronal survival compared to the APOE3 isoform (Jiang et al., 2007(Jiang et al., , 2015, while COG1410 is derived from the apoE-receptor-binding region and free from the APOE polymorphism. Additionally, COG1410 can effectively pass through the BBB. By reducing the number of activated microglia, COG1410 effectively alleviates the neuroinflammatory response in TBI (Jiang and Brody, 2012), and decreases the neurologic deficit during intracranial hemorrhage (Laskowitz et al., 2012). As a neuroprotective agent, COG1410 can inhibit BBB disruption with the activating of CypA/NF-κB/MMP-9 pathway, and reduce apoptosis and neuroinflammation by activating downstream effectors in EBI following SAH . From these facts, COG1410 retains the receptor-binding property of apoE holoprotein, the peptides could activate the downstream effectors, and produce neuroprotection in EBI after experimental SAH. Although these data indicate the potential neuroprotection and the possible mechanism of COG1410-promoted autophagy, our study has some limitations. The other GSK isoforms are incompletely detected by the experiments, and the neurological effect of COG1410 in the late phase of SAH is worthy of future studies. Additionally, further investigation needs to be performed in the safety, efficacy, and clinical study of the apoE-mimetic peptide.
4,821.6
2018-03-05T00:00:00.000
[ "Biology", "Medicine" ]
EFFECT OF STORAGE MEDIA ON FRACTURE RESISTANCE OF REATTACHED TOOTH FRAGMENT: AN IN VITRO COMPARATIVE STUDY Dr. Nama Shilpa 1, Dr. V V Rao 2, Dr. Madhu Vasepalli 3, Dr. Minor Babu MS 4, Dr. R Punithavathy 4 and Dr. Satyam M 4. 1. Post graduate student, Department of Pedodontics and Preventive Dentistry, Lenora Institute of Dental Sciences, Rajahmundry, Andhra Pradesh. 2. Professor and HOD, Department of Pedodontics and Preventive Dentistry, Lenora Institute of Dental Sciences, Rajahmundry, Andhra Pradesh. 3. Professor, Department of Pedodontics and Preventive Dentistry, Lenora Institute of Dental Sciences, Rajahmundry, Andhra Pradesh. 4. Reader, Department of Pedodontics and Preventive Dentistry, Lenora Institute of Dental Sciences, Rajahmundry, Andhra Pradesh. Traumatic injuries to anterior teeth deserve attention not only because of the damage to the dentition but also because of the psychological impact they may have on patients and parents 1. The treatment of an uncomplicated coronal fracture is a considerable challenge for the dentist. Many parameters are implicated in a successful outcome, such as bringing back the original form and dimensions of the tooth and the opacity, translucency, fluorescence and opalescence of the original tooth. Several techniques have been proposed for restoring fractured crowns, including stainless steel crowns, orthodontic bands, pin-retained resin restorations, crowns, composite resins with acid-etch adhesive techniques, porcelain veneers and jacket crowns, each of which shows a diverse degree of success 2. However, these types of treatment do not always guarantee adequate, long-lasting esthetics, and they require the sacrifice of healthy tooth structure. Fragment reattachment is an excellent option when the fragment is available. It is the process of reattaching the traumatized fragment to the respective tooth using an adhesive material. It is a simple, low-cost method that provides better esthetics and increased wear resistance, and thus improved function, compared to other techniques 3,4. The major drawback of the fragment reattachment technique is that the reattached fragment is highly prone to fracture whenever it is subjected to new trauma or excessive masticatory forces 5. So improvement of this technique is still necessary to overcome unexpected traumatic situations. The success of the fragment reattachment technique depends on the time lapse between trauma and restoration and on the patient's awareness of the importance of fragment storage, as the fractured part may lose its moisture after some time 6. The restoration time can affect the bond strength of these restorations because dentin moisture is essential for achieving a high bond strength of composite resins to dentin 7. Storing the fragment in a storage medium increases the bond strength and fracture resistance by preventing dehydration and dimensional changes 8,9,10. It also improves esthetics 11. This study is designed to look for a better storage medium for preservation of the fragment before reattachment and to determine to what extent the fracture resistance will be improved.
Material and method:- A total of 48 extracted Permanent human maxillary central and lateral incisors were used in the study.Teeth that were freshly extracted for periodontal reasons were included and teeth having cracks, dental caries and other structural defects were excluded from the study.Collected teeth were autoclaved for 40 minutes as infection control procedure.Tissue remnants on root surface were removed with ultrasonic tips and curettes.Then the teeth were stored in distilled water until experimentation.The teeth were measured on the labial side from cervical to incisal edge with a digital calliper.The measured length in millimeters is divided by 3 and then teeth were marked at one third distance from the incisal edge.The root of each tooth was embedded in acrylic resin till the level of cervico enamel junction.This acrylic block was prepared according to zig size of universal strength testing machine.Specific numbering was given to each acrylic block. All the 48 teeth with acrylic blocks were randomly divided into 4 groups, following lot method of sampling (n = 12).Group I milk, group II normal saline, group III coconut water and group IV dry.In group IV teeth were kept dry and this group is negative control.All the teeth with acrylic blocks were stored in distilled water until sectioning. The teeth were cut on the marked line perpendicular to long axis of the tooth using low speed double sided diamond disk using saline as a coolant.Sectioned tooth fragments were stored in respective storage media according to numbering given on the acrylic block.All the teeth fragments were stored for 24 hours in respective storage media.Sectioned tooth fragments in the dry group were not stored in any of the storage media but were kept dry.Tooth remnants along with acrylic block were stored in distilled water until reattachment. Fragments were reattached by simple reattachment procedure without any additional preparation.37% phosphoric acid was applied to the tooth remnant and fragment for 15 seconds and rinsed with water for 10 seconds, this was followed by air drying for 5 seconds to remove excess water. Bonding agent (ADPER SINGLE BOND 2, 3M ESPE, st.Paul, MN, USA ) was applied in 2 consecutive coats.Then surfaces were dried for 5 seconds using an air syringe to allow solvent evaporation.The bonding agent was cured for 20 seconds in fractured fragment and 20 seconds in tooth remnant. 1606 The flowable composite (FILTEK FLOWABLE Z 350, 3M ESPE, USA) was applied on the fractured surface of fragment and tooth remnant.Fractured fragment was carried to the tooth remnant by means of sticky wax.After repositioning, light curing was done in 4 stages: 20 seconds in mesio buccal half, 20 seconds in mesio lingual half, 20 seconds in disto buccal half, 20 seconds in disto lingual half.After reattachment, specimens were stored again in distilled water until thermo cycling. 
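Two of the protocol steps above reduce to quick arithmetic that is easy to standardize across operators: locating the sectioning mark at one third of the crown length from the incisal edge, and the total light-curing time per specimen (20 seconds of bonding agent on each of the two surfaces, plus four 20-second composite stages). A small illustrative sketch follows; the function names and the example crown length are hypothetical, not measurements from the study.

```python
# Sketch of two small bench calculations implied by the protocol above:
# the incisal-third sectioning mark and the total light-curing time per specimen.
def incisal_third_mark_mm(crown_length_mm):
    """Distance from the incisal edge at which the tooth is marked and sectioned."""
    return crown_length_mm / 3.0

def total_curing_time_s(bond_cure_s=20, bond_surfaces=2, composite_stages=4, stage_s=20):
    """Bonding agent cured on fragment and remnant, then composite cured in 4 stages."""
    return bond_cure_s * bond_surfaces + composite_stages * stage_s

print(incisal_third_mark_mm(10.5))   # e.g. a 10.5 mm crown is marked 3.5 mm from the edge
print(total_curing_time_s())         # 120 s of light curing per reattached specimen
```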
All the restored teeth were kept in distilled water at 37 degree centigrade and subjected to 100 cycles of thermo cycling.The temperature range is 5-55 degree centigrade with a dwell time of 15 seconds and transfer time of 10 seconds.After thermo cycling, the specimens were again stored in distilled water until testing.All the samples were then subjected to fracture strength test using universal testing machine (AUTOGRAPH) in CIPET Hyderabad, at a cross head speed of 0.6 mm per minute.The force application was always at 90 degrees with respect to buccal surface.Force required to fracture each tooth was recorded in Newtons (N).The data was represented in the form of tables and then subjected to statistical analysis.In this study p value < 0.05 was considered as level of significance. Results:- The results were expressed as mean values along with their standard deviations.Mean comparison among groups was done with ANOVA test.Mean comparison between groups was done with Tukey post hoc test. Results showed that Coconut water group obtained highest fracture resistance of 129.95Newtons (N).Fragments stored in Milk demonstrated a fracture resistance of 121.53 N. Normal saline group demonstrated a fracture resistance of 95.23 N. Dry group demonstrated least fracture resistance of 64.71 N. One way ANOVA indicated differences in the amount of force required for fracturing at the reattachment site.( p value <0.001). Tukey post hoc test was performed for group wise comparison, results showed that there was statistically significant difference between groups I and IV( P value < 0.001), and groups II and IV (p value 0.003) and groups I and II (0.027).Comparison between groups II and III ( p value< 0.002) revealed statistically significant difference.There was no statistically significant difference between group I and III ( p value 0.465). Discussion:- Even though fragment reattachment is simple, easy, minimally invasive, fast, economical and and effective procedure, the prognosis may not be good at all times.Fragment de-bonding happens because of repeated trauma, non physiological use of the tooth, or horizontal pulling of the tooth 12 .The risk of de-bonding is higher for children since they have more exposure to traumatic situations because of more physical activity. Fracture resistance of a material is a measure of its ability to retard crack initiation and propagation.High fracture resistance is required in clinical situations where high impact stresses are experienced 13 .Incisal edge reattachment of anterior teeth is one such demanding situation. A plethora of studies reported that fracture resistance of reattached tooth fragment can be improved with use of new adhesive agents, bonding materials and tooth preparation techniques techniques 14,15,16,17,18,19,20,21 .Apart from all these improvements, hydration of fragment also plays an important role in improving fracture resistance 9 .Storage medium acts as one of the key determinants since hydration aids in maintaining the vitality, esthetic appearance, and improving the bond strength.This is based on the fact that ability of storage media to retain the collagen framework and intertubular porosity patent for subsequent infiltration of monomers 22.Bond strength may increase as the resin penetrates into the intact dentinal tubules. 
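The analysis just described (one-way ANOVA followed by Tukey's post hoc test on the fracture forces, with p < 0.05 taken as significant) can be reproduced from the per-tooth measurements with standard statistical libraries. The sketch below uses Python/SciPy rather than the SPSS package the authors used; the group arrays must be supplied by the caller, and none of the values in the sketch come from the study.

```python
# Sketch: one-way ANOVA and Tukey HSD on fracture forces (N), mirroring the SPSS
# workflow described above. tukey_hsd is available in recent SciPy releases.
# The per-group measurements must be supplied by the caller.
from scipy.stats import f_oneway, tukey_hsd

def compare_groups(milk, saline, coconut, dry):
    """Each argument: list of the 12 fracture forces (N) recorded for that group."""
    f_stat, p_value = f_oneway(milk, saline, coconut, dry)
    print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")   # significance at p < 0.05
    # Pairwise comparisons (group I milk, II saline, III coconut water, IV dry).
    result = tukey_hsd(milk, saline, coconut, dry)
    print(result)

# Usage: compare_groups(group1_forces, group2_forces, group3_forces, group4_forces)
```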
There is a paucity of literature demonstrating the role of hydration or storage media in improving the success of fragment reattachment. Gold-standard storage media such as HBSS provide good hydration and stability of the collagen structure, but their availability is limited and they are expensive. The present study was therefore undertaken to determine the effect of commonly available storage media on the fracture resistance of reattached tooth fragments. Intact sound dentin stored in a dry environment for 24 hours retains only about 25% of its total moisture. This partial loss of dentin moisture, and the resulting shrinkage, appears to reduce the contact of the composite surface with dentin 23 . In this study, the fragments kept in a dry environment before reattachment had the lowest bond strength. The bond strength of fragments stored in milk was greater than that of fragments kept dry. This may be because storage in a moist environment prevents the collapse of the collagen fibers in dentin, leading to better bond strength. The bond strength of fragments left dry was lower than that of fragments stored in saline. This can be attributed to the following effect: if the fragment is dry, the collagen network in the dentin collapses, which prevents penetration of the bonding agent into the partly demineralized zone and results in a relatively low bond strength. The bond strength of fragments stored in coconut water was greater than that of fragments kept dry, because rehydration of the surfaces restores approximately 50% of the fracture strength of the original tooth 7 . A dried fragment has a lower bond strength than a fractured part that is kept in a moist environment or rehydrated before reattachment. The bond strength of fragments stored in milk was greater than that of fragments stored in normal saline. Calcium and phosphate are the main minerals in milk and can stiffen and harden both demineralized and healthy dentin; this may explain the enhanced bond strength of fragments stored in milk 8 . In the present study there was no statistically significant difference between the bond strengths obtained with fragments stored in milk and in coconut water. Coconut water has a higher osmolality than milk. It can also be hypothesized that the higher water content of coconut water allowed better wetting of the dentin, preventing the collapse of the collagen fibers that take part in resin tag formation 24 . Fragments stored in normal saline obtained lower bond strengths than fragments stored in coconut water. Previous studies comparing the effectiveness of coconut water and normal saline as storage media for avulsed teeth found that normal saline was superior to pure coconut water 25 ; the likely reason is that the factors required for PDL cell viability differ from those required for fragment reattachment. The pH of normal saline is 5.9, which is sufficiently acidic for some demineralization to occur, and saline lacks the calcium and phosphate ions needed for remineralization. It has been shown that the hardness and Young's modulus of elasticity of dentin decrease when specimens are stored in normal saline, presumably because the loss of surface calcium leads to hydrolysis of unprotected collagen fibrils.
Limitations of the study and further research:- Maxillary central and lateral incisors were selected for this study because these teeth are the most prone to fracture. Central and lateral incisors differ in cross-sectional area, so the bonding area differs between them, and this will have an impact on the bond strength values. Uncomplicated crown fractures occur more commonly in young patients, but obtaining sound young permanent incisors is very difficult because their extraction is unethical. In this study, incisors extracted for periodontal reasons, which are usually teeth of older people, were used. Aging causes alterations, especially in dentin, which negatively influence bond strength; this is one limitation of the present study. The extracted teeth were sectioned with a double-sided diamond disc to obtain the fragments. The fractured surfaces obtained by this method may differ from the surfaces of natural fractures. Cutting with a disc may also produce a smear layer, and the small loss of tooth structure means the fragment may not fit precisely over the remaining tooth structure; this is a further limitation of the present study. Long-term clinical trials and in-vitro studies on larger numbers of samples need to be undertaken to elucidate the effect of storage media on the fracture resistance of reattached tooth fragments. Conclusion:- The effectiveness of coconut water as a storage medium was superior to that of normal saline and milk. Within the limits of this study, it can be concluded that coconut water can be used as a storage medium for fragment reattachment of teeth. The conclusions drawn from the present study should be further evaluated by long-term clinical studies. More research in this direction is needed, as the role of the storage medium in fragment reattachment has proved critical for the success of the restoration. Table 1: Mean comparison among groups using one-way ANOVA. Table 2: Mean comparison between group I and group II. Table 5: Mean comparison between group II and group III. Table 7: Mean comparison between group III and group IV.
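The statistical workflow reported above (one-way ANOVA followed by Tukey's post hoc test at the 0.05 level) can be reproduced with standard tools. The sketch below is illustrative only: the individual per-specimen force readings are not reported in the paper, so the values are generated at random around the reported group means, and the variable names are assumptions.

```python
# Illustrative sketch of one-way ANOVA followed by Tukey's HSD test, matching the
# analysis described above. Per-specimen forces are hypothetical placeholders
# drawn around the reported group means (n = 12 per group); only the group labels
# and the 0.05 significance level follow the study design.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
groups = {
    "milk":          rng.normal(121.5, 15, 12),
    "normal_saline": rng.normal(95.2, 15, 12),
    "coconut_water": rng.normal(130.0, 15, 12),
    "dry":           rng.normal(64.7, 15, 12),
}

# One-way ANOVA across the four storage media.
f_stat, p_value = f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Tukey HSD for pairwise comparisons at alpha = 0.05.
forces = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(forces, labels, alpha=0.05))
```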
3,236.8
2017-04-30T00:00:00.000
[ "Medicine", "Materials Science" ]
APPLICATION OF THE BASIC MODULE'S FOUNDATION FOR FACTORIZATION OF BIG NUMBERS BY THE FERMAT METHOD Introduction At present, the issue of information security is one of the most relevant. One of the ways to solve it is information encryption. Among the methods of encryption, the asymmetric crypto-algorithm (ACA) RSA has acquired widespread application. Its cryptographic resistance is caused by the complexity of factorization of big numbers N = p·q, where p and q are prime numbers. In papers [1, 2], it was shown that the known examples of compromising the RSA algorithm work only for its specific implementations and, as a rule, in the general case are not the most effective for solving the factorization problem. Up to now, many factorization methods have been developed. The most frequently used include the general number field sieve (GNFS), the quadratic sieve (QS), the Pollard method, and the Fermat method [3-6]. It is believed that each of these methods is the best (most effective in terms of computational complexity) in its own application area; thus, the Fermat method is most effective at sufficiently close values of the prime factors p and q. Literature review and problem statement It is commonly known that the Fermat method is used only for factors p and q of number N that are close in value, so the region of its application is quite narrow. The main ideas associated with reducing the computational complexity of the algorithm that implements it were proposed and studied relatively long ago and are presented in paper [9].
According to the classic variant of the algorithm of the Fermat method [10, 11], to derive the values of p and q the equation X^2 - N = Y^2 (1) is solved, where X and Y are positive integers. The unknown X is represented in the form X = x_0 + k, where x_0 = ceil(sqrt(N)) (2). The solution to equation (1) is obtained by searching over the values k = 0, 1, 2, ..., until the difference X^2 - N is the complete square of an integer. If a solution of (1) is obtained, p and q are determined according to the ratios p = X + Y, q = X - Y (3). The main disadvantage of the Fermat factorization method is the need to perform many arithmetically complex operations of squaring, subtraction and square-root calculation for big numbers, which determines its computational complexity. Here it is necessary to distinguish the following components of the high computational complexity of the basic algorithm: 1) the large number of X for which the ratio (1) should be checked; 2) the significant computational complexity of deriving the square root of multidigit numbers; 3) the high computational complexity of multiplication and addition of multidigit numbers. Most known variants of reducing the computational complexity of the Fermat factorization method use a procedure of preliminary sifting, either of the analyzed values X or of the check operations of square-root calculation. One way of solving the first problem is based on analysis of the values of the m lower bits of the factorized number [12]. Paper [9] considers the possibility of increasing the pitch of thinning, but the value of such a pitch is a constant magnitude; it can be equal to 2, 4, 6 and, in rare cases, 12, and therefore cannot significantly reduce the computational complexity of the Fermat method. That is why the search for ways of reducing the number of check values X, for which ratio (1) is checked, is one of the tasks explored in this study. Options for solving the second problem were proposed in [13, 14], where a reduction in the number of square-root calculations is achieved by analysis of the least significant bits of Y^2. A modified version of these algorithms is presented in [15]. In paper [16], a method was proposed for determining, without calculating the root, that the square root is not an integer. In papers [17, 18], reduction of the computational complexity of the Fermat factorization method is ensured through the use of modular arithmetic and the apparatus of continued fractions, respectively. In [9], it is proposed to check whether the difference X^2 - N is a quadratic residue modulo each of a certain set of foundations of modules b that are prime numbers. If ratio (1) is satisfied, then for an arbitrary foundation of module b the equality (X^2 - N) mod b = Y^2 mod b (4) holds, which is equivalent to the ratio that (X^2 - N) mod b is a quadratic residue modulo b (5). It should be noted that if the ratio (1) is satisfied, equalities (4) and (5) hold for an arbitrary b. The opposite is false: satisfaction of (1) does not follow from satisfaction of (5). However, if the ratio (5) is not satisfied, the ratio (1) cannot be satisfied. That is why, for those X for which (5) is not satisfied, the square root need not be derived, since X^2 - N cannot be the exact square of an integer.
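A minimal sketch of the classic search just described may help fix the notation; it follows the textbook form of equations (1)-(3) and is not taken from any of the cited implementations.

```python
# Minimal sketch of the classic Fermat search: X is stepped from ceil(sqrt(N))
# until X*X - N is a perfect square Y*Y, giving p = X + Y and q = X - Y.
from math import isqrt

def fermat_factor(n: int):
    """Return (p, q) with p * q == n for an odd composite n."""
    x = isqrt(n)
    if x * x < n:
        x += 1
    while True:
        diff = x * x - n          # candidate Y^2
        y = isqrt(diff)
        if y * y == diff:         # diff is a perfect square -> solution found
            return x + y, x - y
        x += 1                    # k -> k + 1 in the notation of equation (2)

print(fermat_factor(5959))        # -> (101, 59), factors close in value
```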
Through the implementation of checks (5) for a set of foundations of modules, the computational complexity of the Fermat method decreases. If each of the modules is a prime number then, as noted in [9], using one additional module in (5) reduces the number of X for which the difference X^2 - N can be a complete square by approximately two times. Such X will subsequently be called admissible. However, when using the first foundation of modules, which will subsequently be called basic and designated bb, the number of X for which the ratios (5) will be analyzed at the other values of modules decreases only by approximately half. Let us assume that bb is a foundation of module and bb* is the number of roots of equation (5) at b = bb. If subsequently, during analysis of the check X, only these bb* residues are analyzed, the number of analyzed X will decrease by a factor equal to Z(N, bb) = bb / bb*, (6) where Z(N, bb) will subsequently be called the acceleration coefficient. In the scientific literature there are no methods that use, as the basic foundation of module bb, numbers that are products of prime numbers or of powers of such prime numbers (for which the pitch for X is non-uniform), nor estimates of the value of Z(N, bb). Such an idea was proposed by the authors in papers [19-21], where the values of bb were determined from the condition of ensuring the lowest possible number of admissible X, and the impact of the number N on the number of admissible X in (5) at b = bb was not assessed. Such research is one of the problems solved here. The hardware capabilities for calculation, such as graphics cards, have increased considerably of late. Since the algorithm of the Fermat method is easy to parallelize, the task of number factorization could be performed using graphics processors. However, the data types currently used in them do not allow working with multidigit numbers directly. Methods for performing operations with numbers of the long type instead of similar operations with big numbers are therefore one of the most important tasks when designing modifications of the Fermat algorithm. That is why it is advisable to conduct studies on the use of the basic foundation of the module in ratios (5), both in terms of achieving a significant reduction in the number of admissible X and for reducing the computational complexity of the operations of multiplication and addition of multidigit numbers based on operations with numbers of the long type. The aim and objectives of the study The aim of this study is to reduce the computational complexity of the algorithm of the Fermat method of factorization of big numbers by using a basic foundation of the module that is the product of powers of prime numbers. This will make it possible to design hardware-software means of conducting cryptanalysis of ACA that are more effective in terms of speed and, consequently, to enhance the quality of evaluation of the crypto-resistance of the RSA ACA.
To accomplish the aim, the following tasks have been set: -to establish what prime numbers (multipliers bb) influence the value of acceleration coefficient Z(N, bb) at a fixed N, determined according to (6); -to find out how the values of Z(N, bb) are influenced by numbers N; -to offer the method to reduce the number of operations of multiplication and addition of multidigit numbers based on using the operation with the numbers of the long type when performing arithmetic operations with big numbers. Analysis of influence of number N and exponents of prime numbers in the structure of basic foundation The results of research into the impact of number N and powers of prime numbers in the structure bb were derived based on conducting numerical experiments.Analysis and generalization of the results are given below. A general idea of the change in acceleration coefficient at changes of bb and a fixed value of N can be obtained based on the data of Tables 1, 2, which show the information for all Nmodbb<60, coprime with 2, 3 and 5. Table 1 Value of acceleration coefficients Z(bb, Nmodbb) for various bb as products of number 60 on powers of 2, 3 and 5 for values of Nmodbb<60 coprime with 2, 3 and 5 Based on an analysis of data from Tables 1, 2, it is possible to draw two major conclusions: -at a change in bb, acceleration coefficient varies depending on the value of an additional multiplier in it; -acceleration coefficients take a series of the same values for a set of magnitude of Nmodbb, which at various bb is different. Thus, at the increase in bb by 3 times (bb=180), acceleration coefficient increases either by three times, or remains unchanged.Аt an increase in bb by 9 times (bb=540), acceleration coefficient increases either by 4.5 times, or remains unchanged.In this case, an increase takes place for the same Nmod3 as at bb=180.If bb increases by 5 or by 25 times, acceleration coefficient also increases for the same Nmod5.The increase in bb by 2 с1 *3 с2 *5 с3 times leads to an increase in acceleration coefficient that is equal to the product of accelerations, related to an increase in bb of the exponent of number 2 by с1, of the exponent of number 3 by с2 and the exponent of number 5 by с3. Thus, we can assume that for multipliers bb equal to p t , it is possible to determine a set of values Nmodp, for which the values of acceleration coefficients do not change at the change of exponent, and those for which they change.This assumption was checked using numerical experiments for multipliers bb -prime p from p=2 to p=31.The results of such research are given below.а) Multiplier bb р=2.Based on numerical experiments, it was found that for 2 t , it is advisable to use the value of exponent t³2, since at t=1, Nmod2 of acceleration is equal to 1. Table 3 shows the values of acceleration coefficients for all odd values of Nmod2 t at t=3÷7 that are coprime with bb. 
Based on data from Table 3, it is possible to make a conclusion that the character of a change in the values of acceleration coefficients for bb=2 t at t>3 is determined by magnitude of residue Nmod8, which was proved by additional numerical experiments with bb=2 t at t≤14.For such values of bb at t=1÷14, Table 4 shows the values of acceleration coefficients depending on Nmod8.According to data from Table 4, the magnitude of acceleration coefficient for bb=2 t at t>2 is determined by exponent t and the value of Nmod8.This is proved by the data in Table 3 at t=3¸7, where, for example, at t=7 and Nmod8=1 acceleration coefficients for values of Nmod2 7 , equal to 1, 9, 17, 25, 33, 41, 49, 57, 65, 73, 81, 89, 97, 105 and 113 will be the same (equal to 32/3), that is, for those that (Nmod2 7 )mod8=1.That is why the data from Table 4 allow estimating the constructions of the effective primary base bb for the cases when there is number 2 among prime multipliers bb.Thus, at Nmod8=3 and Nmod8=7 at bb=2 t and t≥3, acceleration coefficient always equals to 4 and there is no point using power 2 with the exponent higher than 3 in bb.If Nmod8=5, it will be optimal to use power 2 with exponent 5 in bb.But if Nmod8=1, it is possible to use power 2 with exponent 8 and more in bb. b) Multiplier bb р=3.In case multiplier 3 is included in bb, it was found that at Nmod3=2, the values of acceleration coefficients for bb=3 t at t³1 coincide, which is proved by numerical experiments with bb=3 t at t=1¸8.For such values of bb, Table 5 gives values for acceleration coefficients, depending on Nmod3 at t=1¸8.Table 3 Acceleration coefficients Z(bb, Nmodbb) for odd Nmod2 t at t=3¸7 bb Acceleration Z=Z(bb, Nmodbb) for all possible values of Nmodbb According to Table 5, it is possible to estimate the possibilities of construction of the effective primary base of bb for cases when there is number 3 among prime multipliers.Thus, at Nmod3=2 at bb=3 t and t>0, acceleration coefficient always equals to 3.There is no point using power 3 with the exponent higher than 1 in bb.If Nmod3=1, it is possible to use power 3 with exponent 4 and more in bb.Based on data from Table 6, it is possible to assess more accurately the possibilities of the effective primary base of bb for the cases when there is number 5 among prime multipliers Thus, at Nmod5=2 or Nmod5=3 at bb=5 t and t>0 (5 , mod5) 2.5. That is why it is advisable to use exponent t=1.If Nmod5=1 and Nmod5=4, it is possible to use in bb power 5 with exponent 2 and more.d) Multiplier bb р=7.If multiplier 7 is included in bb, it was found that the values of acceleration coefficients for bb=7 t at t³1 coincide for all Nmod7=3, Nmod7=5 and Nmod7=6, which was proved by numerical experiments with bb=7 t at t=1¸3.But at t>1 and Nmod7=1 Nmod7=2 and Nmod7=4, the value of acceleration coefficient increases and takes the same value.For such values of bb, Table 7 shows the acceleration coefficients depending on Nmod7 at t=1¸4. 
Based on data from Table 7, it is possible to assess the possibilities of constructing the effective primary base of bb for the cases when there is number 7 among prime multipliers of bb.Thus, at Nmod7=3, Nmod7=5 or Nmod7=6 at bb=7 t and t>0, acceleration coefficient is always equal to 2.3333.That is why there is no point using in bb power 7 with exponent more than 1.If Nmod7=1, Nmod7=2 or Nmod7=4, it is possible to use in bb power 7 with exponent 2 and more.We will show on the examples of numbers N, presented in Table 9, that taking into consideration the specifics of numbers N, namely, the residues of dividing N by 8 and by prime p from 3 to 23, makes it possible to construct a new bb, at which set D(bb, N) will contain the number of admissible X, which does not exceed its value for bb=277200, equal to 2880 cells of memory of type int.The primary foundation for bb=277200 is the product of powers of prime numbers 2, 3, 5, 7, 11: bb=2 4 *3 2 *5 2 *7*11 and is characterized by the value of acceleration coefficient for numbers N of magnitude of 96.25, shown in Table 9.According to the data given in Tables 4-8, we will construct new, more effective bb¢.The data are shown in Table 9. Taking into consideration the above recommendations on the selection of exponents of prime p -multipliers bb, Table 10 gives the refined values of bb ³ , which take into account the specificity of factorized number N. Table 9 Values of N j modp for p=2 3 =8 and prime p=3¸23 As it follows from data in Table 10 for refined bb¢, due to taking into account the specificity of number N, the amount of required memory of the computer decreased and the value of the acceleration coefficient simultaneously increased by 3.25÷5.091times, which approximately reduces factorization time by the same number of times.That is why it is advisable to consider the problem of searching for the optimal bb that takes into consideration the specificity of numbers N. Determining the optimal primary foundation of bb taking into account the specificity of the factorized number When setting the problem of searching for the optimal primary base of module bb, we will use the information about the structure of bb, about the properties of acceleration coefficients and the number of elements of set D(bb, Nmodbb): the number of elements of array D(bb, Nmodbb) is equal to: 1 / ( , ) / ( , ). According to ( 9) to (11), to determine bb, it is sufficient to determine the exponents of prime numbers -multipliers bb, where it is necessary to consider the relationship between the value of a prime number and an increase in acceleration at an increase in the exponent.For prime p from 2 to 23, the corresponding values of acceleration coefficients are determined according to data from Tables 4-8.When setting the task of searching for optimal bb with consideration of N, we will explore the possible types of the variants of values of exponents, depending on p, among which there will be an option when multiplier p is not used in bb and then z(p 0 , N)=1. For p=2, three types of variants are possible: 1) at Nmod8=3 or Nmod8=7, exponent t is always equal to 3; 2) at Nmod8=5, exponent t is always equal to 5; 3) at Nmod8=1, it is necessary to determine the value of exponent t. 
Thus, when choosing the exponent of prime p -multiplier of bb, the exponent is not determined only for the third variant.To assess the possible range of exponents in variant 3, we will use the function of relative increase in acceleration coefficient, reduced to a memory unit: which makes it possible to give an approximate estimate of the effectiveness of the primary base of module, related to additional multiplying bb by prime multiplier р.The values of function s(p, t) for prime р³2 and р≤31 for the series of variants of exponent are shown in Table 11. Sorting values s(p, t) in the descending order makes it possible to assess how effective the addition of multiplier p in bb will be.The more s(p, t), the higher the effectiveness.If s(p, t) is close to zero, at an increase in bb, acceleration coefficient increases slightly, but the amount of memory of a computer used to store increments for admissible X increases significantly.To search for the optimal bb taking into consideration the specificity of N and the methods for reducing computer memory, ratios (9) to (11) and limitations for the memory amount of the computer are used.The search for a maximum acceleration coefficient is through the search of admissible options of exponents of prime p -multipliers N. In the numerical calculations on determining the optimal bb with consideration of the admissible memory amount needed to store increments of admissible X, it was accepted that the primary base of module bb is the product of powers -for р=2 by Nmod8, we selected one of the possible types of variants, where in case if Nmod8=1, exponents of t=3¸12 were considered; -for p≥3 and p£31, two type of variants were selected: type 2 as well as one of the types 1 or 3, depending on the value of Nmodp. In addition, when determining the required memory amount of the computer, it was taken into account that bb is always divided into 4, that is, the cyclic sequence of increments for bb is repeated at least twice. In the numerical experiments for memory amount (magnitude Q max ) 10 2 , 10 3 , 10 4 , 10 5 , 10 6 , 10 7 , for each of the variants of the influence on acceleration coefficients, the maximum value of acceleration coefficient was determined.Since the number of such variants turns out to be quite large (equal to 3*2 8 =768), Table 12 shows the data only about Z min , Z max , and mean Z ср , which is equal to the mean value for all the variants, where coefficient 2 is assigned for reaching the equality of the variants at p=2 and the type of variant 1, and coefficient 1 is given for variants 2 and 3. Then the weighted sum of all the obtained maximum acceleration coefficients was divided by 1,024.Derived values of Z min , Z max and Z ср were shown in Table 12 in the form of a diagram in Fig. 1.According to data from Table 12, when using the optimal values of the basic foundation of the module, determined from the condition of the existing memory volume of computer, which does not exceed 10 7 cells of the long type, the number of check X will decrease in comparison with the basic algorithm of the Fermat method by 2.4×10 3 ¸ ¸4.2×10 4 times, where the mean value equals to 1.1×10 4 times.The computational complexity of the basic algorithm of the Fermat method will decrease by the same number of times.Further reduction of computational complexity can be achieved by reducing the number of operations of multiplication and subtraction of the numbers that exceed boundary values for the long type, which is considered below. 
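The search for the optimal primary foundation bb described above can be illustrated with a small brute-force sketch. In the paper the candidate exponents are evaluated through the per-prime acceleration tables (Tables 4-8) and relations (9)-(11); the sketch below instead counts admissible residues directly, which is only feasible for small bb, and all function names and the bb cap are illustrative assumptions.

```python
# Rough sketch of the optimal-bb search under a memory constraint: enumerate
# candidate exponents for the small prime multipliers of bb, keep only candidates
# whose number of admissible residues (memory cells for the increments) fits in
# q_max, and pick the candidate with the largest acceleration coefficient.
# Exhaustive and unoptimized; intended only to illustrate the idea.
from itertools import product

def count_admissible(n, bb):
    qr = {(r * r) % bb for r in range(bb)}
    return sum(1 for x in range(bb) if (x * x - n) % bb in qr)

def best_bb(n, q_max, primes=(2, 3, 5, 7), max_exp=4):
    best = (1.0, 1)
    for exps in product(range(max_exp + 1), repeat=len(primes)):
        bb = 1
        for p, e in zip(primes, exps):
            bb *= p ** e
        if bb == 1 or bb > 10 * q_max:      # crude cap to keep brute force cheap
            continue
        cells = count_admissible(n, bb)
        if cells <= q_max:                  # memory constraint on stored increments
            z = bb / cells                  # acceleration coefficient Z(N, bb)
            if z > best[0]:
                best = (z, bb)
    return best                             # (acceleration coefficient, chosen bb)

print(best_bb(5959, q_max=100))
```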
A decrease in the number of arithmetic operations with the numbers that exceed boundary values for the long type data The algorithm of the Fermat method implies two basic operations for the check X: calculation of difference X 2 -N and checking whether this difference will be the square of an integer.Based on using only the values X, which are admissible for bb, as check values, the number of check X decreased.However, this does not exclude performance of the operations of the calculation of difference 2 X N -and checking whether it will be the square of an integer.In addition, the roots of the equation ( 5) at b=bb can be big numbers, which can be seen in data from Table 12.That is why during the implementation of the modified algorithm of the Fermat method, it is proposed: 1. To use increments -differences of two nearest values of the roots of the equation ( 5) instead of the roots of the equation ( 5) at b=bb. 2. For each of the foundations of the modules of set to determine the roots of equation ( 5) at b=b k (k=1¸m) and form array M K of features for numbers from 0 to b k -1, in which 1 will mean that ( ) is the square residue, but zero, which is not true. The first proposal makes it possible to represent the current admissible for bb check Х in the form of: where X i+1 (X i ) is the following (previous) check Х, admissible for bb; Dx i is the increment for current admissible Х; X * is some intermediate fixed value, admissible for bb, which changes when magnitude DХ і is close to the boundary for the data of the long type.In such cases, we perform operations: X * =X * +DХ і , DХ і =0, as well as calculate residues s k =X*modb k (k=1¸m). Another proposal makes it possible to significantly decrease the number of operations of root calculation in relation to the basic algorithm of the Fermat method.To do this, in assessing the possibility that difference 2 X N -can be complete square, the values are calculated * , mod ( )mod where the sum k i s X + D is the number of the long type, and numbers r k,i are smaller than b k .It allows finding value Under condition that value M K [r k, i ]=1 for all k=1¸m, we calculate square root of 2 .X N -At this algorithm, the calculation of the value of check Х (multidigit number) is performed in two cases: -during calculation of the root of 2 X N -, when 2 ; X N is calculated; -during recalculation of X * =X * +DХ і , where DХ і is the number of the long type. Since at a sufficient number of modules in set MB b = = the first variant occurs so rarely that it can be neglected, the computational complexity of the proposed modified algorithm is determined by the operations of assigning value X * +DХ і to X * , where DХ і is a long type number and by calculations of residues s k =X*modb k (k=1¸m). 
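To make the preceding description concrete, here is a simplified sketch of the modified search: admissible X are visited through precomputed increments for the basic foundation bb, the extra moduli act as quadratic-residue filters in place of the flag arrays M_k, and the integer square-root test is run only when every filter passes. It is a plain single-machine illustration; the paper's device of accumulating increments in a long-type ΔX alongside a fixed X* and updating the residues s_k incrementally is deliberately not reproduced.

```python
# Simplified sketch of the modified Fermat search with a basic modulus bb and
# extra moduli used as quadratic-residue filters; illustrative only.
from math import isqrt

def modified_fermat(n, bb, extra_moduli):
    qr_bb = {(r * r) % bb for r in range(bb)}
    flags = {b: {(r * r) % b for r in range(b)} for b in extra_moduli}

    # Admissible residues of X modulo bb (roots of relation (5) at b = bb).
    adm = sorted(x for x in range(bb) if (x * x - n) % bb in qr_bb)
    adm_set = set(adm)
    # Increments between consecutive admissible residues (cyclically, modulo bb).
    incr = [((adm[(i + 1) % len(adm)] - adm[i]) % bb) or bb for i in range(len(adm))]

    x = isqrt(n)
    if x * x < n:
        x += 1
    while x % bb not in adm_set:          # move up to the first admissible X
        x += 1
    i = adm.index(x % bb)

    while True:
        diff = x * x - n
        if all(diff % b in flags[b] for b in extra_moduli):
            y = isqrt(diff)
            if y * y == diff:             # relation (1) is satisfied
                return x + y, x - y
        x += incr[i]                      # jump directly to the next admissible X
        i = (i + 1) % len(incr)

print(modified_fermat(5959, 60, (7, 11, 13)))   # -> (101, 59)
```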
When presenting multidigit numbers by its coefficients for the base 1,000 or 1,024, during calculating the value of X * +DХ і , on average 4÷5 operations of addition the numbers of long type and 8÷10 operations of division of the sum by the foundation will be performed.We will estimate how often the cases of computation of values X * +DХ і occur.The mean value of increments Dx i is estimated by the magnitude of acceleration coefficient.If we use the optimal values of bb, obtained at restrictions for the amount of available memory of the computer of the order of 10 7 cells of the long type, it is possible to reach the magnitude of the boundary value of the number of long type (»2.147×10 9 ) by the number of steps in the range from 50,000 to 900,000, where the average number of steps is equal to 2.147×10 9 /11,088.624»193,630.Now we will estimate the average number of operations with numbers of long type for values of N close to 2 1024 , performed in the proposed modified Fermat method at the number of check values for its basic algorithm, equal to 2.147×10 9 , when computation complexity of calculation operations 2 X N -and the root from 2 X N -can be neglected.For the modified algorithm of the method, we have: 1.One operation X * +DХ і ; on average 4-5 operations of addition of the numbers of the long type and 8-10 operations of dividing the sum by the foundation will be performed. 2. During performing operation X * +DХ і one time, we calculate the values of where numbers s k , DХ і and the sum sk+DХ і do not go beyond the boundaries of data of long type.That is the summary number of operations of m additions and m calculations of residues ( )mod ( 1 ). Primary values of ( 1) are calculated before the entrance of check values X to the main cycle of the search of admissible values for bb and are not taken into account here. 3. On average, around 1.95×10 5 operations DХ і =DХ і-1 + +Dx i , each of which is performed for long type numbers, which during representations of multidigit numbers by the array of coefficients, during factorization by foundation 1,024 requires every time on average 2 operations of addition and division. The total average number of operations with long type numbers at m<100: addition -around 4×10 5 , division and determining residue of division -around 4×10 5 , which is approximately equal to 2×2.147×10 9 /Z(N, bb). 1. 2.147×10 9 operations of increasing X by unity, complexity of which will be neglected. 2. 2.147×10 9 operations calculate the value of 2 X N -.We will consider that due to the ratio Thus, on average, for one check value of X among 2.147×10 9 operations, the modified algorithm requires 2/Z(N, bb) operations of adding and dividing long type numbers (including division with residue), and the basic algorithm of the Fermat method -O(logN+log 2 N) operations.If we consider that the mean value of Z(N, bb) is equal to 1.1×10 4 , and N is close to 2 1024 , computational complexity of the modified algorithm of the Fermat method decrease not less than by 10 7 times. 
Discussion of results of reducing the computational complexity of the modified algorithm of the Fermat factorization method An analysis of the factors that essentially influence computational complexity of the algorithm of the Fermat factorization method and its modifications revealed that they may include: а) a relative number of values of k in the ratio (2), at which it is possible to establish beforehand that ( ) 2 0 x k N + -will not be square of an integer (mean value is (z(N,bb)-1)/z(N,bb)); b) a relative number of values of k in the ratio (2), at which based on checking the ratios (4) for a series of foundations of modules, it is possible to establish that ( ) 2 0 x k N + -will not be the square of integer (mean value is (1/z(N, bb)*(1-2 -m ), where m is the number of integers in coprime modules that are used in the ratios (4)); c) a relative number of values of k in the ratio (2), at which if the ratios (4) for the selected set of values of foundations of modules are satisfied, it should be checked it multidigit number ( ) 2 0 x k N + -will be the square of the integer based on calculation of the root from it (mean value is equal to (1/z(N, bb)*2 -m , where m is the number of prime numbers in coprime modules, which are used in the ratios (4)); d) a relative number of steps of the algorithm, at which it is necessary to perform operations with multidigit numbers. The studies, the results of which are presented in the article, were directed at achieving maximum values for the factor a) (that is, a maximum value of acceleration coefficient z(N, bb)), as well as the smallest possible number of steps, at which the operations with multidigit numbers (factor g) are performed. To obtain maximum values of accelerated coefficient, we found the ratio (10) for z(N, bb) and (11) for the amount of memory required to store the increments to admissible X as the difference between their two consecutive values (in ascending order).Based on them, we stated the problem of determining the optimal primary foundation of bb with consideration of specificity of the factorized number, in which one of the conditions is a restriction for the amount of avail-able computer memory.The use of increments to admissible X instead of their values made it possible to replace most of the operations, which in the algorithm of the Fermat method are performed with multidigit numbers, with the operations with long type numbers.It is possible to solve the challenges, represented in section 2. It should be noted that in the modified algorithms of the Fermat method, presented in [9], the problems concerning factor b) are solved. The amount of available memory of a computer (or distributed computing systems) is the main constraint in practical use of the proposed modified Fermat algorithm, because the following problems are solved based on this information: -determining the optimal primary foundation of bb taking into consideration the specificity of N, which corresponds to maximum value of z(N, bb); -determining a set of foundations of modules that are used to check the ratios (4), where the information about square residues is stored for each of the foundations. The restrictions can also include the fact that at z(N, bb)>1000, its increase by two times is possible, as a rule, when a new integer appears in bb.And since bb is the product of powers of prime numbers, it is impossible in practice to obtain acceleration coefficients Z max >10 7 , let along Z min >10 7 or Z ср >10 7 . 
The presented solutions and the general algorithm of the modified Fermat algorithm were focused at the possibilities of their use both for single-processor computers, and for modern high performance distributed computation systems.In the latter case, it is important to take into consideration the amount of available memory for each of the processors, which was not explored in the study. Methods of factorization of multidigit numbers can be considered as an iteration procedure, by which the satisfaction of a certain condition is checked at the check step k.In the case of the Fermat method, the condition is checked if the difference ( ) 2 0 x k N + -will be the square of an integer.In this case, it was found for the Fermat method, that it is not necessarily to determine the square root of (the conditions of factor c)), and in most cases it is enough to use the algorithms of thinning check values (factors a) and b)).It is possible to set the task of finding the ways to use this idea for other factorization methods as well.However, in the case of each of the factorization methods, it is necessary to iden-tify conditions that should be checked and find the ways of thinning the check k. Conclusions 1.It was established that powers of prime numbersmultipliers of bb form an integral of acceleration coefficient, which does not depend on other prime numbers or powers of such numbers, and total acceleration coefficient Z(N, bb) at a fixed N is the product of such coefficients for powers of prime numbers. 2. Based on numerical experiments, it was found that depending on residues of dividing N by prime numbersmultipliers N, for each of the prime numbers p, sets Nmodp are formed, for which at the fixed bb, acceleration coefficient Z(N, bb) takes one and the same value.For prime numbers p from 3 to 31, it was shown based of numerical experiments that there are only 2 groups of values Nmodp.For the first of them, containing the values Nmodр=1, inequality Z(N, p 2 )>Z(N, p) is true, and for the second one, equality Z(N, p k )=Z(N, p), where k>1 is true.Three groups are formed for powers of prime p=2: 1) Nmod8=1; 2) Nmod8=5; 3) Nmod8=3 and Nmod8=7.Then, Z(N, 2 5+k )>Z(N, 2 5 ) for variant 1, Z(N, 2 5+k )=Z(N, 2 5 ) for variant 2, Z(N, 2 3+k )=Z(N, 2 3 ), where k>0 for variant 3.This made it possible to significantly reduce the number of possible variants of values of bb while solving the problem of determining its optimal value. 3. When using large values of basic foundation of module, it proved to be appropriate to represent the roots of the equation (5) through the difference between two consecutive values (in ascending order).This made it possible in most cases to perform operations with long-type numbers instead of multidigit numbers. 4. Based on the obtained results, we described the modified algorithm of the Fermat method, which in comparison with the basic algorithm ensures the reduction of computational complexity on average no less than by 10 7 times for the numbers of 2 1024 order when using the optimal values of Z(N, bb), provided that it is possible to record up to 10 7 of the long-type numbers in the computer memory. Fig. 1 . Fig. 1.Values of acceleration coefficients Z(bb) at limitations for amount Q (max) of available memory for the current check Х the difference 2 X N -cannot be a complete square and the transition to a new check Х, admissible for bb, is performed.At M K [r k, i ]=1, a similar magnitude for k+1-th module from set 1 { } m k k MB b = = . 
Table 5: Z(3^t, Nmod3) for Nmod3 coprime with 3 at bb=3^t and t=1-8. c) Multiplier bb p=5. If multiplier 5 is included in bb, it was found that the values of the acceleration coefficients for bb=5^t at t≥1 coincide for all Nmod5=2 and Nmod5=3, which was proved by numerical experiments with bb=5^t at t=1-6. But at t>1 and Nmod5=1 or Nmod5=4, the value of the acceleration coefficient increases; at every t this value is the same for Nmod5=1 and Nmod5=4. For such values of bb, Table 6 shows the values of the acceleration coefficients depending on Nmod5. Table 7: Acceleration coefficients for Nmod7 coprime with 7. For prime p-multipliers of bb equal to 11, 13, 17, 19, 23, it was found that the character of the change in the values of the acceleration coefficients for bb=p^t at t≥1 is determined by the value of Nmodp, which was proved by numerical experiments with bb=p^t at t=1-4. For such values of bb, Table 8 shows the values of the acceleration coefficients depending on Nmodp at t=1, 2. Table 10: Refined primary foundations bb' formed for numbers N with respect to data from Tables 4-6. Table 11: Values of function s(p, t) for prime p≥2 and p≤31, equal to 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, where the set of options for the exponents of the prime p-multipliers of bb is selected based on the data from Tables 5-8 and 11 on condition that s(p, t)>0.03 (t is the exponent for p).
9,065
2018-12-13T00:00:00.000
[ "Mathematics" ]
Preserving the Shape of Functions by Applying Multidimensional Schoenberg-Type Operators : The paper presents a multidimensional generalization of the Schoenberg operators of higher order. The new operators are powerful tools that can be used for approximation processes in many fields of applied sciences. The construction of these operators uses a symmetry regarding the domain of definition. The degree of approximation by sequences of such operators is given in terms of the first and the second order moduli of continuity. Extending certain results obtained by Marsden in the one-dimensional case, the property of preservation of monotonicity and convexity is proved. Introduction The theory of splines approximation was founded by Schoenberg and became one of the main chapters of approximation theory. Now there is a vast literature dedicated to spline approximation. We refer the reader to the monograph of Schumaker [1] for historical notes. The success of this type of approximation is due both to the nice mathematical theory and to the great efficiency in practical applications. In practice, the spline approximation is more efficient then the polynomial approximation. In [2], Schoenberg considered also a particular method of approximation of functions by splines, with the aid of certain positive linear operators, which are named the Schoenberg operators. Important contributions in the study of these operators are due to Marsden [3]. The subject of multidimensional spline is developed in many papers. We can specify here the paper [13] where the approximation of functions using multivariate splines is presented and the monograph [14], which is dedicated to the theory of multivariate splines. We mention also the paper [15] where the multivariate polynomial interpolation is approached, the paper [16] where a computationally effective way to construct stable bases on general non-degenerate lattices is presented, Reference [17] where the subject of Hermite-vector splines and multi-wavelets is developed and the paper [18] in which a generalization of bases, namely B-spline frames, is approached. Estimates of approximation by linear operators in the multidimensional case are established in [19]. As exemplification of the application of the Schoenberg operators in practice we mention the recent paper [20] where one-dimensional Schoenberg spline operators were used, obtaining a substantial improvement of the clear sky models which estimate the direct solar irradiance. The present paper is a continuation of paper [12], where two-dimensional Schoenberg operators were considered. Now we extend this definition in multidimensional case and we establish certain properties of them. Several important connections with symmetry exist in this study. Because these operators present a symmetry in their construction, the computation of their moments is made by symmetry. The symmetry is also used in establishing the estimates with second order moduli, which are defined with the aid of finite symmetric differences. On the other hand, we study the property of preservation of convexity and this property can be described using the Hessian of functions, which is a symmetric quadratic form. Multidimensional Schoenberg-Type Operators on Arbitrary Nodes We consider the integers j, m, 1 ≤ j ≤ m; n j > 0; k j > 0; the vector (x 1 , . . . 
, x m ) ∈ [0, 1] m and the knots sequences ∆ n j ,k j The Greville abscissas associated with division ∆ n j ,k j , 1 ≤ j ≤ m have the next form When The next relations take place and Definition 1. Multidimensional Schoenberg-type operator associated with ∆ has the form where f : [0, 1] m → R, and x = (x 1 , . . . , x m ) ∈ [0, 1] m . Remark 1. (i) Symmetrizing the knots ν i j on each components by function σ(x) = 1 − x, x ∈ [0, 1] one obtains also a Schoenberg-type operators of the same degree. If the knots are equidistant, one obtains the same Schoenberg-type operators. (iv) S ∆ is a polynomial of degree at most k j in each variable x j , (vi) Multidimensional Schoenberg-type operators admit partial continuous derivatives on [0, 1] m , since We consider the next functions: We use the next notations: e 1 (t) = t, t ∈ [0, 1]; e 0 for the constant function equal to 1, on [0, 1] m , 1 ≤ j ≤ m and ∆ j , 1 ≤ j ≤ m denotes the knot sequence use to one-dimensional Schoenberg operators. (9) to converge uniformly on [0, 1] m to continuous function f , it is sufficient that for any η > 0 Proof. We consider (10) is fulfilled. From f continue function on [0, 1] m , we have ∀ε > 0, ∃η ε > 0 such that for any Let n ε j ∈ N, such that: for n ≥ n ε j . For such n we obtain The norm of the division ∆ is We use the first order modulus of continuity: where Theorem 2. For any f ∈ C([0, 1] m ), operators S ∆ given in (7) satisfy inequality where ϑ = 1 2 max 1≤j≤m {k j + 1}. Proof. Let the continuous function f and (x 1 , . . . , Therefore, It results converge uniformly on [0, 1] m to f , for any continuous function f if ∆ → 0. Preservation of Monotonicity and Convexity by Multidimensional Schoenberg-Type Operators with Equidistant Knots In this section, we will extend some results obtained by Marsden in the case of onedimensional Schoenberg operators. Let k ∈ N. We denote by (S ∆ ϕ)(x) the one-dimensional Schoenberg operators of degree k associated with the knot sequence Using these notations, the following relations are given in [3]: In the following theorems it is considered that n and k are variable. The convergence is uniform on compact subsets of (0, 1). We are interested in generalizing these above results in the case of multidimensional Schoenberg-type operators. Let We consider now multidimensional Schoenberg-type operators with equidistant knots on D of the form where f : On D consider the following partial order. If a = (a 1 , . . . , a m ) ∈ D, b = (b 1 , . . . , b m ) ∈ D, we write a ≤ b, iff a i ≤ b i , for 1 ≤ i ≤ m. A function f : D → R is said to be increasing if for any a, b ∈ D, such that a ≤ b, we have f (a) ≤ f (b). Theorem 6. For any integers m ≥ 1, and k ≥ 1, if f : D → R is increasing then S m n,k f is increasing on D, for any n ≥ 1. Proof. Show that In order to show (17) it suffices to show that function g(t) = (S m n,k f )(a + tv), t ∈ [0, 1], is increasing, and for this it suffices to have d Because v i ≥ 0, 1 ≤ i ≤ m it suffices to show: Using formula (14) we obtain (17) is true. In the next two theorems we give generalizations of Theorem 4. We mention that (18), we obtain (denoting i = i j ): Using formula (15) it follows Consider the moduli of continuity of functions D 2 h and In addition, since f and has continuous partial second derivatives on D, it results that is uniformly continuous and consequently uniformly with regard to indices i 1 , . . . , i j−1 , i j+1 , . . . i m and x ∈ [a, b] m . 
On the other hand, consider the sequence of one-dimensional Schoenberg operators L n : This sequence of positive linear operators approximates uniformly on [a, b] any function ϕ ∈ C[0, 1]. Moreover, L n e 0 = e 0 , Le 1 = e 1 . Using the well known estimate of Shisha and Mond, we obtain Since lim n→∞ L n e 2 − e 2 = 0 we get uniformly with regard to x j ∈ [a, b] and indices i 1 , . . . , i j−1 , i j+1 , . . . i m . From (22) and (23) we deduce uniformly with regard to x ∈ [a, b] m and indices i 1 , . . . , i j−1 , i j+1 , . . . i m . Taking into account relations (20) and (21) we obtain the uniform limit with regard to x ∈ [a, b] m : Since one obtains the uniform majorization with regard to Finally, it results that (19) is true. Theorem 9. If f : D → R is strictly convex and has continuous partial second derivatives on D, then for any compact convex set K ⊂ • D there exists an indice n 0 , depending on f and K, such that S m n,k ( f ) is convex for each n ≥ n 0 on K. Proof. From the hypothesis we obtain the following symmetric positive definite quadratic form: Denote B = {x ∈ R m , x = 1}. Because D is compact and B is compact we obtain that D × B is compact. Because the function F : D × B → R is continuous and strictly positive on the domain of definition, from the Weierstrass theorem one obtains that there exists µ > 0 such that Using Theorems 7 and 8 we obtain for any indices 1 ≤ i, j ≤ m. Then it results uniformly for x ∈ K and for v ∈ B. Therefore, there exists n 0 ∈ N, such that Inequality (27) says that S m n,k ( f ) is convex on K. Multidimensional Schoenberg-Type Operators of Degree Three on Equidistant Knots Let one consider the case with k j = 3; n j = n; the equidistant knots The Greville abscissas are The B-splines are with 1 ≤ j ≤ m. Multidimensional Schoenberg-type operators with equidistant knots, denoted in the sequel by S m n,3 , for k j = 3, n j = n, 1 ≤ j ≤ m, are: In this section we present certain special results for the cubic splines, which can be proved analogously as in [11]. for x j ∈ 2 n , n−2 n and 1 ≤ j ≤ m. Using Lemma and the inequality given in [21]: where S n,k denotes the Schoenberg one-dimensional operator of order k with equidistant knots, one obtains: From Lemma 1 and Lemma 2 and the fact that Schoenberg preserves linear functions, one can deduce the following Voronovskaja-type result, in a similar mode as in [11]. Theorem 10. The following limit is true: for any f ∈ C 2 ([0, 1] m ), (x 1 , . . . , x m ) ∈ (0, 1) m . Because Schoenberg preserves linear functions there exists the possibility of expressing the degree of approximation in a more refined mode, using second order moduli of continuity. The following estimates can be obtained similarly to [11] by applying certain general estimates with moduli of continuity proved in [19]. Firstly, consider the usual second order modulus where f ∈ C(D), h > 0 One obtains: where f ∈ C([0, 1] m ), h > 0, (x 1 , . . . , x m ) ∈ [0, 1] m , n ∈ N, n ≥ 5. Consequently: A global second modulus of continuity can be defined by: Using this modulus one can obtain an estimate which is independent on the dimension m: Conclusions The Schoenberg operators are practical tools to approximate functions, knowing the values of them in a finite number of points. Schoenberg operators attach to a function a particular type of spline of a freely chosen degree. It is not necessary to use high-degree splines in order to obtain a desired approximation order. It is usually sufficient to use 3rd order splines. 
This makes the calculation volume substantially lower than in the case of polynomial approximation. In approximation of functions, the degree of approximation is not the unique objective. The preservation of certain shape properties of functions is also worth studying. Among these supplementary preservation proprieties, two special types are usually studied: the possibility of simultaneous approximation of functions and of their derivatives of different orders and the preservation of convexity of different orders, including the monotonicity and the usual convexity. These types of properties are known to be true for the one-dimensional case of Schoenberg operators. We put in evidence that they are true in great measure for the multidimensional case. It is well known also that maybe the more important polynomial approximation operators, namely the Bernstein operators, have very good properties for preserving different behaviors of functions. In fact, it is natural that the Schoenberg operators, which can be regarded as generalizations of Bernstein operators, maintain at least in part these good properties. On the other hand, by taking into account that Schoenberg operators offer a great improvement of the order of approximation for the same order of computation, they turn out to be a very powerful tool in the theory of function approximation. In this direction, other properties of preserving certain classes of functions or the simultaneous approximation can be taken into account for further studies. The results obtained in this paper are connected to the notion of symmetry in several aspects, namely, in the construction of operators, in using the symmetric tools in estimates and in property of convexity, which is given using symmetrical expressions. We believe that this paper can offer a useful tool to specialists with concerns in many areas of practical approximation.
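As a complement to the constructions above, the following is a minimal sketch of the one-dimensional cubic (k = 3) Schoenberg operator on equidistant knots, the building block that the tensor-product operators S^m_{n,3} apply in each variable. It uses SciPy's BSpline and the Greville abscissas of a clamped knot vector on [0, 1]; this is the standard construction and the code is illustrative, not taken from the paper.

```python
# Sketch of the cubic Schoenberg operator S_{n,3} f (x) = sum_i f(xi_i) N_{i,3}(x),
# where xi_i are the Greville abscissas of a clamped equidistant knot vector.
import numpy as np
from scipy.interpolate import BSpline

def schoenberg_cubic(f, n):
    k = 3
    # Clamped (open) knot vector on [0, 1] with n interior intervals.
    t = np.concatenate(([0.0] * k, np.linspace(0.0, 1.0, n + 1), [1.0] * k))
    # Greville abscissas: averages of k consecutive knots.
    greville = np.array([t[i + 1:i + k + 1].mean() for i in range(len(t) - k - 1)])
    coeffs = f(greville)                  # sample f at the Greville points
    return BSpline(t, coeffs, k)

f = lambda x: np.exp(x) * np.sin(3 * x)   # a smooth test function on [0, 1]
for n in (8, 32, 128):
    s = schoenberg_cubic(f, n)
    xs = np.linspace(0, 1, 1001)
    print(n, np.max(np.abs(s(xs) - f(xs))))   # sup-norm error decreases as n grows
```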
3,186.4
2021-06-05T00:00:00.000
[ "Mathematics" ]
Microarray Analysis of the Ler Regulon in Enteropathogenic and Enterohaemorrhagic Escherichia coli Strains The type III protein secretion system is an important pathogenicity factor of enteropathogenic and enterohaemorrhagic Escherichia coli pathotypes. The genes encoding this apparatus are located on a pathogenicity island (the locus of enterocyte effacement) and are transcriptionally activated by the master regulator Ler. In each pathotype Ler is also known to regulate genes located elsewhere on the chromosome, but the full extent of the Ler regulon is unclear, especially for enteropathogenic E. coli. The Ler regulon was defined for two strains of E. coli: E2348/69 (enteropathogenic) and EDL933 (enterohaemorrhagic) in mid and late log phases of growth by DNA microarray analysis of the transcriptomes of wild-type and ler mutant versions of each strain. In both strains the Ler regulon is focused on the locus of enterocyte effacement – all major transcriptional units of which are activated by Ler, with the sole exception of the LEE1 operon during mid-log phase growth in E2348/69. However, the Ler regulon does extend more widely and also includes unlinked pathogenicity genes: in E2348/69 more than 50 genes outside of this locus were regulated, including a number of known or potential pathogenicity determinants; in EDL933 only 4 extra-LEE genes, again including known pathogenicity factors, were activated. In E2348/69, where the Ler regulon is clearly growth phase dependent, a number of genes including the plasmid-encoded regulator operon perABC, were found to be negatively regulated by Ler. Negative regulation by Ler of PerC, itself a positive regulator of the ler promoter, suggests a negative feedback loop involving these proteins. Introduction Enteropathogenic (EPEC) and enterohaemorrhagic (EHEC) Escherichia coli are two pathotypes of this important gastrointestinal bacterium that can cause serious diarrhoeal disease in humans [1]. Many EHEC and EPEC strains possess a type III secretion system (T3SS) encoded by a pathogenicity island called the locus of enterocyte effacement (LEE) that is also found in the related bacterium Citrobacter rodentium, a mouse pathogen that is widely used as a model for the EHEC and EPEC strains [2]. Pathogenicity factors encoded within the LEE, specifically the type III secretion system and secreted effector proteins, are responsible for formation of the attaching and effacing (AE) lesion on the gut epithelium that is characteristic of these strains and required for intimate attachment of the bacteria [3]. The 41 genes of the LEE are arranged in 5 major polycistronic operons called LEE1-5 along with a number of smaller transcriptional units [4]. Attaching and effacing pathogens, including EPEC strains such as E2348/69, O157:H7 EHEC strains and non-0157 EHEC strains, have distinct evolutionary histories but carry an overlapping core repertoire of pathogenicity genes, including the LEE and many effector genes outside the LEE, that have been acquired via horizontal gene transfer [5,6,7]. However, there are significant differences in overall pathogenicity between EHEC and EPEC strains, for example EHEC strains cause a more severe bloody diarrheal disease (haemorrhagic colitis) that is often accompanied by the life threatening complication, haemolytic uraemic syndrome (HUS) [8]. Such differences are presumably mainly determined by the differing contributions of the extra-LEE factors. 
Examples include differing arrays of T3SS effector proteins and the fact that the EHEC genome encodes a Shiga-like toxin responsible for serious pathology in the human host, while EPEC does not [8]. In addition to variation in the genomic arsenal of determinants, appropriate control of gene transcription may be critical in optimising pathogenicity [9,10]. Type III secretion systems (T3SS) are generally acquired through horizontal gene transfer and therefore should employ a means of regulation that is easily integrated into the existing regulatory networks of the cell [11]. One way to achieve this integration is to have T3SS gene expression under the control of a master regulator, which multiple environmental signaling pathways can feed into. The master regulator for the LEE is the Ler protein, encoded by the first gene in the LEE1 operon [12]. Ler is a transcriptional activator of the LEE: a homologue and also an antagonist of the genome organizer and silencer H-NS [13]. In addition to its H-NS-dependent role in activating most promoters of the LEE, Ler can activate the LEE5 promoter in an H-NS-independent manner (reviewed in [14]). Ler has also previously been shown to act as a specific autorepressor of the LEE1 promoter [15] while the LEE encoded regulator GrlA and the plasmid encoded regulator PerC (EPEC), or its EHEC homologues PchABC, have been shown to specifically activate ler transcription [13,16,17,18,19]. LEE gene expression is responsive to population status, via the AI-3 quorum sensing system activating the LysR type regulator QseA, which in turn activates LEE1 (ler) transcription [20,21,22]. Expression of the LEE is also known to be responsive to many environmental factors (reviewed in [23,24]). One example is temperature: transcription of the LEE is up-regulated at 37°C and repressed (by H-NS) at 27°C [25]. Expression of the LEE is also dependent on the physiological state of the cell, for example growth phase. In glucose MOPS minimal medium, gene expression as assessed by microarray transcriptomics is maximal in late exponential phase and down-regulated during the transition to stationary phase [26]. Under some other growth conditions (LB broth) expression from LEE promoters, measured via transcription of a lacZ reporter gene, seemed to increase during the transition to stationary phase [20,27]. Extra-LEE genes that are known to be members of the Ler regulon in EPEC include espC, encoding an autotransporter (Type V) extracellular serine protease that is thought to play various roles in pathogenicity [28,29,30,31]. The espC gene has previously been shown to be strongly activated by Ler; however, in contrast, the EHEC homologue of this gene, espP, was not found to be Ler-regulated [12]. Extra-LEE members of the Ler regulon in EHEC include stcE, a pO157-borne gene encoding a metalloprotease that is involved in intimate adherence of the bacterium to the gut epithelium [32] and nleA, encoding a T3SS-secreted effector protein [33]. However some of the many extra-LEE T3SS effectors of EHEC were previously thought not to be regulated by Ler, e.g. EspJ and TccP [6,34]. In addition, expression of long polar fimbriae of EHEC has been found to be reciprocally regulated by H-NS repression and Ler antagonism [35]. Here we will characterise and compare the Ler regulons for EPEC strain E2348/69 and EHEC strain EDL933. 
The regulon for Ler has previously been loosely defined at the transcriptional level for the closely related Sakai strain of EHEC [36], where Ler regulation was mostly found to be confined to horizontally transferred DNA. The LEE is inserted at the same selC locus in both EDL933 and E2348/69 strains, the most parsimonious interpretation being a single insertion event in a common ancestral strain [37]. Any differences in the Ler regulon between these two strains, within or outside the LEE, will reflect divergent adaptation to subsequent changes in the genome, for example plasmid acquisition, and are of interest from a regulon evolution point of view. Results We constructed two validated mutant strains of E. coli: LBEC1 (EDL933 Δler) and LBEC2 (E2348/69 Δler), grew cultures of the parental WT and mutant strains under conditions known to be inducing for the LEE to two different growth phases (mid and late log phase), harvested RNA and used this to perform microarray analysis of the transcriptomes. Microarray data have been deposited with the GEO database (http://www.ncbi.nlm.nih.gov/geo) with accession code GSE38876. Enteropathogenic E. coli In mid-log phase cells a total of 85 genes are transcriptionally regulated: 62 genes, at 14 different loci, are activated by Ler (table 1) while 23 genes, at 6 loci, are repressed by Ler (table 2). Of the activated genes 49 (79%) are carried on or directly adjacent to mobile genetic elements (MGEs: prophage, integrative element or plasmid), while 11 of the repressed genes (48%) are carried on MGEs. If one compares the genes that are activated and repressed, two repressed genes (E2348C_0084 and E2348C_2114) are potentially expressed from promoters that are immediately divergent from an activated promoter. In late log phase cells 97 genes in total are regulated by Ler. Of these, 85 genes at 23 genetic locations are activated, of which 62 genes (73%) are carried on or directly adjacent to MGEs (table 3). Twelve genes are repressed by Ler, of which only 1 is adjacent to an MGE (table 4). The strongest activation was generally observed for LEE genes, with the most highly activated genes being eae at mid-log phase (58-fold) and orf29 in late log phase (100-fold). Extra-LEE genes with comparable levels of activation included espC (mid-log only), pagP and the gene encoding the T3SS secreted effector NleA. The maximum fold repression observed outside of the LEE was approximately 9-fold in mid-log phase (fimD) and approximately 6-fold in late log phase cells (chuT-hmuV heme utilization operon). Enterohaemorrhagic E. coli In mid-log phase cells, only one gene passed the Benjamini and Hochberg MTC filter as being repressed (2-fold) by Ler. This was Z2974 on prophage CP-933T, encoding an unknown protein. In late-log phase cells, 39 genes were found to be transcriptionally activated by Ler (2-fold or more; table 5). Thirty-five of these genes are within the LEE (representing all major transcriptional units; activation between 4 and 32-fold). The remaining 4 extra-LEE activated genes encode: StcE (4-fold), EtpC (3-fold), SfpA (5-fold) and the putative cytochrome YhaI (36-fold). The stcE and etpC genes are located on plasmid pO157; SfpA is prophage-encoded and yhaI is not associated with a mobile genetic element. Discussion It is clear that in both the EPEC and EHEC strains of E. 
coli examined here, the LEE is the primary target for Ler activation: all major transcriptional units of the LEE are regulated by Ler, although the regulation of LEE1 is growth phase dependent in EPEC, as noted below. Otherwise, in EPEC the Ler regulon is quite small, covering about 2% of the genome; in EHEC the regulon is even smaller and contains very few genes outside of the LEE. As the positive regulatory activity of Ler is known to be due to antagonism of H-NS repression (where studied), we would predict that all activated members of the Ler regulon are repressed by H-NS. However, the H-NS regulon is very large and clearly not all H-NS repressed genes are activated by Ler [38]. An important question that therefore remains to be answered is: what provides specificity to Ler regulation? The specificity of action that we have observed (i.e. most of the strongly regulated genes are located within the LEE) is in agreement with the observations of Abe et al. relating to EHEC [36]. This specificity is consistent with Ler binding to a specific DNA structural motif, via an indirect readout. Many but not all of the extra-LEE members of the EPEC Ler regulon are located on mobile genetic elements (MGEs) and it is particularly striking that Ler negatively regulates a disproportionately high number of plasmid-borne genes, at least in mid-log phase EPEC: 9 genes from 3 different operons (10% of the total of 90 genes) on plasmid pMAR2 are shown to be regulated, while only 0.3% of the chromosomal genes (14 genes) are repressed. However by late log phase, no plasmid-borne genes are repressed by Ler. Similarly, it is striking that 4 of the 5 extra-LEE genes found to be Ler-regulated in EHEC (likely members of the same operon) are located on an MGE (plasmid or prophage). Across the genome, the Ler regulon is notably growth phase dependent in EPEC: in mid-log phase (OD 600 = 0.4) 27 extra-LEE genes are activated by Ler, while in late log phase (OD 600 = 0.9) the number of activated extra-LEE genes is 43. In EPEC the regulation of the LEE1 operon, but not the other operons of the LEE, differs between mid-log and late-log growth phases: at late log phase, all 41 genes within the LEE are strongly activated by Ler, along with the flanking predicted sugar transporter gene yicJ, while at mid-log phase the 7 genes in the LEE1 operon before escU are not strongly (>2-fold) regulated (Figure 1; note that we do not comment on the regulation of the ler gene itself as the coding sequence is partly deleted in the mutant). While it is possible that we have introduced some artefactual corruption of LEE1 regulation during mutation of ler, the observed activation in late-log phase cells suggests that there is no gross defect in the Ler regulatory circuit. This result indicates that the regulation of the LEE1 promoter is somewhat different to that of other LEE promoters, possibly due to a complex balance between Ler autoregulation and activation. It is noteworthy that, while previous reporter gene analysis of the LEE1 promoter has indicated that it is autorepressed by Ler, our results indicate that it may be activated, a difference that may reflect the growth phase dependence of the effects observed here [15]. 
No corresponding differential regulation of the LEE1 operon was observed in late log phase EHEC; in the mid-log phase cultures none of the LEE genes passed the MTC filter, but if the filter is not applied then LEE1 seems to be similarly regulated in mid-log and late-log phase cultures. While Sperandio et al. found that the LEE4 operon (sepL-espF) was constitutively expressed at a high level in EHEC and insensitive to Ler regulation [20], we have found it to be clearly Ler-dependent in both EHEC and EPEC strains. The observed difference could have resulted from selection of a promoter fragment for reporter gene assays that lacks the full complement of H-NS binding sites. There are a number of Ler-activated genes in the EPEC regulon that are outside of the LEE but may be involved in pathogenicity. As noted above, espC is already known to be Ler-regulated and is one of the most highly (22-fold) activated genes in mid-log phase cells. PagP, the palmitoyl transferase for lipid A, is strongly regulated at both mid and late log phases (9-fold and 15-fold respectively). Palmitoylated lipid A is thought to protect bacteria from host immune defences (e.g. CAMPs) and attenuates their activation through the TLR4 signal transduction pathway [41]. E2348C_0684, strongly regulated along with its downstream neighbour, encodes a SfpA (systemic factor protein A)-like protein: SfpA is a porin involved in systemic disease in Yersinia enterocolitica [42]. A homologue of sfpA (ECs0814) in the Sakai strain of EHEC was previously observed to be Ler-regulated [36]. The rcsA gene, which encodes a positive regulator of the serotype-specific group I K (capsular) antigen, is activated by Ler in late log phase, although not at the earlier growth point [43]. This may reflect an impact of capsule production on the intimate attachment of EPEC bacteria to the gut epithelium; however, no regulation of the wza promoter (target for RcsA in E. coli K-12) was apparent. It is worth noting at this point that a ler mutant of EPEC was previously found to be defective for colonisation of Caenorhabditis elegans [44]. This requirement for Ler was found to be independent of the T3SS encoded by the LEE. This effect is presumably due to one or more of these extra-LEE members of the Ler regulon which are essential pathogenicity factors in a C. elegans infection but are not involved in T3S (and are not effectors delivered by the T3SS). Several non-LEE encoded effector genes, whose products are secreted via the T3SS, are Ler-regulated in EPEC, including the operon of five genes from nleI/G to nleF (Ler regulation of a homologue of nleA is already known to occur in EHEC [33]) and a homolog of the espG gene, located next to the espC gene, which is also Ler-regulated (see above). There is also clear evidence for the transcriptional regulation of nleH and espJ homologues at late log phase. While it may be unsurprising that effectors secreted via the T3SS are co-regulated with the LEE, previous studies in EHEC and C. rodentium have not found these two genes to be regulated by Ler [34,45]. Only 4 extra-LEE genes were identified as part of the EHEC Ler regulon: sfpA, as discussed above; stcE, encoding a protease that is known to be involved in intimate adherence and inhibition of complement-mediated lysis [32,46]; etpC, located immediately downstream of stcE and encoding a component of the pO157-encoded type II secretion system for StcE, which is also known to be involved in adherence and intestinal colonization [47]; and the putative cytochrome gene yhaI. 
Assuming that etpC is in the same operon as stcE, only the last of these is a novel observation. We have also identified a number of EPEC genes that are repressed by Ler, including the "plasmid-encoded regulator" operon perABC, located on the EPEC adherence factor (EAF) plasmid pMAR2 [48]. PerA protein activates transcription of the bfp operon, encoding bundle-forming pili [49]. These pili are involved in formation of an initial attachment between EPEC cells and the gut epithelium that occurs prior to AE lesion formation, therefore down-regulation of bfp expression with LEE expression is consistent with the known program of infection [50]. PerC protein is known to activate ler [17,18,51] and therefore this result suggests the existence of a negative feedback loop, previously undescribed, that ultimately autoregulates expression of Ler (and therefore the LEE) and may be involved in a down-regulation of ler transcription after the initial stages of infection [52]. Table 5. EHEC genes activated 2-fold or more by Ler at late-log phase (OD 600 = 1.1). Regulation of the per operon by Ler, the gene for which is known to be regulated by quorum sensing (QS), would account for the previously observed "indirect" QS regulation of perA [20]. The repressive effect of Ler on perA presumably also explains the up-regulation of the bundle-forming pili (bfp) operon in the ler knockout mutant. Neither of these phenomena (which were only observed in mid-log phase cells) has so far been reported in the literature, although Elliot et al. reported Ler regulation of non-BFP fimbriae, while Leverton and Kaper described an inverse relationship between expression of ler and bfpA in the presence of HEp-2 cells [12,52]. Ler repression of acid resistance genes, previously noted by Abe et al. in the Sakai strain [36], may reflect an accessory mechanism to assist in tight regulation of these genes, preventing inappropriate expression in the lower regions of the GI tract where acid resistance is not required. Overall, the data reported here suggest that the Ler regulon for enteropathogenic and enterohaemorrhagic strains of E. coli is mainly focused on the type III secretion system genes in the LEE, but also includes unlinked pathogenicity genes. The regulon is growth phase dependent and, at least in strain E2348/69, is composed of both positively and negatively regulated genes. Additionally, in enteropathogenic E. coli, the observed negative regulation by Ler of PerC, itself a positive regulator of the ler promoter, suggests the existence of a negative feedback loop involving these two proteins. Bacterial strains and plasmids Bacterial strains used or constructed during this study are detailed in Construction and validation of E. coli mutant strains and ler expression plasmid The ler expression plasmid pSI04 was derived from pJW15D1-100 [53] by cloning the EHEC ler CDS as an NsiI-HindIII fragment under the control of the melR promoter and SD site [54]. Non-polar ler knockout mutants of E. coli strains EDL933 and E2348/69 were constructed using a λ Red-based method (GeneDoctoring) [55] to replace the majority of the ler gene with a kanamycin resistance cassette. 
Recombination cassettes designed to replace the central portion of ler with a kanamycin resistance gene (aphA) were amplified from a pDOC-K template using a conserved forward primer (LER-KO-F: taatagcttaaaatattaaag-cATGCGGAGATTATTTATTATGAATATGG-TGGCTGGA-GCTGCTTCGAA) in combination with strain-specific reverse primers (LER-KO-EHEC-R: catttaattatttcatgTTAAATATTTTT-CAGCGGTATTATTTCTTCT-CTCGAGATATGAATATCC-TCCTTAG and LER-KO-EPEC-R: catttaattattttatgTTAAA-TATTTTTCAGCGGTATTATTTCTTCT-CTCGAGATATG-AATATCCTCCTTAG) and a proofreading DNA polymerase (Velocity, Bioline). The blunt-ended cassettes were ligated into EcoRV-digested donor plasmid pDOC-C to generate donor plasmids carrying EHEC- and EPEC-specific ler− aphA+ knockout cassettes. These plasmids were used together with pACBSCE to replace the Ler coding sequence with the kanamycin resistance gene cassette. The antibiotic resistance cassette was subsequently removed via flanking Flp recombination target (FRT) sites using the temperature-sensitive FLP expression plasmid pCP20 [56]. The Δler locus in the resulting unmarked mutants encoded the first and last 9 aa of Ler (first 3 aa of the shorter Ler protein as described by Mellies et al. [57]) sandwiching a central "scar region" derived from the FLP recombinase sites encoding 29 (non-Ler) amino acids. Loss of all three plasmids (pDOC-derived donor plasmids and pACBSCE, pCP20) involved in mutagenesis was confirmed by antibiotic resistance profiling. The DNA sequence surrounding the recombination site was checked by sequencing across the knockout locus from primers designed to bind flanking sites. Recombinant strains were designated LBEC1 (EDL933 Δler) and LBEC2 (E2348/69 Δler). The absence of gross unwanted deletions in the mutant strains was confirmed by comparative genomic hybridization (CGH) of labeled genomic DNA extracted from wild-type and mutant (Δler) strains. No missing loci, other than the desired deletion of ler, were apparent. Growth curves were assessed for LBEC1 and LBEC2 strains in comparison to parental wild-types and no gross defects in growth were observed (a small growth advantage consistent with predicted increased fitness due to reduced expression of T3SS was sometimes observed for the mutant strains on growth in inducing Dulbecco's Modified Eagle Medium (DMEM) medium, but this was neither statistically significant nor reproducible). The ler mutation in strains LBEC1 and LBEC2 was successfully complemented using the ler expression plasmid pSI04 resulting in the restoration of a functional T3SS, as confirmed by the fluorescent actin staining (FAS) test (i.e. via microscopic assessment of AE lesion formation; table 8) [58]. Subconfluent HeLa cell monolayers on glass coverslips were infected for 4 hours at 37°C with a 1:100 dilution of an overnight LB broth culture of E. coli diluted in DMEM buffered with 25 mM HEPES. Following fixation in 4% formalin for 20 minutes and permeabilization in 0.1% Triton in PBS for 4 minutes, cells were stained with 12 mg/ml FITC-conjugated phalloidin (Sigma) for 20 minutes at room temperature [54]. Bacterial cells were simultaneously stained with 10 mg/ml propidium iodide (Invitrogen). RNA Purification Quadruplicate overnight cultures of WT and Δler strains, grown in LB broth (Miller formulation), were diluted 1/100 into DMEM buffered with 25 mM HEPES and incubated at 37°C, with aeration by shaking at 200 rpm (i.e. inducing conditions for expression of the LEE). 
Samples were harvested at mid and late log phases of growth (OD 600 of 0.4 and 0.9 for EPEC; 0.5 and 1.1 for EHEC). Messenger RNA was stabilized immediately by pipetting the samples directly into RNAprotect Bacteria reagent (Qiagen) before purification of total RNA using the RNeasy Mini Kit with on-column DNase digestion (Qiagen). Microarray labelling and hybridization The concentration of RNA was determined using a spectrophotometer (ND-1000; NanoDrop). Five hundred nanograms of total RNA was used for labelling, and aRNA was synthesized with the Ambion MessageAmp™ II-Bacteria RNA Amplification Kit according to the recommendations of the manufacturer and labeled with the Cy3 or Cy5 monoreactive dye pack (GE Healthcare). Labeled aRNA was purified with the Qiagen RNeasy MinElute clean-up kit according to the manufacturer's instructions and quantified using a spectrophotometer (ND-1000; NanoDrop). The 8×15,000 (15K) DNA high-density microarrays of E2348/69 and EDL933 were designed by Oxford Gene Technology (Oxford OX5 1PF, United Kingdom) and validated by the University of Birmingham E. coli Centre (UBEC) (United Kingdom). During validation, three 60-mer probes per predicted gene were designed for all the open reading frames (ORFs) in the chromosome and plasmids of each one of the two E. coli strains used in this study. For each of the designed probes, a mismatch probe (containing 3 mismatches per 60-mer probe at positions 10, 25, and 40) was also generated. These mismatch probes and the perfect-match probes designed against each strain were placed on an array (4×44k) in triplicate. This array was hybridized with genomic DNA and a pool of mRNA representing conditions in which as many genes as practicable would be induced (derived from an equimolar pool of total RNA from E. coli grown in morpholinepropanesulfonic acid (MOPS) minimal medium at 30°C to mid-log phase, at 37°C to mid-log phase, and at 37°C to stationary phase). The results were processed to select the best-performing probe for each gene. This derived and optimized probe set was printed in a random pattern in triplicate by Agilent Technologies on an 8×15K array for each strain and used in this study. For each of the four biological replicates equal quantities (300 ng) of Cy5- and Cy3-labeled aRNA were added to hybridization solution, and hybridization was performed using the Gene Expression hybridization kit (Agilent Technologies). Analysis of Microarray Data The microarray images were analyzed using GenePix software v6 (Axon Instruments). The data were imported into GeneSpring, version 7 (Agilent). A Lowess curve (locally weighted linear regression curve) was fitted to the plot of log intensity versus log ratio, and 40% of the data were used to calculate the Lowess fit at each point. The curve was used to adjust the control value for each measurement. If the control channel signal was below a threshold value of 10, then 10 was used instead. For each strain data set, a list of genes was prepared showing at least 2-fold differential expression levels between the ler mutant and wild-type samples for each one of the two growth conditions by using Student's t-test and applying the Benjamini and Hochberg false discovery rate (multiple testing correction, MTC) test with a p-value cut-off of 0.05.
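The normalization and filtering procedure described above can be sketched in a few lines of code. This is only an illustrative outline, not the authors' GeneSpring pipeline: the array names (`cy5`, `cy3`, `log_ratios_wt`, `log_ratios_mut`) and function names are hypothetical, the Lowess fit from statsmodels stands in for GeneSpring's intensity-dependent normalization, and the thresholds (signal floor of 10, 2-fold change, Benjamini-Hochberg-corrected p < 0.05) are the ones quoted in the text.

```python
import numpy as np
from scipy import stats
from statsmodels.nonparametric.smoothers_lowess import lowess
from statsmodels.stats.multitest import multipletests

def normalize_two_channel(cy5, cy3, frac=0.4, floor=10.0):
    """Lowess-normalize one two-channel array (hypothetical raw intensities)."""
    cy5 = np.maximum(cy5, floor)              # replace weak control signals by the floor of 10
    cy3 = np.maximum(cy3, floor)
    a = 0.5 * (np.log2(cy5) + np.log2(cy3))   # mean log intensity
    m = np.log2(cy5) - np.log2(cy3)           # log ratio
    trend = lowess(m, a, frac=frac, return_sorted=False)  # fit using 40% of the data
    return m - trend                           # intensity-corrected log2 ratios

def differentially_expressed(log_ratios_wt, log_ratios_mut, alpha=0.05, fold=2.0):
    """Boolean mask of genes passing the 2-fold and BH-corrected p < 0.05 filters.

    log_ratios_* : arrays of shape (replicates, genes) of normalized log2 ratios.
    """
    _, p = stats.ttest_ind(log_ratios_wt, log_ratios_mut, axis=0)
    reject, _, _, _ = multipletests(p, alpha=alpha, method="fdr_bh")
    mean_diff = log_ratios_wt.mean(axis=0) - log_ratios_mut.mean(axis=0)
    return reject & (np.abs(mean_diff) >= np.log2(fold))
```

A real analysis would of course operate per probe and per growth condition, but the two functions capture the order of operations implied by the text: floor, normalize, test, correct, then apply the fold-change cut.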
5,930.4
2014-01-14T00:00:00.000
[ "Biology", "Medicine" ]
Hardware Reliability Analysis of a Coal Mine Gas Monitoring System Based on Fuzzy-FTA Featured Application: This project is applied to the computer monitoring project of the coal mine gas monitoring system in Shaanxi, China. Abstract: The hardware reliability of a gas monitoring system was investigated using the fuzzy fault tree analysis method. A fault tree was developed considering the hardware failure of the gas monitoring system as the top event. Two minimum path sets were obtained through qualitative analysis using the ascending method. The concept of fuzzy numbers from fuzzy set theory was applied to describe the probability of basic event occurrence in the fault tree, and the fuzzy failure probabilities of the middle and top events were calculated using fuzzy AND and OR operators. The results show that the proposed fuzzy fault tree is an effective method of reliability analysis for gas monitoring systems. Results of calculations using this method are more reasonable than those obtained with the conventional fault tree method. Introduction Coal mine gas monitoring systems play an important role in improving the safety of China's coal mines, reducing personnel and property losses, and improving the production efficiency and modernization level of the mines. However, in recent years, gas accidents (especially serious and extremely serious gas explosion accidents) have occurred frequently even in coal mines equipped with mine gas monitoring systems, resulting in many casualties and heavy losses. Therefore, in order to ensure the reliable operation of the coal mine gas monitoring system, the reliability analysis of the gas monitoring system is extremely important. At present, the reliability analysis of the gas monitoring system mainly uses evaluation methods such as fault tree analysis (FTA), the pre-hazard analysis method, and the safety checklist method. However, the results obtained are often quite different from the actual field situation. Researchers are therefore adopting some new theories; for example, the analytic hierarchy process [1], the cloud model evaluation method, extension theory [2], quantitatively improved HAZOP, etc. have been applied to the reliability analysis of the gas monitoring system [3]. The main problem encountered by these new methods in reliability analysis is the need for a large amount of precise and accurate data. However, in the field of gas monitoring systems, it is difficult to obtain accurate probability values for the occurrence of basic events. Generally, mines do not have long-term unified equipment failure records. Because the sample size is too small, the recorded values cannot be used as accurate probability estimates, which would require a large sample. In order to solve the problem that only fuzzy rather than accurate data are available for the gas monitoring system, this paper applies the fuzzy fault tree analysis (Fuzzy-FTA) method. In FTA, the top event is traced back to its direct causes; these causes may themselves be the result of other causes, called intermediate cause events (or intermediate events), and should continue to be analyzed until the causes cannot be analyzed further. These final causes are called basic cause events (or basic events). The causal relationships are connected by different logic gates to form the image of an upside-down tree. The basic procedure of the FTA method is shown in Figure 1. The first step is to get familiar with the system that needs to be analyzed, by understanding the system status and various parameters, and drawing the process flow chart or floor plan. 
Second, the information on the accident cases (which occurred in the same industry and on similar devices at home and abroad) is collected, and the accidents that have serious consequences and can easily occur as top events are identified. Based on the experiences, lessons, and accident cases, after the statistical analysis, the probability (frequency) of the accident is calculated, and the target value of the accident to be controlled is determined. Then, the fault tree is constructed from the top event according to the logical relationships among the events. If the constructed fault tree does not conform to the on-site conditions, it is also necessary to collect the missing information, re-analyze system failures and abnormalities, until the established fault tree is in line with the actual on-site conditions. Finally, a qualitative analysis is performed to determine the structural importance of each basic event, the probability is calculated, and a quantitative analysis is performed. (1) Construction of the fault tree: The construction of the fault tree starts from the top event. The direct, indirect, necessary, and sufficient causes of the top event are determined by deduction and reasoning. Usually, these reasons are not basic events but intermediate events that need further development. (2) Qualitative analysis of the fault tree: The qualitative analysis is based only on the structure of the fault tree and the causal relationships of the events leading to the accident. The analysis includes finding the minimum cut set, the minimum path set, and the structural importance of basic events. (3) Quantitative analysis of the fault tree: The purpose of this analysis is to calculate the occurrence probability of the top event and evaluate the safety and reliability of the system. Specifically, the method consists of comparing the calculated probability of the top event with the predetermined target value. If the latter is exceeded, the necessary system improvement measures should be taken to reduce it below the target value. The final step is to compile the result file. The analyst should provide the description of the analysis system, a discussion of the problems, the fault tree model itself, the minimum cut set, the minimum path set, and the structural importance analysis. Moreover, the analyst should give relevant conclusions for the fault tree analysis of coal mine gas monitoring system. The main principles of the Fuzzy-FTA method are as follows. 
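Before fuzziness is introduced, the quantitative step of conventional FTA can be illustrated with a tiny crisp example. The miniature tree below is hypothetical (it is not the fault tree of Figure 3) and the probabilities are invented; it only demonstrates how AND and OR gates combine basic-event probabilities into a top-event probability.

```python
from dataclasses import dataclass
from typing import List, Union

@dataclass
class Gate:
    kind: str                               # "AND" or "OR"
    children: List[Union["Gate", float]]    # sub-gates or basic-event probabilities

def probability(node: Union[Gate, float]) -> float:
    """Occurrence probability of a node; leaves are basic-event probabilities."""
    if isinstance(node, (int, float)):
        return float(node)
    ps = [probability(c) for c in node.children]
    if node.kind == "AND":                  # all inputs must occur simultaneously
        out = 1.0
        for p in ps:
            out *= p
        return out
    out = 1.0                               # OR gate: 1 minus probability that no input occurs
    for p in ps:
        out *= (1.0 - p)
    return 1.0 - out

# Hypothetical miniature tree: top = sensor failure OR (line fault AND power fault)
top = Gate("OR", [0.002, Gate("AND", [0.01, 0.005])])
print(probability(top))                     # ~0.00205
```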
Although the data recorded on site are insufficient, managers and experts with rich front-line experience have a relatively accurate perceptual grasp of the failure probabilities. Therefore, confidence intervals for the occurrence of basic events can be estimated by analyzing the responses to questionnaires administered to them. Then, through the analysis of the FTA method, the confidence interval of the occurrence probability of the top event is obtained at a certain confidence level. In summary, the Fuzzy-FTA analysis method is capable of qualitatively and quantitatively analyzing the reliability of the gas monitoring system hardware and calculating its failure probability [16]. Fuzzy Set and Membership Degree Definition: On the universe U, specify a mapping µ_A: U → [0, 1], x ↦ µ_A(x), where µ_A(x) is the degree of membership of x in the fuzzy set A, and µ_A is the membership function of A. When the membership degree can only assume the values 0 or 1, the fuzzy set degenerates into a classical set. λ-Cut Sets Definition: For A ∈ F(U) and any λ ∈ [0, 1], the set A_λ = {x ∈ U | µ_A(x) ≥ λ} is the λ cut set of A, where λ is the threshold or confidence level. Letting λ range over [0, 1] yields a family of classical subsets of U; when a certain level λ is given, the fuzzy set A is reduced to A_λ. Fuzzy Numbers and Their Properties If one wants to fuzzify a crisp number, there are generally two ways: to express the crisp number as an interval fuzzy number or as a triangular fuzzy number. In fact, the concepts of triangular fuzzy numbers and interval fuzzy numbers are generalizations of crisp numbers, and the concept of triangular fuzzy numbers is an extension of the concept of interval fuzzy numbers. However, compared with those of intervals, the operational laws of triangular fuzzy numbers are more mature and easier to use. Therefore, this paper uses the triangular fuzzy number to fuzzify the precise number. A triangular fuzzy number consists of three numbers: the minimum, the medium, and the maximum value. In any situation of subjective decision making, this triple can be regarded as the lower (conservative), medium, and upper (optimistic) estimate of a judgment, respectively. Definition 1. A fuzzy number A is a continuous fuzzy subset of the universe R = (−∞, +∞) with a convex membership function. Definition 2. L and R are the reference functions of a fuzzy number; such a fuzzy number is called an L-R fuzzy number and is written A = (m, α, β), where m is the mean value of A and α, β are its lower and upper confidence limits. A is not a fuzzy number when α and β are both 0, and A is fuzzier when the spreads α, β are large. For a triangular fuzzy number, whose λ cut is an interval number, the classical extension principle yields, for all λ ∈ [0, 1], the arithmetic rules for two triangular fuzzy numbers: addition (Formula (6)), subtraction (Formula (7)), multiplication (Formula (8)), and division (Formula (9)). In Fuzzy-FTA, fuzzy mathematics is introduced by: (1) Fuzzifying the logical relationship between different levels. 
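To make the triangular fuzzy number and its λ-cut concrete, the following sketch implements the (a, m, b) representation with the standard cut and component-wise addition; the component-wise product shown is the usual approximate rule for positive triangular numbers such as failure probabilities. The class name and the example values are ours, not the paper's.

```python
from dataclasses import dataclass

@dataclass
class TriangularFuzzyNumber:
    """A = (a, m, b): lower bound, most likely value, upper bound."""
    a: float
    m: float
    b: float

    def cut(self, lam: float):
        """λ-cut: interval of values whose membership degree is at least λ."""
        return (self.a + lam * (self.m - self.a),
                self.b - lam * (self.b - self.m))

    def __add__(self, other):
        return TriangularFuzzyNumber(self.a + other.a, self.m + other.m, self.b + other.b)

    def __mul__(self, other):
        # approximate product rule, valid for positive fuzzy numbers
        return TriangularFuzzyNumber(self.a * other.a, self.m * other.m, self.b * other.b)

# Example: an expert's estimate of a basic-event failure probability
p = TriangularFuzzyNumber(0.0001, 0.0002, 0.0004)
print(p.cut(0.0))   # (0.0001, 0.0004): the fuzziest interval
print(p.cut(1.0))   # (0.0002, 0.0002): the crisp central value
```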
This indicates that when the fault tree of the system is established, the logic of each level causing the top event is not clear, but fuzzy [17]; (2) Fuzzifying the occurrence probability of basic events. This indicates the replacement of the exact value of basic events with fuzzy numbers. The fuzzy number forms of basic events mainly include normal, triangular, and ladder types [18]. Basic Operators (1) AND gate fuzzy operator: If the probability of occurrence of the i-th basic event is a fuzzy number denoted P_i, the AND gate fuzzy operator can be written as P_AND = P_1 · P_2 · ... · P_n (Formula (10)). In Formula (10), if the occurrence of the n events is governed by an AND relationship, then the total probability P_AND of the simultaneous occurrence of the n events is the product of the probabilities of each event occurring alone. P_AND is the total probability and P_i is the probability of a single event occurring alone; m_i is the mean value of the fuzzy number and is specified as a real number, while α_i, β_i are the left and right spreads of the mean, respectively [19]. (2) OR gate fuzzy operator: If the probabilities of occurrence of the basic events are fuzzy numbers, then the OR gate fuzzy operator can be written as P_OR = 1 − (1 − P_1)(1 − P_2)···(1 − P_n) (Formula (11)). In Formula (11), if the occurrence of the n events is governed by an OR relationship, then the total probability P_OR that at least one of the n events occurs equals 1 minus the product of the probabilities of the n events not occurring [20]. After the fuzzy operators of fault tree analysis are established, the fuzzy analysis of the fault tree can be carried out. Moreover, the resulting probability of the top event is a fuzzy number; with different confidence levels, different confidence intervals for the probability of the top event can be obtained. Interval Analysis of the Occurrence Probability of the Top Event According to the principles of fuzzy mathematics, the occurrence probability of each basic event of the gas monitoring and prediction system is processed into a triangular fuzzy number [21]. The membership function corresponding to the triangular fuzzy number is A = (α, m, β), where m is the mean and α, β are respectively the lower and upper limits of the confidence interval. The larger the spread of α, β, the fuzzier A is; when α and β are both 0, A is a non-fuzzy (crisp) number. The λ cut set of A can be expressed as an interval. According to the principle of fuzzy mathematics, the probability value of each basic event in a gas monitoring and forecasting system is treated as a triangular fuzzy number [22]. Let s_1 and s_2 be the input events of the AND/OR gate, and s the output event; then F_s1 and F_s2 represent the fuzzy failure probabilities of s_1 and s_2, respectively, as shown in Figure 2. 
Therefore, the fuzzy failure probability F_s of the output event s of the AND gate/OR gate in the fault tree can be obtained by the following formulas: (1) AND gate structure: Formula (14) gives the AND gate relationship between the output F_s and the inputs F_s1 and F_s2. (2) OR gate structure: Formula (15) gives the OR gate relationship between the output F_s and the inputs F_s1 and F_s2. In this way, according to the structural function of the fault tree, the λ cut interval of the occurrence probability of the top event can be calculated. By assigning different values to λ, the probability interval of the top event occurrence under different fuzzy degrees can be obtained [23]. Failure Factor Analysis of Gas Monitoring System The main causes of hardware failure in coal mine gas monitoring systems are industrial computer and monitoring substation failures, sensor failures, communication line failures, power failures, etc. According to the analysis of data from mine gas monitoring systems, a number of accidents were taken as the top events, and their causes were analyzed [24]. In Table 1, B1-B5 are the intermediate events of the accident. B1-B4 are the hardware factors of the gas monitoring system failure, and B5 is the human factor of the gas monitoring system failure. B1 is the monitoring computer failure, including the industrial computer host failure, display failure, and monitoring substation failure. B2 is the monitoring system sensor failure, including the gas sensor failure, temperature sensor fault, wind speed sensor fault, power sensor failure, switch sensor failure, breaker failure, and PLC controller failure. B3 is the monitoring system line failure, including the broken line, self-aging short circuit, and poor line contact. B4 is the monitoring system power supply and communication failure, including the abnormal power failure, high communication error rate, and short circuit to ground. B5 is the monitoring staff mistakes, including the lack of responsibility of the staff, system lack of spare parts, and insufficient maintenance technology. Establishing the Fault Tree Model The fault tree can be established by analyzing the aforementioned fault factors, as shown in Figure 3. When the gas monitoring system has a hardware failure and the staff fails to identify and deal with it in time, the gas monitoring system will malfunction [25]. Figure 3 shows that the number of OR gates is large, while that of AND gates is small, suggesting a potentially large failure rate of the system [22]. In fact, the OR gates produce an output for any basic event, whereas the AND gates produce an output only when all of the basic events occur simultaneously. 
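The λ-cut interval propagation through AND and OR gates described above amounts to simple interval arithmetic. The sketch below is schematic and uses invented basic-event estimates (not the values of Table 3): for a chosen λ it turns each triangular fuzzy probability into an interval and combines the intervals gate by gate, in the spirit of the AND/OR rules above.

```python
def and_gate(intervals):
    """AND gate on λ-cut probability intervals: multiply lower and upper bounds."""
    lo, hi = 1.0, 1.0
    for a, b in intervals:
        lo *= a
        hi *= b
    return lo, hi

def or_gate(intervals):
    """OR gate on λ-cut intervals: 1 minus the product of the complements."""
    comp_lo, comp_hi = 1.0, 1.0
    for a, b in intervals:
        comp_lo *= (1.0 - a)   # complements of the lower bounds give the lower output bound
        comp_hi *= (1.0 - b)   # complements of the upper bounds give the upper output bound
    return 1.0 - comp_lo, 1.0 - comp_hi

def cut(tri, lam):
    """λ-cut interval of a triangular fuzzy probability (a, m, b)."""
    a, m, b = tri
    return (a + lam * (m - a), b - lam * (b - m))

# Hypothetical basic-event estimates and a miniature structure: top = X1 OR (X2 AND X3)
x1, x2, x3 = (1e-5, 2e-5, 4e-5), (2e-5, 3e-5, 5e-5), (1e-5, 1.5e-5, 2e-5)
lam = 0.5
top = or_gate([cut(x1, lam), and_gate([cut(x2, lam), cut(x3, lam)])])
print(top)   # interval for the top-event failure probability at λ = 0.5
```

Repeating the evaluation for λ = 0 and λ = 1 reproduces the pattern reported later in the text: the widest interval at λ = 0 and a single crisp value at λ = 1.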
Qualitative Analysis of Fault Tree As shown in Figure 3, the Boolean expression of the fault tree is listed, and its structural function expression is as follows: The fault tree is transformed into a dual success tree by changing the OR gates into AND gates and the AND gates into OR gates, as well as turning the events into their dual events. Then, the structural function of the success tree is written and subsequently simplified to obtain the structural function of the success tree represented by the minimum cut sets. Two minimum path sets can be obtained as follows: Quantitative Analysis of Fault Tree In the framework of Fuzzy-FTA, and based on the theory of fuzzy sets, L-R fuzzy numbers are used to represent the occurrence probabilities of basic events, based on which the fuzzy fault tree is analyzed. Obtaining the Triangular Fuzzy Number of the Basic Events of the Fault Tree In order to obtain the occurrence probabilities of the 19 bottom events X1-X19 of the coal mine gas monitoring system, 10 on-site monitoring experts were selected to fill in an expert questionnaire, shown in Table 2. From the feedback of the 10 expert questionnaires, MATLAB was used to compute the arithmetic mean of the X1-X19 probability intervals. In addition, the average value of the probability interval and the left and right spreads of the occurrence probability of each bottom event were calculated, as shown in Table 3. Calculation of the Fuzzy Probability of the Top Event The λ cut set according to the triangular fuzzy probability of each basic event is shown in Table 1. After obtaining the λ cut set of each basic event, the probability of the top event is calculated [26]. The fuzzy probability of the top event is obtained by evaluating the structural function of the fault tree, with higher-order terms ignored in the calculation process. The λ cut set form of the occurrence probability of the top event is calculated accordingly, and the probability intervals of the top event for different values of λ are presented in Table 4. It can be seen from Table 4 that when λ = 1, the fuzzy degree of the occurrence probability of the top event is the smallest, and its occurrence probability is approximately 0.000115. When λ = 0, the probability of occurrence of the top event is the fuzziest, and its probability interval is [0.000095, 0.000135], which shows that the system has a better reliability performance than suggested by the traditional fault tree method. Structural Importance Analysis The structural importance analysis is used to analyze the contribution of each basic event to the occurrence of the top event from the fault tree structure. This involves analyzing the influence of each basic event on the occurrence of the top event without considering the occurrence probabilities of the basic events, or by assuming that the occurrence probabilities of all basic events are approximately equal. The greater the structural importance of a basic event, the greater its influence on the top event, and vice versa. There are many methods for analyzing structural importance; here, the permutation method is used, and the results are arranged as follows: Analysis of Fault Tree Results By analyzing the results of the fault tree, the following conclusions can be drawn. 
(1) The main reasons for the hardware failure of a coal mine gas monitoring system are associated with the control system hardware failure as follows: (i) The factors that cause the failure of the industrial computer include the industrial computer motherboard failure, display failure, and accessory failure; (ii) the factors that cause sensor failures include the gas sensor failure, temperature sensor failure, wind speed sensor failure, power sensor failure, switch sensor failure, power-off device failure, and PLC controller failure; (iii) the factors that cause communication line failures include the mechanical pull and open circuit, self-aging short circuit, and poor contact; (iv) the factors that cause power outages include the abnormal power outages, power line open circuits, and power line short circuits; (v) the factors that cause poor maintenance are insufficient sense of responsibility, lack of spare parts, and insufficient maintenance technology. By analyzing the fault tree structure, we can see that the number of logic OR gates is far greater than the logic AND gates, making the probability of accidents very high. (2) Through the qualitative analysis, it can be seen from the minimum path set that there are only four ways to prevent the occurrence of T events. Coal mine gas monitoring system accidents are prone to occur, and there are a few ways to prevent accidents. (3) There are 19 basic events that lead to accidents. However, since the current hardware equipment has a long mean time between failures (MTBF), the probability of accidents is very low. The accident probability of other basic events related to human operations is generally higher. Therefore, in the failure of coal mine gas monitoring systems, human factors are the main cause of failure [27]. (4) From the analysis of structural importance, it can be seen that the three basic events of insufficient sense of responsibility, lack of spare parts, and insufficient maintenance technology occupy the most important weight, and therefore are the top priority for accident prevention. (5) The quantitative analysis shows that the probability of failure of the coal mine gas monitoring system under normal conditions is approximately 0.011%, which is highly dangerous. In summary, the management of coal mine gas monitoring systems should be based on human factors, such as the awareness of the job responsibilities of the monitoring personnel, the training and supervision of the monitoring personnel, and the technical ability to maintain the system should be strengthened. This reduces the failure of the coal mine gas monitoring system as much as possible, thereby reducing mine gas outburst accidents. Comparison of FTA and Fuzzy-FTA In FTA, the probability of occurrence of various accidents X1-X19 at the gas monitoring system site comes from the expert questionnaire filled out by the on-site monitoring personnel. Each probability data required to fill in the expert questionnaire is specific and accurate, but this requirement is difficult for the on-site monitoring personnel, who are more inclined to fill in a probability interval for failure [28]. In Fuzzy-FTA, they only need to fill in the probability interval of failure. In the process of data calculation, FTA is calculated according to simple rules, and the result obtained is a specific probability value. 
As an example, take the data in Table 1 with λ = 1 and use the intermediate value of the occurrence probability of each accident X1-X19: in conventional FTA the probability of occurrence of the top event is then 0.00011. Fuzzy-FTA is calculated according to the rules of fuzzy mathematics; taking λ = 0.5, the probability of the top event lies in the interval [0.000105, 0.000125]. From this we know that the minimum probability of the top event is 0.000105, the maximum probability is 0.000125, and the top event most likely occurs with a probability within this interval. The comparison of FTA and Fuzzy-FTA shows that Fuzzy-FTA can more easily obtain data close to the actual on-site situation, and yields calculation results that better match the field. However, compared with FTA, Fuzzy-FTA requires a larger amount of data and more complex calculations [29]. Conclusions The Fuzzy-FTA method represents an improvement over conventional FTA, as it can more reasonably estimate the probability of each basic event. Through the fuzzy interpretation of data obtained by interviewing on-site monitoring personnel with rich management experience, the probability interval of the hardware reliability of a gas monitoring system at different confidence levels can be obtained. It should be pointed out that the use of fuzzy mathematics to describe event probabilities not only reduces the difficulty of obtaining accurate probability values for large and complex systems, but also combines field data with the experience of engineers and technicians. This method has greater flexibility and adaptability than traditional FTA. In the future, the computational burden of the Fuzzy-FTA method can be reduced by software implementation, so that the method can be applied to more scenarios.
6,018.6
2021-11-11T00:00:00.000
[ "Computer Science", "Engineering" ]
Solving the chemical master equation using sliding windows Background The chemical master equation (CME) is a system of ordinary differential equations that describes the evolution of a network of chemical reactions as a stochastic process. Its solution yields the probability density vector of the system at each point in time. Solving the CME numerically is in many cases computationally expensive or even infeasible as the number of reachable states can be very large or infinite. We introduce the sliding window method, which computes an approximate solution of the CME by performing a sequence of local analysis steps. In each step, only a manageable subset of states is considered, representing a "window" into the state space. In subsequent steps, the window follows the direction in which the probability mass moves, until the time period of interest has elapsed. We construct the window based on a deterministic approximation of the future behavior of the system by estimating upper and lower bounds on the populations of the chemical species. Results In order to show the effectiveness of our approach, we apply it to several examples previously described in the literature. The experimental results show that the proposed method speeds up the analysis considerably, compared to a global analysis, while still providing high accuracy. Conclusions The sliding window method is a novel approach to address the performance problems of numerical algorithms for the solution of the chemical master equation. The method efficiently approximates the probability distributions at the time points of interest for a variety of chemically reacting systems, including systems for which no upper bound on the population sizes of the chemical species is known a priori. Background Experimental studies have reported the presence of stochastic mechanisms in cellular processes [1][2][3][4][5][6][7][8][9] and therefore, during the last decade, stochasticity has received much attention in systems biology [10][11][12][13][14][15]. The investigation of stochastic properties requires that computational models take into consideration the inherent randomness of chemical reactions. Stochastic kinetic approaches may give rise to dynamics that differ significantly from those predicted by deterministic models, because a system might follow very different scenarios with non-zero likelihoods. Under the assumption that the system is spatially homogeneous and has fixed volume and temperature, at a each point in time the state of a network of biochemical reactions is given by the population vector of the involved chemical species. The temporal evolution of the system can be described by a Markov process [16], which is usually represented as a system of ordinary differential equations (ODEs), called the chemical master equation (CME). The CME can be analyzed by applying numerical solution algorithms or, indirectly, by generating trajectories of the underlying Markov process, which is the basis of Gillespie's stochastic simulation algorithm [17,18]. In the former case, the methods are usually based on a matrix description of the Markov process and thus primarily limited by the size of the system. A survey and comparisons of the most established methods for the numerical analysis of discrete-state Markov processes are given by Stewart [19]. These methods compute the probability density vector of the Markov process at a number of time points up to an a priori specified accuracy. 
If numerical solution algorithms can be applied, almost always they require considerably less computation time than stochastic simulation, which only gives estimations of the measures of interest. This is particularly the case if not only means and variances of the state variables are estimated with stochastic simulation, but also the probability of certain events. However, for many realistic systems, the number of reachable states is huge or even infinite and, in this case, numerical solution algorithms may not be applicable. This depends mainly on the number of chemical species. In low dimensions (say <10) a direct solution of the CME is possible whereas in high dimensions stochastic simulation is the only choice. In the case of stochastic simulation estimates of the measures of interest can be derived once the number of trajectories is large enough to achieve the desired statistical accuracy. However, the main drawback of simulative solution techniques is that a large number of trajectories is necessary to obtain reliable results. For instance, in order to halve the confidence interval of an estimate, four times more trajectories have to be generated. Consequently, often stochastic simulation is only feasible with a very low level of confidence in the accuracy of the results. In this paper, we mitigate the performance problems of numerical solution algorithms for the CME. Instead of a global analysis of the state space, we propose the sliding window method, which comprises a sequence of analyzes local to the significant parts of the state space. In each step of the sequence, we dynamically choose a time interval and calculate an approximate numerical solution for a manageable subset of the reachable states. In order to identify those states that are relevant during a certain time period, for each chemical species, we estimate an upper and lower bound on the population size. This yields the boundaries of a "window" in which most of the probability mass remains during the time interval of interest. As illustrated in Figure 1, the window "slides" through the state space when the system is analyzed in a stepwise fashion. In each step, the initial conditions are given by a vector of probabilities (whose support is illustrated in light gray), and a matrix is constructed to describe the part of the Markov process where the window (illustrated by the dashed rectangular) is currently located. Then the corresponding ODE is solved using a standard numerical algorithm, and the next vector (illustrated in dark gray) is obtained. We focus on two specific numerical solution methods, the uniformization method and the Krylov subspace method. We compare their efficiency when they are used to solve the ODEs that arise during the sliding window iteration. We also compare the sliding window method to the numerical algorithms applied in a global fashion, that is, to all reachable states (not only to the states of the window), for systems of tractable size. We are interested in the probability distribution of the Markov process and not only in means and variances. These probabilities are difficult to estimate accurately with stochastic simulation. Therefore, we compare the solution obtained by the sliding window method only to numerical solution algorithms but not to stochastic simulation. Recently, finite state projection algorithms (FSP algorithms) for the solution of the CME have been proposed [20,21]. 
They differ from our approach in that they are based solely on the structure of the underlying graph, whereas the sliding window method is based on the stochastic properties of the Markov process. The FSP algorithms start with an initial projection, which is expanded in size if necessary. The direction and the size of the expansion are chosen based on a qualitative analysis of the system in a breadth-first search manner. It is not clear how far the state space has to be explored in order to capture most of the probability mass during the next time step. Thus, if the projection size is too small, the computation has to be repeated with an expanded projection. Moreover, for most models, the location of the main portion of the probability mass follows a certain direction in the state space, whereas the expansion is done in all directions. Therefore, unnecessary calculations are carried out, because the projection contains states that are visited with a small probability. By contrast, in the sliding window approach, we determine the location and direction of the probability mass for the next computation step based on the reaction propensities and the length of the time step. The projection that we obtain is significantly smaller than the projection used in the FSP, whereas the accuracy of our approach is similar to the accuracy of the FSP. (Figure 1: The sliding window method. In each iteration step, the window W_i captures the set S_i of states in which the significant part of the probability mass is initially located (light gray), the set S_{i+1} of states that are reached after a time step (dark gray), as well as the states that are visited in between.) In this way we achieve large memory and computational savings, since the time complexity of our window construction is small compared to the calculation of the probability distribution of the window. In our simulations we never had to repeat the computation of the probabilities using a window of larger size. The Fokker-Planck equation is an approximation of the CME, for which a solution can be obtained efficiently [22,23]. This approximation, however, does not take into account the discrete nature of the system, but changes the underlying model by assuming a continuous state space. Other approaches to approximate the probability distributions defined by the CME are based on sparse grid methods [24], spectral methods [25], or the separation of time scales [26,27]. The latter approach uses a quasi-steady state assumption for a subset of chemical species and calculates the solution of an abstract model of the system. In contrast, we present an algorithm that computes a direct solution of the CME. Our method is also related to tau-leaping techniques [18,28], because they require estimates of the upper and lower bounds on the population sizes of the chemical species, just as our method does. The time leap must be sufficiently small such that the changes in the population vector do not significantly affect the dynamics of the system. Our method differs from the calculation of the leap in that it predicts the future dynamics for a dynamically chosen time period. More precisely, we determine the length of the next time step while approximating the future behavior of the process. Here, we present the sliding window method in more detail and provide an additional comparison between uniformization and Krylov subspace methods for the solution of the window. 
Moreover, we have improved our implementation of the algorithm and evaluated it on more examples, such as the bistable toggle switch, which is reported in detail. The remainder of this paper is organized as follows. We first describe the theoretical framework of our modeling approach in the Background Section. In the Results Section we present the sliding window method and numerical solution approaches for the CME. Experimental results are given at the end of the Results Section.

Stochastic model
We model a network of biochemical reactions as a Markov process that is derived from stochastic chemical reaction kinetics [16,29]. A physical justification of Markovian models for coupled chemical reactions has been provided by Gillespie [17]. We consider a fixed, spatially homogeneous reaction volume with n different chemical species. Example 1: We consider an enzyme-catalyzed substrate conversion with species enzyme ($x_1$), substrate ($x_2$), enzyme-substrate complex ($x_3$), and product ($x_4$), reactions $R_1: E+S \to C$, $R_2: C \to E+S$, and $R_3: C \to E+P$ with rate constants $c_1, c_2, c_3$; the propensity functions are $\alpha_1(x) = c_1 x_1 x_2$, $\alpha_2(x) = c_2 x_3$, and $\alpha_3(x) = c_3 x_3$. The set of states reachable from the initial state $y = (y_1, y_2, y_3, y_4)$ is finite because of the conservation laws $y_1 = x_1 + x_3$ and $y_2 = x_2 + x_3 + x_4$, where we assume that $y_3 = y_4 = 0$. Example 2: We consider a gene expression model [12].

Chemical master equation
We define a time-homogeneous, regular Markov process (CTMC) $(X(t), t \in \mathbb{R}_{\ge 0})$ [30] with state space $S$. We assume that the state changes of X are triggered by the chemical reactions. Let y be the initial state of X, which means that $\Pr(X(0) = y) = 1$. We assume that the probability of a reaction $R_m$ occurring in the next infinitesimal time interval $[t, t+\tau)$, $\tau > 0$, is given by $\alpha_m(x)\,\tau + o(\tau)$ when $X(t) = x$. For $x \in S$ we define the probability that X is in state x at time t by $p^{(t)}(x) = \Pr(X(t) = x \mid X(0) = y)$. The chemical master equation (CME) describes the behavior of X by the differential equation [29]

$$\frac{d}{dt}\, p^{(t)}(x) = \sum_m \left( \alpha_m(x - v_m)\, p^{(t)}(x - v_m) - \alpha_m(x)\, p^{(t)}(x) \right), \quad (2)$$

where $v_m$ is the state change vector of reaction $R_m$. In the sequel, a matrix description of Eq. (2) is more advantageous. It is obtained by defining the infinitesimal generator matrix $Q = (Q(x, x'))_{x, x' \in S}$ of the CTMC X by

$$Q(x, x') = \begin{cases} \alpha_m(x) & \text{if } x' = x + v_m, \\ -\sum_m \alpha_m(x) & \text{if } x' = x, \\ 0 & \text{otherwise}, \end{cases} \quad (3)$$

where we assume a fixed enumeration of the state space. Note that the row sums of the (possibly infinite) matrix Q are zero and that $\lambda_x = -Q(x, x)$, the exit rate of state x, is the reciprocal of the average residence time in x. Let $T^{(0)}$ be equal to the identity matrix I and, for $\tau > 0$, let $T^{(\tau)}$ be the transition probability matrix for step $\tau$ with entries $T^{(\tau)}(x, z) = \Pr(X(t + \tau) = z \mid X(t) = x)$. The elements of $T^{(\tau)}$ are differentiable, and Q is the derivative of $T^{(\tau)}$ at $\tau = 0$. If Q is given and X is known to be regular, $T^{(\tau)}$ is uniquely determined by the Kolmogorov backward and forward equations, with the general solution $T^{(\tau)} = e^{Q\tau}$. Let $p^{(t)}$ be the row vector with entries $p^{(t)}(x)$ for $x \in S$. Then the vector form of the CME is

$$\frac{d}{dt}\, p^{(t)} = p^{(t)} Q. \quad (4)$$

If $\sup_{x \in S} \lambda_x < \infty$, Eq. (4) has the general solution

$$p^{(t)} = p^{(0)} e^{Qt}, \quad (5)$$

where the matrix exponential is given by $e^{Qt} = \sum_{k=0}^{\infty} (Qt)^k / k!$. If the state space is infinite, we can only compute approximations of $p^{(t)}$. But even if Q is finite, several factors may render the direct computation of the matrix exponential infeasible [19,32,33]. Most popular are methods based on uniformization [34,35], approximations in the Krylov subspace [36], or numerical integration [37,38]. We will describe the former two methods in more detail in the section on numerical solution methods.

Sliding window method
The key idea of the algorithm proposed in this paper is to calculate an approximation of $p^{(t)}$ in an iterative fashion: we divide the time interval $[0, t)$ into intervals $[t_0, t_1), \ldots, [t_{r-1}, t_r)$ with $t_0 = 0$ and $t_r = t$, and in the j-th step we consider only a window $W_j \subseteq S$ of states. Let $Q_j$ be the matrix that refers to $W_j$, i.e., we define $Q_j(x, x') = Q(x, x')$ if $x, x' \in W_j$ and $Q_j(x, x') = 0$ otherwise; in particular, the diagonal entries $Q(x, x)$ are kept for $x \in W_j$. Note that for the simplicity of our presentation we keep a fixed enumeration of S and assume that each $Q_j$ has the same size as Q.
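Before turning to the windowed implementation, the following minimal Python sketch makes Eqs. (3)-(5) concrete. It builds the generator Q for a hypothetical one-species birth-death system (constant production, linear degradation, truncated at x_max; this toy model and its rate values are illustrative assumptions, not one of the paper's examples) and computes the global solution directly:

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical one-species birth-death system (production at constant rate c1,
# degradation at rate c2*x), truncated at x_max -- an illustration only.
c1, c2, x_max = 1.0, 0.1, 60
states = np.arange(x_max + 1)

# Infinitesimal generator Q (cf. Eq. (3)): off-diagonal entries are the
# propensities, diagonal entries make each row sum to zero.
Q = np.zeros((x_max + 1, x_max + 1))
for x in states:
    if x < x_max:
        Q[x, x + 1] = c1          # production: x -> x + 1
    if x > 0:
        Q[x, x - 1] = c2 * x      # degradation: x -> x - 1
    Q[x, x] = -Q[x].sum()         # exit rate lambda_x = -Q(x, x)

# Global solution of the CME (Eq. (5)): p(t) = p(0) e^{Qt}, starting in y = 0.
p0 = np.zeros(x_max + 1)
p0[0] = 1.0
pt = p0 @ expm(Q * 30.0)

print(pt.sum())       # ~1: zero row sums conserve probability mass
print(states @ pt)    # mean population, approaching c1/c2 = 10
```

The same construction carries over to the enzyme and gene expression examples, with the states enumerated over population vectors rather than a single count.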
In the actual implementation, however, only the finite submatrix of $Q_j$ that contains the entries of states in $W_j$ is considered. For $\tau_j = t_j - t_{j-1}$, we define

$$\hat p^{(t_j)} = \hat p^{(t_{j-1})}\, D_j\, e^{Q_j \tau_j}, \qquad \hat p^{(t_0)} = (1_y)^T, \quad (7)$$

where $D_j$ is the diagonal matrix whose main diagonal entries are one for $x \in W_j$ and zero otherwise, and the row vector $(1_y)^T$ is one at position y and zero otherwise. In the j-th step, the matrix $e^{Q_j \tau_j}$ contains the probabilities to move within $W_j$ from one state to another in $\tau_j$ time units, and Eq. (7) propagates the initial probabilities accordingly. Probability mass is "lost" because we do not consider the entries for $x \in W_{j-1} \setminus W_j$, as we multiply with $D_j$. In addition, we lose the probability of leaving $W_j$ within the next $\tau_j$ time units, because $e^{Q_j \tau_j}$ is a substochastic matrix. If, for all j, most of the probability mass remains within $W_j$ during the time interval $[t_{j-1}, t_j)$, then the approximation error is small for all $x \in S$. The probability mass that is lost after j steps due to the approximation is given by

$$\eta_j = 1 - \sum_{x \in S} \hat p^{(t_j)}(x). \quad (8)$$

Thus, if Eq. (7) is solved exactly, the total approximation error of the sliding window method is $\eta_r$. Note that the error in Eq. (8) is the sum of the errors of all components of the vector $\hat p^{(t_r)}$.

Window construction
In each step of the iteration the window $W_j$ must be chosen such that the error $\eta_j$ is kept small. This is the case if $W_j$ satisfies the following conditions: (a) with a sufficiently high probability, $X(t_{j-1}) \in W_j$; (b) the probability of leaving $W_j$ within the time interval $[t_{j-1}, t_j)$ is sufficiently small. Requirement (a) implies that $W_j$ contains a significant part of the support of $\hat p^{(t_{j-1})}$, that is, a subset $S_j \subseteq S$ outside of which little probability mass remains. In the first step we set $S_1 = \{y\}$. For $j > 1$, the window $W_j$ is constructed after $\hat p^{(t_{j-1})}$ is calculated. We fix a small $\delta > 0$ and choose $S_j = \{x \mid \hat p^{(t_{j-1})}(x) > \delta\}$. If the support of $\hat p^{(t_{j-1})}$ is large and distributed almost uniformly, it may be necessary to construct $S_j$ such that the probability mass outside $S_j$ is smaller than some fixed threshold. However, our experimental results show that using a fixed threshold yields good results, which makes the additional effort of sorting the support of $\hat p^{(t_{j-1})}$ unnecessary in practice. Note that requirement (a) implies that $W_j$ and $W_{j-1}$ intersect; thus, in each step we "slide" the window in the direction in which the probability mass is moving. The remainder of this section focuses on requirement (b), where it is necessary to predict the future behavior of the process. One possibility for finding a set $W_j$ that satisfies the requirements would be to carry out stochastic simulation runs for the interval $[t_{j-1}, t_j)$ and take the envelope of the visited states. Instead, we use a cheaper deterministic approximation (Algorithm ContDetApprox): starting from the bounds of $S_j$, we advance in small time steps of length Δ and, for each reaction $R_m$, approximate the number of its occurrences within Δ by a Poisson distribution whose mean is given by the current propensities; an interval covering most of the mass of this distribution (Eq. (10)) then yields upper and lower bounds on the population size of each species, and thus the boundaries of the window. Regarding the choice of the time step Δ, we suggest choosing Δ dynamically such that for each m the interval in Eq. (10) covers at least, say, 80% of the probability mass of the corresponding Poisson distribution. Clearly, the accuracy of the method increases when larger intervals covering more probability mass are used. For our experimental results, we chose Δ such that $\lambda_x \cdot \Delta = 1$, which yielded sufficiently accurate results.

Sliding window algorithm
The right column of Table 1 shows Algorithm sWindow. We can calculate the overall loss of probability mass from the output $p_r$ by $\eta_r = 1 - \sum_x p_r(x)$. This value includes both approximation errors of the algorithm: (1) the probability of leaving window $W_j$ during the time interval $[t_{j-1}, t_j)$, and (2) the probability that is lost due to the sliding of the window, obtained by the multiplication with $D_j$ (cf. Eq. (7)). Note that it is always possible to repeat a computation step in order to increase the obtained accuracy. More precisely, we can determine a larger window by increasing the confidence of the interval in Eq. (10),
i.e., by choosing the time step Δ such that for each m the maximal/minimal number of transitions of type $R_m$ lies in the interval with a certain confidence (e.g., with a confidence of 80%). For our experimental results, however, we did not repeat any computation step, since we always obtained sufficiently accurate results.

Time intervals
For our experimental results, we compare two different time-stepping mechanisms for Algorithm sWindow (see Table 1, right). We either choose equidistant time steps $\tau_j = \tau$ for all j, or we determine $\tau_j$ during the construction of the window $W_j$ (adaptive time steps). The latter method yields faster running times. Depending on the dynamics of the system, long time steps may cause three problems: (1) the window is large and the size of the matrix $Q_j$ may exceed the working memory capacity; (2) the dynamics of the system may differ considerably during a long time step, and $Q_j$ may have bad mathematical properties; (3) the window may contain states that are only significant during a much shorter time interval. If, on the other hand, the time steps are too small, then many iterations of the main loop have to be carried out until the algorithm terminates. The windows then overlap almost completely, and even though each step may require little time, the whole procedure can be computationally expensive. One possibility is to fix the size of the windows and choose the time steps accordingly. But this does not necessarily result in short running times either, because the time complexity of the solution methods depends not only on the size of the matrix representing the window but also on its mathematical properties. The problems mentioned above can be circumvented by calculating $\tau_1, \ldots, \tau_r$ during the construction of the windows as follows. We compute the number of states that are significant at time $t_{j-1}$ and pass it to ContDetApprox in line 9 (see Table 1). We run the while loop in Algorithm ContDetApprox (see Table 1, left) until (1) the window has at least a certain size and (2) the number of states in the window exceeds twice the number of states that are significant at time $t_{j-1}$. The first condition ensures that the window exceeds a certain minimum size of, say, 500 states. The second condition ensures that the new window is just big enough to move the probability mass to a region outside of $S_j$; more precisely, it ensures that the sets $S_1, S_2, \ldots$ are not overlapping and that subsequent sets are located next to each other (as illustrated in Figure 1). Note that this also ensures that the resulting window does not contain many states that are only significant during a much shorter time interval. On termination of the while loop, we pass the value of the variable time from ContDetApprox to sWindow and set $\tau_j$ to this value. Accordingly, in sWindow we add a variable representing the time elapsed so far, and the for loop in line 2 is replaced by a while loop that stops when the elapsed time exceeds t. Later, we present experimental results of the sliding window method with adaptive time steps chosen in the way described above.

Numerical solution methods
In this section, we present the theoretical basis of two numerical solution algorithms, namely the uniformization method and the Krylov subspace method. We approximate a global solution of the CME (cf. Eq. (5)), as well as the local solutions that are required in line 13 of Algorithm sWindow (see also Eq. (7)).
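Before detailing the two solvers, here is a minimal sketch of the local propagation of Eq. (7) that they must perform, with scipy's dense expm standing in for either method. The window is chosen here by thresholding plus a graph-based expansion, a simplifying assumption in place of the population-bound prediction done by ContDetApprox:

```python
import numpy as np
from scipy.linalg import expm

def window_step(Q, p_prev, tau, delta=1e-10, expand=5):
    """One sliding-window iteration, a sketch of Eq. (7)."""
    # S_j: significant states of the current approximation (threshold delta).
    mask = (p_prev > delta).astype(np.int64)
    # Grow the window along possible transitions `expand` times. This purely
    # graph-based expansion is a simplifying assumption; the paper predicts
    # population bounds with ContDetApprox instead.
    reach = (Q > 0).astype(np.int64)
    for _ in range(expand):
        mask = np.minimum(1, mask + reach.T @ mask)
    idx = np.flatnonzero(mask)
    # Restricting p_prev to the window realizes the multiplication with D_j.
    # Q_j keeps the original diagonal entries, so e^{Q_j tau} is substochastic
    # and the probability of leaving the window is dropped, as in the text.
    Qj = Q[np.ix_(idx, idx)]
    p_next = np.zeros_like(p_prev)
    p_next[idx] = p_prev[idx] @ expm(Qj * tau)
    return p_next, 1.0 - p_next.sum()   # approximation and eta (Eq. (8))

# Usage: iterate over the time grid t_0 < t_1 < ... < t_r.
# p = p0
# for tau in taus:
#     p, eta = window_step(Q, p, tau)
```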
Uniformization
The uniformization method goes back to Jensen [34] and is also referred to as Jensen's method, randomization, or discrete-time conversion. In the performance analysis of computer systems, this method is popular and often preferred over other methods, such as Krylov subspace methods and numerical integration methods [19,39]. Recently, uniformization has also been used for the solution of the CME [40-42].

Global uniformization
Let $(X(t), t \in \mathbb{R}_{\ge 0})$ be a CTMC with finite state space S. The basic idea of uniformization is to define a discrete-time Markov chain (DTMC) and a Poisson process. The DTMC is stochastically identical to X, meaning that it has the same transient probability distribution if the number of steps within [0, t) is given, and the Poisson process keeps track of the time, as explained below. Recall that $\lambda_x$ is the exit rate of state $x \in S$ and that I is the identity matrix. We define a uniformization rate $\lambda \ge \max_{x \in S} \lambda_x$ and the stochastic matrix $P = I + Q/\lambda$ of the DTMC. Then Eq. (5) can be rewritten as [19,32,43]

$$p^{(t)} = \sum_{k=0}^{\infty} e^{-\lambda t}\, \frac{(\lambda t)^k}{k!}\, w^{(k)}, \quad (13)$$

where $w^{(k)} = p^{(0)} P^k$. Eq. (13) has nice properties compared to Eq. (5): there are no negative summands involved, as P is a stochastic matrix and λ > 0. Moreover, $w^{(k)}$ can be computed inductively by $w^{(0)} = p^{(0)}$ and $w^{(k+1)} = w^{(k)} P$. If P is sparse, $w^{(k)}$ can be calculated efficiently even if the size of the state space is large. Lower and upper summation bounds L and U can be obtained such that for each state x the truncation error [44] is a priori bounded by a predefined error tolerance ε > 0. Thus, $p^{(t)}$ can be approximated with arbitrary accuracy by the truncated sum $\sum_{k=L}^{U} e^{-\lambda t} (\lambda t)^k / k! \cdot w^{(k)}$, as long as the required number of summands is not extremely large. All analysis methods (simulation-based or not) encounter serious difficulties if the underlying model is stiff. In a stiff model, the components of the underlying system act on time scales that differ by several orders of magnitude; this arises in various application domains, especially in systems biology. For a stiff model, the uniformization rate $\lambda \ge \max_{x \in S} \lambda_x$ will correspond to the fastest time scale, whereas a significant change of the slow components can be observed only during a period of time that corresponds to the slowest time scale. The uniformization method is then extremely time consuming because of the very large stiffness index [45]. In the sequel, we show how uniformization can be applied in a local fashion such that stiffness has a less negative effect on the performance of the method. In other words, the sliding window technique enables uniformization to perform well even for stiff systems.

Local uniformization
We now combine uniformization and the sliding window method. Assume that S may be infinite and that we iteratively apply uniformization to solve Eq. (7). More specifically, in line 13 of Algorithm sWindow (see Table 1, right), we invoke the uniformization method to approximate $\hat p^{(t_{j-1})} D_j e^{Q_j \tau_j}$. Because only states in the window $W_j$ are involved, the uniformization rate can be chosen as the maximal exit rate within $W_j$ rather than $\lambda = \sup_{x \in S} \lambda_x$, and the cost of each step is governed by the number $\nu_j$ of nonzero elements in $P_j$.
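A hedged sketch of the global method (finite Q as a dense array, with the truncation-error control simplified to a symmetric Poisson mass cutoff rather than the bound of [44]):

```python
import numpy as np
from scipy.stats import poisson

def uniformize(Q, p0, t, eps=1e-8):
    """Approximate p(t) = p0 e^{Qt} by the truncated sum of Eq. (13)."""
    lam = -Q.diagonal().min()                  # rate >= every exit rate
    P = np.eye(Q.shape[0]) + Q / lam           # stochastic matrix of the DTMC
    L, U = poisson.interval(1 - eps, lam * t)  # truncation bounds (cf. [44])
    w, pt = p0.copy(), np.zeros_like(p0)
    for k in range(int(U) + 1):
        if k >= L:
            pt += poisson.pmf(k, lam * t) * w  # Poisson-weighted w^(k)
        w = w @ P                              # w^(k+1) = w^(k) P
    return pt
```

Note how the number of loop iterations grows roughly like λt, which is exactly the stiffness problem described above; the local variant simply applies the same routine to Q_j with the window's (typically much smaller) maximal exit rate.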
Krylov subspace
Krylov subspace methods are widely used for large eigenvalue problems, for solving systems of linear equations, and also for approximating the product of a matrix exponential and a vector [46,47]. We are interested in the latter approximation and show how it can be used to solve the CME, either in a global fashion or in combination with the sliding window method. Recently, Krylov subspace methods have been applied to the CME by Sidje et al. [21].

Global Krylov subspace method
Recall that a global solution of the CME is given by $p^{(t)} = p^{(0)} e^{Qt}$. In the sequel, we describe the approximation of $e^{tA} v$ for a matrix A and a vector v; for the CME we have $A = Q^T$ and $v = (p^{(0)})^T$. The idea is to project A onto the Krylov subspace $K_m(A, v) = \mathrm{span}\{v, Av, \ldots, A^{m-1} v\}$; the Arnoldi process computes an orthonormal basis $V_m$ of this subspace together with a small upper Hessenberg matrix $H_m$. We choose the approximation

$$e^{tA} v \approx \beta\, V_m\, e^{t H_m} e_1, \qquad \beta = \|v\|_2, \quad (19)$$

which yields an approximation error [46] bounded in terms of $\rho = \|A\|_2$, the spectral norm of A. The approximation in Eq. (19) still involves the computation of the matrix exponential of $H_m$, but as $H_m$ is of small dimension and has a particular structure (upper Hessenberg), this requires a smaller computational effort. For the matrix exponential of small matrices, methods such as Schur decomposition and Padé approximants may be applied [31]. Assume now that the time instant t is arbitrary, i.e., we want to approximate $e^{tA} v$ for some t > 0. In order to control the approximation error, we calculate $e^{tA} v$ stepwise, exploiting that $e^{(\tau_1 + \tau_2) A} v = e^{\tau_2 A}\left(e^{\tau_1 A} v\right)$ for $\tau_1, \tau_2 \ge 0$; if the error estimate of a substep is too large, the intermediate result is rejected, the step size is reduced, and the step is repeated. For our experimental results, we used the Expokit software [48], where the small exponential $e^{\tau H_m}$ is computed via the irreducible Padé approximation [49].

Local Krylov subspace method
Assume now that we use the Krylov subspace method in line 13 of Algorithm sWindow (see Table 1, right) to approximate $\hat p^{(t_{j-1})} D_j e^{Q_j \tau_j}$ (cf. Eq. (7)). By letting $v = (\hat p^{(t_{j-1})} D_j)^T$, $A = Q_j^T$, and $t = \tau_j$, we can apply the same procedure as in the global case. Note that this yields a nested iteration, because the time steps $\tau_j$ are usually much bigger than the time steps of the Krylov subspace method. For the Krylov subspace method, using the matrix $Q_j$ instead of Q offers important advantages. The Arnoldi process is faster, as $Q_j$ usually contains fewer nonzero entries than Q. In addition, the sliding window method is likely to provide matrices with a smaller spectral norm $\|Q_j\|_2$. This allows for bigger time steps during the Krylov approximation, as can be seen in our experimental results.

Experimental results
We coded both algorithms in Table 1 in C++ and ran experiments on a 3.16 GHz Intel dual-core Linux PC. We discuss experimental results that we obtained for Example 1 and Example 2, as well as for Goutsias' model [50] and a bistable toggle switch [51]. Goutsias' model describes the transcription regulation of a repressor protein in bacteriophage λ and involves six different species and ten reactions. The bistable toggle switch is a prototype of a genetic switch with two competing repressor proteins and four reactions. All results are listed in Figure 2. As explained in detail below, we also implemented the method proposed by Burrage et al. [21] in order to compare it to our algorithm in terms of running time and accuracy. Moreover, for finite examples we compare our method to a global analysis, i.e., one where in each step the entire state space is considered. We do not compare our method to Gillespie simulation or to approximation methods based on the Fokker-Planck equation. The former provides only estimates of the probability distribution and becomes infeasible if small probabilities are estimated [52]. The latter type of method does not take into account the discreteness of the molecule numbers and is known to provide bad approximations in the case of small populations, as considered here [53].

Parameters
We fixed the input ε = 10^{-8} of Algorithm sWindow for all experiments (see Table 1, right). We chose the input δ in a dynamic fashion to ensure that in the j-th step we do not lose more probability than $10^{-5} \cdot \tau_j / (t_r - t_0)$ by restricting to significant states; that is, we decrease δ until, after line 4 of Algorithm sWindow, the set $S_j$ contains at most $10^{-5} \cdot \tau_j / (t_r - t_0)$ less probability than the former set $S_{j-1}$. In Figure 2, we list the average value that we used for δ.
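Returning briefly to the Krylov machinery described above, a bare-bones, single-step Arnoldi projection (Eq. (19)) can be sketched as follows; the adaptive substepping and a posteriori error control that Expokit adds on top are omitted here:

```python
import numpy as np
from scipy.linalg import expm

def krylov_expm(A, v, t, m=30):
    """Approximate e^{tA} v by an m-dimensional Arnoldi projection (Eq. (19))."""
    n = v.size
    beta = np.linalg.norm(v)
    V = np.zeros((n, m + 1))                  # orthonormal basis of K_m(A, v)
    H = np.zeros((m + 1, m))                  # upper Hessenberg projection
    V[:, 0] = v / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):                # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:               # happy breakdown: subspace exact
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(m)
    e1[0] = 1.0
    # Only the small (m x m) exponential is computed explicitly.
    return beta * V[:, :m] @ (expm(t * H[:m, :m]) @ e1)

# For the CME: p(t)^T ~= krylov_expm(Q.T, p0, t), since p(t) = p(0) e^{Qt}.
```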
In the sequel, we give details about the parameters used for the results that we obtained for Example 1 and Example 2. For the remaining two examples, we list the corresponding chemical reactions and the parameters that we chose for the results in Figure 2.

Enzyme example
We tried different parameter sets, referred to as pset a)-c), for Example 1 (see Figure 2). For parameter combination a) we have $c_1 = c_2 = 1$, $c_3 = 0.1$, and start with 1000 enzymes and 100 substrates. In this case the number of reachable states is 5151. For parameter sets b) and c) we have $c_1 = c_2 = c_3 = 1$ and start with 100 enzymes and 1000 substrates, and with 500 enzymes and 500 substrates, which yields 96051 and 125751 reachable states, respectively. Note that the parallelogram in Figure 3 is induced by the conservation laws of the system. In general, conservation laws should be taken into account, since otherwise the window may be inconsistent with them, i.e., it may contain states that are not reachable.

Gene expression example
In Figure 2 we present results for Example 2. The difference between parameter set a) and parameter set b), referred to as pset a) and pset b), is that for a) we start with the empty system and for b) we start with 100 mRNA molecules and 1000 proteins. For both variants, we choose rate constants $c_1 = 0.5$, $c_2 = 0.0058$, $c_3 = 0.0029$, $c_4 = 0.0001$. The time steps that we use are determined by the condition given in the section on time intervals. Note that we cannot solve this example using a global method, because the number of reachable states is infinite.
Figure 2 (caption fragment): The column error contains the total error $\eta_r$ (see Eq. (8)); times in sec refers to the running time in seconds; perc. lists the percentage of the total running time spent on the window construction; average wind. size refers to the average number of states in the window.
For the gene expression example, we use four branches: we maximize the number of mRNA molecules by choosing the maximal number of occurrences of reaction $R_1$ and the minimal number of occurrences of reaction $R_3$, and we minimize it with the converse choice; reactions $R_2$ and $R_4$ are irrelevant for this species. Analogously, we maximize (minimize) the protein population by choosing the maximal (minimal) number of occurrences of $R_2$ and the minimal (maximal) number of occurrences of $R_4$.

Discussion
In this section, we discuss our algorithm w.r.t. accuracy and running time, considering different solution methods and different time step mechanisms. Moreover, we compare our method with a global analysis.

Accuracy
The column labeled error in Figure 2 shows the total error $\eta_r$ (cf. Eq. (8)) of the sliding window method plus the uniformization error (which is bounded by ε = 10^{-8}). Using the Krylov subspace method instead yields the same accuracy, because for both uniformization and the Krylov subspace method the error bound is specified a priori. For all examples, the total error does not exceed 1 × 10^{-4}, which means that no more than 0.01 percent of the probability mass is lost during the whole procedure. It would, of course, be possible to add an accuracy check in the while loop of Algorithm sWindow, expand the current window if necessary, and recalculate; but as the method consistently returns a small error, this has been omitted. We also considered relative errors, i.e., approximation errors taken relative to the magnitude of the corresponding probabilities. In order to support our considerations in the window construction section, we carried out experiments in which we exclusively chose the average in line 17 of Algorithm ContDetApprox (see Table 1, left).
More precisely, for the construction of the window we do not consider the deviations in the numbers of reactions but only the average number. In this case, we call ContDetApprox with input 2τ to make sure that, on average, the probability mass moves to the center of the window and not too close to the borders. For this configuration, the total error is several orders of magnitude higher; e.g., for parameter set a) of the enzyme example the total error is 0.0224. Finally, we test the size of the windows constructed in lines 7-10 of Algorithm sWindow. We change Algorithm sWindow by decreasing the size of the window by 5% between lines 10 and 11. In this case, the total error $\eta_r$ increases; for instance, $\eta_r = 0.35\%$ for parameter set a) of the enzyme example. These results substantiate that the size and position of the sliding window are such that the approximation error is small, whereas significantly smaller windows result in significantly higher approximation errors.

Running time
For the time complexity analysis, we concentrate on three main issues: the comparison with a global analysis, the choice of the solution method, and the time step mechanism. We compare the sliding window method with the global method in Figure 6. Observe that the total error of the global uniformization method is smaller (compare the columns labeled error), since the only error source is the truncation of the infinite sum in Eq. (13). In the column with heading #states we list the number of states that are reachable. During the global solution we consider all reachable states at all times, whereas in the sliding window method the average number of states considered during a time step is much smaller. This is the main reason why the sliding window method is much faster. Moreover, in the case of uniformization, the rate for global uniformization is the maximum of all exit rates, whereas for local uniformization we take the maximum over all states in the current window. Note that the global maximum can be huge compared to the local maxima; this explains the bad performance of the global uniformization method. When the Krylov subspace method is used for a global solution, the running times are also higher than those of the local Krylov subspace method (the sliding window method combined with the Krylov subspace method). Again, the reason is that a smaller number of states is considered during the sliding window iteration. Moreover, the matrices $Q_j$ have numerical properties that facilitate the use of bigger, and thus fewer, time steps. The total number of iteration steps needed by the Krylov subspace method within the sliding window method is indeed small when compared to the global Krylov subspace method (on average, around 20 times fewer steps). We now focus on a comparison between our sliding window method and another local method, called the doubling window method. For the latter, we compute the probability vectors in a similar way as Sidje et al. [21]. We start with an initial window and apply the Krylov algorithm. We do not iterate over the time intervals $[t_{j-1}, t_j)$ but use the step sizes of the Krylov subspace method. After each time step, we remove those parts of the window that will not be used for the remaining calculations. We expand the size of the window if the error exceeds a certain threshold. Since the performance of the method depends heavily on the initial window and on the directions in which a window is expanded, we start with the same initial window as the sliding window method and always expand in the directions that are most advantageous for the computation.
For this we used information about the direction in which the probability mass moves, obtained from experiments with the sliding window method. The expansion of a window is realized by doubling the length of all of its edges. We applied the doubling window method to the enzyme example and to the gene expression example. For all parameter sets that we tried, the sliding window method outperforms the doubling window method w.r.t. running time (with an average speed-up factor of 5). The total number of iterations of the Krylov subspace approximation is up to 13 times smaller in the case of the sliding window method compared to the doubling window method (with an average factor of 6.5). Note that for arbitrary systems the doubling window method cannot be applied without additional knowledge about the system; it is in general not clear in which direction the window has to be expanded. Our results indicate that the sliding window method achieves a significant speed-up compared to a global analysis, but also compared to the doubling window method. Moreover, while a global analysis is limited to finite-state systems and the doubling window method requires additional knowledge about the system, our method can be applied to any system where the significant part of the probability mass is located at a tractable subset of states. If the dimension of the system is high, then the significant part of the probability mass may be located at intractably many states, and in this case the memory requirements of our algorithm may exceed the available capacity.

Solution method
During the sliding window iteration, different solution methods can be applied in line 13 of Algorithm sWindow. We concentrate on the uniformization method and on the Krylov subspace method. The running times in Figure 2 (compare the columns labeled sWindow + uniformization with the columns labeled sWindow + Krylov) show that the Krylov subspace method performs better (average speed-up factor of around 1.5). The reason is that the Krylov subspace method is more robust to stiffness than uniformization. For non-stiff systems, uniformization is known to outperform the Krylov subspace method [19,39]. However, since biochemical network models are typically stiff, the Krylov subspace method seems to be particularly well suited in this area.

Time intervals
In order to confirm our considerations in the section on time intervals, we also applied the sliding window method using equidistant time steps. For all examples, using equidistant time steps results in longer computation times than using adaptive time steps (the adaptive variant has an average speed-up factor of 3.5). An adaptive choice of the time steps also has the advantage that we can control the size of the windows and avoid that the memory requirements of the algorithm exceed the available capacity.

Conclusions
The sliding window method is a novel approach to address the performance problems of numerical algorithms for the solution of the chemical master equation. It replaces a global analysis of the system by a sequence of local analyses. The method applies to a variety of chemically reacting systems, including systems for which no upper bound on the population sizes of the chemical species is known a priori. The proposed method is compatible with all existing numerical algorithms for solving the CME, and a combination with other techniques, such as time scale separation [26,27], is also possible. We demonstrated the effectiveness of our method with a number of experiments.
The results are promising, as even systems with more than two million states with significant probability can be solved in acceptable time. Moreover, for examples that are more complex than those presented here, it is often sufficient to consider only a relatively small part of the state space. The number of molecules in the cell is always finite and, usually, a biochemical system follows only a small number of different trends; stated differently, it is rarely the case that in biochemical systems a large number of different scenarios have significant likelihoods. Thus, we expect that the sliding window method can be successfully applied to systems with many chemical species and reactions, as long as the significant part of the probability mass is always located at a tractable subset of states. In addition, further enhancements are possible, such as a splitting of the windows, which will be particularly useful for multi-stable systems. Moreover, we plan to automate our algorithm so that, besides the initial conditions and the set of reactions, no further input from the user is necessary, such as the combinations of reactions that maximize/minimize certain populations.

Authors' contributions
VW and TAH designed the research. VW, RG, MM, and TAH developed the algorithm, and the implementation was carried out by RG and MM. VW, MM, and TAH wrote the manuscript, which has been read and approved by all authors.
9,626.4
2010-04-08T00:00:00.000
[ "Chemistry", "Computer Science" ]
Modelization of Nutrient Removal Processes at a Large WWTP Using a Modified ASM2d Model The biodegradation of particulate substrates starts with a hydrolytic stage. Hydrolysis is a slow reaction and usually becomes the rate-limiting step of the biodegradation of organic substrates. The objective of this work was to evaluate a novel hydrolysis concept based on a modification of the activated sludge model ASM2d and to compare it with the original ASM2d model. The hydrolysis concept was developed in order to accurately predict the use of internal carbon sources in enhanced biological nutrient removal (BNR) processes at a full-scale facility located in northern Poland. Both hydrolysis concepts were compared based on the accuracy of their predictions for the main processes taking place at the full-scale facility. From the comparison, it was observed that the modified ASM2d model gave predictions similar to those of the original ASM2d model for the behavior of chemical oxygen demand (COD), NH4-N, NO3-N, and PO4-P. However, the modified model proposed in this work yielded better predictions of the oxygen uptake rate (OUR) (up to 5.6 and 5.7%), as well as of the phosphate release and uptake rates.

Introduction
In the available literature, the potential use of external substrates for the enhancement of the main Biological Nutrient Removal (BNR) processes has been described, paying special attention to the application of soluble and readily biodegradable substrates [1-4]. Nowadays, the major strategic priorities of European Union (EU) policy strongly support research activities related to the development of innovative environmental technologies for wastewater treatment plants (WWTPs). One of the proposed activities is the use of slowly biodegradable internal carbon (C) sources, such as the particulate substrate (X_S), to enhance nutrient removal from wastewater. The final aim of this activity is to fulfil the regulations imposed on full-scale WWTPs by the EU Directive [5]. In this work, with the aim of studying the effect of these internal C sources on the BNR processes taking place in the WWTP, the process kinetics of biological phosphorus and nitrogen removal were studied both in the full-scale WWTP and in batch tests carried out with actual wastewater from the full-scale WWTP. The information obtained from these tests could be used in two ways: to optimize the WWTP, and to provide guidelines for retrofitting the activated sludge reactors currently operated in these plants. Modeling the effects of X_S at a WWTP implementing BNR processes is complicated by the fact that municipal wastewater is a complex mixture of particles, colloids, and soluble pollutants with variable compositions and concentrations. Because of that, different chemical oxygen demand (COD) fractions have to be quantified in order to adequately characterize the wastewater for its subsequent use as input data for modeling purposes. Ekama and Marais [6] divided the biodegradable COD of the wastewater into two distinct fractions: the readily biodegradable fraction (S_S), which mainly consists of soluble organic compounds, and the slowly biodegradable particulate fraction (X_S), which consists of particles, colloids, and large molecules. The impact of S_S on BNR processes has been extensively investigated [7], but the impact of X_S, which constitutes a major part of the internal C source in the wastewater, on BNR processes has still not been studied sufficiently.
The biodegradation of X_S starts with a hydrolytic process usually described in the activated sludge models [3]. This concept of hydrolysis has been in use for over 20 years in the activated sludge models (ASM) developed by the International Water Association (IWA) Task Scientific Group, but it still requires attention because of its relevance in advanced computer simulation platforms. According to the literature, in a conventional activated sludge process, the five-day biochemical oxygen demand to nitrogen to phosphorus ratio (BOD_5:N:P) needed to avoid nutrient deficiency is 100:5:1 [8]. However, depending on the characteristics of the C source consumed, this value can change [9,10]. A widely used experimental approach to studying hydrolysis is to evaluate the process in different types of batch tests fed with wastewater as an internal C source containing different substrates, mainly biodegradable S_S and X_S, with the reactor seeded with heterotrophic biomass. The experimental results obtained in these tests can then be evaluated by mathematical modeling and advanced computer simulations to describe the hydrolysis process. Finally, the results obtained should be verified in full-scale experiments. Because of that, this work was divided into two stages: a first stage consisting of experimental research, and a second stage consisting of mathematical modeling using advanced computer simulation and actual data from the full-scale WWTP. In the first stage of the study, an innovative procedure for the evaluation of the effect of the X_S fraction on the BNR processes was implemented [11]. The results of the laboratory tests were further used in the second stage to evaluate the hydrolysis process using the activated sludge model ASM2d [3]. This model takes into account biological carbon removal (biological oxidation to CO_2) as well as nutrient removal processes (biological phosphorus removal by means of biomass storage, and nitrogen removal by nitrification-denitrification), including the denitrifying activity of the phosphorus accumulating organisms. Additionally, based on the dual hydrolysis model concept presented in the literature [12], a modified version of the ASM2d model was used and evaluated in this work with the aim of studying the effective use of the internal C source to enhance BNR processes at the second largest WWTP in northern Poland, the Debogorze WWTP.

Full-Scale WWTP
The Debogorze WWTP (54°34'38.3046" N, 18°25'52.8924" E) was expanded and modernized in June 2009, before the studied periods. This facility has four activated sludge reactors, each with a volume of 12,000 m³, implementing the Johannesburg BNR process configuration, and six secondary clarifiers. The Johannesburg BNR configuration allows the activated sludge to remove nitrogen, by the nitrification-denitrification process, and phosphorus, by means of accumulation inside the biomass as poly-phosphate. The Johannesburg process configuration has a separate anoxic zone for the denitrification of the return activated sludge before it is introduced into the anaerobic compartment. Operating in this way reduces the nitrate load entering the anaerobic compartment, which reduces the competition for the organic substrate between the nitrogen and phosphorus removal processes. This operational configuration leads to an enhancement of the biological phosphorus removal yield. During the fall (September-November) and spring (May) study periods, the Debogorze WWTP operated at temperatures ranging from 15.4-17.8 °C.
The mixed liquor suspended solids (MLSS) concentration was maintained at 4750 g/m³ and the sludge age was 29 d. The monthly average concentrations of the main pollutants in the wastewater are presented in Table 1. More details concerning the wastewater characteristics and operating parameters of the studied plant can be found elsewhere [11,13].

Batch Laboratory Study
Laboratory tests were carried out using activated sludge from the bioreactor of the Debogorze WWTP and actual settled wastewater (SWW) from the average daily time-proportional sampler. According to the procedure described in the literature [15], two kinds of wastewater were used: SWW without any pretreatment, and SWW pretreated with the coagulation-flocculation (C-F) method. The wastewater pretreated with C-F was prepared following the procedure described in the literature [16]; this wastewater only contained the soluble organic fraction. Both wastewaters were used to carry out different types of laboratory batch tests in order to determine the oxygen uptake rate (OUR), ammonia uptake rate (AUR), nitrate uptake rate (NUR), phosphorus uptake rate (PUR), and phosphorus release rate (PRR). The original and the modified ASM2d models were calibrated using the results obtained from the batch tests carried out with the SWW [17,18]. The same set of model parameters was further evaluated using steady-state data from the Debogorze WWTP in order to compare the predictions of the original and the modified ASM2d models at the plant. The organization of the modeling is presented in Figure 1.
Figure 1: Organization of the modeling: operating data quality control and wastewater characterization; estimation of the biomass composition and of the initial conditions for the batch test simulations (STEP 3); calibration of the ASM2d model based on batch test results (STEP 4); validation of the ASM2d model based on batch test results (STEP 5); validation of the ASM2d model based on full-scale results (STEP 6).
The modified ASM2d used in this work considers a two-step hydrolysis process with two rates (k_hyd, k_hyd,r), includes a new component (X_SH), defined as the readily hydrolysable substrate, and adds three new hydrolysis processes of X_SH carried out under aerobic, anoxic, and anaerobic conditions. A scheme of the hydrolysis concept is presented in Figure 2, and a simplified numerical sketch of the two-step scheme is given after the next subsection.

Analytical Methods
The wastewater characterization was carried out according to the standard methods [19]. Most of the total and soluble fractions of the COD, nitrate, and phosphate were characterized using a Xion 500 spectrophotometer (Hach Lange GmbH, Düsseldorf, Germany). Only the total nitrogen concentration was determined using a TOC/TN analyzer (SHIMADZU Corporation, Tokyo, Japan). The gravimetric analyses were also performed in accordance with the Standard Methods procedure [19].
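As announced above, the two-step hydrolysis concept can be illustrated with a deliberately simplified, first-order surrogate of the X_S → X_SH → S_S chain, integrated with scipy. The actual modified ASM2d uses Monod-type surface-saturation (K_X) kinetics and electron-acceptor-dependent reduction factors, so this is only a qualitative sketch; the rate constants are those determined later in this work:

```python
import numpy as np
from scipy.integrate import solve_ivp

# First-order surrogate of the two-step chain X_S -> X_SH -> S_S, using the
# rate constants determined in this work (k_hyd = 2.0 d^-1 for the slow step,
# k_hyd,r = 10 d^-1 for the fast step). The actual modified ASM2d uses
# surface-saturation (K_X) kinetics and electron-acceptor reduction factors.
k_hyd, k_hyd_r = 2.0, 10.0       # d^-1

def rhs(t, y):
    xs, xsh, ss = y              # mg COD/L
    r1 = k_hyd * xs              # X_S  -> X_SH (rate-limiting)
    r2 = k_hyd_r * xsh           # X_SH -> S_S
    return [-r1, r1 - r2, r2]

sol = solve_ivp(rhs, (0.0, 3.0), [100.0, 0.0, 0.0], dense_output=True)
t = np.linspace(0.0, 3.0, 7)
print(sol.sol(t)[2])             # readily biodegradable COD released over 3 d
```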
Results and Discussion
The application of the C-F method removed the colloidal and particulate COD fractions from the SWW. This removal resulted in a reduction of the process rates, ranging in most cases from 10-60%. Similar results were previously obtained and reported in the literature [11]. The comparison of both examined models' predictions vs. the sample results for the principal processes is presented in Figure 3. The ASM2d predictions were calibrated to the experimental NUR tests by fitting two parameters: the maximum growth rate of heterotrophs (µ_H) and the hydrolysis rate constant (k_h). No further fitting was required to calibrate the NUR in the anoxic stage of the PRR/anoxic PUR test (Figure 3a,b). Six parameters were fitted to calibrate the PRR and PUR tests, including the rate constant for storage of PHA (q_PHA), the half-saturation coefficient of S_A for PAOs (K_SA,PAO), the polyphosphate saturation coefficient for PAOs (K_PP), the reduction factor of the anaerobic hydrolysis (η_fe), and the particulate COD saturation coefficient (K_X). The nitrogen removal process, based on the PRR and PUR batch tests, was calibrated by fitting the maximum growth rate of autotrophs (µ_A) and the NH_4-N saturation coefficient (K_NH4,A). The critical step of the OUR batch test fitting was to adjust the average values of both the stoichiometric and kinetic parameters in the modified ASM2d. To do this fitting, 10 different scenarios were evaluated. The initial ARD were determined; the ARD of the COD profile was lower than 15% in all cases, whereas higher ARD were obtained when predicting the OUR values, with values up to 45%. The very high ARD of the OUR could be explained by the low consumption rates, about 0.02 g g⁻¹ VSS h⁻¹, which lead to high ARD even when the absolute variations are small [20]. In order to reduce these errors, the OUR predictions were optimized using the Nelder-Mead method [21]. Finally, using the same sets of model parameters from the previous batch tests, the modified ASM2d was further evaluated by steady-state and dynamic simulations of the OUR batch test (Figure 3c,d).
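The calibration workflow just described, with ARD as the goodness-of-fit measure minimized by Nelder-Mead, can be sketched as follows. The percent mean absolute relative deviation is an assumed (standard) definition, since the formula is not spelled out here, and simulate_our is a hypothetical wrapper around the ASM2d simulation:

```python
import numpy as np
from scipy.optimize import minimize

def ard(measured, predicted):
    """Average relative deviation in percent (assumed standard definition)."""
    measured = np.asarray(measured, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return 100.0 * np.mean(np.abs(measured - predicted) / np.abs(measured))

# `simulate_our` is a hypothetical wrapper that would run the (modified)
# ASM2d model with candidate parameters and return the predicted OUR series.
def objective(params, t, our_measured, simulate_our):
    return ard(our_measured, simulate_our(params, t))

# Usage sketch, e.g. refining (mu_H, k_h) from typical ASM2d default values:
# result = minimize(objective, x0=[6.0, 3.0],
#                   args=(t, our_measured, simulate_our),
#                   method="Nelder-Mead")
```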
From these tests, the values of the hydrolysis rate constants (k_hyd and k_hyd,r) of the two-step hydrolysis model proposed in the modified ASM2d were mathematically determined, their values being 2.0 and 10 d⁻¹, respectively. In the case of the conventional ASM2d model, the single hydrolysis rate constant was 2.5 d⁻¹. The very different hydrolysis rates of the particulate slowly biodegradable substrates (k_hyd = 2.0 d⁻¹ vs. k_hyd,r = 10 d⁻¹) indicate that two different fractions can be identified as products of the hydrolysis stage. Because of that, the mechanism of the hydrolysis stage is better described by taking into account two separate rates. When identifying these two stages of the hydrolysis, the model predictions of the COD and nutrient profiles were more accurate. This higher accuracy in the nutrient profiles can be explained by the fact that the nutrient removal rates and yields clearly depend on the nature of the substrate used [22,23]. In this way, the use of the modified ASM2d model proposed in this work is an interesting option for obtaining more accurate predictions at WWTPs. These better predictions were expected in all the biological processes taking place in the WWTP, but a more significant effect was observed on the nutrient removal processes, mainly on phosphate removal, because the organic substrate uptake and the subsequent poly-phosphate accumulation inside the biomass are very sensitive to the characteristics of the organic C source used [24]. Concerning the stoichiometric parameters, they were very similar in all cases. The heterotrophic biomass yield (Y_H) obtained in the modified ASM2d model was 0.68, the corresponding value in the original ASM2d model being slightly lower, 0.625. The similar results obtained when using both models indicate that the model modifications carried out in the modified ASM2d proposed in this work did not significantly influence the stoichiometry of the processes. The predictive capabilities of the original and modified ASM2d were assessed by the ARD, which were much lower for the simulations with the modified ASM2d. For comparison, the ARD for the original vs. modified ASM2d simulations of the OUR tests accounted for 11.3-29.5% and 18.9-45.8% vs.
9.7-15.8% and 11.8-30.3% for the settled wastewater without pretreatment and after coagulation-flocculation, respectively. As a summary, the most relevant stoichiometric and kinetic coefficient values of the original and the modified ASM2d models are presented in Table 2. After the calibration step, the quality of the predictions of the original and modified ASM2d was determined by evaluating the average relative deviation (ARD). Table 3 contains the ARD obtained with the original and the modified ASM2d models for the main processes evaluated with the SWW, with and without the C-F pretreatment, at the Debogorze WWTP. As can be seen in Table 3, very similar results were obtained with both models when evaluating the COD consumption, phosphate release, and ammonia utilization, whereas the largest differences were found for nitrate utilization (up to 9.6%), phosphate uptake (up to 11.3%), and oxygen uptake (up to 5.7%). These results indicate that the modified model yields more accurate predictions.

Conclusions
In the present work, a modified ASM2d model, based on a novel two-stage hydrolysis concept, has been evaluated. From the obtained results, the following conclusions can be extracted:
• The modified ASM2d model presented in this work allows reaching more accurate predictions of the behavior of the activated sludge systems operating in a full-scale WWTP than the original ASM2d model. Additionally, a more accurate assessment of the wastewater biodegradability in terms of the COD fractions was obtained, which is crucial for the modeling of BNR processes.
• When comparing the original and the modified ASM2d models, it was observed that the largest differences in the ARD values were obtained in the predictions of nitrate utilization (up to 9.6%), phosphate uptake (up to 11.3%), and oxygen uptake (up to 5.7%).
• When comparing the original and the modified ASM2d, only minor effects were observed on the behavior of COD consumption, phosphate release, and ammonia utilization.
• The effective use of internal C sources, such as the slowly biodegradable substrate (X_S), for denitrification and biological phosphorus removal may help to reach the quality standards established in the EU regulations for large WWTPs.
• From the modeling results, it was observed that the colloidal and particulate organic fractions play a crucial role in the enhancement of the denitrification and EBPR processes at the Debogorze WWTP.
5,049.8
2018-12-01T00:00:00.000
[ "Engineering" ]
Geometric control of diffusing elements on InAs semiconductor surfaces via metal contacts Local geometric control of basic synthesis parameters, such as elemental composition, is important for bottom-up synthesis and top-down device definition on-chip but remains a significant challenge. Here, we propose to use lithographically defined metal stacks for regulating the surface concentrations of freely diffusing synthesis elements on compound semiconductors. This is demonstrated by geometric control of Indium droplet formation on Indium Arsenide surfaces, an important consequence of incongruent evaporation. Lithographically defined Aluminium/Palladium metal patterns induce well-defined droplet-free zones during annealing up to 600 °C, while the metal patterns retain their lateral geometry. Compositional and structural analysis is performed, as well as theoretical modelling. The Pd acts as a sink for free In atoms, lowering their surface concentration locally and inhibiting droplet formation. Al acts as a diffusion barrier altering Pd's efficiency. The behaviour depends only on a few basic assumptions and should be applicable to lithography-epitaxial manufacturing processes of compound semiconductors in general.
Compound semiconductors are central for optoelectronic, mobile, and power technologies and strong candidates for future quantum technologies [1-9]. Synthesis and epitaxy rely on controlling the concentration ratio of each involved element at the growth surface, as well as other parameters like substrate temperature and crystal structure. Local geometric control of the growth is important, as it allows bottom-up synthesis in specific regions when manufacturing a complete device on a chip. Various concepts have been developed attempting to resolve this, such as template-assisted growth, metal seed particle-induced growth, or the use of surface crystal facets [10-16]. While proven highly useful, these concepts all rely on altering the growth substrate by either blocking or promoting synthesis in specific areas. An interesting alternative is to locally vary the concentration of free (diffusing) synthesis elements on the surface without altering the growth substrate directly, as we pursue in the present work.
Using lithographically defined metal stacks for this purpose would be highly desirable, as they are already an intimate part of device fabrication in the form of electrical contacts. The formation of droplets during device manufacturing steps can also be unwanted, as it can lead to poor control of the growth quality and a lack of lateral control [15,44,45]. In this paper, we explore the geometric control of the surface concentration of freely diffusing In atoms on the InAs(111)B surface. This is evidenced by a dramatic change of In-droplet formation during annealing, observed around lithographically defined Al/Pd stacks. This results in droplet-free zones at much higher temperatures than normally possible in vacuum. The behaviour can be controlled by the Al/Pd stack composition and position, as well as by annealing time and temperature. The deposited metal stacks remain localised within their original boundaries during all process steps. A theoretical model is developed, which explains our observations based on simple and general assumptions. The As evaporates rapidly at elevated temperatures, leaving free In to diffuse on the surface. The Pd layer acts as a sink for these free In atoms. As a result, the overall In density around the metal patterns is lowered, impeding the formation of droplets. Figure 1a depicts the concept. Adding Al between the InAs substrate and the Pd reduces the vertical alloying of Pd into the substrate and alters the efficiency of Pd to act as a sink. Finally, we determine limits for this effect as governed by the finite amount of In incorporation possible into the Pd. The general concept presented here opens up significant opportunities to design manufacturing processes for compound semiconductor devices, pushing up annealing temperatures for non-congruent systems and simultaneously controlling the formation of structures across the surface.

Indium droplet formation on InAs near metal patterns as a function of temperature
To observe the influence of metal patterns on the formation of In droplets on InAs(111)B substrates due to annealing, we implemented circular and square-shaped metal stacks on the surface in varying sizes (diameters and side lengths 20-160 μm). The metal stacks consisted of 5 nm Al topped with 20 nm Pd. The patterns were repeated over the sample (Fig. 1b). Every annealing step was performed in an ultra-high vacuum (UHV) chamber with a base pressure in the 10⁻¹⁰ to low 10⁻⁹ mbar regime. For all samples, the native oxide on the surfaces was removed by a standard cleaning method of annealing at 400 °C using atomic Hydrogen for ~20 min [46], after which the Hydrogen was turned off and the temperature was lowered to 350 °C. At this point, no droplets were found. To observe the temperature dependency of In-droplet formation, annealing was carried out on separate samples at 500, 550, 600, and 650 °C with a heating rate of 3 °C/s. The temperature was directly quenched upon reaching the target value. Subsequently, the samples were studied using Scanning Electron Microscopy (SEM) (see "Methods" for more details).
Figure 1: a Schematic of the concept: In atoms diffuse on the surface, either forming droplets in the droplet zone (DZ) or diffusing into the metal pattern in the droplet-free zone (DFZ), depending on their position on the substrate. b SEM image of the lithographic pattern after metal evaporation (5 nm Al and 20 nm Pd) and lift-off. c SEM image after heating the sample rapidly to a temperature of 600 °C; the coloured lines indicate the three distinct zones forming due to the presence of the metal pattern. d Two adjacent metal patterns with coalescent droplet-free zones.
We find that nm-sized droplets start to appear after annealing to 550 °C. μm-sized droplets form at 600 °C and 650 °C. Three distinct areas are found on the substrate: (1) the metal zone (MZ) with the deposited Al/Pd-stack, (2) the droplet-free zone (DFZ) on the InAs(111)B substrate around the metal and (3) the droplet zone (DZ) displaying μm-sized In-droplets further away from the metal stacks. Figure 1c shows an overview of the location of all areas on the sample surface. In the DZ, we observe μm-sized In-droplets forming with an average size of 1.43 ± 0.53 μm after annealing to 600 °C. The size of the DFZ is determined by fitting an ellipse to the border of the surrounding area displaying In-droplets. By calculating the dependency between MZ and DFZ (excluding regions exhibiting droplets), we find that r_i = 2.5 · r_m (r_i: radius of the DFZ, r_m: radius of the metal pattern), see Fig. 1e. In general, around 14.0% of the combined DFZ and MZ indicated in Fig. 1c (blue, dashed outline) is covered by the Al/Pd-stack and about 86.0 ± 3.8% belongs to the DFZ. This relation is independent of the shape and size of the metal pattern. For both circular and square metal stacks, the DFZ appears to be circular and centred around them; see Supplementary Note 2 for more details. The compact metal stacks employed in our study allow for a direct link between the radii of the individual zones. This provides an easy connection to the theoretical model proposed below. For more complicated or elongated structures, see the Supplementary Information. DFZs coalesce and increase slightly (see the blue outlined area in Fig. 1d) when the respective metal stacks are positioned close enough. We do not see any long-range dependencies between different metal stacks when the distance increases further. However, implementing a smart layout of metal structures over a larger area yields the possibility of avoiding any droplet formation for processes involving sample temperatures significantly higher than normally possible. Finally, when annealing samples to above 650 °C, the DFZ around the metal pattern vanishes (see Supplementary Fig. 1). Instead, droplets several tens of μm in size form, move along the surface and coalesce. As will be discussed below, the disappearance is likely due to an In-saturation of the Pd, eliminating the sink effect. This explanation would also indicate that prolonged annealing at low temperatures should reduce the ability of the Pd to inhibit droplet formation, which is indeed observed in the Supplementary Video and discussed in Supplementary Note 7.
Morphology and composition at the metal-semiconductor boundary
Atomic Force Microscopy (AFM) measurements were performed on the border of the Al/Pd-stack and InAs surface before and after annealing to 600 °C (see Fig. 2a-c). It is observed that upon heating, the boundary between the Al/Pd-stack and the InAs stays intact.
The size and outline of individual metal patterns appear laterally unaltered on the surface. However, the annealing induces a substantial morphological transformation inside the metal stack itself. Prior to annealing, its surface is flat with a roughness below a few nm and a total height of 30 nm. After annealing to 600 °C, μm-sized protrusions and holes are formed (up to 80 nm in height variation). Their level is both above and below that of the InAs substrate, respectively, indicating that vertical alloying has occurred with a significant rearrangement in the Al/Pd-stacks. To gain a better understanding of the chemical and elemental changes induced in the metal and semiconductor, synchrotron-based X-ray Photoemission Electron Microscopy (XPEEM) and μ-X-ray Photoemission Spectroscopy (XPS) measurements were performed in UHV directly after sample treatment. The experimental setup is explained in detail in the "Methods". XPEEM measurements show the presence of In and As in the metal after annealing, with a substantial variation in both amount and chemical environment (see Supplementary Note 3). Pure In was detected within some areas (core level peaks similar to the droplets found on the InAs substrate), as well as in areas where Pd, Al and As were present (Fig. 2d). While Pd-In alloying is consistent with the binary phase diagram, intermixing of Al and In is not 47,48. However, both Pd and Al are observable on the surface of the metal, indicating that intermixing of the original Pd- and Al-layer must have occurred (Fig. 2f). Otherwise, only the top Pd-layer would be measurable in surface-sensitive XPS. Thus, our measurements show a large intake of In into the metal as well as intermixing of Al and Pd, which is consistent with the roughening observed in AFM and SEM. In is likely binding primarily to Pd as well as forming areas of pure, metallic In. Although some of it originates from the InAs below the Pd, a significant In-intake from the surrounding surface is possible. We can roughly estimate the intake of In into the Pd of a metal stack (see Supplementary Note 4 for the dominating domain in the field of view), giving a ratio of about 0.1 In to Pd atoms. Importantly, we observe no traces of Al or Pd atoms outside the original metal pattern via μ-XPS after annealing, confirming the lateral integrity of the layout once more.
Figure 2 (caption fragment): The intensity for all spectra was scaled for better comparison (for the original intensities, see Supplementary Fig. 6). Source data are provided as a Source Data file.
Theoretical modelling of the observed DFZ
The formation and shape of the DFZ can be understood based on a few basic assumptions: First, an excess of free In atoms is present on the surface when annealing the sample above the congruent evaporation temperature. Second, alloying of In with Pd is favourable; thus, areas with deposited Pd will act as a sink for free In atoms until saturation occurs. The formation and size of the DFZ can now be modelled by a mass-balance and nucleation approach. We use mass balance to estimate the surface decomposition rate F at which In (and As) atoms are released from the InAs surface during annealing. Due to the low vapour pressure of In 49,50, we assume that the released In stays on the surface and, if close enough to the Al/Pd-stack, diffuses into the Pd. The atomic fraction of In in Pd is given by x_In = N_In/(N_In + N_Pd), where N_In is the number of In and N_Pd the number of Pd atoms. The latter is given by N_Pd = π r_m² h_Pd ρ_Pd, where r_m is the radius, h_Pd the thickness and ρ_Pd the number density of the Pd-layer. Similarly, N_In = π(r_i² − r_m²) F t, where r_i is the radius of the area of interest (depicted in Fig. 1c) and t the time it takes to reach x_In. Combining both expressions with x_In results in the proportionality between r_i and r_m that is experimentally observed and illustrated in Fig. 1e. From the fraction of In in the In/Pd alloy (x_In = 0.1), the time (t = 20 min) and the initial thickness of the Pd-layer (h_Pd = 20 nm), we can estimate the surface decomposition rate F = 2.4 × 10¹² cm⁻² s⁻¹ (see Supplementary Information for experimental values).
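As a quick plausibility check, the mass-balance estimate above can be reproduced numerically. The sketch below uses the values quoted in the text (x_In = 0.1, t = 20 min, h_Pd = 20 nm, r_i = 2.5 r_m); the Pd atomic number density is not stated in the paper and is taken here as the standard literature value, so it is an assumption of this sketch. Note that r_m cancels out of the final expression.

```python
# Minimal sketch of the mass-balance estimate of the surface decomposition rate F.
# All inputs are the values quoted in the text except rho_Pd, which is an assumed
# literature value for the atomic number density of palladium.
x_In   = 0.1        # atomic fraction of In in the Pd layer (XPS estimate)
t      = 20 * 60    # annealing time in s (20 min)
h_Pd   = 20e-7      # Pd layer thickness in cm (20 nm)
rho_Pd = 6.8e22     # Pd number density in atoms/cm^3 (assumed literature value)
ratio  = 2.5        # observed r_i / r_m

# With N_Pd = pi*r_m^2*h_Pd*rho_Pd, N_In = pi*(r_i^2 - r_m^2)*F*t and
# x_In = N_In/(N_In + N_Pd), the metal radius r_m cancels and
# F = x_In/(1 - x_In) * h_Pd * rho_Pd / ((ratio**2 - 1) * t).
F = x_In / (1.0 - x_In) * h_Pd * rho_Pd / ((ratio**2 - 1.0) * t)
print(f"surface decomposition rate F ~ {F:.1e} atoms cm^-2 s^-1")  # ~2.4e12
```

With these inputs the sketch returns F ≈ 2.4 × 10¹² cm⁻² s⁻¹, matching the value quoted in the text.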
To explain and understand the formation of the DFZ, we apply a nucleation approach. A reduced set of rate equations is used 51, where we consider the concentrations of In atoms n_In and stable droplets n_c. Meanwhile, a continuous flux of In atoms into the Pd is present. As a result, a concentration gradient arises, depleting the area around the metal of In atoms and therefore suppressing the formation of droplets. Subsequently, we observe the formation of the DFZ. The reduced rate equation model is explicitly outlined in Supplementary Note 5. In Fig. 3, we show a fit of the model to the observed r_i and droplet density far away from the Pd-layer of radius r_m = 40 μm. It is evident that the droplet density is essentially zero for r < r_i and approaches a constant value for larger r, in accordance with our experimental observations.
Adding functionality by changing the metal stack
The functionality of the lithography pattern can be tuned by modifying the thickness of the layers in the metal stack. To exemplify this, patterns with different Al thickness were implemented. First, we investigate samples with a 15 nm-thick Al-layer topped by 20 nm Pd. The increased thickness of Al suppresses the formation of the DFZ around the metal when annealing to 600 °C. In contrast, it causes an accumulation of larger In-droplets on the Al rim along the Pd pattern (see Fig. 4a, b). Due to the nature of the optical lithography and subsequent metal evaporation process, a slight edge of Al protrudes beyond the boundary of the Pd pattern already for the 5 nm deposition of Al prior to any annealing step (inset and white arrows in Fig. 2a). With an increased thickness of 15 nm, the Al-layer imposes a larger boundary to be overcome by free surface In atoms in order to reach the Pd and alloy. In atoms can diffuse back from the Al-layer onto the substrate, assuming a lower diffusion coefficient on Al-oxide than on InAs, and no Al-In alloying occurs. Accordingly, the In concentration on the InAs surface is not substantially changed by the metal, allowing droplet formation everywhere. Slight contrast changes can be detected along the edge of the Pd (see the metal area in Fig. 4b). We attribute this to some In atoms overcoming the Al barrier and diffusing into the Pd. We do not observe vertical alloying of the metal into the InAs(111)B substrate as seen on the previous samples (Fig. 2c). The three times thicker Al-layer imposes a boundary for substrate or Pd atoms to penetrate through. Thus, Al acts as a barrier preventing alloying of Pd with free In atoms. The complete removal of Al is also investigated. For this, circular (diameter 30 μm) and square metal patterns (side length 30 and 15 μm) are fabricated by depositing only 20 nm Pd. After annealing to 600 °C, the DFZ emerges once more. Additionally, Fig.
4c shows that the metal pattern alloys almost to the full extent into the InAs(111)B substrate (compare with Supplementary Fig. 8 for 5 nm Al). By fitting an ellipse to the boundary of the DFZ and comparing the ratio between the DFZ and the area of the deposited metal, we observe the same dependency as for the metal stack consisting of 5 nm Al and 20 nm Pd, with the DFZ occupying about 84.2 ± 0.7% of the overall area of interest. Interestingly, the presence of a crystal line defect within the InAs substrate does not appear to alter the DFZ, as can be seen in Fig. 4c. While defects can nucleate and accumulate droplets, we do not observe any additional droplets in the DFZ due to this crystal imperfection. This indicates that the control of In droplet formation via metal patterns is robust towards significant crystal defects in the InAs substrate. Based on these results, we state that the presence of Pd in the metal stack is solely the driving force for the creation of the DFZ. However, introducing a thin layer of Al can mitigate the otherwise strong alloying of Pd vertically into the InAs substrate. This is relevant for device fabrication, where vertically stacked functional layers cross each other. Thicker layers of Al can also suppress the formation of the DFZ, a result of the immiscibility of Al and In and a significant decrease of In diffusion across Al. The presence of Pd then no longer affects free In atoms. Instead, the density of In at the Al boundary increases, leading in the end to droplet formation at the rim. By adopting a new design for the metal pattern, the DFZ can be increased by 5.6 ± 0.1% compared to the original pattern. An example can be seen in Fig. 4d, which again exhibits 5 nm Al below 20 nm Pd. This can be attributed to the opening of the inner part of the metal pattern. Therefore, we can conclude that the alloying of the excess In around the metal only affects the border region. The inner area of the pattern solely alloys with the underlying InAs substrate. Otherwise, we would see a decrease in the size of the DFZ region for the new layout. Additionally, In-droplets accumulate on the edge of the metal patch. The new pattern was fabricated via e-beam lithography, in contrast to the optical lithography used before. This results in a structural deviation of the edge of the metal stacks and can lead to nucleation points for In-droplets, as well as a larger barrier to overcome, as the width and border of the Al-layer might be different. The effect is the In accumulation at the metal edge discussed above.
Outlook
Control over droplet formation, a common phenomenon on compound surfaces, is fundamentally interesting and essential for epitaxial processes. We use this to demonstrate the significant possibilities for bottom-up structure formation by geometrically controlling the elemental surface concentration on an InAs surface using metal stacks. The observed effects should be transferable to other semiconductor compounds such as InP, InSb, GaAs or GaN, where droplet formation above the congruent melting temperature is also a naturally occurring phenomenon 31,38,52,53. Metal stacks containing Au, Ti or Pt are commonly employed as electrical contacts on electronic devices and can serve as alloying partners for free In [54][55][56] or Ga 56-58 surface atoms, respectively. The wide range of metal layering should allow for delicate local tuning of growth parameters by varying composition, material stacking sequence, thickness, and pattern shape.
The possibility of designing structures with different functionalities on a single chip is an important feature for more complex devices [5][6][7][8][9]. For example, varying the composition of compound semiconductors enables tuning of the band gap, which is relevant for optical and photovoltaic applications that need to address several wavelength regions simultaneously. By adding a different group V material and annealing, the III-droplets can also serve as nucleation centres for other compound semiconductors. The phenomena observed in this study are robust with regard to different lithography approaches, varying pattern shapes, and the presence of crystal defects. Furthermore, a relation to synthesis processes can be made as the investigated temperature range is suitable for the growth of InAs-based nanowires 39,59, thin films 60 and quantum dots 61. This study is also important for epitaxy in the presence of pre-patterned metal contacts for future, more complex growth on top of gate stacks on Si. Thus, our concept opens an important route to combine lithographic definition and synthesis using standard semiconductor fabrication techniques already in use. As the concept is based on a few general assumptions, it should be widely applicable to nano- and micro-structured semiconductor synthesis.
Materials
The substrates for all measurements are commercially available n-type InAs(111)B wafer pieces from Wafer Technology. By employing optical and e-beam lithography, we implemented metal stacks in different shapes and sizes (details in the "Results" section) after removing the native InAs oxide via buffered oxide etching. The patterns exhibited different combinations of an Al wetting layer and an on-top Pd-layer (Al/Pd: 5/20 nm, 15/20 nm, 0/20 nm) after metal evaporation.
Temperature treatment
To induce In-droplet formation, the samples were heated to 400 °C and exposed to atomic Hydrogen provided by a Hydrogen cracker from MBE Komponenten in UHV to remove the native InAs oxide. Subsequently, the sample temperature was ramped up quickly to the desired target value of 500-650 °C (heating rate around 3 °C/s). After reaching the relevant temperature, the heating current was cut off to prevent the formed In-droplets from dispersing (cooling rate around 1.5 °C/s). All temperatures were measured with a pyrometer and an emissivity setting of 0.4. The effects described in the paper are observed for each stack, although temperature variations over the sample result in different droplet sizes.
Analysis techniques
Ex situ measurements were performed in standard imaging mode in a Hitachi SU 8010 SEM. Furthermore, AFM measurements at ambient pressure were done utilising the Nanowizard II from JPK in intermittent contact mode with a highly n-doped Si cantilever (PPP-NCHR from Nanosensors) with a nominal resonance frequency of 330 kHz and a force constant of 42 N/m. To study the formation and movement of In-droplets near the metal edge in situ, an aberration-corrected Spectroscopic PhotoEmission and Low Energy Electron Microscope (SPE-LEEM) connected to the MaxPEEM beamline at MaxIV was employed. Here, a movie was recorded in mirror mode (start voltage −0.1 eV), keeping the sample at a temperature of 550 °C for 24.5 min. To analyse the composition of the area of interest, XPEEM maps of the In 4d, As 3d and Al 2p core levels were taken at a photon energy of 100 eV (150 eV for Al 2p). A small cross-section of Pd in the low energy regime did not allow for similar XPEEM measurements for any Pd core level.
However, μ-XPS spectra were additionally obtained before and after the annealing at distinct positions on the sample and the metal for the Pd 3d (photon energy: 430 eV), In 4d (100 eV), As 3d (100 eV) and Al 2p (200 eV) core levels. Reference images were taken in mirror mode and with the Low Energy Electron Microscope.
Data availability
The XPEEM, XPS, AFM and SEM data generated in this study have been deposited in the Figshare database under https://doi.org/10.6084/m9.figshare.23560146.
5,842.2
2023-07-27T00:00:00.000
[ "Materials Science" ]
Effect of Adding Polypropylene Fibers in Metakaolin-Based Geopolymer Concrete
• Adding fibers to geopolymer concrete reduces its brittleness and improves its strength.
• The polypropylene fiber content boosts the compressive strength of geopolymer concrete.
• The density of geopolymer concrete was increased by adding polypropylene fibers.
• The workability of geopolymer concrete was decreased by adding polypropylene fibers.
Geopolymer is a binder material that was created as a result of efforts to decrease Portland cement's negative environmental effects. Geopolymer concrete shares certain properties with ordinary concrete, including brittleness. Like ordinary concrete, geopolymer concrete cracks and fails when exposed to stresses. The purpose of adding fibers to geopolymer concrete is to overcome the matrix's brittleness and enhance its strength (particularly flexural strength). This study used metakaolin, a range of alkaline activators, and different quantities of polypropylene fibers to produce geopolymer concrete. Metakaolin's chemical composition and the workability, density, and flexural and compressive strength of the geopolymer concrete were all examined for the purpose of determining the effect of polypropylene fibers on geopolymer concrete. Polypropylene fibers were added to the mixes at various percentages of 0 %, 0.5 %, and 1 % of the total volume of concrete. The results of the experiments showed that increasing the polypropylene fiber content to 0.5 % boosts the compressive strength of geopolymer concrete. On the seventh day, the compressive strength increased by 21 %.
The density of geopolymer concrete was increased by adding polypropylene fibers, and there was a decrease in workability with the different fiber ratios.
Introduction
Geopolymers are green materials since they are made from minimally processed natural ingredients or industrial by-products, lowering their carbon footprint [1]. Geopolymers have attracted a lot of attention because of their quick strength growth [2], corrosion resistance [3], superior chemical resistance [4], low shrinkage rate, and freeze-thaw resistance. To make geopolymer concrete of the needed strength, several mix proportioning methods based on the type of work, availability, quality of materials, field conditions, as well as workability and durability requirements are used. Although geopolymers have numerous advantages over OPC, they also exhibit OPC-like strain failure behavior [5,6]. Fibers have been added to concrete to improve a range of concrete properties, including fracture resistance, ductility and fatigue resistance, as well as impact and wear resistance [7,8]. The addition of fibrous elements to concrete improves its structural integrity. Recent research has discovered that reinforcing concrete with polypropylene, nylon, or steel fibers can lower shear and tensile loads in critical structural regions [9,10]. The addition of twisted polypropylene bundles to OPC concrete improves its mechanical properties without increasing density [10]. In addition, adding nylon and polypropylene fibers to OPC concrete improved its engineering qualities, specifically its split tensile strength [11]. Similarly, adding fibers to concrete can greatly improve its flexural strength [12]. In an alkaline environment, Poly-Vinyl-Alcohol fibers, on the other hand, are extremely stable. Recent research has shown that these fibers have a good connection to geopolymer matrices [13] and may be used to create composites with better impact toughness [14] and superior freeze-thaw cycle resistance [15]. Metakaolin was used as the base material in this study. In a previous study, the properties of geopolymer concrete based on metakaolin were improved by replacing it with certain percentages of ordinary cement [16]. In the present work, polypropylene fibers were used to reinforce the geopolymer concrete. A mixture of sodium silicate solution and sodium hydroxide solution was utilized to react with the aluminum and silicon in the metakaolin to form the paste that joined the aggregates and polypropylene fibers in the combination to form the geopolymer concrete. This paper also investigates the impact of polypropylene fibers on the density, workability, and compressive strength of geopolymers.
Research Significance
Previous research on geopolymer concrete reinforced with polypropylene fiber and based on Iraqi metakaolin has been limited, despite the fact that metakaolin is widely available in Iraq. This study presents preliminary findings from studies using Iraqi metakaolin to make geopolymer concrete with polypropylene fibers.
Materials
The main component is metakaolin, which is sourced from the Dewekla site and meets ASTM C618-12a standards. The chemical composition of metakaolin as determined by the analysis is shown in Table (1), with silicon dioxide (SiO2) accounting for 55.99 %, aluminum oxide (Al2O3) for 38.32 %, iron oxide (Fe2O3) for 1.735 %, and calcium oxide (CaO) accounting for less than 0.7 %.
The calcium silicate hydrate (CSH) gel is formed when metakaolin's silicon dioxide mixes with calcium hydroxide from the cement hydration process, resulting in cementitious compounds appropriate for use in geopolymers. The presence of calcium ions resulted in a rapid reaction time. As a result, the geopolymer will harden quicker and cure faster [17]. A mixture of a 12 M sodium hydroxide solution and a sodium silicate solution is used to make the alkaline solution. NaOH granules (available as flakes and pellets) of 98 % purity were dissolved in water to make the NaOH solution. The characteristics of NaOH are shown in Table (2). According to Table (3), the concentration of the sodium silicate solution is influenced by the ratio of Na2O to SiO2 and H2O. Table (4) shows the parameters of the polypropylene fibers employed in this study.
Mixture design and specimen preparation
To prepare a 12 M NaOH solution, NaOH pellets are dissolved in distilled water in a volumetric flask [18], and the solution is allowed to settle for 24 hours. The NaOH solution and the Na2SiO3 solution are combined after 24 hours [19]. When both are progressively blended and swirled, an exothermic reaction occurs, releasing a large amount of heat, so hand gloves are utilized as a safety precaution. The mixture is allowed to settle for 45 minutes to an hour. Metakaolin and aggregates are first dry-mixed for the geopolymer concrete samples. The alkaline activators are then added to the dry mix, which is then wet-mixed for 3 to 4 minutes. Finally, polypropylene fibers are added to the wet mix in various amounts, namely 0 %, 0.5 %, and 1.0 %. The mix proportions of geopolymer concrete reinforced with polypropylene fibers are shown in Table (5). Fresh geopolymer with or without polypropylene fibers is poured into steel molds with dimensions of (100x100x100) mm cubes, (100x100x400) mm beams, and (100x200) mm cylinders and compacted by a vibrating table. The samples are demolded after being kept in a laboratory environment at 60 °C for 24 hours. After that, the samples are kept under sunlight until the testing day. The weight of the samples was obtained after 7 days to measure the density and water absorption, and they were evaluated in a strength testing machine [20,21]. Figure 2 shows the slump test results of the freshly mixed geopolymer with and without polypropylene fibers. The workability value for the geopolymer mix without polypropylene fibers (PPF0) is 150 mm. The workability value for the geopolymer mix containing 0.5 percent polypropylene fibers (PPF0.5) is 90 mm. PPF1 (geopolymer mix with 1 % polypropylene fibers) has a workability value of 75 mm. The workability trend indicates that as the percentage of polypropylene fibers increases from (PPF0) to (PPF1), the workability values drop. This could be due to the polypropylene strands' ability to obstruct free flow. To summarize, the workability of the mix reduces as the amount of polypropylene fibers grows [22]. Figure 3 illustrates the density of geopolymer concrete after seven days. The density values for PPF0, PPF0.5 and PPF1 are 2180, 2190 and 2184 kg/m3, respectively. Although the density increased in PPF0.5, it was noted that the further increase in the percentage of fibers caused balling and the formation of voids and gaps that led to a decrease in the density of PPF1; in addition, the excessive increase in fiber reduces the weight of the solid particles.
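Referring back to the 12 M NaOH activator preparation described above, a quick mass estimate can be made; the batch volume is not stated in the paper, so the sketch below works per litre and treats the 40.0 g/mol molar mass of NaOH as a standard value and the 98 % purity figure as quoted in the text.

```python
# Back-of-the-envelope sketch: mass of NaOH pellets per litre of 12 M solution.
molarity   = 12.0   # mol/L, target concentration
molar_mass = 40.0   # g/mol, NaOH (standard value)
purity     = 0.98   # pellet purity quoted in the paper

grams_per_litre = molarity * molar_mass / purity
print(f"~{grams_per_litre:.0f} g of pellets per litre of solution")  # ~490 g/L
```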
Figure 4 shows how the inclusion of polypropylene fibers affects the compressive strength of concrete as the age of the concrete increases. At the age of 7 days, the pattern shows a small increase in strength from (PPF0) to (PPF0.5) and a decline in strength for (PPF1). At 7 days, the compressive strengths of (PPF0, PPF0.5, and PPF1) were 24, 29.2 and 27.23 MPa, respectively. In comparison to (PPF0) concrete, the compressive strength increases by 21.67 and 13.45 % for (PPF0.5) and (PPF1), respectively. When polypropylene fibers were added, the compressive strength increased as well. As demonstrated in Figure 4, the compressive strength of (PPF0.5) increased over (PPF0), but decreased for (PPF1). This could be due to the fibers' role in inhibiting the proliferation of microcracks by arresting their formation in the matrix. The addition of fiber increased the compressive strength of each geopolymer mixture. As a result, geopolymer concrete containing polypropylene fibers was more durable than geopolymer concrete without polypropylene fibers [23].
Flexural Strength
At the age of seven days, the flexural strength of all mixtures was assessed; the findings, shown in Figure 5, indicate that the flexural strength increased, with average strengths of 4.62, 5.67 and 5.83 MPa for (PPF0), (PPF0.5) and (PPF1), respectively. This clearly demonstrates that adding polypropylene fibers to GPC improves its bending strength. The mechanical bond between the geopolymer and the fibers was improved, causing an increase in bending strength [24].
Conclusions
The purpose of this study was to investigate the characteristics of geopolymer concrete reinforced with polypropylene fibers. The compressive and flexural strength of geopolymer concrete were influenced by the percentage of polypropylene fibers present. The compressive strength increased to some extent as the percentage of polypropylene fibers increased. At 7 days, the greatest compressive strength, 29.2 MPa, was found for the geopolymer concrete with 0.5 % of fibers. For (PPF1), the flexural strength increased by 26 % when polypropylene fibers were added at 1 % volume. Also, for (PPF0.5), the greatest density was reported to be 2190 kg/m3 after 7 days. (PPF1) had the lowest workability value, 75 mm. Adding polypropylene fiber has a detrimental impact on workability, but it has a favorable impact on strength, reduces porosity and increases density.
2,727.2
2021-12-01T00:00:00.000
[ "Materials Science", "Engineering" ]
Development and Application of Fire Video Image Detection Technology in China's Road Tunnels
A large number of highway tunnels, urban road tunnels and underwater tunnels have been constructed throughout China over the last two decades. With the rapid increase in vehicle traffic, the number of fire incidents in road tunnels has also substantially increased. This paper aims to review the development and application of fire video image detection (VID) technology and its impact on fire safety in China's road tunnels. The challenges of fire safety in China's road tunnels are analyzed. The capabilities and limitations of fire detection technologies currently used in China's road tunnels are discussed. The research and development of fire VID technology in road tunnels, including various detection algorithms, the evolution of VID systems and the evaluation of their performance in various tunnel tests, are reviewed. Some cases involving VID applications in China's road tunnels are reported. The studies show that fire VID systems have unique features in providing fire protection, and their detection capability and reliability have been enhanced over the decades with advances in detection algorithms, hardware and integration with other tunnel systems. They have become an important safety system in China's road tunnels.
Introduction
In order to cope with the rapid growth of vehicle traffic and limited real estate, a large number of highway tunnels, urban road tunnels and underwater tunnels have been constructed throughout China over the last two decades. The statistics show that 15,181 road tunnels had been constructed in China by the end of 2016, and the total length of road tunnels increased from 628 km in 2000 to 14,039.7 km in 2016 [1]. In addition, the complexity and length of the urban road tunnels constructed have also significantly increased. As of the end of 2016, there were 3,520 road tunnels between 1 km and 3 km long with a total length of 6,045.5 km, and 815 tunnels longer than 3 km with a total length of 3,622.7 km [1]. Some examples of long tunnels in China include the Beiheng Tunnel in Shanghai (10 km long) and the Zizhi Tunnel being constructed in Hangzhou (13.9 km long). Over the same period, traffic in China's tunnels also significantly increased. The average daily traffic in some road and underwater tunnels in cities such as Shanghai, Nanjing and Zhongqi reached 150,000 in 2017.
One of the negative consequences of the increase in the number of tunnels and traffic is that fire incidents in China's road tunnels, especially incidents involving loss of life, have substantially increased. For example, the Huishan Tunnel
Challenges of Fire Detection in China Road Tunnels
As required by China's standards and codes [15], road tunnels with a length of 1,000 m or longer must be equipped with fire detection systems. However, unlike other applications, the challenges for the use of fire detection systems in tunnels are significant. Fire incidents in tunnels are attributed to increased traffic, careless driving behavior, vehicle failure, inadequate tunnel management, inadequate safety rules on vehicles, the length and complexity of the tunnels, and the specific features of tunnel infrastructure [2]. Various fire types, sizes, locations and causes can be encountered in the tunnels. The fire incidents in China's road tunnels can be divided into three major categories: vehicle failure, vehicle traffic accidents, and other factors [2,16,17]. Figure 1 shows a detailed breakdown of the causes of these tunnel fire incidents, including:
• 70 per cent of fire incidents started from vehicle failure, which includes: 22 per cent from vehicle engine fires, 18 per cent from vehicle tire fires, 16 per cent from spontaneous combustion of the vehicle itself, 7 per cent from fires in goods loaded on vehicles, and 7 per cent from electric circuit fires;
• 18 per cent of fire incidents resulted from vehicle crashes;
• 12 per cent of fires had unknown causes.
Most fires in China's road tunnels involve common commuter cars with heat release rates (HRR) ranging from 3-5 MW, which is consistent with tunnel fires in other countries [18]. However, when heavy goods vehicles or buses with an HRR of 20-30 MW or higher are involved, the consequences of fire accidents in tunnels are very severe. Once an incident has occurred, the fire develops and spreads very quickly in the tunnel, and in some accidents has even led to explosions of the goods being transported [16][17][18][19][20][21][22]. The fire incidents also produce large amounts of hot and toxic smoke, which spreads very quickly and very far in the narrow space of tunnels with the assistance of strong winds, resulting in a quick loss of visibility and exacerbating the disaster. Due to the complexity and length of the tunnels, it becomes very difficult for people to evacuate and be rescued during the accidents. Disasters in tunnels increase in scale with time and can last longer without effective fire control and firefighting. In addition, traffic congestion due to accidents in tunnels can result in further disasters, such as multi-vehicle crashes and/or fires.
Considering the difficulties in rescue and firefighting in tunnels, urban tunnels and underwater tunnels with a length of 1,500 m or longer in China are required to be equipped with automatic fire suppression systems [15]. The tunnel is divided into multiple fire suppression zones, each 25 m in length. During a fire incident, the fire suppression system is activated in two adjacent zones. It is required that the fire detection system be able to accurately identify the fire location and activate fire suppression in the correct zones for quick fire control, and, at the same time, avoid any false discharge during daily operation, which would result in traffic congestion in the tunnels.
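To make the zone-activation requirement above concrete, the sketch below shows one way a controller could map a detected fire position to the 25 m suppression zones and select two adjacent zones. The zone numbering, the choice of which neighbour to include, and the function interface are hypothetical illustrations, not taken from the cited standards.

```python
# Illustrative sketch: mapping a detected fire position to 25 m suppression zones.
ZONE_LENGTH_M = 25.0

def zones_to_activate(fire_position_m: float, tunnel_length_m: float) -> list[int]:
    """Return the indices of two adjacent suppression zones to activate."""
    n_zones = int(tunnel_length_m // ZONE_LENGTH_M)
    zone = int(fire_position_m // ZONE_LENGTH_M)
    zone = max(0, min(zone, n_zones - 1))          # clamp to valid zone range
    # Activate the zone containing the fire plus one neighbour (here the next
    # zone downstream, or the previous one at the tunnel end).
    neighbour = zone + 1 if zone + 1 < n_zones else zone - 1
    return sorted({zone, neighbour})

# Example: a fire located 612 m into a 1,500 m tunnel activates zones 24 and 25.
print(zones_to_activate(fire_position_m=612.0, tunnel_length_m=1500.0))  # [24, 25]
```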
The environments in road tunnels are very harsh. Exhaust fumes from vehicles in the tunnels are dirty and partly corrosive. The lighting conditions in tunnels are very complex, including changes of light between day and night, various tunnel light sources, various vehicle lights, light and sun reflections, and shadows. There are various moving vehicles with different sizes and velocities. In addition, the tunnel has strongly fluctuating air, with ventilation speeds ranging from 0 to 5 m/s or higher. The temperature in the tunnel can change significantly daily and by season. For example, over the last few years, the air temperatures in urban road tunnels in some southern regions of China have reached over 60 degrees Celsius in summer and dropped to -10 degrees Celsius in winter, increasing the possibility of faults in vehicle and fire detection systems. It is very challenging for fire detection systems to endure and work well in harsh tunnel environments with low nuisance alarms.
As a result, the International Road Tunnel Fire Detection Research Project [23] suggested that there are generally three performance criteria for the application of a fire detection system in tunnels. The first criterion is detection capability. It is expected that fire detection systems shall be able to detect various fire types at their early stage under challenging incident conditions, and simultaneously locate the fire incident. It is also desirable that the fire detection system be able to provide fire information and communicate well with other fire protection systems regarding the spread, growth and scale of fire and smoke in the tunnels, and aid in directing evacuation and firefighting operations. The second performance criterion requires that the fire detection be reliable and not be affected much by the location of the fire in the incident, the emission of pollutants from vehicles, or blowing ventilation air. The third criterion is that the fire detection system should work properly in the harsh tunnel environment with limited maintenance requirements, and its nuisance alarm rate should be established at an acceptable level. Maciocia reported that there are eleven detailed performance requirements for linear heat detection systems in road tunnels [5]. Some of these requirements for fire detection have been adopted in China's standards [15,24].
Existing Fire Detection Technologies in China Road Tunnels
Before 2010, the fire detection technologies mainly used in China's road tunnels were linear optic fiber heat detection systems, linear fiber grating heat detection systems and two-spectrum infrared (IR) flame detectors.
Both linear optic fiber and fiber grating heat detection systems respond to a fire accident based on the rate of temperature rise or a pre-set alarm temperature [25][26][27][28]. Linear heat detection systems are a popular option in tunnels. Each sensing cable of the system is required to cover two traffic lanes in the tunnel. Many studies have been conducted on their capabilities and limitations in tunnel environments [29][30][31][32]. Test and operation results show that linear heat detection systems:
• Respond to a fire at its flaming stage as the temperature in the tunnel rises, which may miss early warnings of the fire, depending on the fire type. Their detection times are determined by fire type, size and location in the tunnel and can be delayed when the fire is shielded or under longitudinal airflow. A 1 m² gasoline pool fire is usually used to evaluate the detection capability of the linear heat detection systems installed in China's tunnels.
• Can identify the fire location along the cable. However, the identified fire position can differ greatly from the real one, as the hot spot near the ceiling of the tunnel is shifted under strong wind conditions.
• Have relatively low nuisance alarms. Defective manufacturing quality in the fiber grating heat detection systems can lead to nuisance alarms.
Dual IR flame detectors respond to a fire accident by detecting the flame radiation produced by the fire [25,33,34]. Their maximum detection distance ranges from 40 to 50 m in tunnels. Their performance in tunnel environments has also been substantially investigated [31,32,35]. Test and operation results show that flame detectors:
• Respond to a fire at its flaming stage, which may miss some early warnings of the fire, depending on the fire type. Some are mainly designed for gasoline and diesel fires, not for other fire types, such as heptane fires. They do not respond to fully shielded fires, as the radiation produced is blocked. Their detection times are determined by fire type, size and location in the tunnel and are delayed when the fire is shielded or under longitudinal airflow.
• Can identify the region where the fire is located, but not the fire position within the region. Unclear or wrong information on fire location can be provided when a large fire is detected at the same time by several flame detectors located near the fire source.
• Have relatively low nuisance alarms, but the lenses of the detectors can be easily contaminated as they face on-coming vehicles and work in dirty, humid and smoke-filled environments.
In addition, both linear heat and flame detection technologies can only provide limited fire information on the fire scale, growth and spread in the tunnel. As a result, these detection systems are usually required to work together with the Closed Circuit Television (CCTV) system in the tunnels.
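The alarm logic of the linear heat detection systems described above (a pre-set fixed temperature combined with a rate-of-rise criterion) can be sketched in a few lines. The threshold values below are illustrative only, not those of any particular product or standard.

```python
# Minimal sketch of fixed-temperature plus rate-of-rise alarm logic for one
# segment of a linear heat detection cable. Thresholds are illustrative.
FIXED_ALARM_C    = 68.0   # pre-set alarm temperature (illustrative)
RATE_OF_RISE_CPM = 10.0   # alarm if temperature rises faster than this (deg C per minute)

def heat_alarm(prev_temp_c: float, temp_c: float, dt_s: float) -> bool:
    """Return True if either the fixed-temperature or rate-of-rise criterion is met."""
    rate_c_per_min = (temp_c - prev_temp_c) / dt_s * 60.0
    return temp_c >= FIXED_ALARM_C or rate_c_per_min >= RATE_OF_RISE_CPM

# Example: a segment rising from 30 C to 38 C within 30 s (16 C/min) trips the
# rate-of-rise criterion even though the fixed threshold has not been reached.
print(heat_alarm(prev_temp_c=30.0, temp_c=38.0, dt_s=30.0))  # True
```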
The authority in Shanghai in 2013 investigated the functioning of the fire protection systems in 13 urban road tunnels built from 2003 to 2010, which were equipped with either linear heat detection systems or IR flame detectors [36]. It was found that some of the fire detection systems did not work efficiently, some had high malfunction and nuisance alarm rates, while others could not even correctly respond to a fire incident. Successful cases of timely and correct response to vehicle fire incidents in urban road tunnels in Shanghai were limited. As a result, the fire suppression systems in the tunnels were not activated automatically by the detection systems, but manually.
Research and Development of Fire Video Image Detection Technology
In order to compensate for the deficiencies of the linear heat detection systems and optical flame detectors, fire video image detection (VID) technology has gradually been introduced in China's road tunnels over the past ten years. The video image system itself is an essential system used for traffic management and security protection in tunnels. Cameras and the corresponding facilities required for the video monitoring system are already standard features of road tunnels. The original video image technology for fire detection was intended to transfer or record video images and then present them to a human for fire identification. However, with the increase in tunnel length involving many cameras, it is hard to manage and process numerous video images in time relying only on human judgment. The automatic fire VID system is a combination of video cameras, computing, and video image analysis software [8,25]. A typical process of an automatic VID system in fire detection includes digital image inputs from cameras, data filtering, background learning and modeling, physical characteristics analysis, data fusion, alarm probability calculation and output. One or multiple frames are flagged for suspicious flame and/or smoke signs during detection. Pattern recognition and image processing logic are used to analyze the images on the fly. Fire alarms are issued once the characteristics of flame and smoke are identified. There are three types of fire VID systems that are commercially available: the flame, smoke and flame/smoke VID systems [37]. Flame-based VID systems detect a fire based on the flame characteristics produced by the fire, such as flame colour, shape, frequency, chromatic aberration and intensity. Smoke-based VID systems detect a fire mainly based on the smoke characteristics, such as smoke shape, movement, colour, and blurring. Flame/smoke-based VID systems detect all fire types, according to either the smoke or flame characteristics of the fire. Over the last two decades, efforts have been made to study the characteristics of flame and smoke as well as nuisance sources encountered in tunnel environments, develop various flame and smoke detection algorithms, and evaluate the performance of fire VID systems in laboratory and operational tunnels. With advances in technology, fire VID has evolved from an automatic fire detection system to a distributed fire detector and an integrated fire/traffic/security detection system in road tunnels.
Fire Detection Algorithms
Color information of the fire was used for fire detection in early studies of VID systems. Chen et al. [38] used raw R, G and B color information and developed a set of rules to classify fire pixels.
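A rule-based RGB classifier of the kind just attributed to Chen et al. can be sketched as follows; fire pixels tend to be bright and red-dominant with R ≥ G > B. The threshold below is illustrative only, not the published value, and a real detector would combine such a mask with the temporal and shape analysis described later in this section.

```python
import numpy as np

R_MIN = 150  # minimum red intensity for a candidate fire pixel (illustrative)

def fire_pixel_mask(img_rgb: np.ndarray) -> np.ndarray:
    """img_rgb: HxWx3 uint8 image; returns a boolean mask of candidate fire pixels."""
    r = img_rgb[..., 0].astype(np.int16)
    g = img_rgb[..., 1].astype(np.int16)
    b = img_rgb[..., 2].astype(np.int16)
    # Bright, red-dominant pixels with the ordering R >= G > B.
    return (r > R_MIN) & (r >= g) & (g > b)

# Example usage on a dummy frame:
frame = np.zeros((4, 4, 3), dtype=np.uint8)
frame[1, 1] = (220, 160, 60)            # a flame-like pixel
print(fire_pixel_mask(frame)[1, 1])     # True
```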
Noda et al. [39] analyzed the relationship between temperature and RGB pixel channels and used gray level histogram features to recognize fires. Phillips et al. [40] used the Gaussian-smoothed color histogram to generate a color lookup table of flame pixels and to identify fire pixels based on temporal variation of pixel values. Celik et al. [41] developed a statistical color model in video sequences. However, fire pixels cannot be segmented well from other objects that have a similar color distribution to fire. This could lead to a high rate of false alarms. In order to enhance the reliability of fire detection, various flame detection algorithms based on the features of not only the flame color, but also the flame patterns, motions, flicker and edge blurring were proposed. For example, the flame detection algorithm proposed by Rong et al. [42] included a generic color model, a geometrical independent component analysis model, a cumulative geometrical independent component analysis model and a BP neural network based on multiple features of the flame patterns. Celik's detection algorithm consisted of flame color modeling and motion detection [43]. The clues used in Toreyin et al.'s fire detection algorithm [44] included ordinary flame motion and color, flame and fire flicker, quasi-periodic behavior in flame boundaries, color variations in flame regions, and irregularity of the boundary of fire-colored regions. Xuan et al. [45] divided their flame detection algorithm into four stages: (1) an adaptive Gaussian mixture model to detect flame moving regions; (2) a fuzzy c-means (FCM) algorithm to segment the candidate fire regions from these moving regions based on fire color; (3) special parameters extracted based on the flame temporal and spatial characteristics; and (4) a support vector machine to distinguish fire from non-fire. The studies conducted by Wong and Fong indicated that the Otsu multi-threshold algorithm integrated with Rayleigh distribution analysis (a modified segmentation algorithm) can be used to produce clear flame-only images, while the Nearest Neighbour algorithm can be used to detect flame and non-flame images [46]. The visual characteristics of smoke are less distinct and more complicated in comparison to flame. Yuan developed an accumulated model with block motion orientation for smoke detection [47,48]. Ko et al. [49] used both motion detection and color information of the smoke for fire detection. In order to distinguish smoke from mist, Wei et al. [50] used a multi-spectral image system to obtain image sequences in specific spectral ranges of smoke and mist. Millan-Garcia et al. [51] proposed a smoke detection algorithm in which the motion and color of the smoke are analyzed, isolated blocks are eliminated through morphological operations and non-smoke regions are discarded based on the expansion of the smoke with time. Muhammad et al. proposed an early fire detection method for both indoor and outdoor fires by using convolutional neural networks that had five convolutional layers, three pooling layers and three fully connected layers [52].
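Several of the smoke algorithms just reviewed share a common front end: find moving regions, keep the greyish low-saturation ones, and clean up isolated blocks. The sketch below illustrates that front end only; the thresholds are illustrative and the temporal, shape and expansion analysis described in the text would still be needed on top of it.

```python
import cv2
import numpy as np

def smoke_candidates(prev_bgr: np.ndarray, curr_bgr: np.ndarray) -> np.ndarray:
    """Return a binary mask of candidate smoke regions in the current frame."""
    prev_grey = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    curr_grey = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2GRAY)
    # Moving pixels: absolute frame difference above a small threshold.
    moving = cv2.absdiff(curr_grey, prev_grey) > 15
    # Greyish, low-saturation, reasonably bright pixels are typical smoke candidates.
    hsv = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2HSV)
    greyish = (hsv[..., 1] < 60) & (hsv[..., 2] > 80)
    mask = (moving & greyish).astype(np.uint8) * 255
    # Remove isolated blocks with a morphological opening.
    kernel = np.ones((5, 5), np.uint8)
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
```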
For the application of fire VID technology in road tunnels, Wieser and Brupbacher [9] proposed a smoke detection algorithm. Their approach was based on a loss of contrast in the image caused by the presence of smoke. For simplicity, they only considered the luminance contrast in their algorithm. They conducted fire tests at the Hagerbach test tunnel with heat release rates up to 5 MW and wind speeds between 0.5 and 5 m/s, calibrated the smoke in a smoke box, and conducted alarm tests with large-scale tunnel fires in the road tunnel "Schonberg", as well as environmental tests in the Gubrist tunnel. They reported that the algorithm was quick to detect smoke, and at the same time immune to false alarms during a short period of operation in tunnels. Jamee et al. investigated the use of video image processing for early fire detection in tunnels in their European UPTUN program [53]. They conducted a literature review on the algorithms used for fire detection in tunnels. The performance of two commercial and one academic VID systems was evaluated. It was found that tens of false alarms from these systems were produced in less than a week due to traffic queues and reflections of the sun at the entrance of the tunnel. They studied the characteristics of flame and tunnel light sources. Fire detection in static images with segmentation techniques, and shape and contour analysis techniques, was analyzed. Their research results suggested that these image processing techniques are not sufficient for fire VID in harsh environments. They further considered the time-dependent behavior of fire objects, and developed tracking and track-based algorithms. No smoke detection algorithm was studied in their program. Neural networks are also used to determine the presence of flame and smoke in fire VID algorithms. Ono et al. proposed a method using a neural network to analyze video images for flame detection in road tunnels [54]. The flame images were taken from the dynamic image, and the estimated flame zone was extracted by the labelling method. After standardizing the flame zone by expansion and reduction, quantiles of its histogram and its area were calculated as feature parameters of the flame. The fire was finally judged by a hierarchical neural network (one input layer, one middle layer and one output layer) with the feature parameters (color and area information) as input elements. The flow chart of their image processing is shown in Figure 2. Their test results showed that vehicle fires in tunnels were detectable by application of the neural network. Yu et al. proposed a method using a back-propagation neural network for smoke detection [55]. The color and motion features of the smoke are used as input elements for the neural network in their algorithm. The neural network was trained to determine the presence of smoke. Experimental results showed that the proposed approach was able to distinguish smoke from non-smoke videos, and its accuracy depended on the statistical values selected for training the neural network.
Figure 2. The flow chart of the image processing proposed by Ono et al. [54]
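The general shape of the feature-based neural networks just described (one input layer, one middle layer, one output layer, fed with flame-zone feature parameters) can be sketched as below. The feature choice, layer width and weights are placeholders for illustration, not the networks of Ono et al. or Yu et al.

```python
import numpy as np

# Illustrative feed-forward network: flame-zone feature parameters -> fire probability.
rng = np.random.default_rng(0)
n_features, n_hidden = 6, 8                       # e.g. histogram quantiles, area, colour stats
W1, b1 = rng.normal(size=(n_features, n_hidden)), np.zeros(n_hidden)
W2, b2 = rng.normal(size=(n_hidden, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fire_probability(features: np.ndarray) -> float:
    """features: length-6 vector of flame-zone descriptors (placeholder choice)."""
    hidden = np.tanh(features @ W1 + b1)           # middle (hidden) layer
    return float(sigmoid(hidden @ W2 + b2)[0])     # output layer: fire probability

print(fire_probability(np.array([0.4, 0.7, 0.9, 0.2, 0.1, 0.3])))
```

In practice the weights would of course be obtained by training on labelled flame and non-flame frames, as the reviewed papers describe.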
In order to reduce the false alarm rate and make the detection more reliable in road tunnels, Han and Lee developed a flame and smoke detection method [56]. Their detection algorithm consisted of two internal algorithms, a flame detection algorithm and a smoke detection algorithm. Various tunnel and vehicle lights and non-smoke regions were eliminated first, and then the identified flame and smoke regions were extracted, as shown in Figure 3. The performance of the proposed fire detection method was evaluated in the laboratory, and the flame and smoke regions were exactly detected, but no test results in operational tunnels were provided.
Figure 3. VID algorithm proposed by Han and Lee [56]
Both smoke and flame are generated during most fire incidents. Yu, Mei and Zhang proposed a real-time detection algorithm to improve detection reliability by detecting both flame and smoke at the same time [57]. In addition to using color features, the dynamic features of the fire, such as turbulent movements, changeable shapes, growth rate and oscillation, were considered in their algorithm. The fire detection was processed in four major phases: 1) moving pixels and regions were extracted from the image using a frame differential method; 2) two color models were used to find flame and smoke candidate regions; 3) foreground accumulation images were built for both flame and smoke; 4) motion features of flame and smoke were calculated based on block image processing and the optical flow technique. Their detection method was tested under various conditions, including in the tunnel environment, demonstrating good reliability as the features of both flame and smoke were processed and detected.
Evolution of Fire VID Technology
The fire VID system designed for flame and smoke detection is the first system that was introduced for use in tunnel environments. It consists of several video cameras, a computer unit with video analysis software and display monitors. Up to eight cameras are connected together to the computer unit, in which the video images from the cameras are processed and analyzed using the alarm algorithm (Figure 4). The fire VID systems are mainly used in short tunnels involving a limited number of cameras. They are not suited for long tunnels equipped with many cameras due to the difficulty in processing and management. It is difficult for the system to provide information on the fire position within a monitoring region. In addition, there are also concerns about the reliability of the system. Since the cameras are connected and processed in one computer unit, any malfunction of the computer unit would lead to failure of the system. With advances in digital camera and computer technologies, the distributed fire VID detector has been developed and widely used over the last ten years [14,58,59]. It is an independent fire detector in which both the video processing and the alarm algorithm execution are performed at the detector. The detectors are directly connected to display monitors or to control panels for providing fire protection. They are easily manageable, more reliable and more flexible to use in comparison with the fire VID system.
High-quality cameras used in the VID detectors provide high-definition images. Some distributed VID detectors are equipped with two cameras, one regular and one IR camera, as well as an IR light source, as shown in Figure 5 [14]. With both colour/black-and-white and IR images, more information on the fire source is provided, and the characteristics of the flame and smoke are also clearer, sharper, and more distinguishable from the background during image processing. They can reduce nuisance alarms caused by tunnel lighting, sunlight and moving vehicle lights, and enhance the capability to detect fires inside and underneath vehicles. The detector with the IR light source is able to work in the dark to detect smoke.
Figure 5. Photo of a distributed fire VID detector [14]
The VID detectors equipped with two cameras are also able to accurately provide information on the fire position within monitoring zones. It is determined according to the location of the flame, not the spread of smoke in the tunnel. As shown in Figure 6, the distance between the flame source and the detector is calculated based on the principle of binocular stereo vision [60], where P is the fire source, Ol and Or are the left and right cameras of the detector, T is the distance between the cameras, and Z is the distance from the fire source to the detector. This is an important feature for the fire detection system to correctly activate local ventilation and fire suppression systems for fire protection and to provide guidance for evacuation from the tunnel during a fire incident. Currently, the fire detection system, traffic management system, and security protection system are usually three independent systems in tunnels. There is limited communication among these systems. However, many fire incidents in tunnels are initiated by security or traffic incidents due to vehicle fault or crash, bad driving behavior, or inadequate rules on vehicles. Limited communication among these systems could lead to delays in providing early fire alarms and protection, resulting in failure to prevent the loss of life and property from fires in tunnels. Efforts are being made to combine the three independent fire detection, traffic management and security protection systems into one system [61]. As shown in Figure 7, one kind of such integrated system used in China's tunnels uses the distributed VID detectors to monitor traffic, security and fire incidents at the same time. The VID detector still directly functions as fire detection and is connected to the fire protection system, while images associated with traffic and security incidents that are provided by the VID detector are processed at a traffic/security analysis and management unit. The information on traffic, security and fire incidents is shared, communicated and managed together in the system. Once an incident occurs, the integrated VID system can efficiently respond to it and prompt early fire warning through monitoring and tracing traffic and security incidents. The integrated system can also largely reduce the costs of tunnel facilities and their maintenance. With further advances in technology, information on traffic and security as well as fire detection can be analysed and processed at the VID detector. The longitudinal wind speed in the tests ranged from 0 to 3 m/s. It is also required that fire detection systems be installed and operated in an operational road tunnel for one year.
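To make the binocular stereo-ranging principle of Figure 6 concrete, the standard relation gives the range from the baseline, the focal length (in pixels) and the disparity of the flame between the left and right images. The numbers below are purely illustrative, not parameters of any real detector.

```python
# Illustrative sketch of binocular stereo ranging: Z = T * f / disparity.
def fire_distance(baseline_T_m: float, focal_px: float,
                  x_left_px: float, x_right_px: float) -> float:
    """Distance Z (m) to the flame from its pixel positions in the left/right images."""
    disparity = x_left_px - x_right_px
    if disparity <= 0:
        raise ValueError("the flame should appear further left in the left-camera image")
    return baseline_T_m * focal_px / disparity

# Example: 0.2 m baseline, 1200 px focal length, 3 px disparity -> 80 m range.
print(f"{fire_distance(0.2, 1200.0, 640.0, 637.0):.0f} m")
```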
The capability and limitations of two fire VID systems, together with other detection technologies, were evaluated in the NRC research program [31,32,[62][63][64][65]. Full-scale fire tests were conducted in a laboratory tunnel in Ottawa and in the Carré-Viger tunnel in Montreal, Canada (Figure 8). The reliability of these detection systems, including nuisance alarms and maintenance requirements in smoky, dirty and humid tunnel environments, was investigated in the Lincoln Tunnel in New York City for one year (Figure 9). The extensive research showed that, unlike other types of fire detection technologies, the fire VID systems could respond to a fire at its smoldering stage for early fire alarm and detect all fire types according to either the flame or the smoke characteristics produced, but they had no response to fires moving at a speed of 20 km/h in the tests. The impact of obstacles and winds on the performance of the fire VID systems was limited. The fire tests also showed that the effect of smoke on the visibility of the cameras was determined by the ventilation conditions, camera location and geometry of the tunnels. The fire VID system could provide valuable, real-time information on fire location, growth and spread. Environmental tests in the tunnel showed that, during operation, the fire VID systems had substantial nuisance alarms caused by some traffic lights, such as flashing lights on service/utility vehicles, by weather conditions causing fouling of the camera window, or by the reflection of sunlight into the tunnel entrance [65].

The performance of fire VID detectors and integrated fire/traffic/security VID systems in fires and tunnel environments was extensively investigated in a number of Chinese road tunnels, including a mock-up of the Shanghai Yangzi River Road Tunnel [59], a laboratory tunnel in Zhangzhou, Fujian [66], the operating Xianyue Mountain road tunnel in Xiamen [58] and the Zhongnan Mountain road tunnel in Shanxi [68]. The fire scenarios and test protocols used in these tests were similar to those developed by the National Research Council Canada (NRC) in the NFPA tunnel research project [7], but the detection distances from fire VID detectors to fire sources ranged from 45 to 210 m, much longer than those in the NRC tests. The longitudinal wind speed in the tests was up to 5.5 m/s. In addition, more fire tests simulating smoldering fires occurring inside vehicles were conducted (Figure 10). The effect of harsh tunnel environments and nuisance sources on the capability of the VID detectors and integrated VID systems was investigated, including contaminated cameras, flashing lights on service/utility vehicles, headlights, brake lights and the reflection of traffic lights on the tunnel wall. The tests for traffic and security management included inverse driving, careless driving behavior, congestion management, fallen or left objects from vehicles, abnormal pedestrian behavior, etc. The results of the more than 100 fire tests conducted in laboratory and operating tunnels in China demonstrated that the performance of the fire VID detectors in detecting all fire types and providing early fire warnings was similar to that in the NRC research project. With the advance in technologies, however, newly developed fire VID detectors had a much longer detection distance than previous ones. They detected a 0.4 m² gasoline pool fire under a 5.5 m/s wind speed within 31 s from a distance of 125 m [67]. They were able
to accurately identify fire locations in its detection region without position shift.This is an important feature of fire detection for correctly activating the fire suppression system, as the detection region of a fire VID detector covers a number of fire suppression zones.The detection times of fire VID detectors generally increase with an increase in detection distance and wind speed. Environmental tests in the tunnels showed that the effect of contaminated camera windows on the response of the fire VID detectors to the flaming fires was limited, but they did make the detectors being more difficult to respond to smoke.With an adjustment of detection view and location of the detectors as well as update of alarm algorithm on the traffic lights, the number of nuisance alarms were reduced. Test results on traffic management also showed that the integrated fire/traffic/security VID systems were able to calculate the number of moving vehicles and monitor vehicle congestion.It could recognize inverse driving and stopped vehicles and identify suspicious fallen or left objects from vehicles (Figure 11).It was also able to provide information on drivers and pedestrians, automatically track their movement, and prevent them from entering into prohibited areas (Figure 12).The system could immediately notify monitoring personnel for traffic accidents, any abnormal traffic and human behavior, and trespassing and/intrusion [66,67].Information obtained from the extensive fire and environmental tests has assisted at optimizing technical specifications, performance criteria, guidelines and installation requirements of fire VID technologies for tunnel applications.The studies have also been used to update NFPA 502, Standard for Road Tunnels, Bridges, and Other Limited Access Highways [68] and a number of China's Standards, including China's National Standard "Design Specifications of Automatic Fire Alarm System" GB50116-2013, China Industry Standard "Highway Tunnel Design Code" JTGD70-2014 as well as China's National Standard "Technical Specifications for Fire Protection in Road Tunnels" that will be implemented in 2018 [15,24,70]. With understanding of the fire VID technology as well as the recognitions from the end users and fire safety authorities, fire VID detectors and their combinations with other types of detection system have been gradually introduced to provide fire detection in the urban road tunnels, two-way long highway tunnels and underwater tunnels in China.More than 30 Chinese road tunnels are equipped with fire VID detectors and integrated fire/traffic/security VID systems over the last few years and more tunnels are being planned to install. 
One recent application of the distributed fire VID detectors is in the Honggu Tunnel in Nanchang, Jiangxi.It is the largest immersed tunnel among country's underwater tunnels.The length of the tunnel is 2650 m for Northern line and 2665 m for the southern line.Its cross section is 30 m in width and 8.3 m in height.Its fire detection system is consisted of the fire VID detectors and a linear fiber grating heat detection system.The fire VID detectors are installed at 4.5 to 5.2 m high from the ground and the distance between detectors is 40 to 100 m far, depending on the geometry of the tunnel.A total of 88 detectors are used in the tunnel.The layout of each fire VID detector is coordinated with corresponding fire suppression zone.During a fire incident, the fire location and activation of the fire suppression system are determined by the VID detectors according to the flame detected, not by smoke detected, avoiding any fault discharge.The detectors provide information on not only the spread range and scale of the fire and smoke, but also the distance between the fire location and tunnel outlets for evacuation, rescue and firefighting.The detectors are regularly cleaned and maintained.Figure 13 is the displays of fire VID detectors in the Honggu Tunnel.One example for application of the integrated fire/traffic/security VID detection systems in urban tunnels is Xianyue Mountain Tunnel in Xiamen that was operated in 2012 year.It is a two-way tunnel with the length at 1071.78 m for the east tunnel, and at 1095.89 m for the west tunnel.The cross section of each tunnel is 9.25 m wide and 6.7m high.The integrated VID detection system is consisted of eighteen distributed fire VID detectors, one traffic management and accident identification unit, and one central monitor and display system.The distributed VID detectors are used to monitor traffic, security and fire incidents.They are installed at 4.7 m high from the ground and the distance between detectors is 100 to 125 m far, depending on the geometry of the tunnel.Figure 14 is the schematic of the integrated VID system used in the Xianyue Mountain Tunnel.The integrated system has been running smoothly since 2012.As a result, four other urban tunnels in Xiamen are also equipped with the integrated VID detection systems and more installations are being planned. Conclusion China is facing significant challenges of fire safety in road tunnels due to large increase in traffic and the number of urban, underwater and highway mountain tunnels built on its roads.Fire detection is a key element for fire protection in road tunnels, which can make a significant difference between a manageable fire and one that gets out-of-control. Fire VID technology is regarded as an emerging fire detection technology and has demonstrated unique capabilities in detecting all fire types based on either flame or/and smoke produced at their early stage.They can detect a fire at a long distance, and provide real-time images and information on fire growth, spread and scale to provide guidance for evacuation, rescue and firefighting. 
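The installation figures quoted for the Honggu Tunnel can be sanity-checked with a short calculation. The sketch below assumes a single line of evenly spaced detectors per tube, which is a simplification; it only shows that the reported total of 88 detectors is consistent with the 40 to 100 m spacing range across the two tubes.

```python
import math

def detectors_per_tube(length_m: float, spacing_m: float) -> int:
    # Single line of evenly spaced detectors; +1 closes the last segment.
    return math.ceil(length_m / spacing_m) + 1

north_line, south_line = 2650.0, 2665.0   # tunnel lengths quoted in the text
for spacing in (40.0, 60.0, 100.0):
    total = detectors_per_tube(north_line, spacing) + detectors_per_tube(south_line, spacing)
    print(f"spacing {spacing:5.0f} m -> about {total} detectors in total")

# The reported 88 detectors falls between the totals implied by 100 m spacing
# (about 56) and 40 m spacing (about 136), i.e., an average spacing near 60 m.
```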
The VID detectors that are equipped with regular and IR cameras are able to enhance their detection capability and reduce nuisance alarms caused by lighting, sunlight and moving vehicle lights in tunnels, as the characteristics of the captured flame and smoke are clearer, sharper, and more distinguishable from the background during image processing. The VID detectors with two cameras can also locate a fire position within the monitoring zones for activating local ventilation and fire suppression systems.

One major concern regarding the use of fire VID technology is its reliability and the number of nuisance alarms produced in harsh road tunnel environments. Various fire detection algorithms considering the characteristics of smoke and flame as well as nuisance sources have been proposed. Neural networks are also used in the detection algorithms to determine the presence of flame and smoke. The reliability of VID systems in tunnel environments has improved over the years and will be further enhanced with a better understanding of the characteristics of fires and nuisance sources, and with advances in artificial intelligence.

The fire VID technology has evolved from manual to automatic VID systems, distributed VID detectors, and integrated fire/traffic/security VID systems over the last decades. The evolution in VID technology has enhanced its detection capability and reliability. It also reduces the costs of tunnel facilities and their maintenance. The integration of fire detection with traffic management and security in road tunnels is still developing. With further advances in electronic technologies, information on traffic and security as well as fire detection will be able to be analyzed and processed at a distributed VID detector.

A test protocol for evaluating the performance of fire detection systems in road tunnels was developed. Extensive fire and environmental tests on various fire detection systems, including VID systems, have been conducted. These tests are very important for understanding the performance of VID systems and for optimizing their technical specifications, performance criteria, guidelines and installation requirements in tunnel environments. With recognition from end users and fire safety authorities, the application of fire VID systems in road tunnels in China has substantially increased over the last decade. They are now playing important roles in providing fire safety for road tunnels.

Figure 1. Analysis of fire accidents in China road tunnels.

Evaluation criteria for tunnel fire detection systems include:
- Detection accuracy (1-2°C for absolute temperature measurement, 0.5°C for rate-of-rise temperature measurement and ±5 m out of a 50 m tunnel segment);
- Detection time (30-60 s);
- System approval;
- System interface to other systems;
- Fault tolerance;
- Fail- and false-alarm-safe operation (no false alarm);
- Repair time;
- Tunnel washing machine;
- System life time;
- Operation cost; and
- Maintenance cost.

Figure 4. Schematic of a fire VID system.
Figure 6. Schematic for identification of a fire position in the tunnel in a VID detector with two cameras.
Figure 7. Schematic of an integrated fire/traffic/security VID system.

4.3. Evaluation of Performances of VID Systems in Tunnels

The International Road Tunnel Fire Detection Research Project developed a test protocol to evaluate the performance of fire detection systems for use in road tunnels in a two-year international research project [7]. The fire scenarios selected are common and challenging ones encountered in tunnel fire incidents, including:
- Pool fires located inside, underneath and behind vehicles with HRR up to 3.5 MW;
- Stationary passenger vehicle fires with HRR up to 2 MW; and
- Moving vehicle fires with HRR up to 150 kW.

Figure 11. Test of integrated VID system on incident identification and left object from a vehicle in Xianyue Mountain road tunnel in Xiamen.
Figure 12.
Figure 13. Display of fire VID detectors at Honggu Tunnel in Nanchang, Jiangxi.
Figure 14. Schematic of integrated fire/traffic/security VID system at Xianyue Mountain tunnel in Xiamen, Fujian.
8,944.6
2019-01-27T00:00:00.000
[ "Computer Science" ]
Type , course and outcome of community acquired infections in hospitalized diabetics Diabetes mellitus has been associated with increased frequency of serious infections which are attributed to immune deficiencies. The aims of this study were to investigate the type, course and outcomes of community acquired infections, and especially bacteremia in diabetics hospitalized for infection. One hundred and thirty-four consecutive patients (67 diabetics and 67 non-diabetics) matched for age, who were admitted to a general District Hospital in Greece due to infection, were included in this case control study. Diabetics presented urinary infections (46.3% vs. 26.8%, P=0.006), skin infections (9% vs. 0%, P=0.007) and bacteremia (11.1% vs. 1.5%, P=0.023) more often than controls. The most common microorganisms in diabetics were Escherichia coli, Klebsiella pneumoniae, Streptococcus species and fungi. Diabetics had a significantly prolonged hospital stay (6.7±5.4 vs. 4.5±2.4, P=0.003) compared to controls. Inhospital mortality was similar in both groups (10.4% vs. 3%, P=0.082) but diabetics had an increased risk from death due to bacteremia (Log-odds 4.2, SE=1.1, P<0.0001). Although the analyzed cohorts are small, we found that patients with diabetes mellitus have longer hospitalization related to infections and are at increased risk of bacteremia which may result in adverse outcome. Introduction Diabetes mellitus (DM) has been associated with an increased frequency of infections. 1,24][5] However, despite advances in therapeutic strategies, including the introduction of novel pharmaceutic agents and therapeutic measures, morbidity indices in diabetics are still considerable. 1,20][11] In Mediterranean countries, there is an assumption that the management of infections is challenging because of the excessive use of antibiotics and of the prevalence of multidrug resistant bacteria (MDR) [12][13][14] which often requires the use of pharmaceutic agents which are potentially toxic. 15,16In other countries, community acquired bacteria resistant to antibiotics are already considered a major public concern. 17However, published data regarding the type of infections due to MDR, especially of a serious nature such as bacteremia, are scarce. In the present study, we aimed to investigate the type, course and outcomes of community acquired infections, especially bacteremia in diabetic medical patients. Materials and Methods This was an observational case-control study.Diabetic patients admitted to the General district hospital of Serres in Northern Greece between 2000 and 2006 were included in the study if they fulfilled the following criteria: a) over 18 years of age; b) DM type I or type II; and c) admission due to infection.Exclusion criteria were: d) immunosuppression due to cancer or to immunosuppressive agents; e) patients with congenital abnormalities; f) recent admission requiring surgery; g) surgery involving respiratory and urinary system; h) coexistence of unstable disease (other than DM or infection) which required admission.An equal number of age-matched nondiabetic patients admitted due to infection who fulfilled criteria a), c), d), e), f), g), h) were also included in the study as controls. 
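The eligibility and matching rules listed above lend themselves to a simple screening routine. The sketch below is purely illustrative: the record field names and the age-matching tolerance are assumptions, since the paper states only that controls were age-matched non-diabetic admissions fulfilling the same criteria.

```python
# Illustrative sketch of the eligibility screen and age-matched control selection
# described above. Field names and the 5-year matching window are assumptions.

def eligible(pt: dict, require_diabetes: bool) -> bool:
    if pt["age"] < 18 or not pt["admitted_for_infection"]:
        return False
    if require_diabetes and not pt["diabetes"]:
        return False
    exclusions = ("immunosuppression", "congenital_abnormality",
                  "recent_surgical_admission", "respiratory_or_urinary_surgery",
                  "other_unstable_disease")
    return not any(pt[flag] for flag in exclusions)

def match_controls(cases: list, pool: list, window_years: int = 5) -> list:
    controls, used = [], set()
    for case in cases:
        for i, cand in enumerate(pool):
            if i not in used and eligible(cand, require_diabetes=False) \
                    and abs(cand["age"] - case["age"]) <= window_years:
                controls.append(cand)
                used.add(i)
                break
    return controls
```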
Baseline measurements At baseline, participants underwent clinical examination and basic radiological and laboratory assessment including full blood count, erythrocyte sedimentation rate (ESR), serum biochemical analysis including glucose levels and C-reactive protein (CRP), and microbiology studies including blood, urine or other type of cultures decided by the physician responsible for treatment.For each patient, a Sepsis-related Organ Failure Assessment (SOFA) score was calculated and type of infection, history including duration of diabetes, co-morbidities and number of previous hospitalizations for infections were recorded.Duration of hospitalization and and in-hospital morbidity were estimated at the end of hospitalization/ discharge. Definitions Infection and bacteremia were defined according to published guidelines. 18DM was defined according to American Diabetes Association (ADA) criteria. 19Bacteremia was confirmed by positive blood cultures.Blood and urine cultures were considered as positive according to the local protocol (>10 5 cfu/mL).SOFA score was used for clinical outcome assessment, as it is commonly used for the prognosis of mortality during the first seven days of hospitalization in critical care patients. 20,21For the course of infection, daily mean of WBC and temperature available for each patient were recorded.Febrile was defined by an armpit temperature of over 38°C (fever) and patients were considered to be afebrile when armpit temperature remained below 37°C for longer than 24 h.Active infection was considered as WBC levels greater than 10,000/mm 3 .In-hospital morbidity and length of hospital stay were recorded for all patients. Statistical analysis Data are presented as frequency (%) for qualitative parameters or mean±SD for quantitative variables.Comparisons between cases and controls were performed by using the t-test or the c 2 test as appropriate; a P value below 0.05 was considered statistically significant.Univariate and multivariate logistic regression analysis were performed to determine variables associated with bacteremia, in-hospital stay and adverse outcome.The following variables were included in the univariate analysis: sex, age, cardiovascular complications, hypertension, hyperlipidemia, COPD, hypothyroidism and SOFA score on admission.Statistical software SPSS version 17 was used for data analysis. Results A total of 134 subjects were included in the study made up of 67 patients and 67 controls.Table 1 summarizes baseline characteristics of the study population.Glucose levels in DM patients at admission were 300.12±153.41mg/dL.Diabetic patients had higher prevalence of cardiovascular disease, hypertension, history of stroke and ESR compared to non-diabetics, and significantly higher frequency of previous hospitalizations due to infection (P<0.05). Skin infections were caused by Streptococcus pyogenes and Staphylococcus aureus and they included cases of erysipelas (n=1), external otitis (n=2) and abscess of lower extremities (n=1). Table 3 represents 38 microorganisms isolated in biological samples in diabetics and controls.No MDR bacteria were identified. Overall, bacteremia was found in 8 patients.Bacteremia was significantly more frequent in diabetics compared to controls (7 diabetics vs. one non-diabetic; P=0.023).Bacteremia was caused by E. coli (n=1), K. pneumoniae (n=2) and Streptococcus pneumoniae (n=1) in diabetics and Streptococcus viridans (n=1) in non-diabetics.Univariate analysis did not show any potential risk factor for bacteremia. 
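A minimal sketch of the comparisons and regression described in the statistical-analysis paragraph above is given below. It is illustrative only: the column names are hypothetical, and the original analysis was run in SPSS 17 rather than Python.

```python
import pandas as pd
from scipy import stats
import statsmodels.api as sm

# Hypothetical case-control table: one row per patient (columns assumed, not from the paper).
df = pd.read_csv("infection_cohort.csv")
cases, controls = df[df.diabetes == 1], df[df.diabetes == 0]

# Continuous variables: two-sample t-test (e.g., length of hospital stay).
t, p_stay = stats.ttest_ind(cases.length_of_stay, controls.length_of_stay)

# Categorical variables: chi-square test (e.g., bacteremia yes/no).
table = pd.crosstab(df.diabetes, df.bacteremia)
chi2, p_bact, dof, _ = stats.chi2_contingency(table)

# Univariate logistic regression of an outcome (e.g., in-hospital death)
# on one candidate predictor, as in the univariate screen described above.
X = sm.add_constant(df[["bacteremia"]].astype(float))
model = sm.Logit(df.death.astype(float), X).fit(disp=0)
print(model.params, model.bse)   # log-odds coefficient and its standard error
```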
Regarding the course of infection, diabetics had a normalization of WBC (<10,000/mm 3 ) and armpit temperature after the 3 rd and 7 th day of hospital stay, respectively, compared with control subjects for whom these values were obtained after the 3 rd and 6 th day of hospital stay, respectively (Figure 1).However, the differences between total WBC (P=0.425) and temperature (P=0.853) at day 4 were not significant.On the other hand, there was no normalization of blood glucose levels in diabetic patients during their hospital stay (Figure 2).There were 9 deaths: 7 deaths (5.2%) occurred in diabetics and 2 (1.5%) in controls (P=0.084).In diabetics, 5 out of 7 bacteremia cases in our study were fatal due to septic shock (n=2) and acute distress syndrome (n=2) from lower respiratory tract infection, and only in one case due to septic shock after urinary tract infection.The critical condition of these patients, and the unexpected rapid progress of their disease (mean hospital stay one day), probably meant it was not possible to isolate the corresponding pathogens in all of the cases (K.pneumoniae n=1, S. pneumoniae n=1).The mean age of these 5 patients was 63 years: 2 of them had an uneventful medical history and the other 3 had cardiovascular complications.Clinical profiles of the 2 fatal cases of the non-diabetic group were similar.Risk analysis showed that bacteremia was the only independent risk factor for in-hospital fatal outcome (Log-odds 4.2, SE=1.1, P<0.0001).In contrast, no significant relationship was found between the duration of hospital stay and the several clinical and laboratory variables tested, including glucose levels. Discussion The present study showed that diabetic patients admitted due to infection have more frequent hospitalizations than non-diabetics (P=0.017) and may have significantly longer hospital stay due to infection compared to nondiabetics (P=0.003).In addition, bacteremia may be significantly more prevalent in DM patients requiring hospitalization compared to non-diabetics.These findings are in agreement with previous studies which showed that bacteremia was more frequent in diabetics. 4In addition, the present study showed that bacteremia in diabetic patients may be a significant independent risk factor for fatal hospital outcome. In this study, we found there was no difference in in-hospital mortality between diabetic and non-diabetic patients admitted due to infection.Other investigators have come to similar conclusions. 4In contrast, other retrospective cohort studies, found that diabetes is a factor of increased risk of dying from infectious disease. 3,5A plausible explanation for these controversial results may be attributed to differences in study cohorts or to different treatment protocols for glycemic control and infection, which alter not only the outcome but also the duration of hospital stay.In addition, in-hospital mortality in diabetics may be attributed to several factors and therefore the contribution of infection to mortality may be underestimated.Diabetic patients experience other significant problems as well, such as cardiovascular or renal disease, which are significant risk factors for increased mortality regardless of the presence of infections. 
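The reported log-odds for bacteremia can be restated as an odds ratio, which is often easier to interpret. Assuming the value 4.2 with SE 1.1 is a logistic-regression coefficient on the log-odds scale and using a normal approximation for the confidence interval:

\[
\mathrm{OR} = e^{4.2} \approx 67, \qquad 95\%\ \mathrm{CI} \approx e^{\,4.2 \pm 1.96 \times 1.1} \approx (7.7,\ 577).
\]

The interval is very wide, as expected with only eight bacteremia events in the cohort, but it lies well above 1, in line with the reported P<0.0001.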
In the present study, we found an association between DM and prolonged in-hospital stay. Even though we found no significant factor that could independently increase the risk of prolonged hospital stay, we observed a short delay in the normalization of temperature in diabetics that could be attributed to the patients' immune deficiency. Moreover, considering that diabetic patients had an abnormal mean blood glucose level during their hospitalization, we speculate that, at least in part, diabetic patients may have required longer hospitalization due to the need for metabolic control, since acute infection may lead to additional stress-related hyperglycemia. 22 Therefore, this may have additionally contributed to prolonged hospitalization in this study. Similarly, Horcajada and colleagues found that diabetic patients with community acquired urinary tract infection had longer hospitalization compared to nondiabetics. This was attributed to the need to reach adequate metabolic control or to recover from a more severe infection. 23

There was a significant difference in erythrocyte sedimentation rate (ESR) between the two groups of patients with infection. As far as other inflammatory indices are concerned, such as C-reactive protein (CRP) and white cells, which are traditionally used as markers of the severity of infection, there was no significant difference between patients and controls. 24,25 ESR may be a good marker reflecting the inflammatory process that occurs during infection. Notably, another study indicated that ESR in diabetics may be elevated in the absence of overt infection. 26

In our study, bacteremia was significantly more frequent in diabetic patients and was associated with adverse outcome. Bacteremia was attributed to E. coli, K. pneumoniae and S. pneumoniae that were not multiple drug-resistant. In this respect, one might argue that these infections could be controlled by using adequate antibiotics. However, despite adequate treatment, 5 out of 7 bacteremia cases in our study were fatal, indicating the potential impairment of the diabetic patient's immune response. Nevertheless, this was not investigated in our study and this limitation should be noted.

Regarding the microbiological patterns observed in this study, our findings are in agreement with previous studies which suggested that E. coli and K. pneumoniae are the most usual microorganisms for infection in diabetics. 4,27 Additionally, in one of these studies, Stoeckle and colleagues showed that K. pneumoniae was the most frequent microorganism causing bacteremia in a group of diabetic patients. This might be due to the fact that DM patients often present urinary infections, where the most usual microorganisms of this category of infection are E. coli and K. pneumoniae. Geerlings and colleagues found that diabetics were more vulnerable to urinary tract infections and that E. coli adheres better to uroepithelial cells in DM patients. 2,28,29

Moreover, according to our study, fungi and specifically Candida species were found to be another leading cause of infections in diabetics, causing exclusively urinary tract infections. However, other studies have come to contrasting conclusions. Diabetes was found to be a risk factor for fungal urinary tract infections by Krcmery and colleagues, whereas González-Pedraza Avilés and colleagues did not find an association between urinary tract infection from Candida and the presence of diabetes. 30,31

In addition, skin infections in our study were observed only in the group of diabetics, and the corresponding microorganisms involved in these infections were S. aureus and S. pyogenes. S. aureus proved to be the most frequently isolated pathogen in another recent study, causing 63% of soft tissue infections, whereas skin infections were increased in diabetic patients compared with non-diabetics in a Danish general population who participated in the Copenhagen City Heart Study. 32,33

Moreover, apart from one case of skin infection, in our study Streptococcus was found responsible for 2 pneumonia cases among diabetics. Regarding group B streptococcus infections, Skoff and colleagues found that patients with diabetes were more likely to present with skin and/or soft tissue infections and pneumonia, and that diabetes was present in 44.4% of all cases, whereas in another study, Schwartz and colleagues observed a 10.5-fold increase in the risk of group B streptococcus infections in diabetics. 34,35

In conclusion, despite the small cohort analyzed, the present study demonstrated that diabetic patients admitted with infection present bacteremia more often than non-diabetics. In turn, bacteremia, although not caused by MDR bacteria, is a significant factor for increased mortality in diabetic patients. In addition, the present study showed that diabetic patients had a longer hospital stay compared to controls. In the light of these findings, rigorous prevention strategies and therapeutic interventions should be implemented in this group of patients, aimed at optimizing metabolic control and early detection of bacteremia by taking a blood culture on admission of all diabetic patients diagnosed with infection. 36

Table 1. Baseline characteristics of diabetic patients and controls included in the study. Data are presented as n (%) or mean (±SD). Differences between patients and controls were evaluated by t-test or χ² test as appropriate. COPD = chronic obstructive pulmonary disease.

Table 2. Type of infections on admission. Data are presented as n (%). Differences between diabetes mellitus patients and controls were evaluated by χ² test. Gastrointestinal infections included biliary tract infections and gastroenteritis.

Table 3. Microorganisms isolated from diabetic patients and controls. Multimicrobial infections included one case with Pseudomonas aeruginosa, Klebsiella pneumoniae and Proteus mirabilis. Fungi included Candida spp. Differences between DM patients and controls were evaluated by χ² test.

Table 4. Outcomes of the study (67 diabetic patients vs. 67 controls). Data are presented as n (%) or mean (±SD). Differences between DM patients and controls were evaluated by t-test or χ² test as appropriate.
3,255
2010-10-22T00:00:00.000
[ "Medicine", "Biology" ]
Preparation and Characterization of Undecylenoyl Phenylalanine Loaded-Nanostructure Lipid Carriers (NLCs) as a New α-MSH Antagonist and Antityrosinase Agent Purpose: The aim of this study was to characterize the undecylenoyl phenylalanine (Sepiwhite (SEPI))-loaded nanostructured lipid carriers (NLCs) as a new antimelanogenesis compound. Methods: In this study, an optimized SEPI-NLC formulation was prepared and characterized for particle size, zeta potential, stability, and encapsulation efficiency. Then, in vitro drug loading capacity and the release profile of SEPI, and its cytotoxicity were investigated. The ex vivo skin permeation and the anti-tyrosinase activity of SEPI-NLCs were also evaluated. Results: The optimized SEPI-NLC formulation showed the size of 180.1±5.01 nm, a spherical morphology under TEM, entrapment efficiency of 90.81±3.75%, and stability for 9 months at room temperature. The differential scanning calorimetry (DSC) analysis exhibited an amorphous state of SEPI in NLCs. In addition, the release study demonstrated that SEPI-NLCs had a biphasic release outline with an initial burst release compared to SEPI-EMULSION. About 65% of SEPI was released from SEPI-NLC within 72 h, while in SEPI-EMULSION, this value was 23%. The ex vivo permeation profiles revealed that the higher SEPI accumulation in the skin following application of SEPI-NLC (up to 88.8%) compared to SEPI-EMULSION (65%) and SEPI-ETHANOL (74.8%) formulations (P<0.01). An inhibition rate of 72% and 65% was obtained for mushroom and cellular tyrosinase activity of SEPI, respectively. Moreover, results of in vitro cytotoxicity assay confirmed SEPI-NLCs to be non-toxic and safe for topical use. Conclusion: The results of this study demonstrate that NLC can efficiently deliver SEPI into the skin, which has a promise for topical treatment of hyperpigmentation. Introduction Stratum corneum (SC) is the first layer of epidermis, which is comprised of dehydrated and keratin-rich corneocytes compacting in a continuous lipid bilayer. This layer is the main barrier against the penetration of watersoluble and lipid-soluble materials into the skin. 1 The stratum basale is the innermost layer of the epidermis that contains melanocytes, melanin-producing cells. 2 Melanin is a skin color determining and ultraviolet radiation (UV) protection factor. 3,4 In melanocytes, melanin pigments are synthesized from tyrosine through the enzymatic process called melanogenesis. 5 This process is multifactorial that is regulated by various factors, consisting of the female sex hormones, melanotropin or α-melanocyte-stimulating hormone (α-MSH), and catecholamines. Hormone α-MSH and catecholamines induce melanogenesis through type 1 melanocortin receptors (MC1Rs) and β-adrenergic receptors (β-ADRs), respectively. These hormones are secreted in response to sunlight, UV, hormonal influences, and other environmental stimulating factors. The overproduction or abnormal distribution of melanin can cause freckles, melasma, and hyperpigmentation. 3,4 Therefore, the inhibition of α-MSH and β-ADR can be a potent strategy for treating hyperpigmentation. Undecylenoyl phenylalanine (Sepiwhite ® , abbreviated to SEPI)) ( Figure 1) is a novel lightening agent, which developed inspired from natural MC1R receptors antagonists presenting in the skin, AGRPs (agouti-related protein) and probably acting as an antagonist of α-MSH and β-ADR. 5,6 In recent years, SEPI has been used to treat melisma. 
[7][8][9] The treatment of local cutaneous dermatologic by employing a pharmaceutical agent is easy, suitable, and generally well accepted by the patients. 10 However, the prevention of dermal drug penetration by the SC layer is the main limitation of this strategy. There is also convincing evidence indicating that the drug released from the conventional formulations is usually trapped in the upper layers of SC and cannot pass through it. 11 Among modern drug delivery carriers, nanostructured lipid carriers (NLCs) are promising colloidal carriers. Some advantages of NLCs are the protection of the encapsulated agents against chemical and enzymatic degradation, good stability during the storage period, avoidance against organic and alcoholic solvent, easy and scalable production process, controlled release profile of drugs, and enhancing the penetration of drugs into the skin. 12 Controlled drug release is vital for long-time drug delivery and regulates the systemic absorption of the drug, which is important when the drug is stimulant in high concentrations. 13 Moreover, NLCs have a higher encapsulation capacity, a lesser drug leakage, and an improved drug release profile compared to solid lipid nanoparticles (SLNs) due to the partial replacement of solid lipid with liquid lipid. It has been demonstrated that lipid nanoparticles (NPs) have UV-blocking effects. 14 Given that UV radiation is an important cause of hyperpigmented lesions, the use of these carriers probably can be beneficial in this case as well. Additionally, NLCs can enhance the penetration of the drugs into the skin by various mechanisms such as direct exposure to the skin surface, skin hydration, lipid exchange between NLCs and SC, and internalization into follicles and adipose tissue. 15 Here, for the first time, we aimed to prepare a SEPIloaded NLC formulation as topical drug delivery system. We investigated the physicochemical properties, longtime stability, efficiency of drug incorporation, drug release pattern, and antityrosinase activity of SEPI-NLC. Moreover, to assess the accumulation and penetration efficiency of SEPI in the epidermis, the cumulative amount of penetrated drug in emulsion, NLC suspensions, and alcoholic solution were studied and compared. Male BALB/c mice were also obtained from Animal Resources Center at the Pasteur Institute of Iran (Tehran, Iran) and were treated in accordance with the National Institutes of Health guide for the care and use of laboratory animals (NIH Publications No. 8023, revised 1978 Preparation and characterization of SEPI-loaded NLCs, SEPI-EMOLSION, and SEPI-ETHANOL Preparation of NLCs Different concentrations of GPS, GMS, BW, and CP as solid lipid and POL188, SP80, and Gelucire as the surfactant, alone or in combination, were used to produce the different NLC formulations (Table 1). In all formulations, MCT oil was the liquid lipid of choice. High-speed homogenization and ultrasound methods were used to prepare SEPI-NLCs. In brief, the lipid phase consisting of solid and liquid lipids, SEPI, and surfactant, and the aqueous phase containing water with/without surfactant were separately prepared in a hot water bath (80°C). The aqueous phase was quickly supplemented to the lipid phase and a homogenized emulsion was obtained by an Ultra-Turrax T25 (IKA T10, Germany) at 11 500 RPM for 3.5 minutes. A probe sonicator (Bransonic, USA) was used to disperse the solution for 5 minutes. 16 The resultant NLCs were chilled to room temperature. 
The optimal formulation was selected regarding the particles size, polydispersity index (PDI), zeta potential, drug leakage, and homogeneity during one month ( Table 2). SEPI-EMULSION (SEPI-CREAM), a conventional formulation, was produced based on the equivalent concentration of SEPI, MCT, solid lipid, and surfactant at NLC formulations by hand mixer and without Ultra-Turrax homogenization and probe sonication. MCT, GPS, and SEPI as lipid phase, and dissolved POL188 in water as aqueous phase were quickly mixed. To prepare SEPI-ETHANOL, an equivalent concentration of SEPI in the other formulations was dissolved in 10 mL ethanol. Particle size and PDI Particle size, zeta potentials, and PDI of SEPI-NLCs were measured in triplicate employing Zetasizer Nano-ZS (Malvern Instruments Ltd., UK) by the photon correlation spectroscopy method at 25°C. 17 Zeta potential reflects the electric charge on the particle surface and physical stability of colloidal systems. This can be determined by the electrophoretic mobility of particles and colloids dispersed in a liquid. Particles with a large negative or positive zeta potential (more negative than −30 mV or more positive than + 30 mV) will repel each other and will not aggregate. 17 Encapsulation efficiency (% EE) and drug loading (%DL) The percentage of the encapsulated SEPI within NPS (% EE) was identified by the indirect method. Briefly, 1.5 mL of SEPI-NLCs suspension was added to an Amicon ultrafilter with a molecular weight cut-off 100 kDa and centrifuged for 30 minutes at 13 000 RPM, by refrigerated centrifuge (Remi Elektrotechnik Ltd., Maharashtra, India) to separate the aqueous and lipid phases. To determine the percent of EE, the sample that was passed through the filter membrane was quantified by high-performance liquid chromatography (HPLC). 16 HPLC was set on a Lichrospher C8 column (5 Am, 4.6 mm ID25 cm, Hanbang, Dalian, China). The mobile phase was a mixture of methanol: water (95:5 v/v) with 2 mL/min flow rate, 211 nm as the detection wavelength, and the retention time ~1 minute. The calibration curve of SEPI was showed a linear response (R 2 = 0.9999) in the SEPI concentrations, which ranged from 0.975 to 125 μg/mL. Based on the procedure, Differential scanning calorimetry (DSC) assay DSC study was performed using a Mettler DSC 821e (Mettler-Toledo, Gießen, Germany). The scanning was performed at a heating rate of 5°C/min in 0°C-200°C under 5 mL/min nitrogen flow rate and compared with an empty aluminum pan as reference. 18 Samples involved in bulk PRES, SEPI alone, lyophilized drug-free NLCs, and lyophilized SEPI-NLCs. The thermograms of samples and the calorimetric parameters were calculated by STARe software. Transmission electron microscopy (TEM) imaging The morphology of the SEPI-NLCs was investigated by TEM (Zeiss, Jena, Germany). 20 μL drop of diluted SEPI-NLC suspension (1:25) was fixed on a copper grid and dried at 25°C, and then was stained with 2% (w/v) uranyl acetate. 19 The dried specimen was evaluated by TEM. Long-term stability studies The NLCs samples were stored in sealed tubes and away from light at 25°C. The mean diameter of particles, PDI, zeta potential, and clarity of the samples were examined * In all formulation, MCT oil 1.5% was used as liquid lipid and the volume of water was 10 mL. in triplicate 1, 3, 6, and 9 months after preparation. The stability of the SEPI-loaded NLC was also assessed by samples centrifugation at 13 000 RPM for 30 min. 
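The %EE and %DL values quoted above come from the indirect method, in which the unencapsulated drug measured in the ultrafiltrate is subtracted from the total drug added. The text above does not give explicit formulas, so the sketch below uses the standard indirect-method expressions; the calibration data, the drug-loading denominator (total lipid mass), and all numerical inputs are assumptions for illustration only.

```python
import numpy as np

def hplc_concentration(peak_area: float, areas: np.ndarray, concs: np.ndarray) -> float:
    # Linear calibration curve (the paper reports R^2 = 0.9999 over 0.975-125 ug/mL).
    slope, intercept = np.polyfit(areas, concs, 1)
    return slope * peak_area + intercept

def encapsulation_efficiency(total_drug_ug: float, free_drug_ug: float) -> float:
    # Indirect method: drug not found in the filtrate is assumed to be entrapped.
    return 100.0 * (total_drug_ug - free_drug_ug) / total_drug_ug

def drug_loading(total_drug_ug: float, free_drug_ug: float, lipid_ug: float) -> float:
    # One common definition: entrapped drug relative to total lipid mass.
    return 100.0 * (total_drug_ug - free_drug_ug) / lipid_ug

# Illustrative numbers only (not measurements from the study):
areas = np.array([12.0, 60.0, 120.0, 600.0, 1500.0])
concs = np.array([0.975, 5.0, 10.0, 50.0, 125.0])
free = hplc_concentration(110.0, areas, concs) * 1.5      # ug in the 1.5 mL filtrate
print(encapsulation_efficiency(4000.0, free), drug_loading(4000.0, free, 50000.0))
```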
20

In vitro release study: The in vitro release of SEPI from the NLC and emulsion formulations was measured by the dialysis bag-diffusion method using a dialysis membrane with a 12-14 kDa molecular weight cut-off (Visking® dialysis tubing, Servia, Greece). Dialysis bags were soaked overnight and filled with 1 mL of the 0.4% SEPI-NLC and 0.4% SEPI-EMULSION suspensions. The sealed dialysis bags were placed into the release buffer. The cumulative release data were analyzed with the zero-order, Higuchi, and Korsmeyer-Peppas models:

$Q_t = Q_0 + K_0\,t$ (zero-order) and $Q_t = K_H\,\sqrt{t}$ (Higuchi),

where $Q_t$ and $Q_0$ symbolize the drug concentrations at times t and zero, respectively, and $K_0$ and $K_H$ are the zero-order and Higuchi constants, respectively; and

$M_t/M_\infty = K_p\,t^{n}$ (Korsmeyer-Peppas),

where $M_t$ and $M_\infty$ denote the released drug at times t and ∞, and $K_p$ and $n$ are the pseudo-kinetic constant and the release exponent, respectively. Values of n < 0.43, 0.43 < n < 0.89, and n > 0.89 indicate Fickian diffusion, non-Fickian transport, and zero-order release, respectively. 24,25

Skin penetration and retention study: The skin permeation and retention tests were performed ex vivo for the SEPI-NLC, SEPI-EMULSION suspension (as conventional formulation), and SEPI-ETHANOL solution employing a 4-station Franz diffusion cell (PermeGear, Inc., USA). Shaved, full-thickness grafts of abdominal skin from male BALB/c mice (7 weeks, 25 g) were soaked in PBS on the dermal side for 1 h to remove the subcutaneous fatty tissue. The Franz diffusion cells were kept at 37°C with a circulating water jacket. The skins were fixed between the donor and receptor chambers, with the SC side facing the donor compartment (outside the cell) and the dermis side facing the receptor compartment. The receptor compartment, with a 4.54 cm² diffusion area, was filled with 25 mL PBS (pH 7.4) containing 1% (w/w) Tween 80 and stirred at 200 RPM. The skin surface in the donor chamber was covered with the equivalent of 4000 μg of SEPI as SEPI-NLC, SEPI-EMULSION suspension, or SEPI-ETHANOL solution. The cell was covered with paraffin to prevent evaporation. Next, 2 mL samples were withdrawn from the receiver compartment at 0.5, 1, 2, 4, 6, 8, 18, and 24 hours, replaced with fresh PBS containing 1% Tween 80, and incubated at 37°C. After 24 hours, the residual samples were collected from the skin surface and dissolved in methanol and chloroform in a 1:2 ratio. All samples were passed through an aqueous 0.45 μm pore diameter membrane filter and the supernatant was assayed by HPLC. 16 The accumulation of SEPI in the skin (retention) was taken as the applied SEPI minus the SEPI that permeated into the receptor compartment and the SEPI remaining on the skin surface (residue).

Cell viability assay: Resazurin (Alamar Blue) was used for the cell viability assay. B16F10 and HDF cells were seeded into 96-well plates (4 × 10⁵ cells/well) and cultured in RPMI 1640 and DMEM, respectively, supplemented with 10% FBS, 100 μg/mL streptomycin, and 100 U/mL penicillin. The cultures were maintained in a humidified atmosphere containing 5% CO₂ for 48 hours at 37°C. Next, the cells were treated with 20 µL resazurin (14 mg/dL) along with 1.25, 2.5, and 5 µM of SEPI solution and further incubated for 48 hours. Cell viability was measured in triplicate by tracking the absorbance at 570 and 600 nm, with doxorubicin (DOX) 5 μg/mL as a positive control and non-treated cells as a negative control. 26

The assessment of mushroom tyrosinase activity: In a 96-well plate, 20 µL of SEPI at concentrations of 5, 2.5, and 1.25 µM were mixed with 160 µL of L-DOPA (5 mM, pH 6.8). Then, 20 µL of mushroom tyrosinase was added to the wells and shaken for 5 minutes.
Kojic acid and cell-containing media were used as the positive and negative control, respectively. The plates were incubated at 37°C for 30 minutes. Then, the absorbance of produced dopachrome was checked at 490 nm by an ELISA reader. 26 Cellular tyrosinase activity assay B16F10 cells tyrosinase activity was monitored by determining the L-DOPA oxidation rate to dopachrome. The 24 hours incubated B16F10 cells were seeded in a 12-well plate (10 5 cells/well) and treated with 5, 2.5, 1.25 µM of SEPI for 24 hours. After that, the cells were resuspended with trypsin and the pellet was washed with PBS. Then, 100 µL sodium phosphate buffer 100 mM (pH 6.8), containing 1% Triton X-100 and 0.1 mM PMSF, was used for 30 minutes to lyse the cell pellets. The lysed cells were then centrifuged for 20 minutes at 10 000 RPM at a cool temperature. 100 µL suspension of protein was mixed with the same amount of 5 mM DOPA in each well and incubated at 37°C for 2 hours. The produced dopachrome was quantified based on its absorbance at 475 nm and the standard curve of mushroom tyrosinase. 26 Date analysis Statistical analysis was carried out to evaluate differences between groups using one-way analysis of variance (ANOVA) for cellular test and two-way ANOVA for release and permission studies with GraphPad Prism 6.01 (GraphPad Software, Inc., USA). The data are displayed as a mean ± standard deviation (SD). Comparisons of two groups at similar times were performed by Sidak multiple comparison test. P values less than 0.05 were considered as statistical significance. Results and Discussion Optimization and characterization of NLC The size, shape, colloidal stabilization, and drug loading capacity of the nano-sized carriers are affected their performance. Particles with less than 200 nm size are considered to be optimum size distribution, which are enhanced drug accumulation in action site. 11 Moreover, the highly negative and positive zeta potentials of NPs provide a high stabile and colloidal nanocarrier. The stable colloidal nanocarriers provide proper properties for NPs in drug delivery applications. 14 The therapeutic agent may be encapsulated into nanovehicles. Encapsulation of drugs in a nanocarrier may be increased the uptake and delivery of them. 17 The using nanocarriers with high drug loading capacity is crucial for drug efficacy as the suitable drug concentration will release from nanocarriers at the right time in the right place. 17 The optimized formulation was selected based on these criteria. In this study, an optimal formulation with appropriate physicochemical features was obtained among five formulations with different ratios of lipid and surfactant, with and without SEPI. The particle size, PDI, zeta potential, and long-term stability were used for selection of optimized SEPI-loaded and drug-free formulation (Tables 1 and 2). The lipid/surfactant ratio may influence the size, zeta potential, % EE and loading of SEPI into SEPI-NLCs. The optimal nanocarrier defines as a carrier with a small particle size, higher loading content, more cumulative drug release profile, and better physicochemical stability. According to Tables 1 and 2, SEPI-NLC2 was selected as the optimal formulation. In comparison with SEPI-NLC2, a remarkable increase in particle size and PDI were detected via adding the SEPI to NLC5. SEPI-NLC3 and SEPI-NLC4 showed drug precipitation, non-homogeneous state during 1 month, instability, and large size distribution ranging from 10 to 400 nm. 
SEPI-NLC1 demonstrated a few drug precipitation during storage time. The long-term stability of optimal formulation of SEPI-NLC in room temperature during 9 months indicated good stability and the slight change in particle size, z-potential, and PDI (Table 3). No notable alteration of clarity and phase separation was observed. The calculated encapsulation efficiency and drug loading of SEPI-NLCs were 90.81 ± 3.75 and 7.26 ± 3.30, respectively. GPS and GMS are lipid excipients for the construction of sustained-release dosage forms due to their lipophilic criteria. This amorphous diglyceride with two different long-chain fatty acids (C16, C17) forms a disordered lipid lattice. The combination of this glyceride with other glycerides provides extra space for drug loading due to no crystal defects and the crystalline structures. 27 It explains the high entrapment efficacy of SEPI-NLC. The emulsified nature of GPS can also reduce particle size and increase drug release. In addition, PRES is an anionic lipid with the OH functional group in its structure and prevents aggregation of NPs and instability during storage. 10 It has been shown that the lipid amount up to 5% in NLC formula increase the particle size in the range of microparticles. 28 The liquid lipids with a shorter carbon chain length, such as MCT oil provide more capacity for drug loading resulting in increasing the release rate. Additionally, according to Einstein-Stokes law, adding liquid lipid to solid lipid in the molten state can increase drug release from NPs due to decrease viscosity. 29 Moreover, increasing the ratio of liquid to solid lipids prevents particle agglomeration and increasing the size of NPs. We used the 70:30 ratio of solid to liquid lipids. Suitable surfactant causes high, uniform, and long-term lipid dispersion in the aqueous phase. 30 POL188 is a nonionic surfactant with minimal skin irritation, which is used in the preparation of NPs. 31,32 POL188 was selected as a hydrophilic surfactant due to its wide melting points from 52 to 57 °C approximately nearby the GPS melting point (56°C). POL188 have a high value of hydrophile lipophile balance (HLB) and reduces the surface tension between the lipid excipient and the resolving medium. Additionally, GPS has a hydrophobic character expressed by the low HLB value of 2 that can lead to slow drug release and POL188 can help to modify the drug release from the lipid matrix structures. 33 Increasing the surfactant concentration reduces the particle size because of the decrease of surface rigidity between the solid lipid and the external liquid environs. 34 Figure 2 illustrates the DSC thermograms of SEPI, GPS, drug-free NLC, and SEPI-NLC. The thermogram of SEPI-NLC did not indicate the melt-crystallization peak of SEPI around 75°C, revealing the amorphous state of SEPI in NLC. 35 That also means the drug has completely entered into NPs. Since the drug was freely present in the formulation, it should be localized on the external surface of NPs during the lyophilization process and the endothermic peak of lyophilized SEPI-NLC may display around 75°C. The high percentage of drug encapsulation (90.815 ± 4.631) confirms this. 18,36 SEPI-NLC showed one peak at 48.41°C that was lower than GPS by melting point of 64.48°C and SEPI that exhibited two peaks at 75.36°C and 80.22°C. This phenomenon is raised from the formation of nanoscale particles instead of the bulk lipid as well as dissolving the drugs, surfactants, or oils in the lipid matrix. 
37 The lipid NPs are usually dispersed in size with different melting points (smaller particles ↔less melting point). 30 According to Figure 2, the enthalpy of GPS being much higher than that of lyophilized drug-free NLC (165.70 J/g in lipid bulk versus 87.21 J/g in drug-free NLC) that suggests a loss of crystallinity of the lipids in NLCs formula and crystal dis-ordering that may affect the loading capability. 38 Morphology of SEPI-loaded NLCs The TEM images showed that the mean diameter of SEPI-NLC was less than 200 nm, which was confirmed by the DLS-based analysis (Figure 3). The polydispersity can be probable dissimilarity of particle size in the particle size analyzer and TEM assay. 37 The images of SEPI-NLCs demonstrated a spherical nanosized shape and a small size distribution. In vitro release study There is a hypothesis that lipid NPs > 100 nm are not able to penetrate into the SC layer and stay on the surface because of their dimensions and rigidity. In contrast, they may affect penetration through the hair follicles, confirming the role of other mechanisms in drug penetration. 15,39,40 Furthermore, some NPs may be found intact only in the first layer of the SC. 10 Therefore, the release of the encapsulated drug is necessary for optimal topical drug delivery. The blocking of α-MSH and β-ADR receptors on the cell surface are known as the main SEPI mechanism. So, when NPs penetrate to the deepest epidermal layer, the drug must be released from the NPs before entry the cells to target the receptors. The in vitro release pattern study provides an idea for the dose and time of therapy in the in vivo stage. As represented in Figure 4A, SEPI-NLCs verified a biphasic release profile that presented a burst release during the first 4 hours, afterward a prolonged release up to 72 hours. About 65% of SEPI was released from SEPI-NLC within 72 hours, while in SEPI-EMULSION, this value was 23%. So, a faster release rate of SEPI from NLC was observed compared to SEPI-EMULSION. Olejnik et al. investigated the in vitro release of SEPI from topical formulations consisting two different macroemulsions, carbomer-and hydroxyethylcellulose-based hydrogels, and microemulsions. 8 A comparative analysis showed the highest release rate of active substance from the carbomer hydrogel. After 10 hours, about 80% of the SEPI was permeated through the membrane. However, a continuous release of active compound was observed for hydroxyethylcellulose hydrogel, which after 24 h reached about 80%. Additionally, it can be noted that much more SEPI was released from macroemulsion #2 (prepared by Oleic acid) than macroemulsion #1 (prepared by Isopropyl myristate). After 24 hours, 60% of total mass of active compound permeated to the receptor medium, while in the case of macroemulsion #1 only 20% of SEPI was released. Regarding these, it can be concluded that hydrogels can be better vehicle for the SEPI than macroemulsions. 8 Burst release occurs when the solubility of the drug in the molten lipid is lower than its saturation limit, so the drug was enhanced in the outer shell of the particles and formed a drug-enriched shell model. 41 Poloxamer is a hydrophilic surfactant that acts as a pore-forming agent causing the penetration of the dissolution medium. The penetrated medium can dissolve the localized drug in the outer shell resulting in burst release at the beginning of the test. 
33 Moreover, increasing the temperature of the NLC formula during the test up to 37°C initiate β modification as a trigger and expulsion of the drug to the water phase of the formulation due to more ordered lipid particle matrices (trigger-controlled release) and increase the diffusion coefficient (according to Einstein-Stokes law) of the drug in the early hours. 29 The solid lipid matrix plays a role in prolonging release. 37 According to Figure 4A, SEPI-NLC shows further and more complete cumulative drug release than SEPI-EMULSION because of the smaller size of NPs than emulsion (micro-sized). In other words, small size in NPs improves the surface-area-to-volume ratio and decreases the release pathway in NPs. 30 Indeed, the burst release in micro-particles reduces, and the release is prolonged and incomplete. The drug release from the matrix of systems is controlled by diffusion or digestion, single or mixture, manners. Further investigation showed the drug release from the lipid matrix follows the Korsmeyer-Peppas model (n = 0.358 and n < 0.43) and Fick's first law. According to this law, diffusion occurs in response to a concentration gradient (diffusion model). The process of drug release with linear regression was also studied. Biphasic release pattern is the optimal pattern for topical drug delivery because the initial fast release can simplify SC permeation of SEPI by providing an appropriate concentration, while the sustained release keeps up the local concentration; induce a long anti-melanogenesis effect of SEPI. The kinetics models including Korsmeyer-Peppas, Higuchi, and zero-order employing to analyze the cumulative in vitro drug release have shown in Figure 5. Based on the Korsmeyer-Peppas model, the "n" rate was obtained 0.364 and 0.441 at SEPI-SLN and SEPI-EMULSION samples, respectively, indicating the Fickian diffusion model. In addition, the higher regression coefficient (R 2 ) was observed at the model of Higuchi for SEPI-SLN and SEPI-EMULSION as compared to the zero-order model, revealing the diffusion as the dominant release mechanism ( Table 4). The K values (K o , K H , and K p ) of SEPI-SLN were higher than SEPI-CREAM, which is signified the fast release of SEPI from NLC. Ex-vivo skin permeation study To evaluate ex vivo skin permeation of drug-loaded in SEPI-NLC was compared with SEPI-EMULSION and SEPI-ETHANOL formulations by Franz diffusion cells method. According to Figure 4B, the rate of residual drug in SEPI-ETHANOL was lower than two other formulations, while the penetrated drug was high. Alcohols may damage the SC by extracting and dissolving keratin and other compounds in this layer, thereby increasing the absorption of drugs. 40,41 In addition, the rapid permeation or evaporation of ethanol during the test can enhance the concentration of SEPI in solution with increase thermodynamic activity of SEPI. 42 Generally, the system tends to reduce increased thermodynamically activity by promoting the diffusion of the drug into the skin resulting in the further penetration of the drug. 42 These factors lead to the higher permeation of SEPI through the skin and less SEPI remaining on the skin from SEPI-ETHANOL. The higher percentage (23.2%) of the drug enters into the buffer medium (equivalent to systemic blood flow) from SEPI-ETHANOL compared to the emulsion and NLC (6%-7%) formulations. Therefore, SEPI-ETHANOL moves away from its target area (basal epidermis) and can maximize the risk of systemic uptake and adverse effect in topical use. 
SEPI-loaded NLCs localized and accumulated a high amount of drug in the skin with a low residual drug on the skin surface and penetrated through the skin at the end of the permeation study. NLCs can promote drugs skin permeation more than EMULSION form by various mechanisms such as lipid exchange between NLC and SC lipids, and stronger SC occlusive effect by forming a continuous thin film on the skin due to its smaller particle size. 15 The occlusion effect prevents the evaporation of skin water resulting in the hydration of the SC. A higher drug release rate along with more gradient concentration could also significantly increase the accumulative and permeation of SEPI into the skin in NLC (up to 88.8%) more than EMULSION (65%) and ETHANOL formulations (74.8%) (P < 0.01), whereas no significant difference was observed in SEPI accumulative level in the skin between SEPI-EMULSION and SEPI-ETHANOL. The residual amount of SEPI on the skin after 24 hours in the NLCs formulation (4.9%) was much less than the SEPI-EMULSION (27.9%) (P < 0.01). In SEPI-NLC aqueous dispersion, after 24 h, only 55.8% SEPI was released, while 88.8% SEPI was retained into the skin in the ex vivo penetration study ( Figure 4B). The increased epidermal permeability of SEPI may be related to the close association of skin lipids with NLC surfactants and lipids. 20,29 The poor water solubility of SEPI can avoid releasing the SEPI into the buffer medium, while the release of the drug on the skin surface can promote by enzymatically lipid degradation and electrolyte change in the SC. Indeed, electrolyte change together with increasing temperature up to 37°C could cause higherordered assembly in the lipid particle by low-energy β modifications and drug expulsion. 42 Ultimately, all of these issues assist to localize the SEPI-NLC in skin layers. Generally, release and permeation studies show that NLCs enhance the drug permeation. SEPI cytotoxic effect The cytotoxicity of SEPI concentrations on cell viability of B16F10 and HDF cells showed no significant difference between all groups with negative control (P < 0.05). DOX was used as a positive control due to a great inhibitory effect on cell proliferation. 26 The result confirmed that non-cytotoxic effect of SEPI on B16F10 and HDF cells at the tested concentrations ( Figure 6A, B). Over 80% of the studied cells survived in the presence of high concentrations of SEPI solution. The use of skin cells in in vitro evaluation is safe and cost-effective, and in certain cases, such as examining the effects of toxicity and cellular stimulation, eliminates the need for human or rat skin. Cellular and mushroom tyrosinase activity of SEPI Tyrosinase is known as a vital and rate-limiting enzyme in catalyzing melanin biosynthesis. 43 As shown in Figure 6C, the inhibitory effect of the SEPI concentrations on mushroom tyrosinase activity revealed that 72% of mushroom tyrosinase activity was significantly reduced (P < 0.001) in cells treated with SEPI 2.5 µmol/L compared to the untreated control group. This rate was 84% for kojic acid 2 mmol/L as a positive control ( Figure 6C). These Values are presented as mean ± SD for triplicate. *** P < 0.001, ** P < 0.01, and * P < 0.05. results displayed the direct inhibitory effects of SEPI on mushroom tyrosinase activity. Given that phenylalanine is the precursor of L-tyrosine as well as similarity of SEPI structure to Undecylenoyl-phenylalanine, SEPI may play a mimic role for the tyrosinase. 
In addition, we evaluated the anti-melanogenesis effect of different SEPI concentrations on cellular tyrosinase activity to identify an intracellular pathway for the inhibition of melanin synthesis. As shown in Figure 6D, significant inhibition of tyrosinase activity was observed in cells treated with 1.25, 2.5, and 5 µmol/L of SEPI solution. These results confirm that SEPI can reduce melanogenesis through intracellular mechanisms such as tyrosinase inhibition or antagonism of the α-MSH and β-AD receptor pathways, leading to a reduction of hyperpigmented lesions. Conclusion Herein, a nanosized SEPI-loaded NLC was prepared with good long-term physical and chemical stability (9 months at room temperature). The DSC analysis also indicated an amorphous state of SEPI in the NLCs. The results indicated an entrapment efficiency of 90.81 ± 3.75%. The release study demonstrated that SEPI-NLC had a biphasic release profile with an initial burst release; about 65% of SEPI was released from SEPI-NLC within 72 hours. The ex vivo permeation profiles revealed a higher SEPI accumulation in the skin following application of SEPI-NLC (up to 88.8%) compared to the SEPI-EMULSION (65%) and SEPI-ETHANOL (74.8%) formulations (P < 0.01). In addition, SEPI showed an inhibition rate of 65% for mushroom tyrosinase activity and could inhibit melanogenesis through direct inhibition of tyrosinase activity by up to 72% in B16F10 cells. Moreover, the results of the in vitro cytotoxicity assay confirmed SEPI-NLCs to be nontoxic and safe for topical use. NLC also proved to be a highly desirable nanocarrier for epidermal drug delivery of SEPI. The skin permeation and retention profiles showed that the NLC formulation allowed efficient SEPI delivery, which may be beneficial for sustained antityrosinase activity and a high accumulation of SEPI within the epidermis. These findings suggest NLCs as an appropriate nanocarrier for brightener delivery to the epidermis.
7,350
2022-01-08T00:00:00.000
[ "Chemistry", "Materials Science", "Engineering" ]
Attempting to synthesize lasso peptides using high pressure Lasso peptides are unique in that the tail of the lasso peptide threads through its macrolactam ring. The unusual structure and biological activity of lasso peptides have generated increased interest from the scientific community in recent years. Because of this, many new types of lasso peptides have been discovered. These peptides can be synthesized efficiently by microorganisms, and yet their chemical assembly is challenging. Herein, we investigated the possibility of high pressure inducing the cyclization of linear precursors of lasso peptides. Unlike other molecules such as rotaxanes, which mechanically interlock at high pressure, the threaded lasso peptides did not form, even at pressures up to 14 000 kbar. Introduction Lasso peptides belong to a specific class of natural peptides characterized by a unique "knot" structure motif [1]. These compounds are synthesized ribosomally and modified post-translationally. Thus, the peptide precursor is genetically encoded, but the target structure is formed by a set of several enzymes. These peptides are built from a macrolactam ring formed from an isopeptide bond between an N-terminal amino acid residue (usually glycine or alanine) and a side chain of aspartic or glutamic acid. The remaining C-terminal chain is threaded through the macrolactam ring and resembles a lariat knot, which can be divided into a loop and a tail [2]. The unique topology of lasso peptides is sustained by steric interactions provided by the presence of bulky amino acid residues (e.g. tryptophan in the exocyclic part of the peptide). The broad interest in lasso peptides is due to both their extraordinary topology and their biological activity [3, 4, 5, and 6]. For example, a recent biological study of lassomycin showed some activity against Mycobacterium tuberculosis [7]. However, its structure remains uncertain, and data reported in the literature show conflicting information about whether or not lassomycin has the characteristic "knot" motif [7, 8, and 9]. This does not imply, however, that the structures of lasso peptides are completely undeterminable. There are many examples of lasso peptides for which the topology has been established with certainty, including lower than that of the regular, unthreaded peptide. However, solvation may change the Vs relationships between the threaded and unthreaded peptide. Predicting this effect is difficult, since it may depend on the applied solvent. In our manuscript, we present our experimental approach to solving this problem. We did this by performing cyclization of the linear core peptides (LCPs) of lasso peptides under varying conditions, including several different solvents and pressures. Using high pressure may create beneficial conditions for folding the LCP into the native conformation of the lasso peptide. Data reported in the literature show that high pressure facilitates the formation of rotaxanes [21,22]. Numerous data confirm that protein conformation is sensitive to high pressure [27,28,29,30]. It has been demonstrated that pressure applied in the range of 1-8 kbar influences the structure of proteins, including ubiquitin and lysozyme, which are known for their stability [31]. For the high-pressure cyclization of LCPs of lasso peptides, several methods were used: 1. The cyclization of the LCP of a lasso peptide immobilized on a solid support: This method is based on solid-phase peptide synthesis (SPPS), which was previously described [32].
An LCP of sungsanpin and chaxapeptin is synthesized, and then subjected to on-resin cyclization using several solvents and elevated pressure during the coupling. 2. The use of a sungsanpin analogue in solution with the C-terminal amide and lysine protected by a benzyloxycarbonyl group. In this case, the cyclization is performed under high pressure (13000 Bar). 3. High-pressure cyclization based on native chemical ligation. This procedure was previously used for homodetic and heterodetic peptides in an aqueous solution [33]. The obtained products were monitored by LC-MS, using a SIM (selected ion monitoring) mode. According to literature data, the threaded and unthreaded forms of a lasso peptide exhibit different chromatographic behaviours, which usually results in a clear separation of the two forms (threaded and unthreaded) [34,35,36,37]. Cyclization of the linear core peptides of sungsanpin and chaxapeptin on solid support The purpose of this study was to test different reaction conditions in order to assess the possible synthesis of lasso peptides. To do so, we synthesized a LCP of sungsanpin and chaxapeptin on both the Chemmatrix Amide Resin and Chemmatrix Wang Rink. Choosing this type of resin was dictated by the possibility of application of many solvent characterized by differential polarity. For the synthesis, we used Fmoc-Asp (O-2-PhiPr)-OH because of its lability of the side chain protection, thereby allowing the cyclization of the Asp side chain carboxyl group with the α-amine of the peptide. We did not expect that assembling these peptides in such manner would enable the formation of the lasso topology, as Lear and Co. [9] reported that the chemical synthesis of lassomycin produced a branched-cyclic peptide. Thus, the regular solidphase peptide synthesis was ineffective in forming a threaded structure. However, we performed the on-resin cyclization in order to obtain a reference branched-cyclic peptide. The structure of the purified product was confirmed by NMR analysis, specifically by 2D homonuclear TOCSY and ROESY experiments. As one can see from Fig 2, the sungsanpin in our conditions does not adopt a "lasso" conformation in the water solution after it dissolves. The signal dispersion is reduced as compared to the lasso form observed for the peptide in pyridine-d 5 . Analysis of Cα chemical shift values assigned to sungsanpin (S1 Table of S1 Data), reveals in water, that they are mostly elevated in comparison to those assigned in pyridine [10]. This suggests that the peptide most likely adopts a partially helical conformation in aqueous solution. Our main goal was to study how high pressure influences the assembly of peptides exhibiting a characteristic knot structure. In our experiment, we applied a pressure of 13 000 bar, the highest pressure available on the piston apparatus. We tested the high-pressure on-resin cyclization of sungsanpin-NH 2 and that of chaxapeptin-NH 2 LCP analogues, using the following set of solvents: DMF, NMP, and THF/ACN. We chose polar and non-polar solvents to enforce conformational changes. As mentioned in the introduction, a lasso peptide differs from its branched-cyclic peptide analogue in terms of its chromatographic behaviour [34]. Therefore, we analyzed the reaction mixtures by LC-MS, using a selected ion monitoring mode (SIM). The results of the cyclization of the LCP of sungsanpin performed under high-pressure conditions are presented in Fig 3. 
In this figure, we note the presence of an additional signal (R f 8.5), whose intensity increased when the peptide was cyclized in a THF/ACN mixture. In this case, we found the surface area ratio between these two peaks to be 20:80. Moreover, the LC-MS/MS analysis showed a similar fragmentation pattern, although the MS/MS spectrum of the signal at 8.5 min was characterized by lower intensities for most peaks. Additionally, we confirmed the identity of these two signals, using a high-resolution mass spectrometer because the triple quadrupole mass analyser has insufficient mass accuracy. So far, the obtained results did not show any evident formation of the specific lasso topology. One potential method to be explored, in order to achieve this goal, is NMR, which we also discussed in this section. However, we decided to compare our results with chromatographic data of the natural sungsanpin. For this reason, we synthesized the isotopically labelled LCP of sungsanpin (see S3 Fig of S1 Data), using a Chemmatrix Wang resin, which, in turn, allowed the assembly of the original α-carboxyl group-containing analogue. During the SPPS synthesis, we applied Fmoc-Lys-OH-13 C 6 , 15 N 2 . The isotopologues have the same retention time as revealed by LC-MS analysis, but they have different masses. This makes it easy to differentiate from non-labelled compounds. The obtained LCP was subjected to a high-pressure cyclization, The analysis of XIC for both peptides revealed differences in retention times, and showed a longer t R for the threaded form. In our case, the additional signal formed after high-pressure PLOS ONE Chemical synthesis of lasso peptides under high pressure cyclization exhibited a shorter t R than that of the branched-cyclic peptide, which clearly excluded the possible formation of a lasso peptide. The obtained data prompted us to answer questions regarding the peptide structure that was simultaneously formed with a branched-cyclic peptide as a result of high-pressure coupling and that exhibited the same molecular mass. In our opinion, there are two possible explanations: 1) dehydration of the linear peptide, either in solution or gas phase, possibly from a serine residue, for example. However, in the case of a gas-phase water loss, the dehydrated and the linear peptides should have the same retention times. But this was not consistent with our results. Furthermore, when taking into account the assumption that a serine is dehydrated, we should expect an increase in retention, due to the increased hydrophobicity of the product; 2) the formation of an aspartimide. This modification leads to the formation of an isobaric compound with a branched-cyclic peptide (likewise with a lasso peptide), as well as a preservation of the linear nature of peptide. Thus, the formation of an aspartimide may explain the shorter retention time of the additional signal, in comparison with the cyclic product. This assumption was confirmed by the MS/MS Solution-phase cyclization of the linear core peptides of sungsanpin and chaxapeptin The next step was to study the cyclization of the LCPs of sungsanpin and chaxapeptin in solution. There is a big difference between these two approaches, since the peptide is not attached to the resin after elimination of the protection groups, with the exception of benzyloxycarbonyl (Z) protection group of the lysine side chain. 
This protection was introduced during the SPPS synthesis by design because of its stability in standard cleavage conditions (TFA) so that no risk of undesirable coupling of the aspartic acid side chain with the ε-amino group of lysine could occur. Samples of LCP of sungsanpin and chaxapeptin were subjected to coupling using PyAOP (2-fold excess) and several solvent systems. Similar to the previous section, the reactions were carried out both in atmospheric and high pressure conditions on the piston apparatus. Thus, the aim of these experiments was to examine the possibility of a chemical synthesis of peptides with a characteristic knot structure, via solution-phase coupling of the LCP deprived of protection groups. Results of both the atmospheric and high pressure cyclization of two discussed peptides are presented in Figs 6 and 7. Figs 6 and 7 show no difference between the two chromatograms, meaning that there is no additional structure of peptide formed in the sample, under high-pressure conditions. The same results were obtained for the sungsanpin analogue. In summary, we found that the application of high pressure did not cause formation of the lasso topology, and that the sole form obtained under these conditions was a branched-cyclic peptide. Cyclization of the linear core peptides of sungsanpin and chaxapeptin via tandem acyl shift We also tested a similar approach like in the previous section, though not based on the ordinary peptide bond coupling, but rather, on the tandem acyl shift ligation. Thus, reactions under high pressure were also studied, but by using conditions that required longer reaction times and were performed in an aqueous solution (as opposed to organic solutions). So, the fundamental differences, in comparison with the previous experiment, are the application of water as a solvent, and the elongation reaction time. We used a chemical ligation based on an N ! S acyl shift, followed by ligation with the C-terminal cysteine residue and the S ! N migration of acyl group. For this purpose, we designed the sungsanpin and chaxapeptin analogues containing an N-terminal cysteine residue instead of glycine. Moreover, we attached a 2-(ethylamine) ethane thiol moiety to the carboxyl group of the aspartic acid side chain as a thioester surrogate. According to literature data, thioethylalkilamido (TEA) thioesters subjected to acidic solutions have an N ! S acyl shift, leading to formation of a thioester. This intermediate, in turn, reacted with an excess of other thiols (e.g. sodium mercaptoethanesulphonate-MESNa) present in the mixture (transthioesterification), providing a more reactive thioester. The scheme is presented in Fig 8. We assembled the LCPs of sungsanpin-and chaxapeptin-1-Cys-containing analogues on a solid support, using a Chemmatrix Rink amide resin. After removal of the Asp side chain protection under very mild conditions (1%TFA/DCM), we attached an S-trityl-2-(ethylamino) ethanethiol (TEE) moiety by ordinary amide coupling using PyBOP. The assembled peptides were cleaved from resin, using a mixture of TFA:TIS:EDT:H 2 O (92.5: 2.5: 2.5: 2.5). LC chromatograms of purified LCPs are presented in S11 and S12 Figs of S1 Data. First, we tested the cyclization between the TEE-containing Asp residue and the N-terminal Cys. The reaction was carried out in a citric acid buffer (pH 3), and in the presence of MESNa over a 24h time period. The reaction was quenched by desalting on an OMIX 1 C4 zip tip; this allowed elimination of the buffer and an excess amount of MESNa. 
Results of the cyclization reaction for the sungsanpin LCP analogue are shown in Fig 9. We observed a signal characteristic of a cyclic structure, even though a quite abundant peak from the linear peptide was still present in the MS spectrum. The prolonged incubation of samples, up to 48h, resulted in an improved yield of cyclic product. Similarly, the same experiment was performed under high pressure, using a piston apparatus; however, for practical reasons (e.g. limited stability of pressure over long time), the reactions were carried out only for 24h. This allowed for a qualitative view into the products, despite shortened reaction times and a lower yield of cyclic peptides. Afterwards, the mixture was analyzed by LC-MS (see Fig 10). The comparison of LC-MS chromatograms obtained for both the atmospheric and high pressure cyclization of sungsanpin showed the presence of two signals corresponding to m/z 819 and m/z 413. Initially, the chromatographic peak at 9.8 min was broadened, but elevating the temperature during the LC separation to 60˚C, resulted in a clearly narrower peak. The peak broadening is characteristic for cyclic peptides due to a conformational balance, especially at lower temperatures. Thus, at higher temperatures, the averaged peak was observed. Also, in this case, the appearance of an additional signal was intriguing, although, based on the results from previous experiments, the aspartimide formation was taken first into account. The analysis of the MS/MS spectrum at 8.1 min (see Fig 11) revealed the presence of a z 8 ion, characteristic of an aspartimide-containing fragment. Moreover, there is a typical series of b ion types. These observations showed that the peptide eluting at 8.1 min exhibited a linear structure. The second signal at 9.8 corresponded to the branched-cyclic peptide. In the MS/MS spectrum, there are no occurrences of the b ion series (with the exception of the b 5KP ion) which include macrolactam ring fragments. Most fragments involve either the C-terminal tail or b fragments containing the whole cyclic part of the molecule. Besides these structures, no additional form of peptide was observed, even for high-pressure assembled peptides. An analogous outcome was achieved for the chaxapeptin-1-Cys LCP analogue. Thus, the conclusion was clear: these reaction conditions do not enable the formation of the characteristic lasso topology. Instead, a branched-cyclic peptide was formed. Solution-phase cyclization of the linear core peptide of the chaxapeptin-3,13-Cys analogue with a stiffened conformation because of a disulfide bridge A further attempt to chemically synthesize lasso peptides was made by assembling a macrolactam ring through conventional peptide-bond coupling; the cyclization was carried out after the prior formation of a disulfide bond between two cysteine residues in positions 3 and 13. Additionally, the peptide had a protected ε-amino group of the lysine (benzyloxycarbonyl group) in order to avoid mixed coupling. In fact, we wanted to test the possibility of formation of the lasso motif upon prior stiffening of the tail. This idea relied on the synthesis of the LCP of a lasso peptide (on-resin) containing a disulfide bridge. The coupling between the α-amino group and the side chain carboxyl group was possible, in our opinion, from two sites of the protruding tail (Fig 12). The subsequent reduction of the disulfide bridge by dithiothreitol (DTT) could result in a mixture of both the branched-cyclic peptide and the lasso peptide. 
The on-resin formation of the disulfide bridge was selected (over the in-solution option) because it preferentially formed the intramolecular S-S bond. The ordinary oxidation with 5% DMSO/H 2 O, over 24h, was not sufficient; therefore, we used additional iodine. This yielded almost quantitative oxidation of the sulfhydryl groups within 2h. Fig 13 shows the ESI-MS spectrum of the purified and oxidized product. The oxidized peptide was subjected to cyclization (in solution-phase) using both ambient and high pressure in DMF. In order to eliminate the reducing agent and the buffer, the sample was desalted using OMIX 1 C4 pipette tips, and was subsequently analysed by LC-MS. Results are presented in Fig 14. The chromatogram showed only one signal for both the ambient and high-pressure cyclized products. The separation was carried out at 55˚C, because, a wide signal (corresponding to cyclic peptide) was observed at ambient temperature. The obtained data suggest the formation of a branchedcyclic peptide, but they do not suggest the formation of the desired lasso peptide. Conclusions The data reported in literature suggest that, in many cases, the application of high pressure facilitates the formation of mechanically interlocked structures like rotaxanes [25]. Similar results were also obtained for [2] pseudorotaxanes based on two ether crowns, and secondary ammonium ion [26]. Because lasso peptides have [1] rotaxane like topology, we decided to test PLOS ONE whether high pressure fostered the cyclization of a LCP into a threaded lasso peptide. To do so, we performed the cyclization both on a solid support (Chemmatrix 1 resin), and in solution. The solution-phase synthesis was performed following the standard method, and using a tandem acyl-shift approach. We also tested the method involving the formation of a lactam bond in a LCP stiffened by a disulfide bridge. Those reactions were performed under atmospheric pressure, and under elevated pressure (13000 Bar). Our study revealed that the application of high pressure and the use of various solvents did not result in the formation of detectable quantities of lasso peptides. We also observed that the cyclization process did not depend on the type of solvent applied. The isomeric product formed in some of the reactions is, according to MS2 studies, a linear peptide, and the loss of water is the result of aspartimide formation. These results confirmed literature data that the cyclization of LCPs of lasso peptides results in the formation of branched-cyclic peptides. Moreover, the large scale purification of these peptides, using high-pressure liquid chromatography, appeared doable. In contrast to other examples of mechanically interlocked systems, lasso peptides do not appear to be sensitive to the conditions under which the cyclization process is carried out; these include the applied pressure and the type of solvent used in the reaction. Threaded lasso peptide formation in nature requires an ATP-dependent lactam synthetase [38,39]. Therefore, the biosynthesis of these peptides requires an energy source to mediate the core peptide pre-folding. This may explain why only using physical factors, such as the external pressure or solvent is not efficient in synthesizing the native lasso peptides. Moreover, according to IMS data, the differences in collision cross-sections of the lasso peptides, and branched cyclic peptides are small, especially in low charge states. 
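The extent to which pressure can, at best, shift a cyclization equilibrium is governed by the reaction volume through the standard relation ln[K(p)/K(p0)] = −ΔV(p − p0)/(RT). The short sketch below evaluates this relation numerically for a 13 kbar pressure and a few assumed, illustrative ΔV values of the small magnitude suggested by the similar collision cross-sections; the actual reaction volume for threading is not reported in this work.

```python
import numpy as np

R = 8.314      # gas constant, J mol^-1 K^-1
T = 298.15     # temperature, K
p = 13_000e5   # 13 000 bar expressed in Pa (1 bar = 1e5 Pa)

# Illustrative reaction volumes (cm^3/mol) for forming the threaded product;
# these numbers are assumptions for the sketch, not measured values.
for dV_cm3 in (-0.5, -1.0, -2.0):
    dV = dV_cm3 * 1e-6                    # cm^3/mol -> m^3/mol
    ratio = np.exp(-dV * p / (R * T))     # K(13 kbar) / K(1 atm)
    print(f"ΔV = {dV_cm3:5.1f} cm3/mol -> K(13 kbar)/K(1 atm) ≈ {ratio:5.2f}")
```

For reaction volumes of only a few cm3/mol the equilibrium constant changes by less than a factor of about three even at 13 kbar, which is consistent with the conclusion drawn next.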
Therefore, the reaction volume (ΔV rxn ) is not sufficient to shift equilibrium significantly toward the lasso peptide at high-pressure conditions. Peptide preparation The LCP of sungsanpin and chaxapeptin and their analogues were prepared on a solid support, according to the standard Fmoc protocol [40]. The coupling of respective amino acids residues was carried out using a PyBOP in DMF. Both the amino acid derivatives and the coupling agent were used in 3-fold excess. Additionally, the coupling reaction and Fmoc deprotection (25% piperydine/DMF) were supported by sonication over 15 min and 3 min, respectively [41]. During the synthesis, the Fmoc-Asp(O-2-PhiPr)-OH was used, together with the alternative side chain protection. This allowed on-resin de-protection of the Asp side chain, under mild conditions (1% TFA/DCM-20 x 2 min). Additionally, for the synthesis of the LCPs of the lasso peptides (cyclized in solution), a Fmoc-Lys(Z)-OH derivative was used. Z-protecting group remains intact after the TFA-based peptide cleavage from resin. This choice was dictated by two reasons: First, this alternative protecting group prevents the side reaction during solution-phase cyclization. Second, simplicity of the eventual removal of the protecting group by catalytic hydrogenolysis. Finally, a cyclization between the N-terminal glycine residue and the carboxyl group of the aspartic acid was performed, both in the solution-and solid-phase, using PyAOP (10-fold excess). The LCPs of the lasso peptides, cyclized via tandem acyl shift, contained S-trityl-2-(ethylamino)ethanethiol (TEE) on the Asp side chain carboxyl group, which was attached similarly to the amino acid residues, using a 3-fold excess of TEE and coupling agent. This derivative was attached to the Asp side chain upon prior removal of the labile protecting group (1% TFA/DCM). Chaxapeptine-3,13-Cys analogue, which was synthesized according to the aforementioned standard procedure, was oxidized on resin, using iodine in methanol (20 min). This approach favored the formation of the intramolecular disulfide bridge. The crude product was cleaved from the resin using a mixture of TFA/H 2 O/TIS (95:2.5:2.5, v/v) for 2h at room temperature. In the case of the Cys-containing peptide, 2.5% of 1,2-ethanedithiol (EDT) was also applied. The obtained peptides were precipitated in cold diethyl ether, and subsequently lyophilized. Synthesis of S-tritylcysteamine 5.68 g of cysteamine hydrochloride (50 mmol) was dissolved in 50 ml of DMF, in an Erlenmeyer flask equipped with a magnetic stirrer; 13.94 g of trityl hydrochloride was then added to the solution. The mixture was stirred at room temperature for 3 hours. The resulting solution was concentrated in vacuo, then neutralized with KOH/MeOH to a pH of 9, and concentrated in vacuo overnight. The resulting mixture was extracted four times with DCM (4x25 ml); all fractions were subsequently collected and washed with water (2x25 ml), brine (2x25 ml), dried over MgSO 4 , and finally evaporated on a rotavapor. The final product was crystallized from EtOAc by adding small portions of hexane. Crystals were left for growth overnight, at 4˚C. Yield Synthesis of S-trityl-2-(ethylamino)ethanethiol (TEE) S-Trt-cysteamine (250 mg, 7.82 x 10 -4 mol) was placed in a round bottom flask and dissolved in 15 ml of THF. After addition of acetaldehyde (44 μl, 7.82 x 10 -4 mol), the mixture was refluxed for over 1h. 
Sodium borohydride (118 mg) was then added to the solution in order to reduce the Schiff base, after which additional refluxing was performed, for 1h. Finally, 5 ml of water containing a few drops of acetic acid was added. This resulted in the decomposition of NaBH 4 . To remove the large amount of salt from the crude product, the mixture was placed in a separatory funnel and washed with a solution of sodium bicarbonate (3 x 5 ml), and subsequently with brine (2 x 5 ml). The organic layer was then dried with MgSO 4 , and evaporated under reduced pressure using a rotatory evaporator. Yield: 160 mg (60%) ESI-MS (m/z) cal. Cyclization of peptides by via tandem acyl shift 0.5 mg of TEE-containing peptide was dissolved in 500 μl of citric acid buffer (100 mM, pH 3), then 50 equivalents of MESNa was added. The resulting mixture was heated to 40˚C over 24h. The reaction conditions were adopted from the procedure described by Taichi and Co. [33] Afterwards, the sample was desalted using OMIX 1 C4 tips in order to remove the excess of MESNa. The peptide was eluted by 70% ACN in H 2 O; after dilution with water, it was analyzed by LC-MS. High pressure cyclization of peptides Cyclization of the LCPs of sungsanpin and chaxapeptin was performed on solid support. For this purpose, 5 mg of resin containing an LCP of peptide was placed in clipped syringe with polypropylene filter. Then, 400 μl of DMF containing PyAOP (10-fold excess) and DIEA was added. The syringe was placed in the piston apparatus in the presence of silicon oil, and subjected to high pressure (13 000 Atm) for 3h. Solution-phase cyclization of the LCPs of sungsanpin and chaxapeptin was performed similarly, using the same reaction conditions (including pressure) as those described for the on-resin coupling. The cyclization of the LCPs of sungsanpin and chaxapeptin analogues were carried out via tandem acyl shift, according to the procedure described in section 2.8, using a syringe similarly introduced into the piston apparatus and subjected to high pressure over 24h. Mass spectrometric analysis Mass spectrometry measurements were carried out on an Apex Ultra FT-ICR (Bruker, Germany) equipped with an electrospray source (ESI) ion funnel, and analyzed in the positiveion mode. Before the run, the mass spectrometer was calibrated with a Tunemix mixture (Bruker Daltonics), following a quadratic method. Collision energy (10-20 eV; argon as a collision gas) was optimized during the MS/MS experiments for optimal fragmentation (the voltage over the hexapole collision cell varied from 15 to 30 V). An acetonitrile/water/formic acid (50:50:0.1) mixture was used as solvent for recording the mass spectra. The potential between the spray needle and the orifice was set to 4.5 kV. LC-MS The LC-MS analysis of the peptides was performed on a Shimadzu LCMS-8050 equipped with a triple quadrupole mass spectrometer (using a UHPLC Nexera X2 system) and on an Agilent 6470 Triple Quadrupole LC/MS System equipped with a standard Jet Stream ESI source and Agilent Technologies 1290 Infinity II system. The LC-MS analysis on both instruments were carried out in SIM (selected ion monitoring) mode and with a Q1Q3 scan. The LC system was operated with a mobile phase consisting of solvent A: 0.1% formic acid in H 2 O and solvent B: 0.1% formic acid in MeCN. The gradient conditions (B %) were from 5 to 80% B, within 15 min. The flow rate was 0.2 mL/min, and the injection volume 5 μL. 
The separation was performed on an Aeris Peptide XB-C18 column (50 mm × 2.1 mm) with a 3.6 μm bead diameter. The peptide samples were dissolved in 400 μl of a water: acetonitrile mixture (80: 20). Most analysis were carried out on a Shimadzu IT-TOF, which is a hybrid system consisting of an ion trap and a time-of-flight mass analyzer; it also includes an electrospray (ESI) ion source. In our experiments, we set the potential between the spray needle and the orifice to 4.5 kV. The LC separation on this instrument was performed in the same condition as described above. NMR analysis 2.0 mg of a peptide were dissolved in 500 μL of a mixture of 10% D 2 O and 90% of H 2 O (v/v). After the peptide dissolved, the pH was not adjusted. In order to assign all cross-peaks from the Cα region the sample of sungsanpin with the same concentration in 100% of D 2 O was also prepared. All NMR experiments were performed on the 700 MHz (Avance III, Bruker) NMR spectrometer at 25˚C. All NMR data were processed by NMRPipe and analyzed using a Sparky software [42,43]. Complete assignment of the 1 H and 13 C resonances, for all peptides (S1 Table of S1 Data), was done by application of a standard and well-established procedures based on the inspection of the 2D-homonuclear TOCSY (with mixing times 10 and 80 ms) and ROESY (with mixing times 300) experiments [44]. Purification and characterization of peptides After release from the resin, the crude peptide products were analyzed using a Thermo Separation HPLC system with UV detection (210 nm) on a Vydac Protein RP C18 column (4.6 × 250 mm, 5 μm), with a gradient elution of 0%-80% S2 in S1 (S1 = 0.1% aqueous TFA; S2 = 80% acetonitrile + 0.1%) for 40 min (flow rate 1 mL/min). TEE was analyzed on an Aeris Peptide XB-C18 column (50 mm × 2.1 mm; 3.6 μm bead diameter) using a Shimadzu IT-TOF instrument equipped with a PDA detector. The separation conditions were similar to those described in section 2.3. The main carbonylated product was purified using a preparative reversed-phase HPLC on a Vydac C18 column (22 mm x 9 250 mm), using solvent systems: S1 0.1% aqueous TFA, S2 80% acetonitrile + 0.1% TFA, linear gradient from 50 to 100% of S2 for 55 min, flow rate 7.0 ml/min, UV detection at 210 and 280 nm. The resulting fractions were collected and subjected to lyophilisation. The identities of the products were confirmed by MS analysis, using an Apex Ultra FT-ICR (Bruker, Germany) mass spectrometer equipped with an electrospray (ESI) ionization source.
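Since all of the cyclization products were tracked by LC-MS in SIM mode, it may be helpful to show how the monitored m/z channels follow from a neutral monoisotopic mass. The sketch below uses a placeholder mass rather than the actual sungsanpin or chaxapeptin value; the +8.014 Da shift corresponds to the 13C6,15N2-labelled lysine used for the isotopologue experiment.

```python
PROTON = 1.007276  # monoisotopic mass of a proton, Da

def mz(neutral_mass, charge):
    """m/z of the [M + zH]^z+ ion for a given neutral monoisotopic mass."""
    return (neutral_mass + charge * PROTON) / charge

# Placeholder neutral mass (Da) standing in for a linear core peptide;
# substitute the calculated monoisotopic mass of the actual sequence.
M = 1500.75
# Heavy-labelled analogue: Fmoc-Lys-OH-13C6,15N2 adds ~8.014 Da.
M_labelled = M + 8.014

for z in (1, 2, 3):
    print(f"z={z}:  light {mz(M, z):9.4f}   heavy {mz(M_labelled, z):9.4f}")
```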
6,678
2020-06-24T00:00:00.000
[ "Chemistry", "Materials Science" ]
Metallic 1T-TiS 2 nanodots anchored on a 2D graphitic C 3 N 4 nanosheet nanostructure with high electron transfer capability for enhanced photocatalytic performance Photocatalysis is one of the most promising technologies for solar energy conversion. With the development of photocatalysis technology, the creation of low-dimensional structure photocatalysts with improved properties becomes more and more important. Metallic 1T-TiS 2 nanodots with a low-dimensional structure were introduced into environmentally friendly two-dimensional g-C 3 N 4 (2D-C 3 N 4 ) nanosheets by a solvothermal method. It was found that the ultrathin TiS 2 nanodots were uniformly anchored on the surface of the 2D-C 3 N 4 . The e ff ective suppression of electron – hole recombination was realized due to the addition of the intrinsic metallic property of 1T-TiS 2 in the prepared nanocomposite. The 5 wt% TiS 2 /2D-C 3 N 4 nanocomposite exhibited the best photocatalytic performance and the degradation rate towards RhB was ca. 95% in 70 min, which showed an improvement of ca. 30% in comparison with 2D-C 3 N 4 . The results indicate that the obtained TiS 2 /2D-C 3 N 4 nanocomposite is a promising photocatalyst for practical applications. Introduction With the increasing scarcity of conventional energy resources and the deterioration of the environment, the development and utilization of renewable energy becomes more and more important. 1 Solar power can be converted by photocatalysts into chemical energy to degrade pollutants, which has attracted a lot of attention in the past decades. 2 Numerous researchers have devoted themselves to creating efficient photocatalysts to take full advantage of the redox ability of photogenerated carriers. TiO 2 was reported to be used for hydrogen generation via water splitting under ultraviolet light. [3][4][5][6] Nevertheless, the challenges of photocatalytic degradation mainly lie in the following two points: (1) the wide light absorption range and (2) the efficient separation of photogenerated electrons and holes. 7,8 In recent years, researchers have found the dimension reduction of the photocatalysts can shorten the diffusion length of photogenerated carriers. [9][10][11][12][13][14][15][16] Moreover, the introduction of conducting materials has been found to be able to suppress recombination of photogenerated electron (e)-hole (h) pairs of semiconductors, such as some noble metals (Ag, Au and Pt). [17][18][19][20][21][22] Platinum (Pt) with the lowest Fermi level is the most effective accepter of photogenerated electrons for the photocatalytic reaction. 23 However, its high material cost makes it uncompetitive for practical applications. Thus, the development of the substitute materials (noble metal-free) of the noble metals is very necessary, but still challenging to date. Layered transition metal dichalcogenides (TMDs) such as MoS 2 , WSe 2 , TaS 2 etc., have received much attention owing to their excellent catalytic activity and low cost compared to noble metals. [24][25][26][27][28][29][30] As a prototype of TMDs, 1T-TiS 2 is composed of metal Ti layer sandwiched between two S layers forming edgesharing TiS 6 octahedra with strong covalent forces. 3 The adjacent S-Ti-S layers are coupled to each other by weak van der Waals interactions, providing the practical feasibility for exfoliating the bulk TiS 2 to ultrathin two dimensional TiS 2 nanosheets. 31,32 In addition, 1T-TiS 2 is a semimetal with excellent inplane conductivities. 
33 It is noticeable that conductive 1T-TiS 2 can replace the noble metals as the acceptor of the photogenerated electrons and improve the separation efficiency of the photoexcited carriers of semiconductors. Graphitic carbon nitride (g-C 3 N 4 ), a polymeric semiconductor with a low band gap of ~2.7 eV, has an appropriate band structure for photocatalysis. 34 Two-dimensional C 3 N 4 (2D-C 3 N 4 ) can be prepared according to our previous report. [35][36][37] The two-dimensional structure of g-C 3 N 4 can contribute to the separation of photogenerated e-h pairs, and the carriers can then readily transfer to the surface to suppress recombination. [38][39][40][41][42][43] However, the separation efficiency of the photoexcited carriers of 2D-C 3 N 4 is still unsatisfactory in the absence of noble metals. To further improve the photocatalytic activity of 2D-C 3 N 4 , the introduction of some conductive materials is an effective method. Herein, exfoliated ultrathin TiS 2 nanodots were introduced into 2D-C 3 N 4 . Anchoring metallic TiS 2 nanodots on 2D-C 3 N 4 nanosheets via a solvothermal method facilitated the fast transfer of the photogenerated carriers. The high photocatalytic performance of the nanocomposite, which resulted from the effective suppression of e-h recombination, was tested by degrading Rhodamine B (RhB). In the nanocomposites, the TiS 2 nanodots uniformly distributed on the surface of the 2D-C 3 N 4 nanosheets served as the acceptor of the photogenerated electrons. Additionally, the photocatalytic mechanism was also studied in detail by electron spin resonance (ESR). Synthesis of ultrathin TiS 2 nanodots Ultrathin TiS 2 nanodots were prepared via liquid phase exfoliation in a co-solvent (acetonitrile/IPA = 19 : 1, by volume). Specifically, 900 mg of 1T-TiS 2 powder (Sigma-Aldrich) was dispersed in 300 ml of the co-solvent and the solution was sonicated in an ultrasonic bath at 25 °C for 4 h. Then the mixture was centrifuged at 6000 rpm for 30 min to remove any unexfoliated bulk TiS 2 . The supernatant was further probe sonicated for another 4 h, followed by centrifugation at 10 000 rpm for 30 min. The freeze-dried supernatant was named TiS 2 nanodots. Synthesis of ultrathin g-C 3 N 4 nanosheets Ultrathin g-C 3 N 4 was synthesized as follows. 2 g of melamine was calcined at 550 °C for 4 h with a 2 °C min−1 heating rate in the muffle furnace. The obtained sample was bulk g-C 3 N 4 . Then 400 mg of bulk g-C 3 N 4 was ground and heated at 550 °C for ~30 min in the muffle furnace. After that, the obtained samples were heated at 550 °C for another ~30 min in the muffle furnace. Finally, the obtained ultrathin g-C 3 N 4 was white. Synthesis of TiS 2 /2D-C 3 N 4 nanocomposites The TiS 2 /2D-C 3 N 4 nanocomposite was prepared by a solvothermal method in benzyl alcohol. An appropriate amount of ultrathin TiS 2 nanodots and 70 mg of 2D-C 3 N 4 were each dispersed in benzyl alcohol and sonicated for 20 min. Then a certain concentration of TiS 2 nanodots was mixed with pure 2D-C 3 N 4 and stirred for 2 h. After sonicating the mixture again for 20 min, the suspension was transferred to a stainless-steel autoclave and heated for 4 h at 140 °C. After natural cooling, the product was washed twice each with ethanol and water. The washed powder was then freeze dried and used as the catalyst. The TiS 2 /2D-C 3 N 4 catalysts with different TiS 2 contents were named x% TiS 2 /2D-C 3 N 4 , where "x" represents the mass percentage of TiS 2 (x = 1, 3, 5, 10 wt%).
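As a practical note on the loadings quoted above, if "x wt%" is taken to mean the TiS 2 mass as a fraction of the total TiS 2 + 2D-C 3 N 4 mass (an assumption; the exact definition is not spelled out in the text), the amount of TiS 2 nanodots to combine with the 70 mg of 2D-C 3 N 4 follows from a one-line calculation:

```python
def tis2_mass_for_loading(m_c3n4_mg, x_wt_percent):
    """Mass of TiS2 (mg) so that TiS2 makes up x wt% of the TiS2 + C3N4 total.

    m_TiS2 / (m_TiS2 + m_C3N4) = x/100  ->  m_TiS2 = m_C3N4 * x / (100 - x)
    """
    return m_c3n4_mg * x_wt_percent / (100.0 - x_wt_percent)

for x in (1, 3, 5, 10):
    m = tis2_mass_for_loading(70.0, x)
    print(f"{x:>2} wt% TiS2/2D-C3N4: mix {m:5.2f} mg TiS2 nanodots with 70 mg 2D-C3N4")
```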
Material characterization Scanning electron microscope (SEM) images of the 5% TiS 2 /2D-C 3 N 4 nanodots were taken on a JEOL-6500 scanning electron microscope. Transmission electron microscope (TEM) and high-resolution TEM (HRTEM) images were recorded on a JEOL-2100F at an accelerating voltage of 200 kV. X-ray diffraction (XRD) patterns of the TiS 2 nanodots and the nanocomposites were collected on a D/MAX2500V with Cu-Kα radiation (λ = 1.54056 Å). Raman spectra of the TiS 2 nanodots, 2D-C 3 N 4 and the 5% TiS 2 /2D-C 3 N 4 nanocomposite were acquired with a RENISHAW inVia Raman microscope using 532 nm laser excitation. X-ray photoelectron spectroscopy (XPS) measurements were performed at ambient temperature using a PHI Quantera with an Al-Kα X-ray source to analyze the presence of TiS 2 in the nanocomposites. A Shimadzu UV-2450 ultraviolet-visible spectrophotometer was used to collect the UV-vis absorption spectra of the TiS 2 /2D-C 3 N 4 nanocomposites. Composition analysis of the as-prepared materials was carried out by Fourier transform infrared (FT-IR) spectroscopy using a Nicolet Nexus 470 spectrometer. The ESR spectra were recorded on a Bruker ESR JES-FA200 spectrometer. Photoluminescence (PL) spectroscopy experiments were conducted at an excitation wavelength of 377 nm using a Jobin Yvon HORIBA NanoLog spectrofluorometer. Photocatalytic activity measurements For the photocatalytic degradation experiment, the organic dye RhB was used as a model pollutant. In detail, 10 mg of sample was added to 50 ml of RhB solution (10 mg L−1) in a Pyrex photocatalytic reactor with a circulating water system to maintain a constant temperature (30 °C). Before irradiation, the suspensions were magnetically stirred for 30 min in the dark to ensure that RhB could reach the adsorption-desorption equilibrium on the photocatalyst surface. At certain time intervals, 3 ml aliquots were sampled and centrifuged to remove the photocatalyst nanoparticles. Then the filtrates were analyzed by recording variations of the absorption band maximum (553 nm) in the UV-vis spectra of RhB using a UV-vis spectrophotometer. The air velocity was 2 L min−1 and the photocatalytic reaction was performed under a 300 W Xe lamp with a 400 nm cutoff filter. Results and discussion The TiS 2 /2D-C 3 N 4 nanocomposites with different contents of TiS 2 nanodots were prepared by a solvothermal method in benzyl alcohol. The morphology of 2D-C 3 N 4 , the TiS 2 nanodots and the as-prepared 5% TiS 2 /2D-C 3 N 4 nanocomposite was examined by AFM and TEM, as shown in Fig. 1. The 2D-C 3 N 4 nanosheets showed a micrometer-scale lateral size and a thin thickness of about 1-2 nm (Fig. 1a). The size and thickness of the TiS 2 nanodots were on the nanometer scale, 5-10 nm and 1-3 nm (Fig. 1b and c), respectively. Ultrathin and small TiS 2 nanodots were uniformly anchored on the surface of the thin 2D-C 3 N 4 nanosheets without hard aggregation. From the HRTEM image, the lattice spacing of 0.29 nm of the TiS 2 nanodots, assigned to the (010) planes, could be seen clearly on the surface of the 2D-C 3 N 4 nanosheets. This demonstrated that the hybridization of the 2D-C 3 N 4 nanosheets and TiS 2 nanodots did not disturb the crystal structure and morphology of TiS 2 . The crystalline structure of the as-prepared TiS 2 /2D-C 3 N 4 nanocomposites with different contents of TiS 2 nanodots was further characterized by XRD.
In addition to the diffraction peaks at 12.5° and 27.7° attributed to the (100) and (002) crystal planes of 2D-C 3 N 4 , respectively, 44,45 many other peaks for intralayer crystal planes at 15.5°, 34.2°, 44.2°, 53.9°, 57.7° and 65.4° of the TiS 2 /2D-C 3 N 4 nanocomposites were consistent with those of pure few-layer 1T-TiS 2 . 46 Raman spectra of the 5% TiS 2 /2D-C 3 N 4 nanocomposite, 2D-C 3 N 4 and the TiS 2 nanodots were taken to further demonstrate the presence of TiS 2 in the nanocomposites (Fig. 2c). The pure TiS 2 nanodots showed a peak at 227 cm−1 assigned to the E g mode and a peak at 332 cm−1 with a shoulder at 380 cm−1 attributed to the A 1g vibrational modes of 1T-TiS 2 . 47 However, apart from the significant peaks of 2D-C 3 N 4 , the peaks of TiS 2 were not very clear in 5% TiS 2 /2D-C 3 N 4 because of the low content of TiS 2 in the nanocomposites, which was in agreement with previous publications. 48 FT-IR spectra of pure 2D-C 3 N 4 and the TiS 2 /2D-C 3 N 4 nanocomposites were obtained as well. Because of the low detection sensitivity and low content of TiS 2 in the nanocomposites, the FT-IR spectra of all hybrids only showed the typical stretching modes of the 2D-C 3 N 4 heterocycles from 1250 to 1700 cm−1 and the vibrational mode of the N-H bond at 3147 cm−1, respectively. 48 The chemical states and elemental compositions of the TiS 2 nanodots, 2D-C 3 N 4 and the 5% TiS 2 /2D-C 3 N 4 nanocomposite were further examined by XPS, as shown in Fig. 3. The C 1s spectra of both 2D-C 3 N 4 and the 5% TiS 2 /2D-C 3 N 4 nanocomposite showed two peaks at 284.7 eV and 288.2 eV assigned to the sp 2 C=C bonds and N=C-N 2 bonds, respectively (Fig. 3a and c). The N 1s spectrum of the 5% TiS 2 /2D-C 3 N 4 nanocomposite, with four peaks in Fig. 3b, is similar to that of 2D-C 3 N 4 in Fig. 3d. The peaks at 398.7 eV, 399.8 eV, 401.0 eV and 404.4 eV were ascribed to the sp 2 -hybridized nitrogen atoms in C=N-C, the tertiary nitrogen, the amino functions carrying hydrogen (C-N-H) and π-excitation, respectively. 49 The C 1s and N 1s signals confirmed the presence of 2D-C 3 N 4 in the 5% TiS 2 /2D-C 3 N 4 nanocomposite, which is consistent with the FT-IR results. The binding energies of the Ti 2p peaks (458.5 eV for 2p 3/2 and 463.3 eV for 2p 1/2 ) and the weak binding energies of the S 2− 2p 3/2 and 2p 1/2 peaks emerging at 161.0 eV and 162.2 eV in the 5% TiS 2 /2D-C 3 N 4 nanocomposite were similar to the peaks in the TiS 2 nanodots, which agreed well with those of 1T-TiS 2 . 50 This demonstrated the existence of TiS 2 nanodots in the 5% TiS 2 /2D-C 3 N 4 composite. The presence of TiS 2 nanodots in the nanocomposites was further studied by SEM images and the corresponding EDS. The micro-morphology and element mapping of the 5% TiS 2 /2D-C 3 N 4 nanocomposite are shown in Fig. 4. The 2D nanosheets could be seen clearly in Fig. 4a and d. Because the TiS 2 nanodots and 2D-C 3 N 4 nanosheets were self-assembled together, it was difficult to distinguish them by morphology alone. From the element mapping of the 5% TiS 2 /2D-C 3 N 4 nanocomposite, it could be seen that the TiS 2 nanodots were uniformly dispersed in the soft nanosheets of 2D-C 3 N 4 . The Ti and S elements of the TiS 2 nanodots were distributed in the matrix of C and N elements of 2D-C 3 N 4 in the nanocomposites, which was consistent with the TEM results. As the content of TiS 2 nanodots in the composite was 5%, the signals of Ti and S were much lower than those of C and N. The UV-vis absorption spectra of pure TiS 2 nanodots, 2D-C 3 N 4 and the as-prepared TiS 2 /2D-C 3 N 4 nanocomposites are shown in Fig. 5a.
The curve of 2D-C 3 N 4 displayed an absorption onset at the edge of the visible region blending into the ultraviolet, which corresponded to the 2.68 eV band gap of 2D-C 3 N 4 in Fig. 5b, while the TiS 2 nanodots could absorb both UV and visible light. The TiS 2 /2D-C 3 N 4 nanocomposites showed a similar absorption range to the 2D-C 3 N 4 nanosheets but a much higher absorbance, attributed to the light absorption of TiS 2 , and the absorption intensity in the UV and visible regions was enhanced with increasing content of TiS 2 nanodots, which indicated that the TiS 2 /2D-C 3 N 4 nanocomposites generated more photogenerated e-h pairs at 200-700 nm. In order to investigate the influence of the TiS 2 QDs on the separation efficiency of the photoexcited carriers in the TiS 2 /2D-C 3 N 4 nanocomposites, PL spectroscopy measurements were carried out for 2D-C 3 N 4 and the 5% TiS 2 /2D-C 3 N 4 nanocomposite. As shown in Fig. 5c, the fluorescence intensity of the emission peak at about 460 nm for the 5% TiS 2 /2D-C 3 N 4 nanocomposite was much weaker than that of 2D-C 3 N 4 , due to the low recombination rate of photogenerated charge carriers. 51 Because of the higher UV-vis absorption and lower fluorescence emission intensity, more photogenerated charge carriers in the 5% TiS 2 /2D-C 3 N 4 nanocomposite were trapped by the highly conductive TiS 2 nanodots on the 2D-C 3 N 4 nanosheets. The photocatalytic performance of the nanocomposites was mainly evaluated by the photodegradation of RhB, because RhB is a typical organic dye which is a common water pollutant and can cause long-term environmental toxicity and short-term public health damage. Fig. 6a shows the evolution of the degradation of RhB with time. All the curves had a similar linear downtrend, which appeared to be a zero-order kinetic process. Hence, the data points were fitted according to the zero-order kinetic model, C/C 0 = b − kt, where k is the degradation rate constant and b represents the residual composition at 0 min. The k, b and R 2 values of the fitted curves for all the samples are listed in Table 1. According to the R 2 values of the samples (higher than 0.982), the fitted curves matched the obtained data well. 52 Thus, the degradation behaviour indeed followed the zero-order kinetic process. The degradation rate constant of 2D-C 3 N 4 was 0.00783 min−1. Compared with blank 2D-C 3 N 4 , all the TiS 2 /2D-C 3 N 4 nanocomposites exhibited a higher photocatalytic degradation rate under the same conditions. The addition of TiS 2 nanodots to 2D-C 3 N 4 obviously enhanced the photocatalytic activity. As the amount of TiS 2 nanodots increased, the degradation rate of RhB was enhanced significantly. The 5% TiS 2 /2D-C 3 N 4 nanocomposite showed the highest degradation rate among all the catalysts (0.0128 min−1). When the loading amount of TiS 2 nanodots reached 10 wt%, the photocatalytic degradation of RhB decreased, as too many TiS 2 nanodots blocked the photoabsorption of 2D-C 3 N 4 . The as-prepared 10% TiS 2 /2D-C 3 N 4 has a higher b value than the 5% TiS 2 /2D-C 3 N 4 nanocomposite because of weaker absorption ascribed to the excessive loading of TiS 2 nanodots. The highest degradation rate of the TiS 2 /2D-C 3 N 4 nanocomposites was ca. 95% in 70 min, an improvement of ca. 30% in comparison with that of 2D-C 3 N 4 . The photocurrent experiment was carried out to explore the charge separation and transfer efficiency (Fig. 6b).
The measurements clearly showed that the introduction of TiS 2 nanodots enhanced the photocurrent of 2D-C 3 N 4 , with the 5% TiS 2 /2D-C 3 N 4 nanocomposite exhibiting the highest value. The photocurrent for 5% TiS 2 /2D-C 3 N 4 was about 2.5 times higher than that given by bare 2D-C 3 N 4 . The results above suggested that the introduction of TiS 2 nanodots greatly enhanced the charge separation and suppressed the recombination of e-h pairs, thus improving the photocatalytic performance. Besides the high photocatalytic degradation efficiency, the 5% TiS 2 /2D-C 3 N 4 nanocomposite showed high stability after five cycling experiments (Fig. 7a). In addition, the nanocomposite still preserved the chemical and crystal structure of 2D-C 3 N 4 after cycling; the XRD patterns were consistent with those of the nanocomposite before cycling. The FT-IR spectra of the TiS 2 /2D-C 3 N 4 nanocomposite before and after cycling were similar as well. As discussed above, the metallic TiS 2 nanodots in the system played a crucial role in the migration of electrons excited in the 2D-C 3 N 4 semiconductor, which could efficiently suppress the recombination of e-h pairs. To gain further insight into the photocatalytic degradation process, ESR analysis was carried out to explore the active radicals using 5,5-dimethyl-1-pyrroline N-oxide (DMPO) as a spin trap, as shown in Fig. 8. Obviously, the ESR results showed that, when the TiS 2 /2D-C 3 N 4 nanocomposite was employed as the photocatalyst, both O 2 •− and •OH could be generated under visible light irradiation. According to previous reports, the top of the valence band (TVB) potential of 2D-C 3 N 4 is 1.79 V vs. RHE. 53,54 The band gap was calculated to be 2.68 eV (Fig. 5b), thus the bottom of the conduction band (BCB) potential of 2D-C 3 N 4 is −0.89 V vs. RHE. Apparently, the potential energy of the photoexcited electrons at the BCB was higher than that of O 2 •−/O 2 (−0.046 V), thus the electrons possessed the ability to combine with O 2 to form O 2 •−. For the generation of •OH, the holes at the TVB of 2D-C 3 N 4 cannot oxidize H 2 O or OH− into •OH due to the lower TVB position in comparison with OH−/•OH (1.99 V) and H 2 O/•OH (2.38 V). The •OH may be generated by further reactions of the strongly reducing O 2 •− through an •OOH intermediate: [55][56][57] O 2 •− + e− + H+ → •OOH (1) Fig. 9 is a schematic illustration of the synergistic effect of the photocatalytic process. In a typical photocatalytic degradation process, the 2D-C 3 N 4 nanosheets first generate photoexcited e-h pairs under visible light. Afterwards, the electrons held at the conduction band are efficiently trapped by the metallic TiS 2 nanodots acting as a co-catalyst. The high concentration of electrons can react with O 2 to form the active radical O 2 •−. Due to the strong reducing power of the photogenerated electrons, the formed O 2 •− also possesses very strong reducing power, which leads to further reactions and the formation of •OH radicals through an •OOH intermediate, as listed from reaction (1) to (3). The radicals O 2 •− and •OH are well known as effective radicals for water pollutant treatment. In addition, the holes at the valence band of 2D-C 3 N 4 can oxidize the pollutant directly. The metallic nature of the TiS 2 nanodots and the construction of a tight interface between the two components were beneficial for the separation and transfer of photoexcited carriers. Therefore, a high concentration of photogenerated electrons could converge for photocatalytic reactions at the TiS 2 nanodots instead of recombining at the 2D-C 3 N 4 nanosheets.
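For completeness, the zero-order fit behind Table 1 (C/C 0 = b − kt) is a plain straight-line regression. The sketch below illustrates the procedure on hypothetical C/C 0 values of a magnitude similar to the reported curves; it is not the measured RhB data.

```python
import numpy as np
from scipy.stats import linregress

# Hypothetical C/C0 values sampled every 10 min under irradiation;
# illustrative only, not the measured RhB data.
t = np.array([0, 10, 20, 30, 40, 50, 60, 70], dtype=float)        # min
c_over_c0 = np.array([1.00, 0.88, 0.75, 0.62, 0.50, 0.37, 0.24, 0.12])

# Zero-order model: C/C0 = b - k*t, so a straight-line fit gives k and b.
res = linregress(t, c_over_c0)
k = -res.slope
b = res.intercept
r2 = res.rvalue ** 2

print(f"k = {k:.5f} min^-1, b = {b:.3f}, R^2 = {r2:.4f}")
print(f"degradation after 70 min ≈ {(1 - (b - k * 70)) * 100:.1f} %")
```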
Conclusion We utilized a simple exfoliation synthesis to obtain ultrathin TiS 2 nanodots, and the TiS 2 /2D-C 3 N 4 composite was synthesized by a mild solvothermal method. The TiS 2 nanodots were tightly anchored on the surface of the 2D-C 3 N 4 nanosheets, and the inherent metallic nature of 1T-TiS 2 improved the photocatalytic degradation activity due to the effective suppression of e-h recombination and the higher light absorption. Notably, the content of TiS 2 nanodots in the prepared composite played a role in the RhB photocatalytic degradation. The 5% TiS 2 /2D-C 3 N 4 exhibited the best photocatalytic performance, and the degradation rate towards RhB was ca. 95% in 70 min, an improvement of ca. 30% in comparison with pure 2D-C 3 N 4 nanosheets. Thus, the combination of ultrathin metallic TMDs with 2D graphitic C 3 N 4 yields a promising photocatalyst for practical application. Fig. 9 The degradation mechanism of the as-prepared TiS 2 /2D-C 3 N 4 composites. Fig. 7 (a) Cycling runs for the photodegradation of RhB in the presence of the 5% TiS 2 /2D-C 3 N 4 nanocomposite; XRD patterns (b) and FT-IR spectra (c) of the 5% TiS 2 /2D-C 3 N 4 nanocomposite before and after five photodegradation cycles.
5,374.6
2017-12-01T00:00:00.000
[ "Materials Science", "Chemistry", "Environmental Science" ]
Implementation and Analysis of Clustering Algorithms in Data Mining : Data mining plays a very important role in the information industry and in society due to the presence of huge amounts of data. Organizations around the world are already aware of data mining. Data mining is the process of using various kinds of data analysis tools to obtain patterns, and it is also referred to as knowledge discovery from data. Clustering is called an unsupervised learning technique because the groups are not predefined but are defined by the data. There are many research areas in data mining. This paper focuses on the performance and evaluation of three clustering algorithms: K-means, SOM and HAC. The evaluation of these three algorithms is based on a survey-based analysis. The algorithms are analyzed by applying them to a banking dataset, which is very high-dimensional data, and their performance is compared with each other. Our results indicate that the SOM technique is better than K-means and as good as or better than the hierarchical clustering technique. We have also written code in Orange Python that implements an enhanced algorithm based on a hybrid approach of SOM, K-means and HAC. Introduction: Any clustering technique [1] has the purpose of evolving a K*n partition matrix U(x) of a dataset. Clustering techniques broadly fall into two main classes, partitioning and hierarchical. In any clustering system two fundamental questions arise: 1) how many clusters are actually present in the data, and 2) how real or good is the clustering itself. Data mining algorithms [2] for processing large amounts of data must be scalable. Data mining algorithms used for processing data with changing patterns must be capable of updating and learning from the data. One of the traditional data mining techniques is clustering [3], an unsupervised learning paradigm in which clustering methods try to identify the inherent grouping of text documents, such that the resulting clusters exhibit high intra-cluster similarity and low inter-cluster similarity. The K-means clustering algorithm is a very simple iterative method used for partitioning a dataset into k clusters, where k is user specified. [4] This algorithm can easily adapt to a dynamic P2P network where existing nodes drop out and new nodes join during the execution of the algorithm, and where the data in the network changes. Objective: The objectives of our research are: 1. To evaluate the performance of clustering algorithms. 2. To analyze banking data by applying clustering algorithms to it. 3. To find the best possible solution for handling large amounts of data. Dataset: We analyze a banking dataset. This is a real dataset; we carried out a number of surveys to find an appropriate dataset for our requirements. Our research work focuses on the performance and evaluation of clustering algorithms. There are many clustering algorithms in data mining, but we focus mainly on K-means, SOM and HAC. The data contain 1001 entries. We have adopted a hybrid approach of K-means, HAC and SOM. Tool Used: We have used the open source tool named Tanagra. Tanagra is a very powerful tool that supports supervised learning as well as other paradigms such as clustering and factorial analysis.
In this project, we apply K-means, SOM and HAC using the Tanagra tool, measure the efficiency and performance of each algorithm, and determine which algorithm in Tanagra is able to handle a large amount of high-dimensional data. First, we applied the Kohonen SOM algorithm to the dataset, with the map configured to 2 rows and 3 columns. After applying SOM, we obtained an error ratio of 0.6382. We then used the data visualization option, dragging the scatter plot component onto the Kohonen-SOM component. This shows the various banking services as dots, squares and triangles in different colors, with one attribute (e.g., income) on the x axis and another (e.g., minimum deposit) on the y axis. The dots forming the clusters do not give a clear view because the data are high-dimensional. When K-means is dragged onto the Kohonen-SOM component, the R-square is 0.6600, which is considerably higher than for the Kohonen-SOM alone; the within sum of squares is 1700.2125 and the total sum of squares is 5000.000. The dendrogram produced by HAC is shown in the corresponding figure. Conclusion: From the above work, we can clearly see that K-means alone is not able to handle a large amount of data; its error ratio is also very large, as in this case we mainly deal with high-dimensional data. SOM is helpful for handling large data and is also used for pattern recognition, image processing, etc. Proposed work: We have proposed an enhanced algorithm that gives better results and clearer visualization of nodes, since the results in Tanagra were not clear at the end of the hybrid approach. We have written code in Orange Python based on the hybrid approach of SOM, HAC and K-means. After browsing to the script file and pressing the OK button, the code runs and shows the nodes with their different attribute values and services. Future work: We will design an optimized algorithm that overcomes the drawbacks of SOM, HAC and K-means, and implement a Python program that outputs the results in the form of clusters.
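The workflow above is tool-specific (Tanagra/Orange). A minimal, hedged sketch of the same kind of comparison on a generic tabular dataset, assuming scikit-learn is available, might look like the following; the synthetic data, feature count and cluster count (2 x 3, mirroring the SOM grid) are placeholders, and the SOM step itself would need a separate library such as MiniSom:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1001, 8))          # stand-in for the 1001-row banking table
X = StandardScaler().fit_transform(X)   # scale features before distance-based clustering

k = 6                                    # 2 x 3, mirroring the SOM grid used above
labels = {
    "k-means": KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X),
    "HAC (Ward)": AgglomerativeClustering(n_clusters=k, linkage="ward").fit_predict(X),
}
for name, lab in labels.items():
    # silhouette: higher is better; compare algorithms on the same scaled data
    print(f"{name}: silhouette = {silhouette_score(X, lab):.3f}")
```

On real banking data the silhouette (or the within/total sum of squares reported above) gives a comparable, tool-independent way to rank the algorithms.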
1,314.6
2013-05-30T00:00:00.000
[ "Computer Science" ]
Mitigation of Lost Circulation in Oil-Based Drilling Fluids Using Oil Absorbent Polymers In order to mitigate lost circulation of oil-based drilling fluids (OBDFs), an oil-absorbent polymer (OAP) composed of methyl methacrylate (MMA), butyl acrylate (BA), and hexadecyl methacrylate (HMA) was synthesized by suspension polymerization and characterized by Fourier transform infrared spectroscopy (FT-IR), thermogravimetric analysis (TGA) and scanning electron microscopy (SEM). The oil-absorptive capacity of OAP in different solvents was measured as a function of temperature and time. The effect of the OAP on the rheological and filtration properties of OBDFs was first evaluated, and then the sealing property of OAP particles as lost circulation materials (LCMs) was examined by a high-temperature and high-pressure (HTHP) filtration test, a sand bed filtration test, a permeability plugging test, and a fracture sealing test. The test results indicated that the addition of OAP had relatively little influence on the rheological properties of OBDF at contents lower than 1.5 w/v % but increased the fluid viscosity remarkably at contents higher than 3 w/v %. It could significantly reduce the HTHP filtration and improve the sealing capacity of OBDF. In the sealing treatment, after addition into the OBDF, the OAP particles absorbed oil and enlarged in volume, which increased the fluid viscosity and slowed down the fluid loss. The swollen and deformable OAP particles could be squeezed into the micro-fractures with self-adaption and seal the loss channel. More importantly, fluid loss was dramatically reduced when OAP particles were combined with other conventional LCMs through a synergistic effect. Introduction In oil and gas drilling engineering, one of the frequently encountered problems is lost circulation, which is defined as the undesirable partial or complete loss of drilling fluid into formation voids during drilling, circulation, running casing, or cementing operations [1,2]. Once the total pressure exerted against the formation exceeds the formation breakdown pressure, lost circulation may be encountered at any depth. According to the statistics, lost circulation occurs in approximately 20 to 25% of wells drilled around the world [3], and results in several troublesome problems such as excessive mud losses, non-productive time, stuck pipe, well kick, well blow-out and even abandonment of the well [4][5][6][7]. Moreover, it has also been blamed for reduced production because the loss of fluid into a formation plugs the production zones. By deforming and being squeezed into the loss fractures, water-absorbent polymers can absorb water and fill in the fractures or pores. Due to their adsorption on the rock surface, it is easy for the polymers to stay in the loss channels and form a strong and pliable plug [35]. Because of the above advantages, water-absorbent resin has been widely used in water-based drilling fluids as an effective LCM [36,37]. Behaving like water-absorbent resins, oil-absorbent resins have cross-linked, three-dimensional, hydrophobic networks that do not dissolve in oil and are mainly used for absorbing oil in environmental pollution treatment [38]. Therefore, considering the similar properties of oil-absorbent and water-absorbent resins, the objective of the current study is to probe the feasibility of oil-absorbent resin for mitigating the losses of OBDFs.
The low-toxicity mineral oil of No. 3 white oil used as the base oil was purchased from Shandong Taichang Petrochemical Technology Co., Ltd. (Qingdao, China). The organic clay with the commercial name VG-Plus was provided by the M-I Swaco Company of Schlumberger. Primary emulsifier BZ-OPE (amidoamine type) and secondary emulsifier BZ-OSE (fatty acid type) were provided by China National Petroleum Corporation (Tianjin, China). Rheological modifier SD-RM, prepared from a reaction between a polyacid and polyethylene polyamine, was provided by Shandong Shida Chuangxin Technology Co., Ltd. (Dongying, China). Barite used as weighting material was purchased from An County Huaxi mineral powder Co., Ltd. (Mianyang, China). Two types of conventional fluid loss additives used in OBDFs, an asphaltic additive and a modified lignite, were provided by China National Petroleum Corporation and Shandong Shida Chuangxin Technology Co., Ltd. (Dongying, China), respectively. Lime (CaO, pH enhancer) and calcium chloride (CaCl2, analytical purity) were purchased from Sinopharm Chemical Reagent Co., Ltd. (Beijing, China). The sized calcium carbonate (SCC) particles were provided by Jingmen Shun Zhan calcium Industry Co., Ltd. (Jingmen, China). The rubber (RUB) particles were obtained from Dujiangyan Huayi Rubber Co., Ltd. (Dujiangyan, China). The fibers (FIB) used as LCM were bought from Changzhou Tianyi engineering fiber Co., Ltd. (Changzhou, China). All the reagents were used as received without further purification. Synthesis and Characterization of Oil-Absorbent Polymer (OAP) The reaction was performed in a 500 mL four-neck round-bottom flask equipped with a mechanical stirrer and heating device. Initially, 240 mL of PVA solution (2 g PVA) was added into the flask and stirred for 30 min in a water bath heated to 60 °C to facilitate dissolution. The system was charged with nitrogen gas and then sealed under nitrogen. Then, a mixture containing the monomers MMA (6 g), BA (16 g) and HMA (18 g), cross-linker MBA (0.3 g), initiator BPO (0.4 g) and porogen EAC (5 g) was added into the reactor within 10 min. The polymerization was performed at 80 °C for 6 h at a stirring rate of 600 rpm. After reaction termination, the products were washed with absolute ethanol several times and then with hot deionized water (60-70 °C) several times. After washing, the sample was dried in a vacuum drying oven (Qingdao Haitongda Special Instrument Co., Ltd., Qingdao, China) at 55 °C for 24 h, and the product, the oil-absorbent polymer (OAP), was finally obtained as small beads. The particle size can be controlled by adjusting the stirring rate and the ratio of reaction monomers. The Fourier transform infrared (FT-IR) spectra were recorded by a Nicolet 6700 FT-IR spectrometer (Thermo Fisher Nicolet Corporation, Waltham, MA, USA), scanning from 4000 to 400 cm−1 with 4 cm−1 resolution in transmission. A TGA/DSC 1/1600 HT thermal analyzer from Mettler Toledo (Zurich, Switzerland) was used for thermogravimetric analysis (TGA) with a heating program from room temperature to 1000 °C at a heating rate of 10 K min−1 under a nitrogen flow of 50 mL min−1. The morphological features of the OAP were inspected with an FEI Quanta FEG 250 field-emission scanning electron microscope (SEM, Hillsboro, OR, USA). The oil-adsorption capacity was measured with the weighing method [39].
A quantity of 1 g of dried OAP sample was put into a filter bag and immersed in oil at a certain temperature. After a period of oil absorption, the filter bag with the sample was lifted from the oil and drained for 1 min. The sample was then immediately taken out, weighed and recorded. The oil absorbency was calculated as follows: R = (W − W0)/W0 (1), where R is the oil absorbency at a given testing time, g/g; W is the weight of OAP after oil adsorption for that testing time, g; and W0 is the initial dry weight of the sample, g. Preparation of Oil-Based Drilling Fluids (OBDFs) The mineral oil-based drilling fluids were prepared according to the experimental methods recommended in API RP 13B-2 [40]. The drilling fluid formula with an oil-to-water ratio (OWR) of 90:10 is listed in Table 1. When the OWR of the fluid was changed, the concentrations of primary and secondary emulsifier were adjusted correspondingly to ensure emulsion stability. The fluids were hot rolled in a rolling oven (Qingdao Haitongda Special Instrument Co., Ltd., Qingdao, China) at the desired temperature for 16 h. After this dynamic aging, the fluids were cooled to room temperature and agitated for 10 min at 10,000 rpm before analysis. Rheological Properties and Electrical Stability Measurement The rheological properties of the fluids were measured at 50 °C according to the standard American Petroleum Institute Recommended Practice (API RP) 13B-2. The rheological parameters, including apparent viscosity (AV), plastic viscosity (PV), yield point (YP), and gel strength of the OBDFs, were measured using a model ZNN-D6 six-speed rotating viscometer (Qingdao Haitongda Special Instrument Co., Ltd., Qingdao, China). The AV, PV and YP were calculated from the 300 and 600 rpm readings by the following equations: Apparent viscosity (AV) = Φ600/2 (mPa·s) (2); Plastic viscosity (PV) = Φ600 − Φ300 (mPa·s) (3); Yield point (YP) = Φ300 − PV (4). The initial gel (Gel_10s) and Gel_10min were recorded as the maximum dial reading at a fixed rate of 3 r/min after the fluid was left undisturbed for 10 s and 10 min, respectively [41]. The electrical stability (ES) of the OBDFs was measured using an electrical stability tester (Qingdao Shande Petroleum Apparatus Co., Ltd., Qingdao, China). Filtration Properties Measurement Different filter presses were used to determine the filtration properties of the drilling fluids. The API filtrate volume of the OBDFs before and after hot rolling was tested by a ZNZ-D3-type medium-pressure filtration apparatus (Qingdao Haitongda Special Instrument Co., Ltd., Qingdao, China). The filtrate volume was collected through filter paper as the filtration medium under a fixed pressure of 0.7 MPa for 30 min, as recommended by the API standard. In most situations, filtration of drilling fluid into the formation during drilling is a dynamic process; therefore, the high-temperature and high-pressure (HTHP) dynamic fluid loss was measured with an HTHP dynamic filter press (Figure 1) (Qingdao Haitongda Special Instruments Co., Ltd.) at a stirring speed of 100 rpm. The tests were run for 30 min at a differential pressure of 3.5 MPa and 150 °C, with filter paper as the filter medium.
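A small sketch collecting the simple formulas defined in this section (the absorbency relation as reconstructed above and the viscometer relations (2)-(4)); the 1 g dry weight and dial readings below are illustrative placeholders, and the YP relation is the standard API form assumed for Equation (4):

```python
def oil_absorbency(w_swollen: float, w_dry: float = 1.0) -> float:
    """R = (W - W0) / W0, grams of absorbed oil per gram of dry OAP."""
    return (w_swollen - w_dry) / w_dry

def rheology_from_dial_readings(phi600: float, phi300: float):
    """AV, PV (mPa.s) and YP from six-speed viscometer readings, per Equations (2)-(4)."""
    av = phi600 / 2.0
    pv = phi600 - phi300
    yp = phi300 - pv          # standard API relation assumed for Equation (4)
    return av, pv, yp

print(oil_absorbency(9.3))                  # e.g. 9.3 g swollen sample -> R = 8.3 g/g
print(rheology_from_dial_readings(90, 55))  # illustrative dial readings -> (45.0, 35, 20)
```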
When the circulation of drilling fluid stops because of accidents or downhole operations, static filtration occurs. To simulate the static filtration behavior of the drilling fluid, the HTHP static fluid loss was measured with an HTHP filter apparatus (GGS71-B, Qingdao Haitongda Special Instruments Co., Ltd.) at a given temperature and a 3.5 MPa pressure difference for 30 min. The volume of filtrate and the thickness of the filter cake were recorded. Properties of Sealing To evaluate the sealing performance of OAP, four distinct tests were carried out: an HTHP filtration test (both static and dynamic) using API filter paper as the filtration medium to simulate a permeable formation [42]; a permeability plugging test using a ceramic disk as the filtration medium to determine the ability of particles in the drilling fluid to bridge pores [43]; a sand bed filtration test using a sand bed as the filtration medium to simulate an unconsolidated formation; and a fracture sealing test using wedged and slotted stainless steel as the filtration medium to simulate a fractured formation [44,45]. The procedures of the sand bed filtration test and the permeability plugging test follow Zhong et al. [46]. The sand particles used for the sand bed filtration test had particle sizes ranging from 380 µm to 830 µm, and the ceramic disk used for the permeability plugging test was calibrated to 10 D.
In order to evaluate the fracture plugging capacity of LCMs, a fracture sealing test apparatus (Figure 2, Instrument Factory of Petroleum University, Dongying, China) with tapered and slotted stainless steel discs (Figure 3) that simulate natural/induced fractures was adopted. First, the tapered slots were placed before the output valve. Then, fluids containing LCMs were forced to flow through the discs at a constant stirring rate of 100 rpm while the pressure was increased gradually. The applied pressure was initially set to 1.0 MPa and kept stable for 10 min. If the LCMs effectively sealed the fracture and the pressure declined by less than 5%, the pressure was increased further and the volume of fluid loss was recorded. Once continuous leakage of fluid occurred, the LCMs were considered to have reached their maximum pressure-bearing capacity and the test was stopped. Treatments with lower fluid loss values correspond to more effective mitigation of losses. However, because of the limitation that the maximum applied pressure of the equipment is 8.0 MPa, the maximum pressure at which the formed seal breaks and fluid loss resumes was not measured.
Fourier Transform Infrared (FT-IR) Spectra A possible chemical reaction mechanism for OAP is depicted in Figure 4. The FT-IR spectrum of the copolymer prepared by suspension polymerization is shown in Figure 5. The characteristic absorption bands at 2920 and 2850 cm−1 were assigned to the asymmetric and symmetric stretching vibrations of C-H, respectively. The band at 1730 cm−1 was attributed to the C=O stretching vibration. The bands at 1470 and 1380 cm−1 corresponded to the C-H asymmetric and symmetric bending vibrations. The bands at 1240 and 1160 cm−1 were the C-O-C asymmetric and symmetric stretching vibrations, respectively. The band at 1030 cm−1 was assigned to the C-N stretching vibration. The bands at 723 and 640 cm−1 were the out-of-plane bending vibrations of N-H (amide V) and C=O (amide VI), respectively. The disappearance of the C=C bands usually located at 1620-1680 cm−1 indicated the thorough reaction of the monomers. Thermogravimetric Analysis (TGA) Thermal stability is of vital importance for OAP because it has to withstand downhole high-temperature conditions. Figure 6 presents the weight percent of the OAP sample as a function of temperature together with the first derivative thermogravimetric curve.
It could be seen that below 100 °C little weight loss was observed. The copolymer began to degrade rapidly when the temperature reached 300 °C. The sample weight loss was about 50.12% at 386 °C, and the maximum pyrolysis rate occurred at about 392 °C. When the temperature reached 436 °C, the sample was completely pyrolyzed. The results indicated that OAP had good thermal stability and could be used in high-temperature environments. Scanning Electron Microscopy (SEM) In order to observe the microstructure of OAP, SEM was used to inspect the cross-section and surface morphologies of the OAP samples, as depicted in Figure 7. From Figure 7a, aggregated micrometer-sized spheres were clearly observed. It is worth mentioning that many small pores were randomly distributed in the samples. These pores provide favorable space in the polymer network. The three-dimensional network structure and micro-pores of proper size and quantity were beneficial for oil molecules to enter the internal space; however, it is not easy for the oil molecules to exude from the three-dimensional crosslinked resin, which behaves like a sponge [47]. Oil-Adsorptive Capacity The evolution of the oil adsorptive rate, calculated with Equation (1), with time is depicted in Figure 8. As can be seen from Figure 8, for both No.
3 white oil and diesel oil, a quick adsorption rate was observed during the initial testing interval of 20 h, followed by a slower increase in absorbency, until saturated adsorption was finally reached. After testing for 72 h, the white oil absorbency was 8.3, 9.9 and 11.4 g/g at room temperature, 90 °C and 120 °C, respectively, and the diesel oil absorbency was 6.7, 7.6 and 9.0 g/g accordingly. Based on field experience, a saturation adsorption time of a water-absorbent resin longer than 5 h is sufficient for the resin to be injected downhole and fulfil its role [48]; therefore, OAP would not reach adsorption saturation before reaching the desired thief zone, which is advantageous for downhole application. OAP particles would still absorb oil when entering the loss fractures and prevent fluid loss. Meanwhile, the oil-adsorptive rate increased with increasing testing temperature, implying that OAP would adsorb a larger amount of oil in the downhole environment. Moreover, the saturation adsorption of white oil was higher than that of diesel oil, demonstrating that OAP may be more effective in white oil-based drilling fluid. Optical microscope photographs of OAP particles before and after white oil adsorption are displayed in Figure 9. An explicit distinction was observed between the original OAP and the swollen OAP. As shown in Figure 9a, before oil adsorption, a large number of spherical particles with some cavities were packed together and formed an irregular surface, whereas, after reaching saturated adsorption, the small particles expanded significantly and filled the gap space between adjacent particles, forming a smooth surface.
Influence of OAP on the Properties of OBDFs The effect of OAP particles (particle size of 80-100 mesh) on the properties of OBDFs was investigated first. OAP at various contents was added into the OBDFs, and the fluids were hot-rolled at 120 °C for 16 h. The properties of the fluids before and after hot rolling are given in Table 2. Before hot rolling, the addition of OAP had little influence on the rheological properties of the OBDFs: AV (calculated with Equation (2)), PV (calculated with Equation (3)), YP (calculated with Equation (4)) and gel strength changed only slightly with increasing OAP content. After hot rolling, however, the rheological parameters including AV, PV, YP and gel strength all increased gradually while the OAP content was lower than 1.5 w/v %, and increased significantly at a content of 3 w/v %. In terms of filtration control, as shown in Table 2, the API fluid loss decreased gradually with increasing OAP content both before and after hot rolling. Also, as depicted in Figures 10 and 11, for the HTHP static filtration test conducted at 120 °C and 3.5 MPa after hot rolling, both the HTHP static fluid loss volume and the filter cake thickness decreased with increasing OAP content, by 74% and 24%, respectively, when 3 w/v % OAP was added, indicating that OAP could effectively reduce the filtration loss and improve the filter cake quality. With regard to the electrical stability (ES) of the drilling fluid, one of the vital properties of an oil-based drilling fluid, this parameter is the breakdown voltage of the current flowing in the fluid and represents the emulsion stability of the fluid. The emulsion-breaking voltage generally decreased with increasing OAP content.
The impact of OAP particles on the properties of OBDF can be explained by the fact that, when the OAP particles were first added into the fluid, they began to adsorb oil, but the adsorbed amount was relatively low. After hot rolling at high temperature for a certain time, a large amount of oil was adsorbed by the OAP, resulting in a decreased amount of free oil in the system, while the swollen OAP particles increased the internal friction of the fluid. The decreased content of free base oil and the increased friction in the fluid contributed to the viscosity buildup, the lower fluid loss and the reduced emulsion stability. Furthermore, the expanded OAP particles participated in forming the filter cake, which was also favorable for reducing filtration. High-Temperature and High-Pressure Filtration Test Two commercially available fluid loss additives, an asphaltic additive and a modified lignite, were used as references to compare with OAP. The fluid loss additives at 1 w/v % content were added into the base OBDF formula and hot rolled at 120 °C for 16 h, and then the HTHP filtration tests were performed. As shown in Figure 12, for the HTHP static filtration test, the fluid loss volume was 32.4, 13.2, 20.4 and 23.6 mL for the base fluid and the fluids containing OAP, modified lignite and asphaltic additive, respectively, showing a capacity for decreasing HTHP static filtration in the order OAP > modified lignite > asphaltic additive. For the HTHP dynamic filtration test, the fluid loss volume increased to 115.2, 52.0, 97.6 and 62.4 mL for the four fluids, showing an ability to decrease HTHP dynamic filtration in the order OAP > asphaltic additive > modified lignite. The significant increase of fluid loss compared with static filtration lies in the fact that the filter cake is more difficult to form under a dynamic shear condition [49]. Overall, the three candidate additives all exhibited effective performance in HTHP filtration control.
The cumulative fluid loss volume in both the HTHP static filtration test and the HTHP dynamic filtration test was analyzed as a function of the square root of time with different mathematical models. For HTHP static filtration, a typical model recommended by the API standard was used [50]: V = Vsp + m·t^(1/2) (5), where Vsp and m are the intercept and slope of the line, representing the spurt loss and the filtration rate, respectively. For the HTHP dynamic filtration test, another model, proposed by Roodhart [51], describes filtration in a dynamic test as three separate contributions, (1) spurt loss, (2) filter-cake deposition (square-root-of-time dependence), and (3) limitation of cake buildup by erosion (linear time dependence): V = Vsp + m·t^(1/2) + B·t (6), where Vsp indicates the spurt loss and the constants m and B represent the static and dynamic filtration components. According to these models (Equations (5) and (6)), the HTHP filtration test data were fitted, with the results given in Table 3. The two models fitted well, as the R² values all approached 1. With regard to the HTHP static filtration test, the addition of fluid loss reducers increased the spurt loss but lowered the filtration rates, and the fluid containing OAP gave the lowest filtration rate. For the HTHP dynamic filtration test, all the filtration components, including spurt loss, static filtrate rate and dynamic filtration rate, decreased markedly after addition of the fluid loss reducers. The difference in filtrate rate between the static and dynamic components was not clear-cut, whereas it was obvious that OAP reduced HTHP fluid loss more effectively than modified lignite and the asphaltic additive under both static and dynamic conditions.
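A hedged sketch of how the two filtration models, Equations (5) and (6) as reconstructed above, can be fitted to cumulative fluid-loss data by ordinary least squares; the time and volume arrays are placeholders, not the measured data from Table 3:

```python
import numpy as np

t = np.array([1, 2, 5, 10, 15, 20, 25, 30], dtype=float)    # elapsed time, min (placeholder)
v = np.array([4.0, 5.2, 7.1, 9.4, 11.0, 12.4, 13.6, 14.7])  # cumulative loss, mL (placeholder)

# Static model (Eq. 5):  V = Vsp + m * sqrt(t)
A_static = np.column_stack([np.ones_like(t), np.sqrt(t)])
(vsp_s, m_s), *_ = np.linalg.lstsq(A_static, v, rcond=None)

# Dynamic model, Roodhart (Eq. 6):  V = Vsp + m * sqrt(t) + B * t
A_dyn = np.column_stack([np.ones_like(t), np.sqrt(t), t])
(vsp_d, m_d, b_d), *_ = np.linalg.lstsq(A_dyn, v, rcond=None)

print(f"static : Vsp = {vsp_s:.2f} mL, m = {m_s:.2f} mL/min^0.5")
print(f"dynamic: Vsp = {vsp_d:.2f} mL, m = {m_d:.2f} mL/min^0.5, B = {b_d:.3f} mL/min")
```

Comparing the fitted m (and B) across the base fluid and the three additives reproduces the kind of ranking summarized in Table 3.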
Permeability Plugging Test The results of the permeability plugging tests of OBDFs with and without fluid loss reducers are shown in Figure 13 and Table 4. The total loss of the control sample was as high as 47.6 mL, indicating that the base fluid was not able to plug the micro-pores of the disks. However, an obvious decrease of total loss was observed after the addition of 1 w/v % fluid loss reducers. Furthermore, the static filtration rate decreased from 6.57 mL/min^1/2 to 0.95, 2.26 and 3.43 mL/min^1/2 after incorporation of OAP, modified lignite and the asphaltic additive, respectively. Because of their elastic and deformable properties, OAP particles could easily be squeezed into the micro-pores. After entering the micro-pores, the particles continued to absorb oil and swell toward the saturated state; the enlarged volume could effectively fill the micro-pores and lead to the decrease of fluid loss. Sand Bed Filtration Test The sand bed filtration test was designed to more realistically evaluate the tendency of a fluid to invade a permeable and unconsolidated formation [42]. As shown in Figure 14 and Table 5, when there was no sealing agent (control sample), the base fluid invaded the sand bed readily and rapidly, and the total invasion depth reached about 7.2 cm after the test. After addition of the sealing agents, the invasion depth decreased to different extents. Compared with modified lignite and the asphaltic additive, OAP exhibited a better sealing performance. In the testing process, there was a small amount of initial invasion, followed by the rapid formation of a dense and extremely low-permeability seal within just a few seconds. The fluid invasion then ceased at an invasion depth of only 1.5 cm, the shortest invasion depth among the three products. This rapid shutoff of invasion is crucial for achieving good results in the field. (1) The sealing capacity of OAP in OBDF with a 0.2 mm width slot. First, the fracture sealing ability of OAP was evaluated with a 0.2 mm width slot to simulate light losses. OBDF with an OWR of 80:20 containing 1 w/v % OAP with a size of 80-100 mesh was poured into the cell. When the pressure increased to 1.0 MPa, almost all of the fluid was lost, indicating sealing failure. After the experiment, the fracture was taken out.
As shown in Figure 15, the swollen OAP particles had been forced into the fracture and were distributed along the fracture plane. Unlike rigid materials with high strength, OAP particles exhibited low compressive strength after oil adsorption and could not form an effective seal alone. However, in combination with other materials such as SCC particles and sized RUB particles, this relatively narrow fracture was easy to seal. (2) The sealing capacity of LCMs in a 1 × 0.5 mm width slot. This fracture was selected to model light to medium losses. Because OAP individually could not effectively seal such fractures, and because it is widely accepted that combinations of LCMs are much more effective than a single type of LCM for curing lost circulation, blended LCMs were used to deal with this kind of loss. The LCM type, concentration, particle size distribution and fracture width are the vital factors for effective sealing [52]. Based on general experience, rigid particles such as calcium carbonate, deformable particles such as rubber, and fibers were used in combination to improve the sealing capacity in this study. The particle size classification of the LCMs is presented in Table 6. According to effective bridging theory [53] and numerous laboratory experiments, an acceptable LCM formula, No. 1, aimed at the 1 × 0.5 mm tapered slot was established as the control sample and is shown in Table 7. Table 7. LCM formula for lost circulation control of oil-based drilling fluids in a 1 × 0.5 mm slot. The testing results shown in Figure 16 indicate that, for the control sample of formula No. 1, the fluid loss volume remained stable at pressures below 4.0 MPa, indicating that the blended materials formed an effective seal in the fracture; above that pressure, the fluid loss volume increased to 17 mL regardless of the further increase of pressure. While for formula No.
2, the fluid loss volume was 5 mL at a pressure of 1.0 MPa, then increased to 7 mL and remained constant as the pressure changed, demonstrating that the addition of OAP promoted the formation of a tighter seal in the fracture. As shown in Figure 17a, SCC particles and rubber particles were trapped by the fibers to form a sealing integrity in the fracture. After OAP incorporation, as depicted in Figure 17b, SCC particles, rubber particles and OAP particles lodged against each other with the help of the binding fibers. After the rigid particles had bridged, the resilient and deformable particles filled the remaining voids, which was favorable for forming a tighter seal and lower fluid loss. (3) The sealing capacity of LCMs in a 2 × 1 mm width slot. (Table column headings: Testing Sample, SCC-II, RUB-II, SCC-III, RUB-III, SCC-IV, FIB, OAP.) This type of slot was used to simulate moderate to severe losses. To form an effective seal, SCC particles, rubber particles and fibers with appropriate size distributions and concentrations were optimized through extensive experiments, and the optimized formula No. 3 was established as the control sample, as shown in Table 8. The fracture sealing test results for OBDFs containing formulas No. 3 and No. 4 are given in Figure 18. As seen in Figure 19a, the blended LCMs without OAP were mainly distributed at the tail of the fracture, where they were relatively easy to extrude out of the fracture, leading to sealing failure. In the presence of OAP, as observed in Figure 19b, the blended LCMs filled the entire length of the fracture, which is beneficial for forming a stable seal. The results also verified that the addition of OAP improved the retention of LCMs in the fractures, which in turn lowered the fluid loss. Probable Mechanism of Mitigation of Lost Circulation OAP particles were synthesized from MMA, BA and HMA with a typical crosslinked structure.
The crosslinking chemicals tied the chains together to form a three-dimensional network, which enabled the polymers to absorb oil into the spaces in the molecular network and thus formed a gel that locks up the liquid.
The oil-absorption process was an expansion process of this three-dimensional network, which had a clear transition from the high initial rate to the slow rate and toward the end of swelling equilibrium by the van der Waals attractive forces between molecules [54]. Because the oil absorption depended mainly on van der Waals attractive force, the oil absorption speed was relatively slow compared to that of water absorption resins; when pumped to downhole before absorption saturation, it would have a lower viscosity and small volume. Upon entering the lost-circulation zone, the absorption of oil continued, forming a viscous plug, and created a barrier to the subsequent flow of drilling fluid into the fracture channels. Meanwhile, after thorough adsorption under downhole conditions, the OAP particles swelled appreciably in volume and became resilient, which enabled them to be more effective in packing fractures even with smaller widths. However, due to their low compressive strength, the OAP particles were not efficient in closing off the lost circulation zone alone. When taking the rigid particles, elastic particles, fibers and OAP particles in combination by considering the particle shape, surface texture, concentration, particle size distribution and fracture width, a synergistic effect was obtained. The rigid SCC particles firstly formed bridging in the fracture, which was the framework of the sealing zone. The fibers that wrapped the mixture of other particles together improved the compactness of the sealing zone. After bridging in the fractures by rigid particles, deformable particles like OAP particles were able to occupy the voids among the previously bridged particles and reduce the permeability of the formed seal. Finally, a tight sealing zone was formed and resulted in a significant drop of fluid loss. Conclusions In this study, the oil-absorbent polymer (OAP) including MMA, SA, and BA was prepared by suspension polymerization. The OAPs had spherical and porous structure, and were capable of absorbing white oil and diesel oil several times of its own weight at temperatures ranging from room temperature to 120 • C. The addition of OAP had relatively little influence on the rheological properties of OBDF at content lower than 1.5 w/v % but increased the fluid viscosity remarkably at content higher than 3 w/v %. OAP particles could reduce the HTHP filtration and improve the sealing capacity of OBDFs effectively under downhole conditions. OAP particles used individually showed poor fracture-sealing capacity, but could effectively decrease fluid loss when treated conjunct with calcium carbonate particles, rubbers and fibers by a synergistic effect. After addition of OAP into the OBDF, the volume of OAP would increase with time, which resulted in the increase of viscosity of the fluid and slowed down the fluid loss speed. Meanwhile, the swelled and deformable OAP could be compressed to enter an opening that is substantially smaller and different in shape. OAP particles would conform to the openings with various shapes and sizes. More importantly, OAP particles could occupy the voids of the bridging layer, and resulted in a tight sealing zone when used in combination with other conventional LCMs.
12,355
2018-10-01T00:00:00.000
[ "Engineering", "Materials Science", "Environmental Science" ]
Histone modification pattern evolution after yeast gene duplication Background Gene duplication and subsequent functional divergence especially expression divergence have been widely considered as main sources for evolutionary innovations. Many studies evidenced that genetic regulatory network evolved rapidly shortly after gene duplication, thus leading to accelerated expression divergence and diversification. However, little is known whether epigenetic factors have mediated the evolution of expression regulation since gene duplication. In this study, we conducted detailed analyses on yeast histone modification (HM), the major epigenetics type in this organism, as well as other available functional genomics data to address this issue. Results Duplicate genes, on average, share more common HM-code patterns than random singleton pairs in their promoters and open reading frames (ORF). Though HM-code divergence between duplicates in both promoter and ORF regions increase with their sequence divergence, the HM-code in ORF region evolves slower than that in promoter region, probably owing to the functional constraints imposed on protein sequences. After excluding the confounding effect of sequence divergence (or evolutionary time), we found the evidence supporting the notion that in yeast, the HM-code may co-evolve with cis- and trans-regulatory factors. Moreover, we observed that deletion of some yeast HM-related enzymes increases the expression divergence between duplicate genes, yet the effect is lower than the case of transcription factor (TF) deletion or environmental stresses. Conclusions Our analyses demonstrate that after gene duplication, yeast histone modification profile between duplicates diverged with evolutionary time, similar to genetic regulatory elements. Moreover, we found the evidence of the co-evolution between genetic and epigenetic elements since gene duplication, together contributing to the expression divergence between duplicate genes. Background Although gene duplication has been widely considered as the main source of evolutionary novelties [1][2][3][4], the issue of duplicate gene preservation remains a subject of hot debate, i.e., how duplicate copies can escape from being pseudogenized and then evolve from an initial state of complete redundancy to a steady-stable state that both functionally divergent copies are maintained by purifying selection. As one of plausible hypotheses, rapid expression divergence between duplicate genes may be the first step that is fundamental for the preservation of redundant duplicates [3]. There are three recognized types involved in expression regulatory mechanism: (i) cis-regulation of transcription mediated by promoters, enhancers, silencers, etc.; (ii) trans-regulation mediated by regulatory proteins binding to cis elements, such as transcription factors (TF); and (iii) epigenetic regulation, such as DNA methylation, specific histone modification pattern of genes (histone code hypothesis). Many studies have addressed the effect of cis-and trans-regulatory mechanisms on the expression divergence after gene duplication, e.g., cis-regulatory motif, TF-binding interaction, transacting expression quantitative trait loci (eQTLs) [5][6][7][8][9][10]. Though there is increasing evidence that epigenetic changes may play important roles in the initial expression divergence between duplicate genes [11][12][13][14][15], little study has been done about how regulatory network between duplicate genes evolve at epigenetic level. 
Epigenetic regulation on gene expression is a highly complex process. In a broad sense, it includes DNA methylation, histone modification, nucleosome occupancy, as well as microRNA [11]. Moreover, these epigenetic elements can interact with each other, for instance, the reciprocity between DNA methylation and histone modification [16,17]. The complexity of epigenetic regulation has made it difficult to explore its role in regulatory divergence between duplicates. Nevertheless, we have recognized budding yeast (Saccharomyces cerevisiae) as an ideal organism for our purpose, because its epigenetic regulation is relatively simple: DNA cytosine methylation and microRNA were not detected [18,19]. In other words, histone modification is the main representation for epigenetic modification in the budding yeast, less affected by other epigenetic modification types. Therefore, in the present study, we focus on the evolution of histone modification (HM) between yeast duplicate genes. Eukaryotic DNA with a unit of 146 bp wound around a histone octamer (two copies of each core histone H2A, H2B, H3, H4) is assembled into chromatin. Histone Nterminal tails are subject to multiple covalent posttranslational modifications, including lysine (K) acetylation, lysine or arginine (R) methylation, serine (S) phosphorylation, and so on [20,21]. Enormous possible combinations and interactions of histone modification types constitute histone code. The 'histone code' hypothesis claims that a specific pattern of hisone modification code can produce a specific effect on local chromatin structure, modulating DNA accessibility, and consequently regulating transcription and other DNA-based biological processes [20][21][22][23][24]. Our goal is to investigate the pattern of yeast histone modification (HM) code divergence between duplicates. Our hypothesis claims (i) that, when a gene is duplicated, the gene-specific histone modification profile is also duplicated, so on average duplicate pairs tend to have a higher degree of HM code similarity than randomly selected singleton gene pairs; and (ii) that since duplication, HM-code profile between duplicate copies become divergent with evolutionary time. We test these two predictions by conducting genome-wide analyses, as well as genes involved in different biological functions. Moreover, we are particularly interested in whether genetic regulatory elements, including cis-motifs (such as TATA box) and transcription factors, and epigenetic HM-code profile co-diverge during the evolution since gene duplication. To this end, time-dependent confounding factors in both genetic and epigenetic factors need to be ruled out. Finally, the significance of our study for having a better understanding of regulatory evolution after gene duplication is discussed. Results Combinational interactions of histone-modifying enzymes (HATs, HDACs, HMTs, HDMs, etc.) to histone N-tail produce numerous types of post-translational modification of histones (H2A, H2B, H3, H4), such as methylation, acetylation [25]. Moreover, the same modification site can be affected by different modifying enzymes, and vice versa, generating different histone modification combinations, like H3K4me2 and H3K4ac (dimethylation and acetylation in Lys4 of histone H3, respectively), or H4K8ac (acetylation in Lys8 of histone H4). 
In this study, histone modification (HM) code of a gene represents the combined profile of different HM sites, HM types and HM states in gene promoter and open reading frame (ORF) regions, respectively (see Methods). We believe that the HM code of a gene reveals the pattern of HM mediated regulatory network of that gene. Functional redundancy in histone modification (HM) between yeast duplicate genes Because of evolutionary relatedness, duplicate pairs may have a higher similarity of histone modification (HM) code than two randomly selected single-copy genes (singletons). To test this hypothesis, we compared the distance of HM code between duplicate pair and randomized singleton pair. To be simple we choose one minus Pearson's product-moment correlation coefficient, i.e., D HM = 1-r, to define the distance of HM code. The larger the value of D HM , the higher divergence of histone modification code between duplicate genes. Specifically, denote the distance of HM code associated in promoter and ORF regions by D HM-P and D HM-O , respectively. Randomized pairs were selected from single copy genes with 10000 repeats. As expected, we observed that both D HM-P and D HM-O measures show a lower divergence degree of histone modification code between duplicate genes than that of randomized singleton pairs (Wilcoxon rank sum test: P < 10 -15 for both cases; Figure 1A). Generally speaking, local chromatin environment around genes is one of important components leading to different histone modification code of each gene. While chromatin environment differs in different chromosomes, duplicate genes locating in different chromosomes may be under different chromatin environment, possessing the chromosomespecific HM profile. Following this argument, we classified all yeast gene pairs (duplicate and randomized singleton pairs) under study into two groups: they are located in the same chromosome, or different chromosomes, and tested the relationship between histone modification pattern and chromosome condition. We observed that though the HM profile divergence is not significantly correlated with location distance of gene pairs in the same chromosome (Pearson's product-moment correlation: r = 0.06, P = 0.09 for promoter region and r = 0.02, P = 0.45 for ORF region), gene pairs locating in the same chromosome share more common HM code than that in different chromosomes (Wilcoxon rank sum test: P = 0.07 and P = 0.01 for promoter and ORF regions, respectively; Figure 1B). The chromosome effect on the HM-code divergence between duplicate genes may suggest an alternative interpretation about a higher similarity of HM distance between duplicate genes than random pairs. That is, duplicate pairs tend to be located in the same chromosome because of tandem gene duplications, compared to randomized singleton pairs (Chi-squared test: χ 2 =17.2, d.f. = 1, P < 10 -4 ). To rule out this possibility, we chose duplicate pairs and randomized singleton pairs where both copies are belonging to different chromosomes, and observed the similar result to Figure 1A (see Additional file 1: Figure S1). Hence, we conclude that duplicate genes represent their functional redundancy at the level of histone modification mediated regulatory network. The evolution of HM is coupled with coding sequence divergence after gene duplication We further expect that functional redundancy in histone modification code of duplicate genes as shown in Figure 1A would maintain a high degree in recently duplicated genes, and low in ancient duplicates. 
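As a rough illustration of how the D HM comparison described above could be carried out, the following Python sketch computes D HM = 1 - r for a set of duplicate pairs and for 10,000 randomized singleton pairs, then compares the two distributions with a Wilcoxon rank-sum test. This is not the authors' code: the 22-mark profiles below are synthetic placeholders standing in for the ChromatinDB enrichment values, and the pair counts are arbitrary.

```python
# Minimal sketch (not the authors' code) of the HM-code distance comparison:
# D_HM = 1 - Pearson r for duplicate pairs vs. 10,000 randomized singleton pairs.
import numpy as np
from scipy import stats

def d_hm(profile_a, profile_b):
    """HM-code distance: one minus Pearson's product-moment correlation."""
    r, _ = stats.pearsonr(profile_a, profile_b)
    return 1.0 - r

rng = np.random.default_rng(0)
n_marks = 22  # 17 HM combinations + 5 histone occupancy tracks

# Placeholder profiles: duplicates simulated as noisy copies of a shared profile,
# singletons as independent profiles (stand-ins for real ChromatinDB data).
duplicate_pairs = []
for _ in range(200):
    base = rng.normal(size=n_marks)
    duplicate_pairs.append((base + 0.3 * rng.normal(size=n_marks),
                            base + 0.3 * rng.normal(size=n_marks)))
singleton_profiles = rng.normal(size=(1000, n_marks))

dup_d = np.array([d_hm(a, b) for a, b in duplicate_pairs])
rand_idx = rng.integers(0, len(singleton_profiles), size=(10000, 2))
rand_d = np.array([d_hm(singleton_profiles[i], singleton_profiles[j])
                   for i, j in rand_idx if i != j])

# Wilcoxon rank-sum test for lower divergence among duplicates than random pairs
stat, p = stats.ranksums(dup_d, rand_d)
print(f"median D_HM: duplicates={np.median(dup_d):.3f}, "
      f"random singleton pairs={np.median(rand_d):.3f}, Wilcoxon P={p:.3e}")
```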
To verify this claim, we investigated the relationship between the distance of HM code (D HM-P or D HM-O ) and coding sequence divergence (the synonymous distance K S or the nonsynonymous distance K A between duplicate genes). Considering the statistically unreliable estimation of synonymous or nonsynonymous substitution distance when K S or K A becomes larger because of repeated substitutions at the same site, we selected duplicate pairs with K S < 2.0 and K A < 0.5 for this analysis. We observed that both D HM-P and D HM-O are positively correlated with K S or K A (Pearson's product-moment correlation: all r > 0.45, P < 10 -15 for all data points; Figure 2), suggesting that the divergence of HM code is coupled with the coding sequence divergence between duplicate genes. To avoid correlated data points bringing the bias, we selected independent pairs of duplicate genes using the method from Zou et al. [9] and reanalyzed. The similar result remains hold (all r > 0.40, P < 10 -13 for all data points; in Additional file 1: Figure S2). Considering K S or K A as a proxy to evolutionary time since gene duplication, we suggest that the correlation between the HM-code divergence and the coding sequence divergence has been mainly driven by mutations accumulated with evolutionary time. Our interpretation is based on two reasons: First, we observed a weak negative correlation between the HM divergence and the K A /K S ratio of duplicate genes (Pearson's productmoment correlation: r = -0.12, P < 10 -7 for D HM-P and r = -0.05, P = 0.04 for D HM-O ). As the K A /K S ratio is an indicator of sequence conservation in coding region, our finding implies that duplicate genes with stringent functional constraints on coding sequence may have greater divergence in the HM code, but the effect is marginal. Second, promoter HM code of duplicate genes diverges much quicker than that of ORF HM code (Wilcoxon rank sum test: P < 10 -11 ; Figure 2), while significant but weak difference of D HM-P and D HM-O in randomized singleton pairs (Wilcoxon rank sum test: P = 0.02; Figure 1). Some factors may be involved to accelerate the divergence of promoter HM code between duplicate genes, such as the evolution of transcription factors (TF) shared by duplicate genes. Co-evolution of the HM-code divergence between duplicates with several genetic regulatory elements The interaction between epigenetic and genetic elements in gene regulation has been increasingly acknowledged [21,26], raising an interesting question whether the divergence of histone modification code between duplicate genes co-evolves with some trans-acting factors binding to those duplicate genes. We first studied the relationship between the distance of promoter HM code (D HM-P ) and the distance of transcription factors (D TF ) and trans-acting expression quantitative trait loci (eQTLs) (D t-eQTL ) between duplicate genes. Trans-acting eQTLs of one gene represent all trans-regulatory proteins for its transcription, not limited to transcription factors. Two distance measures D TF and D t-eQTL were determined by Czekanowski-Dice formula (Methods). We found that they are significantly correlated (Peasron's product-moment correlation: r > 0.25, P < 10 -13 ; Figure 3A, 3C). Similar results were obtained in the case of ORF HM code ( Figure 3B, 3D). 
As both D HM-P and D HM-O , as well as D t-eQTL and D TF , increase with evolutionary time (K S or K A as the proxy) [ Figure 2; 9], it is reasonable to suspect that K S or K A may underlie these statistically significant correlations between the HM code and genetic regulatory elements. We conducted the partial correlation in D HM-P -D TF and D HM-P -D t-eQTL of duplicate genes under the controlling of K S and K A variables (with the restriction of K S < 2.0 and K A < 0.5), and still observed the significant relationship (r = 0.18, P < 0.001 for D HM-P -D TF and r = 0.15, P < 0.05 for D HM-P -D t-eQTL ), though they are relatively weak. In short, our analysis provides the evidence that the histone modification code and trans-regulators shared by duplicate genes may have co-evolved since gene duplication. Moreover, we design the following analysis to further explore the co-evolution between the HM code and transregulators (TF or trans-acting eQTL), by dividing yeast genes into two categories, trans-targeted genes and controlling genes. Trans-targeted genes are genes that are targeted by transcription factors (TF-targeted genes) or have at least one trans-acting eQTL (trans-eQTL acting genes) and the rest of genes are controlling genes (Methods). We totally obtained 4495 trans-targeted genes and 2226 controlling genes ( Figure 4A). Interestingly, both promoter and ORF HM code distances of duplicates in the group of trans-targeted genes are, on average, significantly higher than those in the group of controlling genes ( Figure 4B) (Wilcoxon rank sum test: promoter, P = 0.003 and ORF, P = 0.0001). It should be noticed that the pattern we observed in Figure 4B would not be affected by the strong correlation between the HM divergence and evolutionary time (K S as the proxy) of duplicate genes, because the distribution of K S has been found no significant difference between trans-targeted genes and controlling genes (Wilcoxon rank sum test: P = 0.08). The HM-code divergence and TATA-box regulation TATA box is the core promoter element for gene regulation responding to environmental stresses [27,28]. To examine the role of TATA box in the HM-code divergence between duplicate genes, we divided all yeast duplicate pairs into three groups, TATA-containing (both have TATA-box), TATA-less (both do not have TATA-box), and TATA_diverge (only one copy has TATA-box). We compared the HM-code distance in promoter (D HM-P ) and ORF (D HM-O ) region between duplicate genes in these groups. Interestingly, both D HM-P and D HM-O show the highest degree in the TATA-diverge group, and the lowest in the TATA-containing group ( Figure 5) (Wilcoxon rank sum test: promoter, P < 10 -10 ; ORF, P < 10 -15 ). Our explanation is as follows. Note that TATAcontaining genes may interact with some specific chromatin modification factors to regulate gene expression [29]. In the TATA_diverge group, only one duplicate with TATA-box has such interaction, resulting in a higher HM-code divergence between them. By contrast, in the case of TATA-containing group, both duplicates with TATA-box have similar interactions, resulting in a higher HM-code similarity between them. Biological functions and the HM-code divergence between duplicate genes Do different biological functions affect the level of the HM-code divergence between duplicate genes? We used GO (Gene Ontology) Slim (biological process with 37 categories; see Methods) to address the issue. 
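Returning to the partial correlations reported above (between the HM-code distance and the trans-regulator distances, controlling for K S and K A), one common way to compute such a partial correlation is to correlate the residuals left after regressing both distances on the covariates. The sketch below uses this residual-regression approach on synthetic placeholder data; the authors' exact procedure is not specified in the text, so treat it as an illustration rather than a reproduction.

```python
# Partial correlation of D_HM-P with D_TF, controlling for K_S and K_A,
# via the residual-regression approach. Data are synthetic placeholders.
import numpy as np
from scipy import stats

def residualize(y, covariates):
    """Residuals of y after ordinary least-squares regression on the covariates."""
    X = np.column_stack([np.ones(len(y))] + list(covariates))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

rng = np.random.default_rng(1)
n = 500
ks = rng.uniform(0.05, 2.0, n)                 # synonymous distance (time proxy)
ka = 0.2 * ks + rng.normal(0, 0.05, n)         # nonsynonymous distance
d_tf = 0.4 * ks + rng.normal(0, 0.1, n)        # TF-sharing distance between duplicates
d_hm_p = 0.3 * ks + 0.2 * d_tf + rng.normal(0, 0.1, n)  # promoter HM-code distance

res_hm = residualize(d_hm_p, [ks, ka])
res_tf = residualize(d_tf, [ks, ka])
r_partial, p_partial = stats.pearsonr(res_hm, res_tf)
print(f"partial r(D_HM-P, D_TF | K_S, K_A) = {r_partial:.2f}, P = {p_partial:.2e}")
```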
We analyzed D HM-P and D HM-O of duplicate genes in each GO category and compared D HM-P and D HM-O among these GO groups by one-way analysis of variance (ANOVA). Our finding is that duplicate pairs involved in different biological processes differ significantly in D HM-P and D HM-O (P < 10 -13 for D HM-P, P < 10 -15 for D HM-O; Table 1). Taking D HM-P as an example, duplicate genes involved in cofactor metabolic process, cellular amino acid and derivative metabolic process, cell cycle and cytoskeleton organization tend to have greater HM-code divergence, i.e., a higher D HM-P, while duplicate genes involved in translation, DNA metabolic process, pseudohyphal growth and transcription tend to have lower divergence of the promoter HM code. Moreover, we conducted a similar analysis on 24 cellular components (GO Slim classification, see Methods) and observed that duplicate genes with different subcellular localizations also show different evolutionary rates of the histone modification code after gene duplication (Table 2). It is possible that duplicate genes in some biological processes or subcellular localizations are young (small K S) while others are old (high K S). Since D HM-P and D HM-O are positively correlated with K S (Figure 2), we have to remove the confounding effect caused by K S. We used analysis of covariance (ANCOVA), with the model D HM-P (or D HM-O) ~ K S + T (biological process or subcellular localization) + K S : T, and the result remains statistically highly significant (Tables 1 and 2), suggesting that K S is not a confounding factor affecting our analyses.
Figure 3. Relationship between the distance of histone modification pattern and trans-regulators shared by duplicate genes: (A) and (B) show the correlation between the distance of transcription factors shared by duplicate genes and the divergence of promoter and ORF histone modification pattern between duplicate genes, respectively, while (C) and (D) show the corresponding relationships for the distance of trans-acting eQTLs targeted to the duplicate genes.

The expression divergence under genetic, epigenetic and stressful perturbations
It is well documented that the gene expression divergence of duplicate genes increases with evolutionary time, but the underlying mechanism remains a subject of debate [30]. The analysis described below asks whether the divergence of the HM-mediated regulatory network affects the expression divergence between duplicate genes. We compared the histone modification pattern distances (D HM-P, D HM-O) with the expression distance (E) between duplicate genes and found that they are significantly correlated (Pearson's product-moment correlation; D HM-P - E: r = 0.24, P < 10 -15; D HM-O - E: r = 0.30, P < 10 -15; Figure 6). We further divided the yeast expression profiles into four types according to the perturbation conditions: 1) normal developmental or physiological conditions, defined as the 'Normal' treatment; 2) a set of conditions in which expression changes are attributable to environmental stresses, denoted the 'Stress' treatment; 3) conditions in which a single gene encoding a chromatin modifier (CM), such as SWI/SNF, HDACs or HATs, was deleted, denoted the 'CM_del' treatment; and 4) conditions in which a single gene encoding a transcription factor (TF) was deleted, denoted the 'TF_del' treatment.
The latter two types are able to test the effect of chromatin modification related proteins and transcription factors on other genes, respectively. We then calculated the expression divergence between duplicate genes under these four types of conditions, denoted by E Normal , E Stress , E CM_del , E TF_del , respectively. We observed that the expression distance between duplicate genes in 'CM_del' condition (E CM_del ) is significantly greater than that in 'Normal' condition (E Normal ) (Wilcoxon rank sum test: P < 10 -10 ; Figure 7A), but much lower than that in 'Stress' and 'TF_del' conditions (E Stress and E TF_del ) (Wilcoxon sum rank test, P < 10 - 15 ; Figure 7A). Results imply that histone modification related enzymes and gene associated histone modification profile may indeed influence the expression evolution of duplicate genes, though the relative contribution to the expression divergence between duplicate genes is highly lower than genetic related factors like transcription factors, even externally environmental stresses. TATA-containing genes are usually enriched in stressrelated genes [29], and represent high expression variability [31,32]. In our study, we detected that in four disturbed conditions (Normal, Stress, CM-del, TF_del), the expression divergence in TATA-diverge and TATA-containing groups are significantly larger than that in TATA-less duplicate genes ( Figure 7B). The discrepancies between expression change and the histone modification divergence in these duplicate gene types (TATA-containing, TATA-less, TATAdiverge) are observed. Discussion The divergence of HM profile between duplicate genes Our detailed analyses on yeast whole-genome histone modification (HM) code profile have shown that duplicate genes share more common HM-code patterns than randomized singleton pairs in their promoter and ORF regions, and the HM-code divergence between duplicates in both regions increase with the sequence divergence. This finding supports the notation that epigenetic divergence between duplicate genes may have been driven by the accumulation of mutations with the duplication time in both their promoter and ORF regions, because it has been shown that the divergence of coding sequence such as K S between yeast duplicates is a proxy to evolutionary time. In other words, no external driving force is needed to explain the HM profile divergence between duplicates, though it remains possible of neofunctionalization based upon divergent HM profile through some positive selection mechanisms. Hypothesis-based genomic correlation analysis Genome-wide functional analysis of duplicate genes is in attempt to reveal functional correlation between genetic and epigenetic factors during the process of functional innovation through gene duplication. However, the universal confounding effect of the sequence divergence (K S ) has complicated the practical analyses. Consequently, the controversy about the cause-effect interpretation has been inevitable, because genome-wide analysis of duplicate genes has been viewed as an exploration rather than a hypothesis-testing approach. Since the correlation between the HM profile divergence and the sequence divergence actually reflects the fundamental evolutionary process driven by the accumulation of mutations, we view this as a null hypothesis in the genome-wide functional analysis of duplicate genes. 
That is, any meaningful inference about the functional correlation within or between genetic and epigenetic elements needs to reject this null hypothesis, as we have shown in this study. Interaction between genetic and epigenetic elements: who is the driver? The interaction between epigenetic and genetic elements in gene regulation has been increasingly acknowledged. For instance, the establishment of the histone modification code may partially involve the recruitment of specific histone modifying enzymes such as HATs, HDACs, HMTs by transcription factors (TF) [26]. Meanwhile, the distinctly combinatory histone modification code associated with gene may also provide specific binding code that is read by other transcription factors [21]. These observations raise an interesting question whether the divergence of histone modification code between duplicate genes may co-evolve with trans-acting factors binding to those duplicate genes, e.g., transcription factors, histone modifying enzymes. We have observed a higher divergence for HM code of duplicate genes in the category of trans-targeted genes than that of controlling genes, suggesting that the change of trans-regulators binding to duplicate genes may affect the pattern of HM code in both promoter and ORF regions, and thus accelerating the divergence of histone modification code between duplicate genes. Yet, it remains unclear about the cause-effect relationship. For instance, does the divergence of trans-regulators to duplicate genes facilitate the divergence of histone modification between duplicate genes, or vice versa? Our further study will address this issue. Functional preference in the HM code divergence between duplicate genes We observed the functional bias on the HM-code divergence after gene duplication. One possibility is that histone proteins associated with genes in different biological functions may be subject to differentially post-translational modification, leading to different divergence rate of histone modification code between duplicate genes. Since histone modification process of one gene largely depends on its chromatin environment and DNA sequence interacted by histone modifying enzymes [33], functionally selective constraints may also be imposed on histone modification evolution associated with that gene, a situation similar to DNA sequence evolution. Conclusions In this study, we unveiled the evolution of yeast histone modification code since gene duplication. Though duplicate genes represent functional redundancy at histone modification level compared with single-copy genes, the histone modification divergence occurred along with evolutionary time (K S as the proxy), which possibly due to the coding sequence evolution after gene duplication. Moreover, the histone modification code in ORF region evolves slower than that in promoter region, indicative of functionally selective constraints on protein sequences. Going further, after controlling the confounding effect of the coding sequence divergence (K S ), the histone modification code co-evolves with cis-(TATA box) and trans-(TF and trans-acting eQTL) regulatory factors, confirmed by the fact that the histone modification code is shaped by the combined interaction among histone-modifying enzymes, trans-acting elements and cis-regulatory motif. In addition, histone modification makes contribution to the expression divergence between duplicate genes, despite the minor effect compared to transcription factors and environmental stresses. 
Taking together, we provided the evidence of the co-evolution between genetic and epigenetic elements since gene duplication, together contributing to the expression divergence between duplicate genes. Data of yeast histone modification pattern Genome-wide histone modification pattern data of Saccharomyces cerevisiae were downloaded from Chroma-tinDB [34] (http://www.bioinformatics2.wsu.edu/cgi-bin/ ChromatinDB/cgi/visualize_select.pl). ChromatinDB provides the user with easy access to ChIP-microarray data for a large set of histones or histone modifications in S. cerevisiae, which includes 17 distinct histone modification combinations like dimethylation in Lys4 of histone H3 (H3K4me2), acetylation in Lys12 of histone H4 (H4K12ac), etc. and 5 histone protein occupancy levels (H2A, H2B, H3, H4, H2A.Z). We applied log base-2 of average enrichment ratio with nucleosome-normalizing for each of 22 histone modification data in promoter and open reading frame (ORF) regions of genes. Yeast microarray expression data A total of 84 microarray expression data points of S. cerevisiae whose expression changes are attributing to internal disturbing like developmental or physiological conditions were respectively collected [35][36][37]. This type was defined as "Normal" conditions, which are not genetically perturbed by regulatory network related elements like transcription factors and chromatin modifiers (CM) or other environmental stresses. We collected 170 gene expression profiles of yeast strains mutated for various chromatin modifiers from 26 publications [38]. We called this type of expression profile data as 'CM_del' conditions. Expression profile data of 263 transcription factor-deletion experiments were obtained from the Gene Expression Omnibus (GEO) database under the series accession number GSE4654 [39]. Similarly, this type was denoted as 'TF_del' experiments. A total of 504 cDNA microarray data points of yeast whose expression changes are attributing to environmental stresses were collected [9]. We call this type as 'Stress' conditions. Normalization was done as each original paper recommended. Data of transand cisregulatory elements Transcription factor (TF)-DNA binding profiles of yeast were downloaded from Lee et al. [40] and Harbison et al. [41]. In the study of Harbison et al. (2004), we just used 203 DNA-binding transcription factors in rich media conditions, regardless of 84 regulators in environmental stressed conditions. Most transcription factors in two studies are overlapped. Finally, we obtained 207 TFall S. cerevisiae genes binding interaction profiles. For each gene, a p-value was assigned to measure the probability of true TF-target interaction; a smaller p-value means the interaction is more likely. Here, we used relatively stringent significance level of 0.001 as cutoff to define the status of TF-target gene interaction. Two studies together determine all TF-target gene interactions, and we observed that 3183 genes are binding by at least one transcription factor (TF-targeted genes). Yeast genomic expression quantitative trait loci (eQTLs) data were downloaded from Brem and Kruglyak [42]. Wilcoxon-Mann-Whitney (WMW) test with the criterion of 50 kb interval was conducted to detect and define eQTL regions [9]. Finally, we obtained 2775 genes which at least have one trans-acting eQTL (trans-eQTL acting genes). 
Thus, we can divide all yeast genes into two categories, trans-targeted genes and controlling genes, where trans-targeted genes are the union of TF-targeted genes and trans-eQTL acting genes, while controlling genes are the remainder. Since trans-eQTL acting genes are regulated not only by transcription factors but mostly by chromatin-related enzymes and other factors [43], trans-targeted genes may be regulated by all kinds of trans-regulators, not restricted to transcription factors. There are two types of genes, TATA-containing genes and TATA-less genes [29]. We classified all duplicate pairs into three categories: TATA-containing, TATA-less and TATA-diverge pairs. TATA-containing and TATA-less types are duplicate pairs in which both copies have, or both lack, a TATA box, respectively, while the TATA-diverge type refers to duplicate pairs in which one copy is a TATA-containing gene and the other a TATA-less gene.

Protein subcellular localization and biological process
The information on protein localization and biological process for S. cerevisiae was defined by the Gene Ontology (GO) classification and downloaded from the Saccharomyces Genome Database. GO Slim was used to classify genes into 24 cellular component categories and 37 biological process categories. A duplicate gene pair was assigned to a GO Slim term if both duplicate copies belong to this GO term, or if one copy belongs to it while the other is not annotated.

Defining functional divergence between yeast duplicate genes
There are two types of histone modification profiles: the histone modification pattern associated with the gene promoter region and that associated with the open reading frame (ORF) region. One minus Pearson's product-moment correlation coefficient (1 - r) was used to determine the distance of these two types of histone modification pattern between duplicate genes, denoted D HM-P for the promoter histone modification distance and D HM-O for the ORF histone modification distance. We modified the Czekanowski-Dice formula [44] to calculate the distance of transcription factors or trans-acting eQTLs shared by duplicate genes 1 and 2 in a duplicate pair, denoted D TF and D t-eQTL, respectively. Let Δ12 be the number of TFs or trans-acting eQTLs that differ between the two copies of a duplicate pair, y1 ∪ y2 be the number of TFs or trans-acting eQTLs that regulate at least one of the duplicate genes, and y1 ∩ y2 be the number of TFs or trans-acting eQTLs shared by the two copies. Then the TF distance or trans-acting eQTL distance between duplicate genes 1 and 2 is defined as D = Δ12 / [(y1 ∪ y2) + (y1 ∩ y2)]. Apparently, the greater the value, the higher the degree of TF or trans-acting eQTL divergence between duplicate genes. We used the evolutionary distance (E) defined by Gu et al. [5] as the measure of expression divergence between duplicate genes under the four condition types, denoted E Normal, E CM_del, E TF_del and E Stress for the 'Normal', 'CM_del', 'TF_del' and 'Stress' experiments, respectively. Specifically, for any duplicate genes 1 and 2, let x 1k and x 2k be their expression levels in the kth microarray experiment, and let their means over the m experiments be the corresponding averages, where k = 1, ..., m. The expression distance (E) between genes 1 and 2 is then defined in terms of these quantities.
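The two divergence measures just defined can be sketched in a few lines of Python. The Czekanowski-Dice distance follows the verbal definition above (Δ12, the union count and the shared count). For the expression distance E, whose formula does not survive in the text, the sketch assumes the Pearson-correlation-based form 1 - r over the m experiments, in line with the 1 - r convention used for D HM; treat that part as an assumption. The TF names and expression values below are placeholders.

```python
# Sketch of the two divergence measures defined in the Methods above.
import numpy as np
from scipy import stats

def czekanowski_dice(regulators_1, regulators_2):
    """Distance between the TF (or trans-eQTL) sets of the two duplicate copies."""
    s1, s2 = set(regulators_1), set(regulators_2)
    diff = len(s1 ^ s2)     # Delta_12: regulators that differ between the pair
    union = len(s1 | s2)    # regulators acting on at least one copy
    shared = len(s1 & s2)   # regulators shared by both copies
    if union == 0:
        return 0.0
    return diff / (union + shared)

def expression_distance(x1, x2):
    """Expression divergence E over m experiments, assumed here to be 1 - Pearson r."""
    r, _ = stats.pearsonr(x1, x2)
    return 1.0 - r

# Placeholder example (TF names are illustrative only)
print(czekanowski_dice({"GCN4", "MSN2", "HSF1"}, {"GCN4", "MSN2"}))  # -> 0.2
rng = np.random.default_rng(2)
x1 = rng.normal(size=84)
x2 = 0.5 * x1 + rng.normal(size=84)
print(f"E = {expression_distance(x1, x2):.2f}")
```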
Determination of yeast duplicate pairs
The method of Gu et al. [45] was applied to identify duplicate genes. As the criterion of 80% alignable regions between protein sequences is too stringent and may miss some duplicate genes, we relaxed this criterion to 50%. All pairs of duplicate genes in each gene family were used for the analysis. The remaining S. cerevisiae genes were considered singleton genes. The rates of synonymous substitutions (K S) and nonsynonymous substitutions (K A) between duplicate genes were estimated using PAML [46] with default parameters.

Additional file
Additional file 1 (Figures S1-S2) is available at the online website of the BMC Evolutionary Biology journal.
7,302.4
2012-07-09T00:00:00.000
[ "Biology" ]
A TB Model with Infectivity in Latent Period and Imperfect Treatment An epidemiological model of TB with infectivity in latent period and imperfect treatment is introduced. As presented, sustained oscillations are not possible and the endemic proportions either approach the disease-free equilibrium or an endemic equilibrium. The expanded model that stratified the infectious individuals according to their time-since-infection θ is also carried out. The global asymptotic stability of the infection-free state is established as well as local asymptotic stability of the endemic equilibrium. At the end, numerical simulations are presented to illustrate the results. Introduction Tuberculosis, or TB, is an infectious bacterial disease caused by Mycobacterium tuberculosis M. tuberculosis , which most commonly affects the lungs.It is transmitted from person to person via droplets from the throat and lungs of people with the active respiratory disease.Tubercle bacilli carried by such droplets live in the air for a short period of time about 2 hours , and therefore it is believed that occasional contact with an infectious case rarely leads to an infection 1 .In most cases, the body is able to fight the bacteria to stop them from growing.The bacteria become inactive, but since they remain alive, can become active later.People who are infected with TB do not feel sick, do not have any symptoms, and cannot spread TB.But they could develop TB disease at some time in the future.The symptoms of active TB of the lung are coughing, sometimes with sputum or blood, chest pains, weakness, weight loss, fever, and night sweats. It is estimated that one-third of the world's population has been infected with the M. tuberculosis, which is a major cause of illness and death worldwide, especially in Asia and Africa 2 . 1 in 10 people infected with TB bacilli will become sick with active TB in their lifetime.If not treated, each person with active TB infects on average 10 to 15 people every year.There were 9.4 million new TB cases in 2008 3.6 million of whom are women including 1.4 million cases among people living with HIV, and 1.8 million people died from TB in 2008, including 500 000 people with HIV-equal to 4500 deaths a day WHO, 2009 .World TB Day, falling on March 24th each year, is designed to build public awareness that tuberculosis today remains an epidemic in much of the world, causing the deaths of several million people each year, mostly in the third world. TB is curable and considerable progress has been made in controlling TB in the whole world.36 million people were cured in DOTS programmes between 1995-2008 , with as many as 8 million deaths averted through DOTS.The 87% global treatment success rate exceeded the 85% target for the first time since the target was set in 1991.However, TB bacteria can become resistant to the medicines used to treat TB disease.This means that the medicine can no longer kill the bacteria.Multidrug-resistant TB MDR-TB is a form of TB that is difficult and expensive to treat and fails to respond to standard first-line drugs.Extensively drug-resistant TB XDR-TB occurs when resistance to second-line drugs develops on top of MDR-TB.5% of all TB cases have MDR-TB, based on data from more than 100 countries collected during the last decade WHO, 2009 . 
The transmission dynamics of TB has received considerable attention for a long time, and different mathematical models have been developed incorporating various factors, such as fast and slow progression [1], treatment [3-5], drug-resistant strains [6-8], reinfection [8, 9], coinfection with HIV [10-13], migration [14, 15], chemoprophylaxis [5], relapse [16], exogenous reinfection [17], seasonality [15, 18, 19], and age-dependent risks [9]. However, most of the models mentioned above assume that individuals who are latently infected are neither clinically ill nor capable of transmitting TB. In this paper, we assume that people in the latent period also have infectivity [20], which may arise as the disease develops. Some ODE models considering infectivity in the latent period can be found in [21-23], of which [21, 22] are SEI epidemic models and [23] is an SEIR epidemic model. An age-structured MSEIS epidemic model with infectivity in the latent period has also been discussed in [20], which stratified each class according to its real age and assumed that people in the latent and infected periods have the same infectiousness. Our work differs from these studies in that we consider weaker infectivity in the latent period as well as imperfect treatment. Further, we introduce an age-structured epidemic model in which the infective stage is stratified by the age-since-infection.

The structure of the paper is organized as follows. In Section 2, we formulate a simple ODE model and prove the global asymptotic stability of the disease-free equilibrium and of the endemic equilibrium, respectively. The basic replacement ratio is also briefly discussed in this section. An extension of the ODE model, namely an age-structured model, is analyzed in Section 3; in two subsections we discuss, respectively, the global asymptotic stability of the disease-free equilibrium and the local asymptotic stability of the endemic equilibrium. The numerical simulations and a brief discussion are given in Sections 4 and 5, respectively.

A Simple ODE Model
In this section, we begin with a simple ODE (ordinary differential equation) model with infectivity in the latent period and imperfect treatment. Since the disease progression is slow, the model should also incorporate demographic changes in the population. It is assumed that the total population grows exponentially [24]. We consider a population whose total size at time t is denoted by N(t), which is divided into three classes: S(t), susceptible individuals; E(t), latent individuals; and I(t), infective individuals, who may receive imperfect treatment and re-enter E(t). In fact, latent individuals may also receive treatment and recover, thus moving to S(t) instead of progressing to I(t). In addition, we assume that exposed individuals are infectious too, but less so than the infectious ones.

The model takes the form of an ODE system (2.1) for S(t), E(t) and I(t) with the following parameters: b, the birth/recruitment rate into the population; μ, the per capita natural death rate; k, the coefficient of reduction of infection; δ, the rate at which exposed individuals become infective; α, the per capita recovery rate from the class E; γ1, the per capita recovery rate from the class I; and γ2, the per capita imperfect treatment rate. We assume that all parameters are nonnegative and μ > 0. The demographic equation for the dynamics of the total population size N = S + E + I is given by dN/dt = bN − μN (2.2).
We obtain N(t) = N0 e^{rt}, where r = b − μ. Hence, r gives the growth rate of the population. If r > 0, that is, b > μ, the population grows exponentially; if r < 0, that is, b < μ, the population decreases exponentially. The case r = 0 implies that the population is stationary. These thresholds are often interpreted in terms of the demographic reproduction number.

The Threshold
In this subsection we derive the threshold, namely the basic reproduction number, by considering the existence of the endemic equilibrium, and then analyze the meaning of each part. Since the model (2.1) is homogeneous of degree one, we consider the equations for the normalized quantities. Setting s = S/N, e = E/N, i = I/N leads to an equivalent nonhomogeneous system (2.3), where s + e + i = 1. It is evident that system (2.3) always has a DFE (disease-free equilibrium) P0 = (1, 0, 0). To see the existence of the endemic equilibrium, we define the basic reproduction number R0; in particular, when R0 > 1 there is a unique endemic equilibrium P* = (s*, e*, i*). It can easily be seen that R0 is an increasing function of β and k and a decreasing function of b, γ1 and α. However, it has more complicated relations with δ and γ2, which can be seen by calculating the derivatives of R0 with respect to these parameters. Since s + e + i = 1, system (2.3) can be reduced to an equivalent two-dimensional system (2.9) by replacing s with 1 − e − i. Thus in the following sections we only need to investigate the properties of the DFE Q0 = (0, 0) and the endemic state Q* = (e*, i*) of system (2.9), where e* and i* are given in (2.6); these correspond to P0 and P*, respectively.

The Global Asymptotic Stability of the DFE
In this subsection, we first prove that the disease-free equilibrium (DFE) is locally stable when R0 < 1 by calculating the Jacobian of (2.9) at Q0, and then choose a proper Lyapunov function to obtain the global asymptotic stability of the DFE. It is easy to see that the two eigenvalues ω1 and ω2 of this Jacobian both have negative real parts when R0 < 1, which gives the local stability stated in Theorem 2.1.
Proof. We choose a Lyapunov function and calculate its derivative along solutions of (2.9), obtaining (2.13). If R0 < 1, noting that e ≥ 0, i ≥ 0 and all the parameters are positive, it follows from (2.13) that ke + i = 0, that is, e = i = 0. So Q0 is globally attractive. Coupled with Theorem 2.1, we conclude that Q0 is globally asymptotically stable when R0 < 1. The proof is complete.

The Global Asymptotic Stability of the Endemic Equilibrium
In this subsection, we first prove that the endemic equilibrium is asymptotically stable whenever it exists, by calculating the Jacobian of (2.9) at Q*, and then choose a proper Dulac function to rule out periodic solutions.
Proof. Consider the Jacobian of (2.9) at Q*, and write l = ke* + i*. Since the endemic state is obtained from (2.9) by setting the derivatives equal to zero, the (1,1) entry of the Jacobian can be rewritten accordingly. It is then easy to see that the two eigenvalues ω3 and ω4 satisfy ω3 + ω4 < 0 and ω3 ω4 > 0, which implies that both eigenvalues are negative. Thus we can conclude that Q* is locally asymptotically stable. This completes the proof.
System (2.9) is two-dimensional, so a direct application of Dulac's criterion is possible. We define the relevant region D = {(e, i) | e ≥ 0, i ≥ 0, e + i ≤ 1} (2.18).
Proposition 2.4. The system (2.9) has no periodic solutions, homoclinic loops, or oriented phase polygons inside the region D.
Proof. As a Dulac multiplier we use 1/e. A direct computation shows that the divergence of the rescaled vector field does not change sign in D, and therefore there are no closed orbits in the region D. This completes the proof.
From Theorem 2.3 and Proposition 2.4, we can immediately conclude that when R0 > 1 the endemic equilibrium Q* is globally asymptotically stable.
An Age-Structured Model
In this section we consider an extension of the ODE model from the previous section in which the infective stage is stratified by the age-since-infection, that is, the time spent in the infective stage. Let θ be the age-since-infection. With the notation from the previous section we consider the age-structured system (3.1), where γ1(θ) and γ2(θ) are nonnegative functions of θ. As before, γ1(θ) is the age-structured recovery rate for the infective state, and γ2(θ) is the age-structured imperfect treatment rate of infected individuals. We will use the following notation: I(t) is the number of infected individuals, Γ1(θ) represents the probability of not having recovered to S(t) at θ time units after becoming infected, and Γ2(θ) denotes the probability of not having received imperfect treatment (and thus not having re-entered E(t)) at θ time units after becoming infected; therefore Γ(θ) = Γ1(θ)Γ2(θ) is the probability of still being infective θ time units after becoming infected. Integrating the third equation in (3.1) and assuming that there are no individuals with infinite age-since-infection, that is, i(θ, t) → 0 as θ → ∞ for all t, we find that the total population size N = S + E + I obeys a Malthus equation of exponential growth, dN/dt = bN − μN. Similar to the previous section, by introducing the proportions s = S/N, u = E/N, v = i/N we obtain the normalized system (3.4), where s(t) + u(t) + ∫0^∞ v(θ, t) dθ = 1. It is easy to show that (3.4) always has a DFE M0 = (1, 0, 0). Let M* = (s*, u*, v*(θ)) be an endemic equilibrium of (3.4); then M* satisfies the corresponding equilibrium equations. From the third and fourth of these equations it follows that v*(θ) = δu* e^{−bθ} Γ(θ) (3.6), and from this one obtains the expression (3.8) for s*.

The Global Asymptotic Stability of the DFE
In this subsection, to analyze the stability of the DFE we again take the linearization of system (3.4) at the point M0 and obtain the threshold R0, namely the basic reproduction number; we then prove that the DFE is globally attractive if R0 < 1.
Theorem 3.1. If R0 < 1, the disease-free equilibrium M0 is locally asymptotically stable (LAS); if R0 > 1, M0 is unstable and there is a unique endemic equilibrium M*.
Proof. Set s = 1 + x, u = y, v = z (3.9). Plugging this into system (3.4) and ignoring higher-order terms gives the linearization (3.10) around the DFE M0. We look for exponential solutions of (3.10), that is, solutions of the form x = e^{ωt} x̄, y = e^{ωt} ȳ, z = e^{ωt} z̄(θ) (3.11), where ω is a constant. Substituting this into (3.10) yields the system (3.12). From the third and fourth equations in (3.12) we get z̄(θ) = δ ȳ e^{−(ω+b)θ} Γ(θ). The resulting characteristic equation can be rewritten in the form (3.14). Denote the left-hand side of (3.14) by F(ω) and define the basic reproduction number R0 accordingly. For ω ≤ −(b + δ + α), F(ω) is negative and the equation has no solution. For ω > −(b + δ + α), F(ω) is a decreasing function of real ω which approaches ∞ as ω → −(b + δ + α) and zero as ω → ∞. Therefore, the characteristic equation always has a unique real solution ω*, and if R0 > 1 then ω* > 0, which implies that the DFE is unstable. In addition, suppose ω = c + di is an arbitrary complex solution of (3.14); then (3.16) holds. Since F(x) is a decreasing function for real x and ω* satisfies (3.14), we have c ≤ ω*. Hence, any complex solution of (3.14) has a real part smaller than the unique real solution of (3.14). Therefore, if R0 < 1, then the disease-free equilibrium is locally asymptotically stable.
We note that if R0 < 1, then s* > 1 (see (3.8)), and thus the endemic state does not exist. If R0 > 1, then s* as given by (3.8) is smaller than one, and an endemic state exists and is given by (3.8). This completes the proof.
If we can show that the DFE is globally attractive, then coupled with the above theorem we can conclude that the DFE is globally asymptotically stable (GAS). In particular, we have the following theorem.
Theorem 3.2. Assume that γ2(θ) is a bounded function and that R0 < 1; then the disease-free equilibrium M0 is globally asymptotically stable.
Proof. Integrating the third equation of (3.4) along the characteristic lines we obtain (3.18), where Γ(θ − t, θ) = exp(−∫_{θ−t}^{θ} [γ1(τ) + γ2(τ)] dτ) is the probability that an individual who has been infected in class I for θ − t time units remains infective until θ time units after infection. This yields the estimate (3.19). Taking the upper limit as t → ∞ in this inequality leads to (3.20). From the second equation in (3.4) and using the fact that s ≤ 1 we obtain (3.21). Taking the upper limit as t → ∞ in this inequality we get (3.22). Since R0 < 1, this inequality can only hold if lim sup_{t→∞} u(t) = 0. From (3.18), we also have lim sup_{t→∞} v(θ, t) = 0 for every fixed θ. This completes the proof.

The Local Asymptotic Stability of the Endemic Equilibrium
In this subsection, we show that the endemic equilibrium is locally asymptotically stable as long as it exists. Writing the perturbation around the endemic equilibrium as in (3.23), plugging it into system (3.4) and ignoring higher-order terms, we obtain the linearization (3.24) around M*. We look for exponential solutions of (3.24) of the form x = e^{ωt} x̄, y = e^{ωt} ȳ, z = e^{ωt} z̄(θ) (3.25), where ω is a constant. Substituting this into (3.24) leads to the equations (3.28)-(3.29). In fact, one of the three equations of (3.28) and (3.29) is a consequence of the other two. In particular, adding the equations in (3.28) and using the equilibrium relations gives (3.32). If ξ ≥ 0, where ξ denotes the real part of a complex eigenvalue, then the left-hand side of (3.32) is strictly larger than b + δ + α, whereas from (3.8) the right-hand side of (3.32) is exactly b + δ + α. This is a contradiction, so ξ < 0; that is, any complex eigenvalue has negative real part, and hence the endemic equilibrium M* is locally asymptotically stable. The proof is complete.

Simulations
In this section, system (2.9) is simulated for various sets of parameters, and the results are found to be consistent with the analytical results. In Figure 2 we give an example showing that the disease-free equilibrium Q0 is stable when R0 < 1.
In Figure 3 there exists an endemic equilibrium Q*, which is stable when R0 > 1. The difference between the two groups of parameter values lies only in k, the coefficient of reduction of infection. Figure 1 represents the case in which infectivity in the latent period is omitted, and Figure 2 gives a very different result for the same case when weaker infectivity in the latent period is considered, which means that whether or not infectivity in the latent period is taken into account makes a big difference to the predicted prevalence of TB. To find better control strategies for TB infection, we would like to see which parameters can reduce the basic reproduction number R0. In Figure 4(a) we can see that R0 decreases if γ1 increases or α increases. From Figure 4(b) we find that R0 decreases if b increases or β decreases. Although all of these act to decrease the basic reproduction number, as the analysis above shows, they often do so by different amounts. As for the more complicated relations between R0 and δ or γ2, we can see from Figure 5(a) that the basic reproduction number R0 is an increasing function of δ and a decreasing function of γ2 for parameter values satisfying the sign conditions derived above, which coincides with the analysis. In Figure 5, in some regions R0 is a decreasing function of δ, while in others it is increasing. So, to find better control strategies for TB infection, we should consider all factors and their weights comprehensively.

Discussion
In this paper we formulate a new model of a common disease, TB. We assume that people in the latent period have weaker infectivity, and that individuals in both the latent and the infective period can receive successful or unsuccessful treatment. We first introduce a simple ODE model and prove that sustained oscillations are not possible, because the endemic proportions either approach the disease-free equilibrium or an endemic equilibrium. Because the behavior of the proportions does not give much insight into the behavior of the total numbers, we also define a basic replacement ratio. We then consider an extension of the ODE model in which the infective stage is stratified by the age-since-infection, that is, the time spent in the infective stage. For this model we show that the disease-free equilibrium is globally asymptotically stable if R0 < 1 and that the endemic equilibrium is locally asymptotically stable if R0 > 1. However, it is regrettable that in this paper we omit disease-induced mortality and do not discuss persistence of the disease in the PDE model. The proof of uniform strong persistence involves complicated theory, and the interested reader can refer to [25, 26].

The basic reproduction number — that is, the average number of secondary infections produced by an infective individual during the entire infectious period in a purely susceptible population — can be interpreted term by term. The first term, kβ/(b + δ + α), is the contribution to the reproduction number from secondary infections generated by an individual while in class E. The second term, δβ/[(b + δ + α)(b + γ1 + γ2)], can be decomposed as [δ/(b + δ + α)] · [β/(b + γ1 + γ2)], where δ/(b + δ + α) accounts for the move from E to I, and β/(b + γ1 + γ2) represents the secondary infections generated by an infective individual while in class I. Finally, the third term can be decomposed as [δ/(b + δ + α)] · [γ2/(b + γ1 + γ2)], where δ/(b + δ + α) again accounts for the move from E to I, and γ2/(b + γ1 + γ2) accounts for those who receive imperfect treatment and re-enter E.
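To accompany the simulations discussed in this section, the sketch below integrates a normalized latent-infective system and evaluates R0 numerically via the next-generation matrix. The right-hand side is a reconstruction from the verbal model description (standard incidence, reduced latent infectivity k, recovery rates α and γ1, imperfect treatment γ2 returning infectives to the latent class, and population growth rate b), not a verbatim copy of the paper's system (2.9), and the parameter values are placeholders rather than those used for Figures 2-5.

```python
# Numerical sketch of a normalized SEI system with latent infectivity and imperfect
# treatment; equations reconstructed from the verbal description, parameters are
# placeholders (not the values behind the paper's figures).
import numpy as np
from scipy.integrate import solve_ivp

p = dict(b=0.02, beta=6.0, k=0.02, delta=0.5, alpha=0.3, gamma1=1.5, gamma2=0.4)

def rhs(t, y, p):
    e, i = y
    s = 1.0 - e - i
    de = (p["beta"] * s * (p["k"] * e + i)
          - (p["b"] + p["delta"] + p["alpha"]) * e + p["gamma2"] * i)
    di = p["delta"] * e - (p["b"] + p["gamma1"] + p["gamma2"]) * i
    return [de, di]

def basic_reproduction_number(p):
    """Spectral radius of F V^{-1} for the linearization at the disease-free state."""
    F = np.array([[p["k"] * p["beta"], p["beta"]], [0.0, 0.0]])
    V = np.array([[p["b"] + p["delta"] + p["alpha"], -p["gamma2"]],
                  [-p["delta"], p["b"] + p["gamma1"] + p["gamma2"]]])
    return max(abs(np.linalg.eigvals(F @ np.linalg.inv(V))))

sol = solve_ivp(rhs, (0.0, 400.0), y0=[0.01, 0.001], args=(p,), dense_output=True)
print(f"R0 ~ {basic_reproduction_number(p):.3f}")
print(f"(e, i) at t=400: ({sol.y[0, -1]:.4f}, {sol.y[1, -1]:.4f})")
```

With these placeholder values the computed R0 exceeds one and the trajectory settles near a positive (e, i), while reducing beta or increasing gamma1 drives R0 below one and the proportions decay toward the disease-free state, mirroring the qualitative behaviour reported for Figures 2 and 3.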
Figure 1: The estimated TB incidence, prevalence, and mortality in different regions in 2008.
Figure 3: Phase plot of e versus i showing the existence of the endemic equilibrium Q*, which is stable when R_0 > 1, for the parameter value k = 0.02, with the other parameters the same as in Figure 2. Based on these parameters, R_0 ≈ 1.398 > 1.
Figure 4: The basic reproduction number R_0 in terms of some parameters: (a) R_0 in terms of α and γ_1, with the remaining parameter values the same as in Figure 3; (b) R_0 as a function of b and β, with the remaining parameter values the same as in Figure 3.
Figure 5: The basic reproduction number R_0 in terms of some parameters: (c) R_0 in terms of β and δ, with the remaining parameter values the same as in Figure 3; (d) R_0 in terms of α and δ, for k = 0.04, with the other parameters the same as in Figure 2.
5,308.8
2012-04-19T00:00:00.000
[ "Medicine", "Mathematics" ]
MFV Reductions of MSSM Parameter Space The 100+ free parameters of the minimal supersymmetric standard model (MSSM) make it computationally difficult to compare systematically with data, motivating the study of specific parameter reductions such as the cMSSM and pMSSM. Here we instead study the reductions of parameter space implied by using minimal flavour violation (MFV) to organise the R-parity conserving MSSM, with a view towards systematically building in constraints on flavour-violating physics. Within this framework the space of parameters is reduced by expanding soft supersymmetry-breaking terms in powers of the Cabibbo angle, leading to a 24-, 30- or 42-parameter framework (which we call MSSM-24, MSSM-30, and MSSM-42 respectively), depending on the order kept in the expansion. We provide a Bayesian global fit to data of the MSSM-30 parameter set to show that this is manageable with current tools. We compare the MFV reductions to the 19-parameter pMSSM choice and show that the pMSSM is not contained as a subset. The MSSM-30 analysis favours a relatively lighter TeV-scale pseudoscalar Higgs boson and $\tan \beta \sim 10$ with multi-TeV sparticles. Introduction Supersymmetry, when linearly realised, requires the existence of superpartners to the known elementary particles, and robustly dictates their quantum numbers. Less robustly dictated are their masses and couplings once supersymmetry is spontaneously broken, as experiments demand it must be. A full description of these requires the more than 100 parameters of the supersymmetry-breaking sector of the R-parity conserving minimal supersymmetric Standard Model (MSSM). The challenge of confronting such a vast parameter space with data drives the development of various kinds of well-motivated benchmark models. The earliest of these, the cMSSM/mSUGRA [2], specialises to a restricted parameter space motivated by what would be generated if supersymmetry were broken in a flavour-blind hidden sector (as suggested by the earliest gravity-mediation models). This simple model is one of the main benchmarks against which LHC results are compared, with the result that it is in real tension with the data. But should this tension be regarded as evidence against supersymmetry, even if only in its linearly realised 1 form? Answering this requires a more detailed exploration of the parameter space, yet a complete scan of the total parameters still remains beyond our current computational capabilities. What is needed is a more strategic survey of the possibilities, of which several approaches have emerged. One approach -for example, Gauge-Mediated Supersymmetry Breaking (GMSB) [4], or more sophisticated string-motivated gravity mediation mechanisms [5] -is to explore alternative mechanisms of supersymmetry breaking whose low-energy implications differ from those of the minimal gravity-mediated picture. Another focusses less on surveying the parameter space and more on the generic features of the underlying production and decay mechanisms, such as appear in 'simplified models' [6]. Comparison of such models to the data can quantify which of these mechanisms are favoured or disfavoured. A more specific 'simplified models' approach instead focuses on those interactions that take part in the naturalness issues that underlie the motivation for supersymmetry in the first place [7]. 
A third approach is to try to broadly survey the allowed parameter space, but to use prior knowledge about other constraints (like limits on flavour and CP violations) to cut down the range of parameters examined at the LHC. Of course this would be simple if it were just a matter of removing couplings that are excluded by other constraints. How the parameters are best pruned is more of a judgement call when the couplings of interest are not directly forbidden by other observations. The phenomenological MSSM (pMSSM) [8] is one of the leading approaches along these lines, which stakes out a 19-parameter subset of the MSSM by removing all members of potentially dangerous families of couplings, such as all flavour-changing interactions beyond those already in the Standard Model (SM), for example. Besides providing a good motivation for dropping the discarded parameters, the remaining 19-parameter set is also broad enough to include many models and yet small enough to allow reasonably systematic comparisons with LHC data. On the other hand, a drawback of the pMSSM is the relatively ad-hoc way that the couplings are truncated in detail (in the precise sense described in more detail below). Our goal in this paper is to proceed further along this line of reasoning, in particular to cast the removal of parameters in terms of approximate symmetries. This has the advantage of building in naturalness constraints at the outset, since radiative corrections are guaranteed to respect the choices made for the assumed hierarchies amongst the model's parameters. In particular we use Minimal Flavour Violation (MFV) [9,10,11,12] as our main symmetry criterion to limit flavour-changing physics, wherein the flavour symmetries of the SM in the absence of Yukawa couplings are assumed to be broken only by other parameters that transform as do the SM Yukawa couplings themselves. In such a formulation all of the magic of the GIM mechanism [13] is automatically incorporated, because flavour-changing interactions are typically suppressed by the same small mixing angles as are those of the SM fermions. When imposed on the MSSM, the MFV hypothesis expresses flavour-violating supersymmetry-breaking interactions in a new basis which emphasises their transformation properties under the approximate global flavour symmetries. This makes it possible to associate a power of the small symmetry-breaking parameter with every flavour-changing interaction, in a way that is consistent with the known flavour changes of the SM itself. Counting the suppression by this symmetry-breaking parameter provides a natural way to rank their size (and thereby gives a natural parameter-selection procedure, wherein one neglects all terms beyond a fixed order [9,10,14]).
In this paper we consider three such choices: the strongest is a 24-parameter MSSM-24, which works at the lowest nontrivial order. At next order is a 30-parameter MSSM-30 and at the order beyond this lies a 42-parameter Adams' model, 2 MSSM-42. Interestingly, none of these parameter sets contain the pMSSM, which is not defined by any fixed order in the MFV's small flavour-mixing parameter expansion. All three of these are multi-parameter alternatives to the pMSSM. They contain all of the pMSSM's main virtue and more. They are broad enough to include a large variety of well-motivated supersymmetric model points, and yet are small enough to bring within reach a systematic comparison with experimental data. As a first illustration we perform such a comparison for MSSM-30, showing that even this 30-parameter system is not too large to be surveyed using reasonable resources. The systematic exploration of these models should provide a better way for drawing quantitative inferences regarding whether linearly realised supersymmetry is yet disfavoured by current data. As is also the case for the pMSSM, the broader set of parameters contains some atypical expectations compared to the simpler and more constrained sub-spaces usually considered, until recently, in supersymmetry searches. Although we focus only on R-parity invariant interactions, the method can be easily extended to include the R-parity violating MSSM [18]. From a bottom-up perspective, the drawback of using ad-hoc criteria for reducing the MSSM parameter space is the uncertainty of the theoretical prejudices that underlie the choices made. Selecting to work within a few-parameter framework comes with a cost -a potential loss of physics that may prove important. For example, moving from the cMSSM to the 20-parameter pMSSM, as done in Ref. [15,17], changed the favoured masses of the Higgs boson and the scalar top-quark to 119-128 GeV and 2-3 TeV respectively, at a time where such heavy masses were considered impossible within the traditional cMSSM. Another example: by setting the off-diagonal mass terms to be zero within the pMSSM frame, certain diagrams that contribute to flavour changing decays (such as in the decay B s → µ + µ − ) are lost by construction. The MFV framework provides a natural way to extend the number of parameters in a systematic fashion, order-by-order, from the traditional few-parameters towards the complete and phenomenological representations. We consider the work we present here as only a first step towards a more systematic approach to soft terms from a bottom-up perspective. Jumping from the handful of parameters of the cMSSM to phenomenological studies of the pMSSM took more than 25 years [15,16,17] due in part to the computational challenge of considering more than 5 parameters. Thanks to increasing computing capacities this is becoming less of an issue. The main disadvantage of a Bayesian analysis for models with many parameters is that prior-dependence can limit the predictive power. One possible approach in the short term is to seek observables that are prior-independent and to estimate, qualitatively, the extent at which current data is able to constrain the supersymmetry models [23,28,25,26]. In the longer term this is less of an issue as better, more constraining, data becomes available. In what follows we do not explore prior dependence in too much detail, beyond comparing some of our results with fits to the pMSSM, because our immediate goal is to define the general set-up for later use. 
Our presentation is organised as follows. We first, in §2, describe in more detail the choices made both in the pMSSM and in our three realisations of the MFV-MSSM. §3 then describes a global fit of the MSSM-30 model to the data, with the goal of illustrating the utility of the MFV approach. Finally, §4 briefly summarises our conclusions.

The models

In this section we provide a brief summary of the pMSSM and of the assumptions that go into the MFV-MSSMs that are compared later with observations. The starting point for both is the observation that a full comparison of LHC and other experimental data to the 100-plus parameters of the MSSM is not (yet) feasible, nor is it desirable (at the moment), given that many of these parameters describe processes that are strongly constrained by limits on flavour-changing neutral currents (FCNCs) and on CP violation. Therefore we seek a methodology that allows a maximal probe of the MSSM parameter space with minimal imposition of ad-hoc relations or truncations amongst the free parameters.

Parameter pruning

We start with a broad-brush description of the pMSSM and MFV-MSSM, in particular showing how these are related to one another.

The pMSSM

The goal is to arrive at a criterion for excluding flavour-changing and CP-violating interactions. The pMSSM does so by making the following choices [8]:
• The absence of flavour-violating interactions (when renormalised at TeV scales);
• Degenerate masses and negligible Yukawa couplings for the first two generations of sfermions;
• No CP-violating interactions (beyond those of the SM CKM matrix);
• R-parity conservation;
• The lightest neutralino should be the lightest superpartner (LSP) and a thermal relic.

This approach leads to a model for which 19 parameters capture superpartner and multiple-Higgs physics. The 19 parameters are: 10 sfermion masses; 3 gaugino masses (M_1, M_2, M_3); 3 trilinear couplings (A_t, A_b, A_τ); and 3 Higgs/Higgsino parameters (µ, M_A, tan β). We see that its definition includes choices that are well-motivated but ultimately ad-hoc. For instance, to avoid extra CP-violating sources the supersymmetry-breaking terms are set, by hand, to be real. This amounts to the assertion that no CP-violation effects play an important role in physical processes or interactions at colliders. Similarly, the first- and second-generation sfermion masses are set to be degenerate in order to avoid conflict with the non-observation of FCNCs, while the flavour changes of the SM are of course kept. It is necessarily tricky to distinguish BSM physics that explicitly violates flavour from the higher-order corrections through which flavour-blind BSM physics learns about SM flavour violation. An alternative approach is to systematically represent all flavour physics effects, both SM and BSM, as a perturbation involving some natural flavour expansion parameter, such as would be the case in an MFV analysis. Although inspired by MFV considerations, the pMSSM flavour constraints are not derived using MFV symmetry considerations (though this claim is sometimes made).

Minimal flavour violation: the MFV-MSSM

The MFV hypothesis [9] formulates the small size of flavour-violating effects in terms of approximate symmetries. To this end, the starting point is to identify the large group, G, of flavour symmetries that the SM enjoys when all Yukawa couplings vanish. The assumption is then that the only quantities that break these symmetries are spurion fields that are proportional to the SM Yukawa couplings themselves.
That is, the action is G-invariant when expressed in terms of its regular fields and the spurion fields, with the spurion fields then being replaced by their vacuum expectation values, whose values are inferred from the SM Yukawa couplings. This has the virtue of automatically building in the GIM cancellations required by observations once loop effects are included. As applied to the MSSM, the upshot is that MFV boils down to the requirement that all the low-scale MSSM flavour couplings can be reconstructed entirely out of appropriate powers of the SM Yukawa coupling matrices, Y_{U,D,E}, ensuring that flavour violations are solely governed by the CKM matrix. Within the MFV framework, the soft supersymmetry-breaking terms are expanded in series of the G-invariant spurion factors [14,9,10,19], as in Eq.(2.1). Although the ellipses appear to denote an infinite series, this collapses to only a few terms due to the Cayley-Hamilton identities for 3 × 3 matrices. For instance, any generic matrix can be written in the form of Eq.(2.1), but generically the required coefficients, b_i and c_i, would span many orders of magnitude. The power of the MFV hypothesis lies in the assumption that the b_i and c_i are of order unity, with all small numbers suppressing flavour changes coming solely from those already in the Yukawa matrices. The trilinear scalar couplings similarly take the form (A_{E,U,D})_{ij} = (A'_{E,U,D} Y_{E,U,D})_{ij}, Eq.(2.2). Now, a non-symmetry way to truncate the above parameters to a flavour-blind set is to impose b_i = c_i = 0. This sets all off-diagonal elements of the matrices to zero, and all diagonal elements are set equal to one another, leading to a 14-parameter flavour-blind MSSM with no extra-SM sources of CP violation. Note that these choices ensure the sfermion masses within each family are degenerate. Restricting the degeneracy to only the first two generations then gives the 19-parameter pMSSM. This shows how the pMSSM is related to the MFV MSSM, and why some of the assumptions in its construction do not rely on symmetries. By contrast, the number of MFV MSSM parameters is in principle the same as for the original MSSM if we work to all orders in the small Yukawa couplings. However, within the MFV MSSM the number of parameters can be reduced in a systematic way by dropping terms smaller than a particular fixed order in small mixing angles (like the Cabibbo angle), as we now see.

Expansions in small mixing angles

A systematic approach for selecting the number of MSSM parameters has been prescribed in Ref. [14]. The counting rule exploits the hierarchical structure of the off-diagonal terms of the Yukawa matrices, usually expressed in terms of the Cabibbo angle, λ = sin θ_CB ≃ 0.23. The idea starts from the observation that, after the collapse of the infinite series in Eq.(2.1) and Eq.(2.2) into a few terms by employing the Cayley-Hamilton identities, the largest pieces of the terms built from the up-type Yukawa couplings are proportional to V*_{3i} V_{3j}, where V is the CKM matrix. The next, relatively smaller, terms are proportional to V*_{2i} V_{2j}. So V*_{3i} V_{3j} and V*_{2i} V_{2j} can be used as basis vectors with coefficients of order one, and δ*_{i3} δ_{j3} and δ*_{i2} δ_{j2} can be used with coefficients of order y_b² and y_s², respectively. Here δ_{ij} is the unit matrix in family space. In this way, all possible multipliable structures lead to a complete set of basis vectors X_i that form a closed algebra under multiplication. Note that the basis vectors are all of order one, since each has at least one entry of order unity. With these, each of the MFV parameters can be assigned an order in λ.
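As an illustration of this counting, the short sketch below builds the basis structures V*_{3i}V_{3j} and V*_{2i}V_{2j} from a Wolfenstein-parametrised CKM matrix and prints the power of λ ≃ 0.23 at which each entry first appears. The numerical CKM values are standard Wolfenstein-expansion approximations used purely for illustration; they are not taken from the paper.

```python
import numpy as np

lam, A = 0.23, 0.81          # Wolfenstein parameters (approximate, for illustration)
rho, eta = 0.14, 0.35

# CKM matrix to O(lambda^3) in the Wolfenstein expansion.
V = np.array([
    [1 - lam**2 / 2,                     lam,             A * lam**3 * (rho - 1j * eta)],
    [-lam,                               1 - lam**2 / 2,  A * lam**2],
    [A * lam**3 * (1 - rho - 1j * eta),  -A * lam**2,     1.0],
])

def lambda_order(x, lam=0.23):
    """Nearest integer n such that |x| ~ lambda^n (crude, for display only)."""
    return int(round(np.log(abs(x)) / np.log(lam))) if abs(x) > 0 else 99

for label, row in (("V*_{3i} V_{3j}", 2), ("V*_{2i} V_{2j}", 1)):
    basis = np.outer(np.conj(V[row, :]), V[row, :])
    orders = [[lambda_order(basis[i, j]) for j in range(3)] for i in range(3)]
    print(label, "lambda-orders of entries:")
    print(np.array(orders))
```

Each structure has at least one entry of order λ^0, which is why the basis vectors themselves count as order one and all suppression is carried by the explicit powers of λ.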
Once the accuracy of the calculation is chosen, in the form O(λ^n), the prescription can be used to systematically discard terms within the supersymmetry-breaking parameter expansion expressed in the X_i basis.

The MSSM-42 model

For instance, as done in Ref. [14], dropping terms of order λ^6 ∼ 10^−4 and higher from the soft supersymmetry-breaking terms in Eq.(2.1) and Eq.(2.2) leaves a finite set of expansion coefficients. Since the squark supersymmetry-breaking mass matrices are Hermitian, ã_{1−3,6,7} > 0 and x_1, x_2, y_1, y_3, y_6, y_7 must be real, while the other coefficients can be complex. Hence the total number of supersymmetry-breaking parameters amounts to 42, defining the MSSM-42.

The MSSM-30 model

Keeping terms only up to order O(λ^4) reduces the coefficient list further and gives the intermediate 30-parameter set, the MSSM-30.

The MSSM-24 model

At the lowest nontrivial order, only x_{1−2} ∈ R and y_5 ∈ C remain from the non-diagonal mass and trilinear coupling expansion terms in Eq.(2.5). These make a total of 24 soft supersymmetry-breaking parameters for the MSSM-24. Note that the MFV MSSM parametrisation cannot be reduced to the 19 parameters of the pMSSM.

The MSSM-11 model

Ideally we would like to reduce the number of parameters even further while keeping the systematic approach we are following here. This we cannot do, but it is possible to define a minimal extension of the constrained MSSM, i.e. the cMSSM, in a more ad hoc way by setting, in the above, φ_1 = φ_2 = 0, ã_1 = ã_2 = ã_3 = ã_6 = ã_7 = m_0, Re(ã_8) = Re(ã_4) = Re(ã_5) = A_0, and Im(ã_8) = Im(ã_4) = Im(ã_5) = 0. That is, {µ, M_A, e^{iφ_µ}} → {m_{H_1} = m_{H_2} = m_0, sign(µ)}, together with tan β. This reduces the parameter space to an 11-parameter cMSSM, or cMSSM-11. Given its simplicity, it may be worth studying this model in detail even though it reintroduces some ad-hoc selection of parameters at the end. Out of these sub-MSSMs derived via the MFV MSSM scheme, in this paper we concentrate on the MSSM-30 model and fit its parameters to experimental data as a first step towards landscaping, and making further forecasts about, the MSSM parameter space.

The MSSM-30 fit

As mentioned earlier, the MSSM-42 model cannot be reduced to the traditional pMSSM parameter space. The MSSM-24 is the closest to the pMSSM. However, looking at the parameter lists of the MSSM models mentioned in the previous section, we select the MSSM-30 for going beyond the pMSSM, especially in the flavour sector. This is a first step beyond the pMSSM within our series of MSSM projects [20,15,17,21,22,23,24,25,26,27], which is systematically built for absorbing experimental data from both the energy and intensity frontiers into high-energy physics explorations. Explorations of the MSSM CP-violating phases within various constructs can be found in the literature, such as in Refs. [29,30,14,31,32]. The sub-MSSMs mainly fall into one of the various constrained MSSMs, the pMSSM, or a flavour-blind MSSM with variable extra-SM CP phases. The MSSM-30 goes beyond these by construction, considering that the systematic inclusion of the flavour-violating terms is important, and by number of parameters. The procedure for the Bayesian fit of the MSSM-30 to data is described as follows.

Fitting procedure

We use Bayesian statistical methods for fitting the MSSM-30 to data. Bayes' theorem takes two pieces of input information and delivers essentially two inferences about the model being addressed. The process has to take place within a well-defined context. The context, H, for the MSSM-30 analysis is that the model represents R-parity preserving, linearly realised supersymmetry and that the neutralino LSP makes up at least part of the cold dark matter (CDM) relic.
One of the inputs is the assumption about the nature of the model parameters, θ. Here we assume a flat prior probability density, p(θ|H), over the MSSM-30 parameters. For this analysis, the constraint from the anomalous magnetic dipole moment of the muon is not included, in order to avoid possible tension with the EDM constraints, since this has the potential of slowing down the exploration of the MSSM-30 parameter space. The compatibility of the MSSM-30 with the data is quantified at each point in parameter space by the likelihood, the probability of the data set given the parameter point, p(d|θ, H). Assuming the observables are independent, the combined likelihood is the product of the individual likelihood terms, where the index i runs over the list of observables O, the variable x represents the predicted value of the neutralino CDM relic density at an MSSM-30 parameter point, y = 0.11 is the CDM relic density central value, and s = 0.02 is the corresponding error, inflated to allow for theoretical uncertainties. The MSSM-30 parameters are passed to the SPHENO [47,48] and SUSY FLAVOR [49] packages, via the SLHA2 [50] interface, for computing the supersymmetry spectrum, mixing angles and couplings, and the corresponding predictions: the branching ratios BR(B_s → µ+µ−), BR(B → sγ), R_{BR(B_u→τν)}, BR(B_d → µ+µ−), ∆M_{B_s}, ∆M_{B_d} and d_{e,µ,τ}. Using the SLHA1 [51] interface, the neutralino CDM relic density was computed using micrOMEGAs [52], while susyPOPE [53,54] was used for computing precision observables that include the W-boson mass m_W, the effective leptonic mixing angle variable sin²θ^{lep}_{eff}, the total Z-boson decay width Γ_Z, and the other electroweak observables whose experimentally determined central values and associated errors are summarised in Table 1. The predictions from SUSY FLAVOR were not used for fitting the MSSM-30 but could be used for comparing predictions from the two packages. The MultiNest [55,56] package, which implements the nested sampling algorithm [57], was used for fitting the MSSM-30 to data. The results of the Bayesian fit are the posterior probability density of the model parameters given the data, p(θ|d, H), and the support (or evidence), Z = p(d|H), for the MSSM-30 from the data used. These come directly from Bayes' theorem, p(θ|d, H) × p(d|H) = p(d|θ, H) × p(θ|H) (3.5). The posterior probability densities of the MSSM-30 parameters and representative sparticle masses are presented in the next subsection.

Posterior distributions

The quantities of interest to be investigated from the output of the MSSM-30 fit to data are the supersymmetry-breaking parameters and the sparticle masses. The former provide an indication of the preferred regions within the MSSM-30 hyperspace that are compatible with the experimental results, while from the latter insight can be obtained concerning the prospects for detecting sparticles at the LHC and/or future colliders. The one-dimensional posterior probability distributions for the MSSM-30 parameters are shown in Figure 1. The real and imaginary parts of the complex parameters are plotted in the same figure, while the corresponding magnitudes and phases are shown in Figure 2. In addition we also present, in Figure 2, the posterior distribution of the neutralino LSP's gaugino-Higgsino composition (1 − Z_g), where Z_g = |N_11|² + |N_12|², with the LSP written as the combination N_1i, i = 1, 2, 3, 4, of the bino B̃, wino W̃_3 and Higgsinos H̃_{1,2}, the coefficients N_1i depending on the supersymmetry-breaking parameters [58].
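As a small illustration of this quantity, the sketch below computes Z_g and 1 − Z_g from the first row of a neutralino mixing matrix. The numerical mixing coefficients are a made-up, Higgsino-dominated example, not an output of the fit.

```python
import numpy as np

def gaugino_fraction(N_row):
    """Z_g = |N_11|^2 + |N_12|^2 for the lightest neutralino (bino + wino content)."""
    N_row = np.asarray(N_row, dtype=complex)
    N_row = N_row / np.linalg.norm(N_row)   # enforce unit normalisation of the row
    return float(abs(N_row[0]) ** 2 + abs(N_row[1]) ** 2)

# Hypothetical mixing coefficients (N_11, N_12, N_13, N_14) for a Higgsino-like LSP.
N1 = [0.05, -0.10, 0.70, -0.70]

Zg = gaugino_fraction(N1)
print(f"Z_g = {Zg:.3f}  ->  1 - Z_g = {1 - Zg:.3f} (close to 1 means Higgsino-like)")
```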
The neutralino is dominantly Higgsino- or gaugino-like for (1 − Z_g) approximately equal to unity or zero, respectively. The nature of the LSP composition is relevant for understanding the posterior distributions of the gauge-sector supersymmetry-breaking parameters. From Figure 2 it can be seen that the LSP and the lightest chargino are quasi-degenerate, m_{χ±_1} ∼ m_{χ0_1} ∼ µ. Secondly, the posterior distribution of (1 − Z_g) indicates that the LSP is mostly Higgsino-like. Therefore, efficient neutralino-chargino co-annihilation takes place, which satisfies the CDM relic density requirement. The posterior distributions for the gaugino mass parameters M_1 and M_2, together with the electroweak symmetry breaking constraint, control the nature of the neutralino gaugino-Higgsino admixture. M_1 and M_2 remain approximately unconstrained because the scenario is similar to the cMSSM's focus-point region, see e.g. [59], where the renormalisation group running of m_{H_2} is decoupled from the gaugino and trilinear parameters. This is also the case for the pMSSM distributions shown as dashed lines, except for M_2, which looks quite different, apparently due to the non-negligible interplay of the EDMs and other constraints on the imaginary parts Im(M_1) and Im(M_2) shown in Figure 1, or the corresponding phases (φ_{1,2}) shown in Figure 2. The gluino mass distribution prefers slightly heavier values relative to that in [15,17], due to the intensity-frontier constraints. Unlike the case of the gluino mass, the posterior distributions in Figure 1 show that the intensity-frontier constraints, plus fixing m_h = 125 ± 3 GeV, favour smaller values of tan β and a lighter pseudoscalar Higgs boson mass M_A relative to the fits in [15,17]. The tan β feature, together with the tendency towards heavier gluinos and sparticles, is compatible with the effect of the EDM constraints. The EDMs tend to be proportional to tan β [63], so preventing EDM over-production necessarily requires lower tan β values. The fit indicates a 95% credible interval (Bayesian confidence interval) of 4.5 to 26.9, with a mean value of tan β = 13.4 ± 5.8. The value of M_A lies within the range 327.8 to 3803.3 GeV at the 95% credible interval, with a mean value of M_A = 1751.9 ± 1024.5 GeV. Applying the results of the ATLAS and CMS collaborations' searches for MSSM Higgs bosons [60,61] to the MSSM-30 posterior would require a dedicated interpretation of their data within the new MSSM frame. The remaining MSSM-30 parameters, which appear in the mass-squared terms (ã_{1,2,3,6,7}, x_{1,2}, y_{1,3,6,7}) and in the trilinear couplings (ã_{4,5,8}, y_{4,5}), cannot be compared due to their absence within the pMSSM. However, the mass-squared terms can be compared, as shown in Figure 2. It can be seen that the posterior sample from the flat-prior fit of the MSSM-30 to data favours supersymmetry-breaking parameters in regions deeper into the multi-TeV scale, beyond the pMSSM results. This feature is expected for scenarios that alleviate the supersymmetry CP problems (see Refs. [62,63,64] and references therein, for instance): the CP-violating phases have to be either small or the sparticles heavy, in the multi-TeV region. The phases were not restricted to be small for the MSSM-30 fit. The feature is also supported by the fact that radiative corrections to the lightest CP-even Higgs boson mass require heavy 3rd-generation squarks in order to meet the constraint m_h = 125 ± 3 GeV.
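The quoted means and 95% credible intervals can be reproduced from a set of (possibly weighted) posterior samples with a few lines of code. The sketch below is generic: the synthetic samples and weights stand in for the MultiNest output and are not the actual MSSM-30 posterior.

```python
import numpy as np

def weighted_mean_and_interval(samples, weights=None, level=0.95):
    """Weighted posterior mean, standard deviation and equal-tailed credible interval."""
    samples = np.asarray(samples, dtype=float)
    weights = np.ones_like(samples) if weights is None else np.asarray(weights, dtype=float)
    weights = weights / weights.sum()

    mean = np.sum(weights * samples)
    std = np.sqrt(np.sum(weights * (samples - mean) ** 2))

    order = np.argsort(samples)
    cdf = np.cumsum(weights[order])
    lo = samples[order][np.searchsorted(cdf, (1 - level) / 2)]
    hi = samples[order][np.searchsorted(cdf, 1 - (1 - level) / 2)]
    return mean, std, (lo, hi)

# Synthetic stand-in for posterior samples of tan(beta) (illustrative only).
rng = np.random.default_rng(1)
tanb_samples = rng.gamma(shape=5.0, scale=2.7, size=20000)
w = rng.random(20000)   # stand-in for nested-sampling posterior weights

mean, std, (lo, hi) = weighted_mean_and_interval(tanb_samples, w)
print(f"tan(beta): mean = {mean:.1f} +/- {std:.1f}, 95% credible interval = [{lo:.1f}, {hi:.1f}]")
```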
The trilinear couplings, on the other hand, are peaked around zero, because values away from zero tend to give solutions with negative squark masses. The only exception here is the leading parameter in the trilinear coupling term A_U (Re(ã_4) and Im(ã_4) in Figure 1, with the corresponding magnitude ã_4 and phase φ_{a_4} shown in Figure 2), which is roughly fixed by the m_h = 125 GeV constraint. [Figure caption fragment: pMSSM results from [15,17] (when these are available) are shown for comparison with the current MSSM-30 fit; the mass parameters are in TeV and the phases are in radians; the distribution of the neutralino composition 1 − Z_g described in the text is also shown.]

Conclusions and outlook

We have implemented the reparametrisation of the R-parity conserving MSSM implied by the MFV hypothesis as a prescription for selecting supersymmetry-breaking parameters at various orders O(λ^n), n = 1, 2, 3, . . ., where λ = sin θ_CB ≃ 0.23, in a Cabibbo mixing angle (θ_CB) expansion of the flavour-violating mass and trilinear coupling terms. This leads to the construction of the phenomenological MSSM frames, namely the MSSM-42 by keeping terms at order O(λ^6), the MSSM-30 by keeping terms at order O(λ^4), and the MSSM-24 by keeping terms at order O(λ^3), with 42, 30 and 24 parameters respectively. The traditional pMSSM cannot be obtained via this systematic approach because, by construction, it has 1st-2nd generation squark mass degeneracies and off-diagonal elements in the mass terms set to zero by hand. The MSSM-42, MSSM-30, or MSSM-24 are suitable for fundamental-physics studies involving the usually unavoidable interplay of energy- and intensity-frontier effects. As a first step within our broader MSSM project, the MSSM-30 is chosen for going more significantly beyond the current R-parity conserving MSSM phenomenology constructs. The MSSM-30 parameters with O(λ^4) ∼ O(10^−3) coefficients in the MFV basis include the flavour-conserving but CP-violating MSSM phases. We have performed a Bayesian global fit of the MSSM-30 to experimental data, following the standard techniques of Refs. [15,17]. The data consist of the Higgs boson mass, electroweak physics, B-physics, the electric dipole moments of the leptons and the CDM relic density observables. The posterior distributions of the 30 parameters are shown in Figure 1. The mass-term posterior distributions shown in Figure 2 indicate that the data used favour multi-TeV 1st/2nd generation and 3rd generation sparticles. The preference for smaller/lighter values of tan β and M_A compared to the case of the 2008/9 pMSSM fits [15,17] is clear. Their posterior distributions are approximately prior-independent for the pMSSM fits [15,17,65]. This is also expected to be the case for the MSSM-30, since there is no feature or observable indicating otherwise, but the study of other priors is beyond the scope of the present paper. The MSSM-30 flat-prior fit indicates a 95% Bayesian confidence interval of 4.5 to 26.9 for tan β, with a mean value of tan β = 13.4 ± 5.8. M_A lies within the range 327.8 to 3803.3 GeV at the 95% credible interval, with a mean value of M_A = 1751.9 ± 1024.5 GeV. It would be interesting to assess the effect of the ATLAS and CMS MSSM Higgs boson search results [60,61] on the MSSM-30 parameter space. This can be done by interpreting the experimental data within the MSSM-30, as done in Ref. [26] for interpreting the supersymmetry results within the pMSSM.
Extending our analysis to the more robust MSSM-42 should be achievable in the near future, including a comparison of different priors in order to extract prior-independent information. This is a concrete project to follow-up. This is especially relevant for future studies in search for supersymmetry with the LHC or some other future collider(s). The power of the Bayesian approach in determining prior-independent results should be applied within robust phenomenological frameworks such as the MSSM-30 and MSSM-42 for this purpose. Its relevance should improve with the increasing availability of data. Having preference for multi-TeV supersymmetric particles may also add to the different arguments supporting higher energy initiatives such as a potential 100 TeV machine.
7,358.4
2014-11-06T00:00:00.000
[ "Physics" ]
Important Citation Identification by Exploiting the Sentiment Analysis and Section-Wise In-Text Citation Weights

A massive research corpus is generated in this epoch based on previously established concepts or findings. To acknowledge this base knowledge, researchers perform citations. Citations are the key considerations used in computing different research measures, such as ranking institutions, researchers, and countries, computing the impact factor of journals, allocating research funds, etc. But in calculating these critical measures, citations are treated equally. However, researchers have argued that all citations can never be equally influential. Therefore, researchers have proposed content-based, meta-data-based, and bibliographic-based techniques to identify the important citations. However, the results produced by the state-of-the-art still need to be improved. In this research work, we proposed an approach based on two primary modules: 1) section-wise citation count and 2) sentiment-based analysis of citation sentences. The first technique is based on extracting the different sections of the research articles and performing a citation count. We applied Neural Network and Multiple Regression models to the section-wise citations for automatic weight assignment. In the second module, the citation sentences were extracted and sentiment analysis was applied to these sentences. Citations were classified with Support Vector Machine, Multilayer Perceptron, and Random Forest. F-measure, Recall, and Precision were used to evaluate the results, which were compared with the state-of-the-art results. The value of precision with the proposed approach was enhanced to 0.94.

I. INTRODUCTION

Scientific research always has its roots in the literature of the domain [1]. A citation specifies the relationship between the citing and cited articles. In the research community, citations act as an acknowledgment of the state-of-the-art work and the researcher. Therefore, the citation is deemed a gauge to measure different research aspects such as the impact factor of journals [2], H-index, I-index, research grants and funds [3], awards, ranking of researchers [4], institutions, etc. To compute such parameters, all citations are given equal weightage. In this era, researchers have asserted that not every citation is equally influential [5], and the importance of citations varies because researchers can cite an article to provide technical background, enhance the results, or compare the findings. To analyze the citations, qualitative features should accompany quantitative aspects. The research community suggests that a citation reflecting only literature knowledge and a citation that enhances the work can never be of equal importance. In research articles, citations are primarily made to provide the general background of the research work [6]. Therefore, researchers have adopted multidisciplinary approaches to discriminate between important and non-important citations. If a citation enhances the work, it is considered important; if it provides only background knowledge, it is considered non-important [7]. Researchers have developed multiple models and approaches to classify citations with respect to their reasons. This classification evolved from manually asking authors for their reasons towards automatic categorization.
Finney [8] was the first researcher who proposed an automatic model to classify citations into seven categories. The different groups of citations were merged, forming two classes such as important and non-important citations. The key approaches for the classification of citations are 1) Content-based [7], [9], 2) Mata data-based [10], 3) Count based [11], 4) Sentiment based [5], 5) Hybrid approaches [12], etc. In Meta-data based and Content-based techniques, the similarity of the corpus is calculated while the frequency of citation is considered in the count-based approach. Zhu et al. [9] performed the pioneer binary classification of citations. This work was enhanced by Valenzuela et al. [7]. The author utilized contextual features and categorized the citations into non-important and important categories. Qayyum and Afzal et al. [10] used the Meta-data approach and enhanced the results further. Wang et al. [12] introduced the syntactic and contextualbased approach. The author produced a 0.85 value of the F-measure. The produced results by the state-of-the-art need to be enhanced for potential decisions. This research presents a hybrid approach to identifying the credible citations of research articles. To experiment, two annotated datasets were used. The first dataset was collected by Valenzuela et al. [7], and the domain experts annotated this dataset. The second dataset was compiled by [10] and annotated by a Faculty member of the Central University of Science and Technology Islamabad. To classify the citations, different modules were considered, such as 1) Citation Count, 2) Similarity of research articles, 3) Section-wise weights for in-text citation, and 4) Sentiment analysis of citations. In citation count, the direct and indirect frequency of citations was considered. Furthermore, a cosine similarity algorithm was utilized to calculate the text similarity of citing and cited research articles. Further, the sentiment analysis on citation sentences was performed, and the citation was categorized as positive, negative, or neutral. Finally, a section-wise citation count was performed to assign the automatic weights to sections. Considering the section-wise citation count Neural Network and Multiple Regression algorithms were utilized to produce appropriate weights for sections. Support Vector Machine, Multilayer Perceptron, and Random Forest were considered to classify the citations. The performance of the approach was measured with Precision, Recall, and F-measure values. The produced results were compared with stat-of-the-art. The outcomes of the experiments enhanced the state-of-the-art results from 0.9 to 0.94 value of the F-measure. This research considered potential features for identifying important citations, such as section-wise in-text citation weights, sentiment analysis, and similarity of research articles. These features effectively classified the citations producing a significant value of the F-measure. As a result, the proposed approach outperformed as compared to the state-ofthe-art making a considerable contribution to the literature. II. LITERATURE REVIEW Citations are the key factors in effectively estimating the different technical aspects, such as the impact factor of journals, H-index, and I-index. Citations represent the bond between the citing and cited research articles. The esteem citation analysis is used to acquire scientific information about the author's research work. The pioneer of the domain citation analysis was Garfield [13]. 
The author worked on the correlation between citations and prize winners. Furthermore, Inhaber and Przednowek [3] developed the idea of considering the relationship between citations and research fund winners. Garfield [13] carried out research that extracted 15 reasons for citations, to find out why authors cite. These reasons were investigated by Bornmann and Daniel [2]. Moravcsik and Murugesan [6] performed citation classification based on these reasons. The authors claimed that citations are made for different reasons and that, therefore, citations are not equally influential. The classification categories of citations were reduced to 13 by Spiegel-Rosing et al. [14]. In the early era of citation analysis, the reasons for citations were manually asked of the authors. Manual citation-reason finding was infeasible for a massive corpus; therefore, the need of the hour was to classify the citations automatically. Roger Mayer et al. [15] explained that specific words or phrases accompanying citations could justify the citation category. Moravcsik and Murugesan [6] introduced the citation classification technique and reported that a single citation could belong to different categories. Finney performed the first semi-automatic citation classification [8]; the author classified the citations into seven categories. A fully automated approach for citation classification was introduced by Garzone and Mercer [16]. The authors highlighted the shortcomings of the Finney model. The citations were categorized into 35 categories using 195 lexical rules and 14 parsing rules for documents. The approach was implemented on a dataset of 20 research articles. The experiments showed better results on the known dataset, but for the unknown dataset the results were only average. Giles developed the first automatic citation indexing engine [17], later named CiteSeer. This engine is a digital library consisting of literature on computer science. Pham and Hoffmann [18] categorized the citations into four categories. Bi et al. [19] proposed a similar approach. The authors considered direct and indirect citations and stated that the proposed system achieved better results than state-of-the-art methods such as SCi and PageRank. Another automated approach was introduced by Teufel et al. [20] based on a supervised machine learning model. The authors considered different linguistic rules and classified the citations into four groups. The citation groups were further divided into 11 subgroups. The dataset consisted of 548 citations, and these citations were categorized considering 892 linguistic phrases. 90% of the dataset was utilized for training the model, and the model was tested on the remaining 10%. The results showed that 65% of the citations were neutral, with a 0.71 value of the F-measure. Sugiyama et al. [21] enhanced this idea. The authors classified citations into citing and non-citing categories. The Support Vector Machine (SVM) model was used to implement the approach, utilizing different features such as nouns, position phrases, following sentences, n-grams, and previous sentences. It was reported that context and proper nouns were significant for training purposes. Agarwal et al. [22] used SVM and Naïve Bayes approaches to classify citations into eight categories. The dataset used by the authors consisted of 43 research articles from the domain of medical science. The annotation was performed with phrases from the context of citations.
The results were presented in the form of an F-measure value of 0.76. Next, Small [23] performed the sentiment analysis of citations to understand the social process. The dataset of 20 research articles was used, consisting of words and phrases depicting the sentiments of citations. The author reported the correlation of sentiments with social and cognitive reasons. Finally, Shahid et al. [24] developed an approach to find the relevant research articles. The author used a dataset of 16404 reference pairs and stated that the articles would be relevant if the citation frequency were five or more. This approach was further enhanced by Hou et al. [25]. The author claimed that if the in-text citation frequency is more than 10, there would be vital relevancy between the citing and cited research articles. For this experiment, the dataset of 651 articles was used, and the results showed closely related references more often. To classify the citations, Balaban [26] introduced the approach of assigning more weightage to the citations of famous authors. The author also stated that a research article would be significant if cited by the high impact factor article. Dong and Schäfer [27] reduced the classes to three, considering that more classes can produce a conflict for citations. The classes were 1) Positive, 2) Negative, and 3) Neutral. Athar [28] also classified the citations into three categories. Next, the author performed the sentiment analysis on citations. Citation analysis was further implemented by Jochim and Schütze [29]. Finally, the author proposed an approach to find the citations having more impact in the research domain. For this experiment, different lexical features from context were utilized, and the dataset was collected from ACL Anthology. Classification of citations into two categories was performed by Roger Mayers [15]. The author used a dataset of 20 articles. Another classification technique based on keywords was introduced by Kumar [30]. The citations were categorized into 1) Positive and 2) negative classes after performing sentiment analysis. The dataset was collected from Association for Computational Linguistics (ACL) Anthology. Lee et al. [31] classified the citations into three categories 1) Positive, 2) Negative, and 3) Neutral. These categories were further distributed into 12 subcategories. The dataset consisted of 6,355 citations. To perform the experiment n-gram technique was used. The model achieved a 0.67 value of the F-measure. Butt et al. [32] proposed extracting the five sentences with citations. The author implemented sentiment analysis using Naïve Bayes to classify the citations. The accuracy of the model was 80%. The same research was conducted by Sula and Miller [33], and the author used the Naïve Bayes model to classify the citations. They manually extracted the citation sentences and annotated them as positive and negative citations. But the proposed model could not extract the multiple citations in a sentence. Another approach for the classification of citations was proposed by Kumar [30]. The approach was keyword-based. The citations were categorized as positive and negative citations. The dataset was collected from ACL Anthology. Zhu et al. [9] classified the citations into two categories and termed them categories influential and non-influential. The author used the machine learning algorithm Support Vector Machine for the classification. The dataset consisted of hundred research articles collected from ACL Anthology. 
Five features were used: citation frequency, similarity, position-based, context-based, and miscellaneous. This research was further enhanced by Zhu et al. [9]; the author classified the citations into important and non-important citations. The dataset collected from ACL Anthology consisted of 465 pairs of citation articles. Field experts annotated the citations. The experts 93.6% agreed on classification. For classification, twelve features were utilized, such as citation count, similarity, direct, indirect, etc. The authors utilized SVM and Random Forest models while computing the value of precision as 0.65 and Recall as 0.90. The results were further boosted by Qayyum and Afzal et al. [10], and citations were classified into two categories. For this experiment, two datasets were used. The first dataset was collected from the research work of Valenzuela et al. [7], and the other dataset consisted of 324 citation pairs. The Faculty of Computer Science, CUST Islamabad, collected and annotated this dataset. The research work was performed on metadata [34] of research articles. Eight different features were used, such as title similarity, abstract similarity, keywords similarity, etc. The author used three machine learning models SVM, KLR, and Random Forest. The author claimed the best results with Random Forest by achieving a 0.72 value of precision. Aljuaid et al. [5] enhanced this idea by considering the sentiment analysis and achieved a 0.83 precision value. After that, we [11] contributed to the citation classification domain and increased the results to 0.84. Currently, the state-of-the-art approach [12] has achieved a 0.9 value of precision, but considering the citation important is still not optimal. III. METHODOLOGY To classify the citations into two categories, such as important and non-important citations, the overall approach is given in Figure 1. This experiment consisted of four key modules 1) Similarity calculation, 2) Citation Count, 3) Sectionwise weights assignment, and 4) Sentiment Analysis of citation sentences. Machine learning algorithms Random Forest, Multilayer Perceptron, and Support Vector Machine were used. The results were evaluated using Precision, Recall, and F-measure. A. DATASET This research was conducted considering two datasets. The first dataset was collected by Valenzuela et al. [7], and the other dataset was composed by Faiza Qayyum and Afzal [10]. Valenzuela's dataset consisted of 20,527 scientific research papers. This dataset was obtained from ACL Anthology, and two experts in the domain performed the annotation of the dataset. The extracted number of citations was 106,509, and due to difficulty in labeling a massive number of citations, the annotators only considered 465 citation sets. The domain experts categorized the citations into four classes concerning their importance in articles. Further, the four groups converged into binary classes. In dataset, 0 represents non-important and important citations are reflected by 1. 14.6% of citations were annotated as non-important, and 85.4% as important, as presented in Table 1. In Valenzuela's dataset, the IDs of research articles are placed, and by using these IDs, the pdf files can be downloaded from http://www.aclweb.org/anthology/. The IDs will be linked at the end of the Anthology URL to extract the Pdf files. The annotated dataset is shown in Table 2. 
Unfortunately, while performing the scraping, IDs 1) L08-1584, 2) W07-2058, 3) L08-1267, and 4) L08-1584 were unavailable, and four IDs were unable to be scrapped. Therefore, 457 research articles were considered from the first dataset for the experiment. In the first column of Table 2, A and B represent the annotators. The following field presents IDs of cited research papers; the third column consists of citing research papers. Finally, the fourth column describes whether the citation is important or not. Here, 0 is presenting a non-important citation, and 1 is for an important citation. To increase the citation pairs, we considered another dataset consisting of 324 research articles and 311 citation pairs. The research papers were from several publishers such as IEEE, Elsevier, Science Direct, etc. This dataset was gathered by [10] and annotated by the members of the Faculty of Central University of Science and Technology (CUST) Islamabad. As described in Table 3, most of the citations were from non-important citation classes, and important citations were more minor in number. This dataset contained two spreadsheets, the first sheet comprised the titles of research articles and their IDs and the second sheet had follow-up of citation pairs, describing if the citation is important or nonimportant. The dataset D2 is presented in Table 4 and Table 5. These tables consist of Titles and citation pairs. B. PDF TO TEXT CONVERSION The research articles of dataset D1 were automatically downloaded from Anthology, considering their IDs. on the other hand, for dataset D2, we manually downloaded the articles from different publishing sites. All the research articles were in PDF format. PDF files store the text in the form of a content stream, and parsing the PDF files is a difficult task to perform. In comparison, Text files are the simplest document form and can be easily parsed. Therefore, both datasets' PDF files were converted into Text files. To perform this conversion XPDF tool was used, which is openly available on GitHub. This tool implements the R language and is considered efficient for such conversions. Therefore, using XPDF 1 the PDF files were automatically converted into Text files. C. CONTENT EXTRACTION After converting the PDF files into Text files, the content extraction from Text files was performed. Therefore, text files were parsed using an openly available tool ParCit [35], and can be downloaded from the GitHub site. This tool uses a Conditional Random Field Model and is already trained. Therefore, we do not need to train it. It performs tokenization at the sentence level and labels the tokens. It considers different research elements such as (1) Title, (2) Authors, (3) Abstracts, (4) Different Sections, and (5) citations. This tool parses the files considering their structure and can extract bibliographic portions. The text files of datasets D1 and D2 were parsed, and different elements of files were extracted. D. CITATION IDENTIFICATION It is a complex task to identify the citation location as the citation styles vary concerning publishers, but in both datasets, the citation pattern was similar. Therefore, the identification process became feasible. For citation identification, the sections were parsed, and the program automatically located the citations. First, for each citation pair, the Authors name and the publishing year were the main elements for locating the citations. Next, the structure of citations was considered as it starts and ends with round brackets. 
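A minimal sketch of this pattern-matching step is shown below. It assumes a simple "(Author, Year)" style of in-text citation, matches the first author's surname and the publication year inside round brackets, and reports the character offsets of the matches; the example text, author name and year are hypothetical, not drawn from the datasets above.

```python
import re

def locate_citations(section_text, author_surname, year):
    """Find '(Surname ... Year)' style in-text citations and return their spans."""
    # Round-bracketed citation containing the surname and the year, e.g. "(Smith et al., 2010)".
    pattern = re.compile(
        r"\(" + re.escape(author_surname) + r"[^()]*" + re.escape(str(year)) + r"\)"
    )
    return [(m.start(), m.end(), m.group()) for m in pattern.finditer(section_text)]

# Hypothetical section text and cited-paper metadata.
section = (
    "Earlier work (Smith et al., 2010) introduced the baseline model. "
    "We extend the evaluation protocol of (Smith et al., 2010) to new datasets."
)
matches = locate_citations(section, "Smith", 2010)
print(f"{len(matches)} citation(s) found in this section")
for start, end, text in matches:
    print(f"  at characters {start}-{end}: {text}")
```

Running the same matcher once per extracted section gives both the overall citation count and the section-wise counts used later.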
Then, the publishing year and name of the author in citation were compared with articles publishing year and author of articles. Finally, this pattern matching was executed for each section, and the citation locations were identified. The identified citations were further utilized in calculating overall citations of manuscript, section-wise citations and in citation sentence extraction for performing sentiment analysis. E. SECTION-WISE WEIGHT CALCULATION FOR IN-TEXT CITATION The section-wise weight calculation phase consists of two sub-modules, 1) calculating section-wise citation count and 2) Assigning weights to sections. In this step, we identified the section-wise citation and counted the citations concerning the sections. After that, we used machine learning algorithms to assign weights to sections. Four sections were considered that are a standard part of most of the research articles such as 1) Introduction, 2) Literature Review, 3) Methodology, and 4) Results and Discussions. 1) CALCULATING SECTION-WISE CITATION COUNT The frequency of citation can be termed as its occurrences in a specific research article. This approach is quantitative and simple, where the frequency of citation is considered. For example, if a research article is cited three times, the citation frequency will be three. To calculate the section-wise citations, we considered four key sections [36] 1) Introduction, 2) Literature Review, 3) Methodology, and 4) Results and Discussions. Then, considering citation pairs, the citations were computed. Table 6 presents the section-wise citation count for dataset D1. 2) WEIGHT ASSIGNMENT TO SECTIONS For automatic weight assignment, we used supervised machine learning models 1) Neural Network and 2) Multiple Regression. Many researchers used these models to calculate the weights. For example, Karakaya and Awasthi [37] proposed an approach for relative weight calculation using Multiple Regression. Multiple Regression considers a single dependent feature and multiple independent features. Similarly, Neural Network was utilized by Choi et al. [38] for landslide susceptibility analysis. Therefore, we focused on Multiple Regression and Neural networks for weight calculation. F. SENTIMENT ANALYSIS The sentiment analysis provides the intent of citation, whether the document is positively or negatively cited. The positively cited citations would be more probably important ones. In this phase, we extracted the sentences with citations and performed sentiment analysis to explore the essence of citation. This step consists of three modules. In the first module, we select the window size for citation sentences. In the second step, we performed the sentiment analysis, and thirdly, the sentiment score was calculated. 1) SELECTING WINDOW SIZE For sentiment analysis, different citation windows can be considered, such as 2-3 sentences across the citation, a single sentence after, and a single sentence before the citations, or it can be one sentence where the citation is present. We selected the window size 1, considering the sentence where citation occurred as this approach is termed better than other approaches [39]. Next, the citation sentence was extracted using in-text citation identification. The citation sentence extraction consisted of the following steps: • Identification of the citation using in-text citation identification. • The existing text before opening brace was picked till the full stop of the last sentence appeared. 
• The text after the closing bracket was picked up to the full stop ending that sentence.
• The picked sentence was stored in a comma-separated values (CSV) file.

2) CITATION SENTIMENT ANALYSIS
Sentiment analysis was used to classify the extracted text. Different machine learning algorithms exist for text classification, and their performance varies from task to task. Therefore, commonly used classifiers were evaluated, such as Multinomial Naïve Bayes, Support Vector Machine, Random Forest, K-Nearest Neighbour, and Logistic Regression. Finally, the model that produced the highest accuracy and the highest macro-averaged score was selected for classifying citations into Negative, Positive, and Neutral categories. The process is represented in Figure 2.

3) TRAINING DATASET
To train the models, a training dataset collected by [23] was used. This dataset consists of four features: 1) Source_Paper_ID, 2) Target_Paper_ID, 3) Sentiment, and 4) Citation_Text. The first feature contains the IDs of the cited research articles, while the second contains the IDs of the citing research articles. In the Sentiment feature, 'p' reflects positive, 'n' negative, and 'o' neutral or objective sentiment. The last feature contains the text of the citation sentences. This dataset was given as input to the state-of-the-art machine learning models. A summary of the training dataset is presented in Table 7.

4) PRE-PROCESSING
Preprocessing is important when manipulating text or performing text classification. In this step, we tokenized the sentences and converted the tokens to lower case. After that, the tokens were labeled using part-of-speech (POS) tagging. Finally, stop words were removed from the text, and lemmatization was performed. This preprocessing was conducted using WordNet and the Natural Language Toolkit.

5) FEATURE EXTRACTION AND SIMILARITY CALCULATION
To extract the features for sentiment analysis, we used term frequency-inverse document frequency (TF-IDF). This statistical technique measures the relevancy of a word to a document by multiplying two metrics: 1) the frequency of a word in a document and 2) the inverse document frequency of the word over a set of documents. Equation (1) is used for extracting the features. In this experiment, the top 30% [5] of features were considered. Using this technique, unigram, bigram, and trigram features were evaluated. To calculate the similarity between the features of citation sentences, cosine similarity [40] was used; cosine similarity reflects the relatedness of a text corpus, and the higher its value, the higher the relatedness of the input texts.

6) MODEL SELECTION AND SCORE COMPUTATION
Different performance measures were used to evaluate the models, such as F-measure, Precision, Recall, and mean accuracy. For validation, we utilized the 10-fold technique. Several algorithms were investigated for sentiment analysis, including Multinomial NB, Linear SVC, Bernoulli NB, Logistic Regression, and K-Neighbors. The model that produced a higher F-measure value than the other models was selected: the Linear Support Vector Classifier achieved a higher macro score than the other models and was therefore used to classify the citations into negative, positive, and neutral classes. The selected algorithm calculated the sentiment scores, and the sentences were classified into the three categories positive, negative, and neutral; a hedged sketch of such a pipeline follows.
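As an illustration of the pipeline just described, the following is a minimal sketch, assuming scikit-learn and NLTK (the paper names the Natural Language Toolkit, TF-IDF, and Linear SVC but does not publish code; the toy sentences and labels stand in for the training dataset of [23]):

```python
# Requires the NLTK 'punkt' and 'wordnet' resources (nltk.download(...)).
from nltk.stem import WordNetLemmatizer
from nltk.tokenize import word_tokenize
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.svm import LinearSVC

lemmatizer = WordNetLemmatizer()

def preprocess(sentence):
    # Tokenize, lower-case, and lemmatize; stop-word removal is left
    # to the vectorizer below for brevity.
    return " ".join(lemmatizer.lemmatize(t.lower()) for t in word_tokenize(sentence))

# Toy training data standing in for the dataset of [23]:
texts = ["This method clearly improves on (Smith, 2010).",
         "(Smith, 2010) fails on noisy input.",
         "We follow the setup of (Smith, 2010)."]
labels = ["p", "n", "o"]  # positive / negative / neutral, as in the dataset

docs = [preprocess(t) for t in texts]
# Unigram TF-IDF features; the paper keeps the top 30% of features, and a
# max_features cap would play a comparable role on real data.
vectorizer = TfidfVectorizer(ngram_range=(1, 1), stop_words="english")
X = vectorizer.fit_transform(docs)

clf = LinearSVC()  # selected in the paper for its macro F-measure
clf.fit(X, labels)

# Cosine similarity between the first two citation sentences:
print(cosine_similarity(X[0], X[1]))
```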
The frequency of citations was also considered: for each citation, the citation sentence was extracted and its sentiment score was calculated.

G. CITATION COUNT
In this step, the frequency of the citations is calculated [41], i.e., the number of occurrences of a citation in a research document. For example, if a research article is cited five times in another article, its citation count is 5. This experiment counts all the citations of the citation pairs irrespective of their sections. For the citation count calculation, ParCit [35], an openly available tool, was used. This tool considers the structure of the research article for citation extraction and counting.

H. DOCUMENT SIMILARITY SCORE CALCULATION
The similarity of a citation pair, i.e., the citing and cited articles, can be deemed important in verifying an important citation. To calculate the similarity, we considered the whole content of the research papers. For this purpose, the content was extracted from the files. Preprocessing was then performed: stop words were removed, and words were reduced to their base form by applying stemming. Further, the key terms were identified, and cosine similarity with term frequency-inverse document frequency [40] was applied to the extracted key terms. The similarity value indicates the relatedness of the research articles.

IV. RESULTS AND DISCUSSIONS
A. SECTION-WISE WEIGHT ESTIMATION
To conduct this research work, we utilized two datasets, D1 and D2. Dataset D1 was compiled by Valenzuela and annotated by two experts of the domain. This dataset consisted of 457 citation pairs, of which 388 were non-important and 69 were annotated as important. We categorized these citations with respect to their sections: 155 citation pairs were observed in the Introduction section, 131 citations in the Literature Review, and 404 and 77 citations in the Methodology and Results and Discussions sections, respectively. Overall, 767 citations were found in dataset D1. The citations in the Methodology section outnumbered those in the other sections, and the fewest citations were in the Results and Discussions section. The characteristics of dataset D1 are described in Table 8. Dataset D2 consisted of 311 citation pairs, but for IDs 32, 71, 135, 152, 156, 157, 163, 164, 175, 180, 187, 191, 192, 195, 198, 199, 216, 222, 228, 230, 235, 244, 246, 262, 266, 290, 303, 316, and 317 we were unable to download the articles. Therefore, we conducted the research with 282 citation pairs, of which 193 were non-important and 89 were important. Considering the sections of the citations, 157 citations were observed in the Introduction, 122 in the Literature Review, 116 in the Methodology, and 69 in the Results and Discussions sections of the citation pairs. The total citation frequency over all citation pairs was 464, because a citation can occur multiple times. The most citations were made in the Introduction sections, and the fewest in the Results and Discussions sections. The statistics of dataset D2 are presented in Table 9. To automatically assign the weights to the sections, we utilized two machine learning algorithms: 1) a Neural Network and 2) Multiple Regression. However, both datasets were imbalanced, as there were more non-important citations. Therefore, we applied the SMOTE filter, considering five neighboring instances; a sketch of this balancing and weight-fitting step is given below.
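A minimal sketch of this balancing step, together with the zero-intercept multiple regression used for the weight assignment (described just below), assuming the scikit-learn and imbalanced-learn libraries; the exact toolchain is not stated in the paper, and the toy feature matrix and its values are hypothetical:

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.linear_model import LinearRegression

# Hypothetical section-wise citation counts per citation pair:
# columns = [Introduction, Literature Review, Methodology, Results & Discussions]
X = np.array([[2, 0, 1, 0],
              [1, 1, 0, 0],
              [0, 2, 3, 1],
              [1, 0, 0, 0],
              [0, 1, 2, 1],
              [3, 0, 1, 2]], dtype=float)
y = np.array([1, 0, 1, 0, 0, 1])  # 1 = important, 0 = non-important

# Balance the classes with SMOTE; the paper uses 5 neighbors, but
# k_neighbors must be smaller than the minority class, so 2 fits this toy data.
X_bal, y_bal = SMOTE(k_neighbors=2, random_state=0).fit_resample(X, y)

# Multiple regression with the Y-intercept fixed at 0, so the fitted
# coefficients can be normalized into section weights that sum to 1.
reg = LinearRegression(fit_intercept=False).fit(X_bal, y_bal)
weights = reg.coef_ / reg.coef_.sum()
print(dict(zip(["Intro", "LitRev", "Method", "Results"], weights.round(3))))
```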
This filter creates virtual instances from the neighboring instances. The algorithms were trained on 60% of the data and tested on the remaining 40% [5]. In the Multiple Regression, we fixed the Y-intercept at 0, so no constant value needs to be added to the obtained weights; as a result, the weights obtained by both machine learning algorithms summed to 1. Table 10 presents the weights obtained by the Neural Network. The Neural Network assigned the maximum weight to the Results and Discussions section, and the least weight to the Literature Review section. The Multiple Regression, on the other hand, assigned more weight to the Methodology section and less to the Results and Discussions section, as shown in Table 11. Since the research community focuses more on Results and Discussions citations, we utilized the weights obtained by the Neural Network for further processing. The weights were then multiplied by the section-wise in-text citation counts. For dataset D1, the multiplication of the weights by the section-wise citation counts is presented in Table 12.

B. SENTIMENT SCORE
To compute the sentiment score, we extracted the citation sentences and classified them into positive, negative, and neutral categories. We considered several algorithms, such as Linear SVC, Multinomial NB, Bernoulli NB, K-Neighbors, and Logistic Regression, for extracting the essence of the citations. These algorithms were trained on the separate training dataset, and we evaluated unigram, bigram, and trigram features. The algorithms were compared on Accuracy, Precision, Recall, and F-measure. Table 13 presents the macro F-measure values with unigram, bigram, and trigram features: on average, the considered models produced better results with unigrams than with bigrams and trigrams. Therefore, unigrams were selected for feature evaluation. The weighted-average, micro-average, and macro-average values for Accuracy, Precision, Recall, and F-measure are presented in Figure 3. The overall results produced by Linear SVC were better than those of the other models; therefore, we utilized Linear SVC for the further computations. This model was applied to the citation sentences, which were categorized as positive, negative, or neutral. Table 14 describes the results obtained with the Linear SVC algorithm: we computed the sentiment of the citation sentences with respect to their number of occurrences. In the first row, a paper is cited a single time and was classified as Neutral, while the last row shows a paper cited twice with different sentiments: once positively and once as neutral.

C. CLASSIFICATION OF CITATIONS WITH ALL COMBINED FEATURES
To classify the citations, we combined all the considered features: 1) citation count, 2) content similarity, 3) section-wise in-text citation count, and 4) citation sentence sentiment score. The section-wise in-text citation count feature was further divided into four features: 1) Introduction citation frequency, 2) Literature Review citation frequency, 3) Methodology citation frequency, and 4) Results and Discussions citation frequency. Similarly, the sentiment scores were divided into three sub-features: 1) positive, 2) negative, and 3) neutral citation frequencies. A sketch of how such a combined feature vector can be assembled and classified is given below.
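A minimal sketch of the combined-feature classification, assuming scikit-learn; the nine-column layout mirrors the features enumerated above, and the data values are hypothetical:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score

# One row per citation pair; columns:
# [citation_count, content_similarity,
#  intro_freq, litrev_freq, method_freq, results_freq,
#  pos_freq, neg_freq, neutral_freq]
X = np.array([[5, 0.62, 2, 1, 1, 1, 2, 0, 3],
              [1, 0.18, 1, 0, 0, 0, 0, 0, 1],
              [3, 0.47, 0, 1, 2, 0, 1, 1, 1],
              [1, 0.10, 0, 1, 0, 0, 0, 0, 1],
              [4, 0.55, 1, 0, 2, 1, 3, 0, 1],
              [1, 0.22, 1, 0, 0, 0, 0, 1, 0]])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = important citation

# 60/40 train/test split, as used for the weight-assignment models [5]:
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.4, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print(precision_score(y_te, clf.predict(X_te), zero_division=0))
```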
We then combined all these features and classified the citations using the machine learning algorithms Support Vector Machine, Random Forest, and Multilayer Perceptron; the results are shown in Figure 5. On both datasets, the Random Forest model outperformed the other candidate models. For dataset D1, the proposed approach achieved a precision of 0.94, and for D2 it was 0.76. There is a difference between the results on D1 and D2: the articles in D1 mostly contain multiple citations of a single article, whereas in D2 most articles contain a single citation of a single article, which explains the higher results for D1.

D. COMPARISON
Using dataset D1, the research work of Valenzuela et al. [7] achieved a precision of 0.68 by utilizing metadata- and content-based features. This work was further enhanced by Qayyum and Afzal [10], who raised the precision to 0.72 using metadata-based features only. Similar research was conducted by Aljuaid et al. [5], who achieved a precision of 0.83 with the Random Forest algorithm. We also contributed to important citation identification in our previous research work, in which the precision was enhanced to 0.85. Finally, the state-of-the-art approach of Wang et al. [12] further improved the results and reached a precision of 0.91. The comparison of the approaches to date is presented graphically in Figure 6. On dataset D2, the earlier approach achieved an F-measure of 0.69, and in our previous contribution we were able to raise the value to 0.72. In the proposed approach, we added sentiment features, which improved the results further. The comparison for dataset D2 is presented in Figure 7. The proposed approach utilized section-wise in-text citation weights, sentiment analysis, and article similarity features, a combination that had not been considered by any previous approach. Therefore, this approach performed better than the previous state-of-the-art approaches and achieved a higher F-measure value.

V. CONCLUSION
Citations are considered a scientific measure to evaluate the significance of research work. They are used for computing multiple aspects of research, such as the impact factor of journals, the ranking of researchers, the ranking of institutions, etc. Yet in the criteria for computing these measures, all citations are counted with equal importance, although the research community has concluded that all citations are not equally important. The reasons for citations should be incorporated, as a citation providing background and another extending the work cannot be of the same worth. Therefore, researchers have developed different approaches to distinguish important citations from non-important ones. These state-of-the-art approaches are content-based, bibliography-based, or metadata-based; however, their accuracy is still insufficient for making reliable decisions. In this work, we have introduced an approach based on four sub-modules: 1) automatically assigning appropriate weights, using a Neural Network, to the sections where the citations were made, 2) sentiment analysis of citation sentences, 3) calculating the similarity of research articles, and 4) utilizing the overall citation count. We used two datasets, D1 and D2, collected by Valenzuela et al. and Faiza et al., to perform the citation classification; these were used earlier in state-of-the-art approaches.
Multilayer Perceptron, Support Vector Machine, and Random Forest were utilized for the classification, and the Random Forest algorithm achieved the highest value. The results revealed that the proposed approach achieved a precision of 0.94, higher than any other approach.

SHAHZAD NAZIR received the M.S. degree in computer science from the National Textile University, Faisalabad, where he is currently pursuing the Ph.D. degree in computer science. His research interests include recommending relevant documents, information systems, deep learning, and natural language processing.

MUHAMMAD ASIF received the M.S. and Ph.D. degrees from the Asian Institute of Technology (AIT), in 2009 and 2012, respectively, on an HEC foreign scholarship. During that time, he was a Visiting Researcher with the National Institute of Information and Communications Technology, Tokyo, Japan. He has worked on several projects, including the air traffic control system of the Pakistan Air Force. He is currently a Tenured Associate Professor of computer science with the National Textile University, Faisalabad. Before this, he was a Research Scholar with the Department of Computer Science and Information Management, Asian Institute of Technology, Thailand. He also serves as an Associate Editor for IEEE ACCESS and is a reviewer for several reputed journals; he has authored several research papers in reputed journals and conferences. He is also a permanent member of the Punjab Public Service Commission (PPSC) as an Advisor, and a Program Evaluator at the National Computing Education Accreditation Council (NCEAC), Islamabad.

SHAHBAZ AHMAD received the M.S. degree in computer science from the National Textile University, Faisalabad. He is currently pursuing the Ph.D. degree in computer science with the Capital University of Science and Technology. He is working as a Lecturer with the Department of Computer Science, National Textile University, and has published research papers in journals and conferences.

HANAN ALJUAID is currently working as an Associate Professor with the Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University (PNU), Riyadh, Saudi Arabia.

RIMSHA IFTIKHAR received the M.S. degree in computer science from the National Textile University, Faisalabad. She has research and teaching experience. Her research interests include recommending relevant documents, information systems, deep learning, and natural language processing.

ZUBAIR NAWAZ is working as an Assistant Professor of data science with the University of the Punjab. He has several years of teaching and research experience.

YAZEED YASIN GHADI is currently a Professor and the Director of the Software Engineering and Computer Science Programs, Al Ain University, Abu Dhabi Campus. He has extensive experience in teaching, research, and publication, and has published his research in top journals and conferences.
Stiffness Identification of Foamed Asphalt Mixtures with Cement, Evaluated in Laboratory and In Situ in Road Pavements

The article presents the possibilities of using foamed asphalt in the recycling process to produce the base layer of road pavement constructions in Polish conditions. Foamed asphalt was combined with reclaimed asphalt pavement (RAP) and a hydraulic binder (cement). Foamed asphalt mixtures with cement (FAC) were made based on these ingredients. To reduce stiffness and cracking in the base layer, foamed asphalt (FA) was additionally used in the analyzed mixes containing cement. The laboratory analyses allowed the stiffness and fatigue durability of the conglomerate to be estimated. On the experimental section, deflection measurements were made, the moduli of the pavement layers were calculated, and their fatigue durability was determined. As a result of the research, new fatigue criteria for FAC mixtures and correlation factors between the stiffness moduli and fatigue durability in situ and the results of laboratory tests were developed. It is anticipated that FAC recycling technology will provide durable and safe road pavements.

Introduction
Recycling of road pavements makes it possible to reuse road materials that have been refined with binders such as asphalt or cement. The main reasons for recycling are the decreasing availability of stone raw materials, the reduction of aggregate transport costs (and thus the relief of the road and rail network), and the liquidation of landfills of damaged road pavements. One recycling solution is the use of foamed asphalt as a binder for recycled aggregates. Foamed asphalt is created by injecting water through a nozzle into a binder heated to a temperature of approximately 170 °C; as a result, the volume of the asphalt increases 15 to 25 times, which in turn allows the smallest grains of the mix to be coated. The optimum amount of water for foaming, depending on the type of binder, ranges from 2.0%-3.5% [1][2][3]. There are also attempts to use ethanol instead of water to foam the asphalt binder [4]. Foamed asphalt (FA) is used in cold, deep recycling technology during the modernization of road pavement construction and in the modern technology of warm mix asphalt (WMA), in which the production temperature of asphalt materials is reduced by about 50 °C compared to the traditional production technology of hot mix asphalt (HMA). All over the world, attempts are made to use foamed asphalt in road engineering. Foamed asphalt is used primarily, through recycling, to make the sub-base layers of road pavement constructions. According to [5], tests have shown that the stiffness modulus of a mixture with foamed asphalt depends on both the stress state and the test temperature. On the basis of triaxial tests, it was found that for mixes without active filler, the hardening of the mixture is generally independent of temperature. According to the Marshall stability results, the water content of the foam has no significant influence on the performance of the foamed asphalt mixture [6]. The results of the study [7] suggest that foamed asphalt cold-recycling mixtures have a high modulus and a small temperature shrinkage stress, reducing early damage caused by pavement cracks. In Saudi Arabia, foamed asphalt is mainly used for the production of the sub-base layer and the upper base layer, made of reclaimed asphalt pavement (RAP) [1]. In foamed asphalt production processes, asphalt of high penetration, 160/220, is often used [8].
In Indonesia, attempts are being made to use foamed asphalt in asphalt concretes containing only a mineral mixture; foamed asphalt replaces the regular road binder [9]. These asphalt mixtures based on foamed asphalt can be used in the upper base layer of road pavements. Mixtures with RAP can be beneficial to the moisture resistance of warm mix asphalt (WMA) and hot mix asphalt (HMA) mixtures; the moisture resistance of asphalt mixtures increases with increasing RAP content [10]. In addition, the results presented in [11] indicate that the foam processing slightly reduced high-temperature performance and temperature sensitivity while improving the resistance to fatigue cracking. In the USA, foamed asphalt is commonly used for the construction layers of road pavements. An innovative solution is the use of foamed asphalt to stabilize foundations based on ashes [12]. In Johnson County, Iowa, USA, RAP temperatures were found to have a significant effect on the indirect wet tensile strength of asphalt foam blends produced on a cold-recycling site. As the RAP temperature increased, the optimal foamed asphalt content decreased; this is due to the activation of the asphalt from the waste material at a higher temperature, which facilitates compaction [13]. Furthermore, the type of asphalt binder contained in the recovered asphalt material has a significant impact on the change in the complex modulus of the recycled mixture [14]. At the 8th Conference on Asphalt Pavements for Southern Africa in Sun City, the authors of [15] and the authors of the research presented in [1] showed an increase in indirect tensile strength (ITS) along with an increase in cement content. Additionally, mixtures containing foamed asphalt showed higher strength (ITS) values than mixtures containing asphalt emulsion. For the same mixtures, a similar relationship was observed when determining the stiffness modulus [16]. Furthermore, the authors' experience shows that mixtures containing foamed asphalt have higher durability than mixtures containing asphalt emulsion; this is caused by the different properties of these binders. The content of asphalt binder has a significant effect on the wet and dry ITS values of materials stabilized with foamed asphalt [17] but a smaller effect on materials stabilized with asphalt emulsion [1,2,15]. Due to the climatic conditions in Central European countries, road pavements should be water- and frost-resistant. On the basis of research on recycled foamed asphalt pavement, it was found that the use of foamed asphalt improves its tensile strength and the mechanical properties of the pavement [18][19][20][21]. Moreover, the use of foamed asphalt in mixtures ensures higher water and frost resistance, a higher creep stiffness modulus, and higher resistance to plastic strain than the use of asphalt emulsion [2,19]. The optimal content of foamed asphalt and hydraulic binder (Portland cement) for base-layer mixtures gives the desired physical (air void content) and mechanical (wet and dry ITS) parameters [22][23][24]. On the test section of a heavily trafficked Greek highway pavement presented in [24], the results of strain and deformation in a layer made of foamed asphalt and recycled material showed that the critical in situ stress in the FA layer was lower than the maximum expected tensile stress threshold; this indicates improved fatigue properties for this type of mixture. Water-based disintegration asphalt emulsions are mainly used for the recycling of asphalt layers in Poland.
As a result of mixing reclaimed asphalt pavement (RAP), cement binder, and asphalt emulsion, a so-called mineral-cement emulsion mixture (MCE) is created. The innovative use of foamed asphalt in mineral-cement mixtures (FAC) in place of asphalt emulsion may have a positive effect on the properties of renovated road pavements and on the process of building them. The article presents alternative possibilities for using foamed asphalt to produce the base layer. Foamed asphalt was combined with a mineral mix (reclaimed asphalt pavement (RAP) plus possible material for improving gradation) and a hydraulic binder (cement). On the basis of these ingredients, foamed asphalt mixtures with cement were made and marked with the symbol FAC.

Materials and Methods
Materials from the recycled degraded pavement (test section) were used in the research process. For this purpose, RAP from the degraded wearing and base course layers as well as crushed granite stone from the base were used. Based on control laboratory tests of density, bulk density, Marshall stability, and flow, the production technology (composition design) of the FAC mixtures was proposed. The durability of the future road pavement is significantly affected by the stiffness and fatigue life of the FAC mixture, so the stiffness and fatigue durability of the FAC-type mixtures were determined in laboratory conditions. Then, the reconstruction of the recycled pavement layers and the construction of the experimental section's pavement layers began. Foamed asphalt, with the addition of cement binder, was used in the base layer. Nevertheless, the presence of too-rigid mixtures in the base layers can cause the formation of shrinkage cracks, which copy into the asphalt pavement layers in the form of reflective cracks. The use of foamed asphalt in combination with cement allowed the stiffness and shrinkage cracking of the mixture to be limited in terms of its use in the base of the road pavement. Asphalt layers (asphalt concrete and SMA) were laid on the base layer of the test section made of the FAC mixture. After the pavement of the experimental section was completed, deflection measurements were made, the pavement layer moduli were determined, and its fatigue durability was determined. As a result of the research, an attempt was made to correlate the stiffness and fatigue life determined in the laboratory with the parameters of the FAC mixtures of the test section. New fatigue criteria were introduced for FAC mixtures used in the base layers. Laboratory tests of the stiffness modulus and fatigue life, together with the developed fatigue criteria, can be used to estimate the durability of future road pavement constructions based on base layers of FAC mixes. The flow chart of the research approach is shown in Figure 1. The design of the mixture should be correlated with the design of the pavement construction and the organization of the works, depending on the method of its implementation. The procedure for designing the composition of the FAC mixture for rebuilding an existing road requires the following steps: the mineral mixture may consist of the reclaimed asphalt (RA) material itself, obtained directly from milling the pavement or from crushing lumps from the demolition of the pavement, if it meets the requirements for grading according to [25]; without this condition, the RA material should be improved with a mineral aggregate.
In the research process, materials from the recycling of the degraded pavement (test section) were used in the form of RAP, stone material from the base, and 0/31.5 mm material for improving the mixture gradation, of the igneous rock fraction gabbro. Grading boundary curves for mineral-cement emulsion mixtures (MCE) were adapted for the research purposes [25]. The recovered mixture (with an asphalt content of 5.05%) did not meet the conditions for gradation; therefore, the gradation-improving material, 0/31.5 mm igneous rock (gabbro), was used. Table 1 describes the grading of the mineral mixture, taking into consideration the grain size and the percentages of the individual components. The optimal cement addition was estimated on the basis of the compressive strength test at various foamed asphalt contents, according to [26]. FAC mixtures embedded in the base layers should be characterized by susceptibility to deformation on the one hand, and by the rigidity needed to carry the strains from the higher layers on the other. The use of cement in typical asphalt mixtures is usually limited to 6% [27], which is why the laboratory tests for the FAC mixtures in this work were carried out with cement contents of 2.38%, 3.38%, and 4.38%. The optimal water content needed to foam the asphalt binder is about 2-3% [1][2][3][28]. According to [29], in the foaming process, the injection of a higher foaming water content (FWC) results in a higher volume expansion but lower stability of the foamed asphalt at a given foaming temperature and air pressure. The amount of water used here that allows optimal foaming of the binder is 2.5%. Due to the cement binder present in the mixture, additional water content was necessary for its proper compaction and setting. The water content of the mineral mix with cement that guarantees its maximum compaction was determined by optimization according to the Proctor method (method II), based on [30]. The optimum moisture content of the mix was 6.35%. The FAC mixtures use Nynas Nyfoam 190, a high-penetration asphalt with a penetration of 160-220 [31]. Estimating the content of asphalt binder that allows for the maximum stability of the mixture is possible on the basis of the Marshall test [32]. However, FAC mixtures should not be too susceptible to deformation, but also not too stiff, because of the possibility of cracking. This condition was adopted due to the need to reduce the shrinkage of the mixture and the formation of cracks in it that could copy into the upper layers of the road pavement. For this stability, the percentage content of foamed asphalt that should be added to the mixture was estimated. Two levels of asphalt content, 3.5% and 5.5%, were used in the analyzed mixtures. Based on the optimization of the foamed asphalt and cement contents, a laboratory composition of the FAC mixtures was proposed, see Table 2. In order to reduce pavement damage and increase its durability, it is important to verify the material parameters of the individual pavement layers and carry out the necessary tests, depending on the operating conditions of these materials. As part of the study, the FAC mixtures (Table 2) that were applied to the base layers of the pavement were tested. The qualitative evaluation of the proposed mixtures, collected from recycled old road pavements, consisted of several compatibility tests. The analyses included in the research program are shown in Table 3. All laboratory samples were compacted using the Marshall method with 75 blows per side [25].
The presented parameters of the FAC mixture are necessary for the correct execution and compaction of the road pavement layer of the test section. The proper load-bearing capacity of the base layer determines the increased durability of the entire road pavement construction. The main research element was the analysis of the stiffness and fatigue life of the FAC mixtures. Four-point beam bending tests (4PB-PR) were used to perform them: the complex modulus was determined according to [36] at −10 °C, +10 °C, +30 °C, and +55 °C, and the fatigue life according to [37] at +10 °C. The loading frequency in the 4PB-PR tests was 10 Hz. The device for testing stiffness and fatigue life is shown in Figure 2. The fatigue machine allows for the simultaneous testing of changes in the stiffness of the material being analyzed, determining the so-called complex modulus. The tests use prismatic beams with nominal dimensions: effective length (beam span between supports) L = 357 mm, b = 60 mm, h = 50 mm, as seen in Figure 3. The study of the complex stiffness modulus of the FAC mixture was carried out by the constant-strain method with ε = 50 × 10⁻⁶ m/m. During the fatigue tests, using the constant-strain method, 5 load levels were adopted in the form of applied strains: ε = 500 × 10⁻⁶ m/m, ε = 400 × 10⁻⁶ m/m, ε = 200 × 10⁻⁶ m/m, ε = 100 × 10⁻⁶ m/m, and ε = 50 × 10⁻⁶ m/m. The number of load cycles N_f/50 was recorded until the complex stiffness modulus dropped to 50% of its initial value (the conventional fatigue criterion). After the laboratory research, the trial field phase was implemented. The FAC mixtures designed in the laboratory were built into the base layer of the road pavement section. The stiffness modulus and fatigue durability of the FAC mixture layer were estimated, and the results of the laboratory examinations were compared with the results of the bearing capacity tests of this test section.

Basic Research
The designed FAC mixtures are intended for the lower base layer of the test section pavement. Tests were conducted to determine the basic properties of the mixtures: density, bulk density, air void content, Marshall stability, flow, and compressive strength of the FAC mixture used. Based on the optimization process, the mixture C3A3 was chosen for further analysis and for the trial field phase. The results of the laboratory tests and analyses carried out for the optimal mixture C3A3 are given in Table 4; the results are mean values calculated from a minimum of three representative samples for each feature. The compressive strength of samples of the FAC mixture increases as the curing period increases and takes values from 1.5 to 2.5 MPa over 7 to 28 days. This is influenced by the presence of the added cement and its hydration time in the mixture. The Marshall stability of the FAC mixtures shows a similar relationship to the compressive strength: its value increases with the age of the sample, in the range of 10.0-12.5 kN. The increase in the stability value and the decrease in the deformation value over time suggest a significant effect of the cement added to the mixture.

Stiffness and Fatigue Durability in Laboratory Conditions
On the basis of the analyses, the values of the complex stiffness modulus of the innovative material were determined over a wide temperature range, see Figure 4. The complex stiffness modulus was determined as a mean value from four samples for each applied temperature. As the temperature increases, the stiffness of the FAC mixtures decreases. A short sketch of how the N_f/50 fatigue criterion recorded above can be detected from a modulus series is given below.
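The conventional N_f/50 criterion amounts to finding the first load cycle at which the recorded complex stiffness modulus falls below half of its initial value. A minimal sketch, with a synthetic and purely hypothetical modulus decay (the laboratory series themselves are not published):

```python
import numpy as np

def fatigue_life(modulus_per_cycle, drop=0.50):
    """Return the first load cycle at which the complex stiffness modulus
    falls to `drop` times its initial value (N_f/50 for drop=0.50,
    N_f/30 for drop=0.30), or None if it never does."""
    e0 = modulus_per_cycle[0]
    below = np.nonzero(modulus_per_cycle <= drop * e0)[0]
    return int(below[0]) + 1 if below.size else None

# Synthetic, exponentially decaying modulus series for illustration only:
cycles = np.arange(200_000)
modulus = 2883.0 * np.exp(-cycles / 150_000.0)  # MPa, hypothetical decay shape
print(fatigue_life(modulus, drop=0.50))  # conventional criterion N_f/50
print(fatigue_life(modulus, drop=0.30))  # modified criterion N_f/30
```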
For a set average annual temperature of +10 °C, the complex stiffness modulus of the FAC mixture is about 2883 MPa. Temperature changes affect the stiffness of the FAC mixtures, but the gradient of the changes is smaller than for conventional asphalt mixtures. In the fatigue life analysis, the 4-point bending (4PB) method was used: a dynamic method at constant strain. To demonstrate the nature of the behavior of the recycled FAC material and to determine the fatigue criteria, tests were carried out on the FAC mixtures under various environmental conditions. It was assumed that reaching the number of load cycles N_f/50, at which the complex stiffness modulus decreases to 50% of its initial value, is equivalent to the destruction of the sample, at which point the test was discontinued. The N_f/50 criterion applies to mineral-asphalt mixtures according to [38]. The fatigue tests yielded data that allowed the estimation of the FAC mix fatigue curve, Figure 5. The fatigue curve is the relationship of fatigue life (on a logarithmic scale) as a function of the applied load level (strain). Fatigue life was determined as a mean value from six samples for each applied strain level. From the slope of the fatigue curve, it follows that the destructive strain for 1 million load cycles is ε6 = 84.3 × 10⁻⁶ m/m. A different character of the decrease in the complex stiffness modulus was found in comparison with typical asphalt mixtures during the fatigue tests, see Figure 6 (the applied load level is marked in green, while red indicates the decrease in the complex stiffness modulus). FAC mixtures lose a significant part of the complex stiffness modulus relatively quickly and may show local decreases in stiffness, but they are still able to carry the applied load in the form of strain. This is because microcracks appear in the analyzed material; these do not disqualify it for use in the road pavement base layers, in which such cracks are acceptable [39,40]. All tested mixtures behaved similarly in the fatigue test. Given these characteristics of the FAC mixtures, the allowable decrease in the complex stiffness modulus should be modified to a level of approx. 30% of the initial value. For the modified fatigue criterion N_f/30, i.e., the number of load cycles to reach a decrease in the complex stiffness modulus to 30% of the initial value, the destructive strain level in the millionth load cycle, ε6, was estimated. Additional fatigue life tests of the FAC mixtures were carried out for this purpose with the modified criterion N_f/30. To obtain the fatigue curves, fatigue tests were performed at the strain levels ε = 200 × 10⁻⁶ m/m, ε = 180 × 10⁻⁶ m/m, and ε = 170 × 10⁻⁶ m/m. On the basis of the fatigue characteristics of the material, the destructive strain in the millionth load cycle, ε6, was estimated. The results of the laboratory tests are shown in Figure 7.
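Fatigue curves of this type are conventionally summarized by a strain-based power law. The following is a hedged reconstruction for orientation only: the exponent m is a fitted material constant that the extracted text does not preserve, and the authors' Equation (1) additionally folds in the asphalt content A, the cement content C, and a temperature correction via E10/E_T, none of which are shown here.

```latex
% Generic strain-based fatigue law implied by the log-linear fatigue curve:
%   N_f          - number of load cycles to the adopted failure criterion
%   \varepsilon_6 - destructive strain at 10^6 cycles (e.g. 168.7e-6 for N_f/30)
%   m            - fitted slope parameter (assumed, not reported in this extract)
N_{f} \;=\; 10^{6}\left(\frac{\varepsilon_{6}}{\varepsilon}\right)^{m},
\qquad\text{equivalently}\qquad
\varepsilon \;=\; \varepsilon_{6}\left(\frac{N_{f}}{10^{6}}\right)^{-1/m}
```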
On the basis of the obtained fatigue characteristics (the fatigue equation in Figure 7), the permissible level of destructive strain ε6 in the millionth load cycle was estimated at ε6 = 168.7 × 10⁻⁶ m/m. Similar analyses were made for all FAC mixtures, characterized by different cement and asphalt contents. The obtained values of the destructive strain ε6 are presented in Table 5. While analyzing the fatigue lives of the tested mixtures, it was established that with an increasing amount of asphalt and a decreasing amount of cement the mixtures became more flexible: the destructive strain ε6 increased. Using the conducted fatigue tests, the final form of the fatigue equation, described by Equation (1) and taking into account changes in the foamed asphalt and cement contents, was obtained, where: ε - acceptable tensile strain; ε6 - tensile strain at which the sample is destroyed after 10⁶ load cycles under the following test conditions: 4-point beam bending, temperature +10 °C, frequency 10 Hz; N_f/30 - number of load cycles to reach a decrease in the complex stiffness modulus to 30% of the initial value [-]; A - asphalt content [%]; C - cement content [%]; E10 - stiffness modulus of the mixture at +10 °C; E_T - stiffness modulus of the mixture at temperature T. The laboratory tests and the developed fatigue equation can be used to predict the fatigue life of the structural layers of a road pavement, taking into account the shift factors.

Road Pavement Technology
Because the materials embedded in the existing road pavement had lost their bearing capacity and fatigue durability, it was proposed to rebuild this pavement in cold recycling technology from the existing materials and to produce a FAC mixture on their basis. The C3A3 mixture was built into the base layer of the road pavement. Before the reconstruction of the test section, the road had numerous damages, which are shown in Figure 8. A mobile deep recycler was used to produce the recycled FAC mixture, Figure 9. To verify the compaction of the base layer, the compaction index was checked. This indicator was determined by comparing the bulk density of samples formed from the FAC mixture in the laboratory with the bulk density of samples taken from the finished pavement layer. The compaction index was 0.98, which is a satisfactory value for base layers [25]. After constructing the recycled layers, the surface of the experimental section was finished with asphalt layers: a base layer and a binder course layer of asphalt concrete with a high stiffness modulus (ACWMS), and a wearing course layer of the SMA mixture, see Figure 10.
The design of the innovative road pavement structure assumed the following layering: According to [38], all the implemented mineral-asphalt mixtures met the design requirements for the layers of flexible road pavement constructions in Poland. Because of the applied technology of the road base made of the FAC-type mixture, a durability forecast for the road was carried out before it was opened to traffic. For this purpose, measurements of the pavement deflections were carried out, together with the identification of the moduli of the layers and the subgrade.

Identification of Layer Moduli
The measurements of the pavement deflections were made on the street pavement using an FWD (Falling Weight Deflectometer). This device induces a force impulse using a weight falling onto a measuring plate (through a specially designed spring system). The set of displacements determined at a given measuring station creates the so-called displacement (deflection) bowl, which is then used to identify the moduli of the layers and the subgrade. The FWD deflection measurements were carried out during the construction of the section on the following layers: the FAC layer, the ACWMS layer, and the SMA wearing course layer. The results of the deflection measurements were used to estimate the layer moduli and the subgrade moduli of the road pavement construction. The calculation model presented in Figure 13 was adopted for the identification of the moduli of the FAC layer. It is an elastic two-layer system, i.e., a layer resting on an elastic half-space, whose layers model the layout of the pavement structure: the h2 layer models the FAC, and the h1 layer models an improved subgrade. The FAC layer is described by the stiffness modulus E2 and Poisson's ratio ν2; the subgrade is described by the modulus of elasticity E1 and Poisson's ratio ν1. The thickness was assumed from the in-depth identification as h2 = 0.20 m. It was assumed that Poisson's ratio does not have a significant impact on the state of stress and strain, and it was taken as constant, i.e., ν2 = 0.3 and ν1 = 0.35. The essence of the identification is to minimize the objective function described by Equation (2) (a hedged reconstruction of its likely form is given below), where: w_j - theoretical deflections calculated in the model; u_j - measured deflections; k - the number of deflections measured at one point, forming the deflection bowl. Of course, the number of layers n should be smaller than the number of points k forming the deflection bowl. The calculations were made using the CZUG program [41]. As a result of the identification, the values of the moduli (E_i) of the FAC layer and the subgrade were obtained. The deflection measurements on the FAC layer were made at different temperatures. The obtained modulus values for the subgrade and the FAC layer are summarized in Table 6 for a 95% confidence level. After the FAC layer was laid and the FWD tests were done, the mineral-asphalt mixture (mma) layers were laid, after which the deflection bowl measurements were again carried out using the FWD deflectometer. Figure 14 shows a model of the pavement structure after laying the mma layers. It is a three-layer system: two layers resting on a half-space. The h3 layer is the mma layer, described by the stiffness modulus E3 and Poisson's ratio ν3; the h2 layer is the FAC layer, described by the stiffness modulus E2 and Poisson's ratio ν2; the subgrade is described by the modulus E1 and Poisson's ratio ν1. It was assumed that h3 = 0.24 m, h2 = 0.20 m, ν3 = ν2 = 0.35, and ν1 = 0.3.
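Equation (2) itself did not survive extraction. Assuming the standard least-squares form used in backcalculation of layer moduli, it plausibly reads as follows; this is a reconstruction from the variable list above, not the authors' verbatim equation:

```latex
% Least-squares misfit between theoretical and measured deflections:
%   w_j(E_1,\dots,E_n) - deflection computed in the layered-elastic model
%   u_j                - measured deflection at sensor j
%   k                  - number of sensors forming the deflection bowl
F(E_1,\dots,E_n) \;=\; \sum_{j=1}^{k}\bigl(w_j(E_1,\dots,E_n) - u_j\bigr)^{2}
\;\longrightarrow\; \min
```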
The modulus values were calculated on the basis of the deflection bowl measurements with the FWD deflectometer and optimization calculations. The results are summarized in Table 7; the measurements were taken at a temperature of approx. +10 °C. Figure 15 summarizes the test results and compares the FAC mixture moduli obtained in the laboratory and in the in situ layers. Analyzing the results from Tables 6 and 7 and Figure 15, the stiffness modulus determined in the laboratory is comparable with the moduli of the material used in situ, and a good correlation between the field and laboratory tests was obtained over the analyzed temperature range. The conversion factor of the moduli, "k", is defined by relationship (4), which describes the ratio of the stiffness moduli determined in the laboratory to the values of the field moduli as a function of temperature, i.e., k(T) = E_laboratory(T) / E_in situ(T). The "k" conversion factor takes values from 0.84-0.87 depending on the temperature, Table 8.

Fatigue Durability of the Pavement Layer Structure
Using the obtained modulus values and the model presented in Figure 14, the fatigue life of the structure was calculated for the designed thicknesses. Equation (5), developed by the authors, describes the criterion for the FAC mixture, where: ε - strain in the FAC layer; ε6 - strain at the millionth load cycle, 0.000168 was adopted; N_f/30 - number of load cycles to reach a decrease in the complex stiffness modulus to 30% of the initial value; A - percentage of new asphalt in the FAC layer, 4.16% was adopted; C - percentage of cement in the FAC layer, 3.38% was adopted; k - conversion factor from the moduli determined in the laboratory to the in situ moduli, for a temperature of 10 °C; f1 - a shift factor dependent on the stiffness of the FAC mixture, in the range 0.8-1.0, adopted as 0.81; f2 - a shift factor dependent on the bearing capacity of the subgrade, in the range 0.8-1.0, adopted as 0.83; f3 - a shift factor dependent on the heterogeneity of the FAC mixture, in the range 0.8-0.95, adopted as 0.92. For the identified moduli and the model from Figure 14, the strains at the bottom of the FAC layer were calculated; ε = 0.0000412 was obtained. Using Equation (5), a fatigue life of N = 29,500,000 axles of 115 kN was calculated. The required minimal number of load axles for this road pavement is 12,000,000 axles of 115 kN.

Conclusions
The conducted analyses of the FAC mixtures (foamed asphalt mixtures with cement) showed that:
• The proposed conglomerates can be used in road base layers.
• Identification tests of the embedded layers confirmed the results of the laboratory tests. The presented results indicate that the innovative technology allows for the use of recycled materials, which significantly speeds up repairs.
• During the research work, a technology for modernizing degraded road pavements was developed. The cold recycling technology based on FAC mixtures was implemented to provide adequate load-bearing capacity and fatigue durability of the new road pavement construction.
• The results of the in situ module evaluation correlated with the results of the laboratory tests.
• A new fatigue criterion for FAC mixtures and a correlation factor between the stiffness moduli determined in the laboratory and in situ were developed.
• Stiffness and bearing capacity tests showed that the pavement construction made with the innovative technology, i.e., recycled material bound with foamed asphalt and cement, has sufficient bearing capacity and fatigue life.
• As a result of the bearing capacity analyses, it was found that the layers of the test section meet the requirements for safe operation and that their durability is satisfactory. Thus, the pavement of the test section could be put into service.
Trace Driven Simulation of Cache Memories

This thesis evaluates an innovative cache design called the prime-mapped cache. The performance analysis on various applications and programs shows that the prime-mapped cache performs better than conventional cache organizations, and the performance gain will increase as the speed gap between processors and memories grows. The exact cache behavior of numerical applications, namely matrix multiplication and the SPEC benchmarks, is studied by varying cache parameters such as cache size, line size, and associativity. Traces are collected from these programs, and miss ratios for instruction and data accesses are compared. Based on the experimental results, and depending on the algorithm used, the miss ratios of the prime-mapped cache are found to be 50 to 100% lower than for conventional caches. Depending upon the speed difference between processors and memories, these algorithms can run 30% to 2 times faster on the prime-mapped cache than they do on conventional caches.

Performance Analysis
Characterization of machines, by studying program usage of their architectural and organizational features, is an essential part of a design process. In order to evaluate the performance potential of any design, performance analysis of various architectural approaches has to be carried out. Evaluating the performance of cache-based computer systems is a difficult task because of the complexity of program behaviors; the locality properties of an application and its reuse factors have to be considered. Traditional performance evaluation can be broadly classified into three categories: analytical modeling, simulation, and measurement.

Analytical Modeling
Analytical models provide a quick and insightful performance estimate of a given design [18]. By varying different input parameters over a wide range, analytical modeling is a good approach for a comparative study of the performance of different alternatives of any design, particularly cache designs [19]. But the analysis and numerical results of analytical models are to some extent hypothetical and are not meant to predict the performance of any realistic computer system. In most cases, analytical models are used in the initial development of a new design, whereas event-driven and trace-driven simulations are used to validate the design.

Measurement
Measurement is the only fully accurate and realistic performance evaluation. Since the cache design space is incredibly diverse, with tens of independent design variables per level in a memory hierarchy, it is impossible to explore the entire design space in one study. Also, the flexibility is limited, which constrains the design methodology. To be able to do measurement, the system needs to exist; measurement is therefore unsuitable for new designs.

Event-Driven Simulation
Event-driven simulation simulates the activities of a system by generating random events according to a given distribution. It can be carried out in various modes, of which time-driven and execution-driven simulation are the most widely used [7]. Time-driven simulation is synchronous in the sense that all system activities occur at discrete time intervals, which are processor cycles.

Trace-Driven Simulation
In trace-driven simulation, one or more application programs are executed, usually interpretively, and a complete trace is collected from each. The trace typically contains all of the memory addresses referenced, as well as opcodes and possibly timing information.
Traces can either be used directly, for example, to evaluate instruction set characteristics [21], or as input to an architectural simulator to predict the performance of different architectural variants. Such trace-driven simulation is most frequently used to study the behavior of cache memories [10]. The validity of trace-driven simulation relies on a crucial assumption: that perturbations to the trace data caused by the tracing process do not affect the simulation results. Unfortunately, it is nearly impossible to collect traces without perturbing program execution in some way; the most common perturbation is execution dilation. [24] validated the use of trace-driven simulation for multiprocessors; variability due to dilation and multiple runs appears to be small. Trace-driven simulation is based on actual traces of programs running on a system [4]; therefore, it provides the most reliable and accurate performance estimates for given programs on a given system.

Outline
The thesis is organized as follows:

Background
The fast computation of numerically intensive programs presents a challenge to memory system designers. Numerical program execution can be accelerated by pipelined arithmetic units [2], but to be effective, these must be supported by high-speed memory access. A cache memory is a well-known hardware mechanism used to reduce the average memory access latency [6].

Cache Memory
Cache memories are high-speed buffers inserted between the processor and main memory. Several mapping organizations are used:
• Direct Mapping: When a physical memory address is generated for a memory reference, the block address field is used to address the corresponding block frame. The tag bits of the address are compared with the tag in the cache block frame; if there is a match, the information in the block frame is accessed using the address field. Figure 1.a illustrates this organization.
• Fully Associative: In this mapping, any block in memory can reside in any block frame. When a request for a block is presented to the cache, all the map entries are compared simultaneously (associatively) with the request to determine whether the requested block is present in the cache [17]. Although the fully associative cache eliminates the high block contention, it incurs a longer access time because of the associative search over a large number of blocks.
• Sector Mapping: In this scheme, the memory is partitioned into a number of sectors, each composed of a number of blocks [15]. Similarly, the cache is divided into sector frames, each composed of a set of block frames. Memory requests are for blocks, and if a request is made for a block not in the cache, the sector to which this block belongs is mapped into the buffer. The limitations are that the mapping of blocks within a sector is congruent, and that only the block that caused the fault is brought into the cache, while the remaining block frames in the sector frame are marked invalid, thus wasting bandwidth. Figure 1.d illustrates the sector-cache organization.
• Prime Mapping: In a prime-mapped scheme [12], each memory address, as in a conventional cache-based computer system [5], is partitioned into three fields: W = log2(line size) bits of word address within a line (offset); c = log2(number of sets + 1) bits of index; and the remaining tag bits. The access logic of the prime-mapped cache consists of three components: data memory, tag memory, and matching logic.
As in a set-associative cache, the data memory contains a set of address decoders and the cached data; the tag memory stores the tags corresponding to the cached lines; and the matching logic checks whether the tag in an issued address matches the tag in the cache. The cache lookup process is exactly the same as in a set-associative cache. However, the index field used to access the data memory is no longer just a subfield of the original address word issued by the processor, since the modulus for cache mapping is no longer a power of 2 [16]: it is the residue of the line address modulo a Mersenne number.

Chapter 3: Cache Simulator
This chapter describes the cache simulator used to simulate the two cache designs: set-associative-mapped and prime-mapped. The simulator can be broadly divided into two parts: the first part, the XSIM [10] trace generator, takes the pixiefied output of any executable code and generates traces for the second part, the DINERO [21] cache simulator, which performs the actual simulation and reports the results. The basic flow diagram is given in Figure 2. Pixie, the trace generator, and the cache simulator are described below.

Pixie
Generating traces for executable codes running into megabytes imposes severe constraints on the operating system. The trace counts for numerical algorithms typically run into billions, which cannot be stored in a hard copy. Hence, for trace-driven simulation to perform accurately, the input code is divided into several smaller sub-codes which can be accessed individually. The traces generated from these codes are of smaller size and hence can be recorded. The division of a bigger code into smaller codes is done by Pixie (a DECstation system utility).

Operation of Pixie
Pixie takes in an executable program from a DECstation compiler, partitions it into basic blocks, each of size 64K bytes, and writes an equivalent program containing additional code that counts the execution of each basic block. A basic block is a region of the program that can be entered only at the beginning and exited only at the end. The input executable code is identified based on its magic number; Pixie exits on an undefined input magic number. The internal division of the code is based on dynamic stack allocation. Each block has a unique starting address by which it is identified, and there can be correspondence within each block. To optimize performance, Pixie groups those instructions which can fit into the range of a 16-bit displacement; an error is generated if the offset exceeds 16 bits (signed). Pixie writes this output code into a file with the default .pixie extension. As the code is divided and information about the block addresses must be stored, the pixiefied code is considerably larger than the input code. The branch instructions in the program determine the number of times each basic block in the program text is executed and the sequence in which the blocks are executed. Pixie also supports thirty-two 32-bit general-purpose registers which are used for data movement. All operations are register-to-register, except for load and store operations, which are memory-to-register operations.

Options
In addition to generating address counts, Pixie also supports certain important features:
• Pixie defines a MIPS instruction set which is compatible with DEC-based RISC machines.
• To allow the individual blocks to be accessed, Pixie maintains a file which gives the starting address of each block.
• Pixie supports Fortran 77 format statements by putting the original text into the translated output. • To account for the trace references (used by the simulator), Pixie enables the issue of memory references, which enlarges the code considerably. Care must be taken in using this option, as the branch offset may exceed the 16-bit range on a subroutine call. • In order to reduce the number of references generated, Pixie can issue only one memory reference for every N memory references, where N is a user-defined number greater than 1. Pixie does not work on programs that receive signals, as the handler address for the system calls is not translated. Also, since the pixified code is considerably larger than the original code, conditional branches that used to fit in the 16-bit branch displacement field no longer fit, which generates a Pixie error. This drawback is exposed by the Perfect Club benchmark programs, where the offsets are on the order of 18 bits. Trace Generator The pixified code is fed to a trace generator. The XSIM trace generator reads each basic block of the pixified code and converts it into assembly language code based on the MIPS instruction set. This code is then assembled and converted into a trace output. Trace output file. This is an ASCII file with one LABEL and one ADDRESS per line; the rest of the line is ignored so that it can be used for comments. Features • XSIM has a provision to generate an entire trace file which can subsequently be fed to any cache simulator that accepts the same kind of input file. This facility is rarely used, as the trace file may run into gigabytes; instead, as explained previously, traces are fed one at a time without generating a trace file. • XSIM has the ability to suppress tracing and just generate data files, which is equivalent to an assembler. • To stop the trace generation after a fixed amount of time, XSIM has an option to stop the generator after N serial cycles are traced. • XSIM can generate traces starting from some fixed address. • For comparative studies it may be necessary to act only on a fixed amount of traces. This can be done by specifying both the starting and ending addresses for trace generation, thereby fixing the amount of traces generated and also avoiding access to undefined addresses. • To make it more user friendly, XSIM has a 9-level debugger which can trace the addresses accessed. The debugging includes acting on new basic blocks, analyzing basic blocks, producing results for each of the basic blocks, etc. Dinero Cache Simulator The traces generated by XSIM are fed into the Dinero cache simulator. The simulator takes as its inputs the organization of the cache: the unified cache size, instruction cache size, data cache size, block and sub-block sizes, associativity, write-back policy, etc. Once the cache parameters are fed into the simulator, it checks for discrepancies in the set-up, such as specifying a block size which is not equal to 2^c for some positive integer c, or an undefined write-back policy. When the simulator recognizes that valid input parameters are specified, it initializes the address stack and starts fetching the traces. The address part is decoded and the tag and index fields are determined. The simulator looks for the data access in the cache. On a miss (address tag not found in the cache), main memory is accessed and the cache is updated. The data trace following the instruction trace is then acted upon.
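To make the per-reference processing just described concrete, the following is a minimal sketch in C of how a simulator of this kind decodes an address and searches a set, with an LRU victim on a miss. The configuration values, function names and the single flag that switches between the conventional power-of-two indexing and the prime-mapped (Mersenne-modulus) indexing are illustrative assumptions, not Dinero's or the thesis's actual code; each run would simulate one configuration at a time.

    #include <stdint.h>

    #define ASSOC        4
    #define OFFSET_BITS  4            /* 16-byte lines                    */
    #define NSETS_CONV   256          /* conventional: 2^8 sets           */
    #define NSETS_PRIME  255          /* prime-mapped: Mersenne 2^8 - 1   */

    typedef struct { uint32_t tag; int valid; long last_used; } way_t;

    static way_t cache[NSETS_CONV][ASSOC];   /* large enough for either mode */
    static long  hits, misses, now;

    /* Process one trace address: decode tag/index, search the set, and on a
     * miss replace the least-recently-used way. */
    void reference(uint32_t addr, int prime_mapped)
    {
        uint32_t line  = addr >> OFFSET_BITS;
        uint32_t nsets = prime_mapped ? NSETS_PRIME : NSETS_CONV;
        uint32_t index = line % nsets;        /* a plain bit field when nsets = 2^c */
        uint32_t tag   = line / nsets;
        way_t   *set   = cache[index];
        int      victim = 0;

        now++;
        for (int w = 0; w < ASSOC; w++) {
            if (set[w].valid && set[w].tag == tag) {   /* hit */
                set[w].last_used = now;
                hits++;
                return;
            }
            if (set[w].last_used < set[victim].last_used)
                victim = w;                            /* LRU candidate */
        }
        misses++;                                      /* miss: fill from memory */
        set[victim].tag = tag;
        set[victim].valid = 1;
        set[victim].last_used = now;
    }

The only structural difference between the two designs in this sketch is the modulus used for the index, which is exactly the point made about prime mapping earlier in the chapter.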
All the data and instruction references are recorded, including read, write and miscellaneous accesses. The misses corresponding to each of these cases are also filed. The address stack is continuously updated based on input specifications such as the prefetching mode, flushing of the cache, etc. The results are written to an output file, an example of which is given below. Once the cache simulation is done, the output file is updated with the recorded values of instruction and data references. As can be seen from Figure 4, the misses are calculated as a percentage of the total number of references for that category. The number of memory references is also given, which, when large, degrades the cache performance. It is found that the simulator spends 35% to 50% of its Performance Evaluation Matrix Multiplication Matrix multiplication has been used to evaluate architecture designs for a considerable time and hence has been included to analyze the prime-mapped design. Square matrices are used, ranging from 64 x 64 to 256 x 256 long integers. Each matrix is divided into blocks (submatrices) of size B1 x B2 and the algorithm is run on these blocks. An exhaustive study has been done by varying the block size, cache size, data size and the blocking factors, and the results are plotted below. Blocksize represents the number of bytes of data moved between cache and main memory in a single access. B1 represents the number of times the innermost loop (once loaded) will be executed. Since B2 = 16 gives the best performance improvement for any B1, the variation of the miss percentage with B1 at that value of B2 is shown in Figure 9. As for B2, the misses increase for increasing B1 (more replacements, more misses) and decrease for increasing line sizes due to the smaller number of misses. The performance gain plotted against B2 is given in Figure 10. The performance degrades for increasing B2 due to a larger number of line interferences. Also, increasing block sizes reduce the gain due to the smaller number of misses. Figure 11 gives the corresponding increase in performance for B1. As for B2, the performance degrades for larger B1 (more replacements) and increases for smaller B1. Figure 12 shows the effect of B1 on B2 as a factor for performance gain: decreasing B1 improves the performance, as this reduces the number of times the blocks have to be replaced. A miss here will be propagated throughout the matrix multiplication. The peak gain at B1 = 8 and B2 = 8 is substantiated in Figure 13, which illustrates the effect of B2 on B1. The discussion shows that the prime version performs better than the un-primed version over various cache parameters, but the peak performance depends on the organization of the matrices as well. This brings out the intricacy of cache behavior: no one particular architecture can ensure the best performance over the entire range of problem sizes. GCC is the GNU C compiler, which converts preprocessed files into optimized Sun-3 assembly code; it is written in C and is not vectorizable. Figures 14 and 15 plot the miss percentages for GCC for varying cache size at associativity 1 and associativity 2, respectively. The misses decrease for increasing cache size due to the availability of more data inside the cache, thus increasing the probability of a hit. The plots for different associativities indicate that the miss percentage decreases for larger associativity, as was shown for the matrix multiplication algorithm.
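For readers unfamiliar with the blocking experiments above, a minimal sketch of the blocked (tiled) matrix multiplication being measured follows; B1 and B2 play the role of the blocking factors under study, while the matrix dimension and the exact loop ordering are illustrative assumptions and may differ from the thesis's actual program.

    #define N  256            /* matrix dimension (64..256 in the study)  */
    #define B1 8              /* blocking factors under study             */
    #define B2 8

    static long a[N][N], b[N][N], c[N][N];

    static int min(int x, int y) { return x < y ? x : y; }

    /* Tiled multiply: each B1-by-B2 block of the result is computed from
     * small panels of a and b, so the working set of the inner loops is
     * small enough to stay cache-resident.  min() guards the edge tiles. */
    void blocked_matmul(void)
    {
        for (int ii = 0; ii < N; ii += B1)
            for (int jj = 0; jj < N; jj += B2)
                for (int kk = 0; kk < N; kk += B2)
                    for (int i = ii; i < min(ii + B1, N); i++)
                        for (int j = jj; j < min(jj + B2, N); j++) {
                            long sum = c[i][j];
                            for (int k = kk; k < min(kk + B2, N); k++)
                                sum += a[i][k] * b[k][j];
                            c[i][j] = sum;
                        }
    }

Varying B1 and B2 changes how often a tile of a or b must be reloaded, which is why the miss percentages in Figures 9 to 13 move with the blocking factors.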
COMPRESS is a C program which performs data compression on a 1 MB file using adaptive Lempel-Ziv coding. The variation in miss percentage as the cache size is varied for COMPRESS is shown in Figures 17 and 18 for different associativities. As for GCC, the misses decrease for increasing cache size. The performance gain plotted in Figure 19 indicates that a cache size of 8K bytes gives the lowest gain. For cache sizes less than 8K bytes, the misses vary significantly (reducing towards the 8K-byte cache size more appreciably for the un-primed than for the primed version). For cache sizes of 8K bytes and above, the misses stabilize and hence produce more improvement. Also, as the cache size is increased, associativity plays a major role, as can be seen from the curvature of the plots. The benchmark exhibits very high code locality and is not very sensitive to instruction cache size; a 32KB direct-mapped cache had a miss ratio of less than one half of one percent. On the other hand, the benchmark is quite sensitive to the data cache size. Hydro2d is an astrophysics application program which solves hydrodynamical Navier-Stokes equations to compute galactic jets. The input file is modified to change the number of timesteps from 400 to 100. This is the only number in the input file supplied with the benchmark; other input data are generated by the program itself. The output file specifies the time step, the grid spacing, the viscosity factor and the execution time. The input file specifies the points to be computed. Changing this parameter affected the number of grid points generated (from 400 to 100) and the viscosity factor, but the grid points obtained up to 100 steps match the result file from the benchmark and hence the program application is not changed. The misses for the prime version are about half of those for the un-primed version. The performance improvement versus cache size (Figure 29) shows that the rate of improvement decreases as the cache size is increased. This is due to the decrease in misses as the cache size is increased. The misses vary very little for larger cache sizes, and hence the graph stabilizes for cache sizes of 128K bytes and greater. The miss percentage reduces as the cache size is increased due to the smaller number of line conflicts for larger cache sizes. Figure 31 shows the performance improvement for the same cache size variation. The performance shows an increasing gain as the cache size is made large. This is due to the smaller number of line conflicts in the prime version compared to the un-primed version. The SPEC results show that the prime version gives better performance gains, that the gains tend to stabilize beyond a cache size of 8K bytes, and that varying the associativity, while reducing the miss percentages, has little impact on the relative cache performance. Conclusions In this thesis, a new cache organization, the prime-mapped cache design, was evaluated. An existing conventional cache simulator was modified to reflect the new design. The design is evaluated using trace generation through the XSIM trace generator, Pixie and the Dinero cache simulator. Numerous programs and algorithms are compiled and used to generate the traces. The programs include matrix multiplication and the SPEC benchmarks. The SPEC benchmarks provide a valuable source, as they are used to evaluate similar designs. Simulation is done for different cache sizes by varying the block size and the associativity. Results are obtained for both the un-primed and the primed versions.
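The performance-gain curves quoted above compare the two organizations through the usual figure of merit, the average memory access time implied by the measured miss ratios. A minimal sketch of that calculation follows; the cycle counts and the example miss ratios are purely illustrative assumptions, not the parameters used in the thesis.

    #include <stdio.h>

    /* Average access time = hit_time + miss_ratio * miss_penalty (cycles). */
    static double avg_access(double hit_time, double miss_ratio,
                             double miss_penalty)
    {
        return hit_time + miss_ratio * miss_penalty;
    }

    int main(void)
    {
        double hit = 1.0, penalty = 20.0;              /* assumed cycle costs  */
        double miss_conv = 0.052, miss_prime = 0.023;  /* example miss ratios  */
        double t_conv  = avg_access(hit, miss_conv,  penalty);
        double t_prime = avg_access(hit, miss_prime, penalty);

        printf("conventional: %.2f cycles/ref, prime-mapped: %.2f cycles/ref\n",
               t_conv, t_prime);
        printf("relative performance gain: %.1f%%\n",
               100.0 * (t_conv / t_prime - 1.0));
        return 0;
    }

The gain reported in the plots is, in this reading, the ratio of the two average access times minus one, so a halving of the miss ratio translates into a smaller but still substantial gain once the hit time is accounted for.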
A comparison shows that the prime version gives a performance improvement ranging from 50% to 150% in the miss ratios, depending on the algorithm used to evaluate it. Even though the prime-version cache shows considerable improvement, the large variation in the performance reflects the complexity and unpredictable nature of cache design. An architecture which gives good performance for one algorithm may perform poorly when used for a different algorithm. The wide range of programs used to evaluate the new design takes this into consideration, and the results show that the prime-mapped cache design consistently gives better performance than the un-prime-mapped design for scalar processors.
4,907
1993-01-01T00:00:00.000
[ "Computer Science" ]
* Resolving resolution dimensions in triangulated categories Abstract: Let be a triangulated category with a proper class ξ of triangles and be a subcategory of . We first introduce the notion of -resolution dimensions for a resolving subcategory of and then give some descriptions of objects having finite -resolution dimensions. In particular, we obtain AuslanderBuchweitz approximations for these objects. As applications, we construct adjoint pairs for two kinds of inclusion functors and characterize objects having finite -resolution dimensions in terms of a notion of ξ -cellular towers. We also construct a new resolving subcategory from a given resolving subcategory and reformulate some known results. Introduction Approximation theory is the main part of relative homological algebra and representation theory of algebras, and its starting point is to approximate arbitrary objects by a class of suitable subcategories. In particular, resolving subcategories play important roles in approximation theory (e.g., [1][2][3]). As an important example of resolving subcategories, Auslander and Buchweitz [4] studied the approximation theory of the subcategory consisting of maximal Cohen-Macaulay modules over an artin algebra, and Hernández et al. [5] developed an analogous theory for triangulated categories. Using the approximation triangles established by Hernández et al. [5,Theorem 5.4], Di and Wang [6] constructed additive functors (adjoint pairs) between additive quotient categories. On the other hand, Zhu [7] studied the resolution dimension with respect to a resolving subcategory in an abelian category, and Huang [8] introduced relative preresolving subcategories in an abelian category and defined homological dimensions relative to these subcategories, which generalized many known results (see [4,9,10]). In analogy to relative homological algebra in abelian categories, Beligiannis [11] developed a relative version of homological algebra in a triangulated category , that is, a pair ( ) ξ , , in which ξ is a proper class of triangles (see Definition 2.4). Under this notion, a triangulated category is just equipped with a proper class consisting of all triangles. However, there are lots of non-trivial cases, for example, let be a compactly generated triangulated category, then the class ξ consisting of pure triangles is a proper class ( [12]), and the pair ( ) ξ , is no longer triangulated in general. Later on, this theory has been paid more attentions and developed (e.g., [13][14][15][16][17]). It is natural to ask how the approximation theory acts on this relative setting of triangulated categories. In [18], Ma et al., introduced the notions of (pre)resolving subcategories and homological dimensions relative to these subcategories in this relative setting, which gives a parallel theory analogy to that of abelian categories [8]. In this paper, we devote to further studying relative homological dimensions in triangulated categories with respect to a resolving subcategory. The paper is organized as follows: In Section 2, we give some terminology and some preliminary results. In Section 3, some homological properties of resolving subcategories are obtained. In particular, we obtain Auslander-Buchweitz approximation triangles (see Proposition 3.10) for objects having finite resolving resolution dimensions. Our main result is the following: Theorem. Let be a resolving subcategory of and , a ξxt-injective ξ -cogenerator of . 
Assume that is closed under hokernels of ξ -proper epimorphisms or closed under direct summands. For any ∈ M , if  ∈ M , then the following statements are equivalent: for all ≥ n m. , where φ is ξ -proper epic, such that = K Hoker φ satisfying -≤ − K m res.dim 1. In Section 4, we will further study objects having finite resolution dimensions with respect to a resolving subcategory . We first construct adjoint pairs for two kinds of inclusion functors. Then we characterize objects having finite resolution dimensions in terms of a notion of ξ -cellular towers. Throughout this paper, all subcategories are full, additive, and closed under isomorphisms. Preliminaries Let be an additive category and → Σ : an additive functor. One defines the category ( ) Diag , Σ as follows: , , of morphisms in such that the following diagram: commutes. A triangulated category is a triple ( ) , Σ, Δ , where is an additive category and → Σ : is an autoequivalence of (called suspension functor), and Δ is a full subcategory of ( ) Diag , Σ which is closed under isomorphisms and satisfies the axioms ( ) T 1 -( ) T 4 in [11, Section 2.1] (also see [19]), where the axiom ( ) T 4 is called the octahedral axiom. The elements in Δ are called triangles. The following result is well known, which is an efficient tool in studying triangulated categories. an autoequivalence of , and Δ a full subcategory of ( ) Diag , Σ which is closed under isomorphisms. Suppose that the triple ( ) , Σ, Δ satisfies all the axioms of a triangulated category except possibly of the octahedral axiom. Then, the following statements are equivalent: (1) Octahedral axiom. For any two morphisms ⟶ u X Y : and , there exists a commutative diagram in which all rows and the third column are triangles in Δ. (2) Base change. For any triangle , there exists the following commutative diagram: in which all rows and columns are triangles in Δ. , there exists the following commutative diagram: in which all rows and columns are triangles in Δ. We use Δ 0 to denote the full subcategory of Δ consisting of all split triangles. Let ξ be a class of triangles in . (1) ξ is said to be closed under base change (resp. cobase change) if for any triangle in ξ and any morphism ′ ⟶ α Z Z : (resp. ⟶ ′ β X X : ) as in Remark 2.1(2) (resp. Remark 2.1(3)), the triangle in ξ and any ∈ i (the set of all integers), the triangle Definition 2.4. [11] A class ξ of triangles in is called proper if the following conditions are satisfied. (2) ξ is closed under suspensions and is saturated. Throughout this paper, we always assume that ξ is a proper class of triangles in . be a triangle in ξ . Then, the morphism u (resp. v) is called ξ -proper monic (resp. ξ -proper epic), and u (resp. v) is called the hokernel of v (resp. the hocokernel of u). We say that has enough ξ -projective objects if for any object ∈ M , there exists a triangle ⟶ Dually, we say that has enough ξ -injective objects if for any object Remark 2.7. ( ) ξ is closed under direct summands, hokernels of ξ -proper epimorphisms, and ξ -extensions. Dually, ( ) ξ is closed under direct summands, hocokernels of ξ -proper monomorphisms, and ξ -extensions. in ξ and the differential d n is defined as , -exact (resp. (− ) Hom , -exact) for any ∈ n . Definition 2.9. [13, Definition 3.6] Let be a triangulated category with enough ξ -projective objects and X an object in . in with all P ξ i -projective objects. The objects K n as in (2.2) are called ξ -Gorenstein projective objects. 
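The displayed resolution (2.2) referred to just above does not survive in this copy. As a rough reminder of its standard shape, following the conventions of Asadollahi and Salarian [13] only as far as they can be reconstructed here, a complete ξ-exact complex of ξ-projectives looks like

\[
\mathbf{P}\colon\ \cdots \longrightarrow P_{1} \longrightarrow P_{0} \longrightarrow P^{0} \longrightarrow P^{1} \longrightarrow \cdots,
\qquad P_{i},\,P^{i}\ \text{ξ-projective},
\]

which is ξ-exact (each map is spliced from triangles in ξ) and remains exact after applying \(\operatorname{Hom}_{\mathcal C}(-,Q)\) for every ξ-projective object \(Q\); an object is then ξ-Gorenstein projective when it is isomorphic to one of the splicing objects \(K_{n}\) of such a complex. This is only a schematic reconstruction of (2.2), not a replacement for the precise definition in [13].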
We use ( ) ξ to denote the full subcategory of consisting of all ξ -Gorenstein projective objects. Throughout this paper, we always assume that is a triangulated category with enough ξ -projective objects and ξ -injective objects. Let M be an object in . Beligiannis [11] defined the ξ -extension groups , , that is, be a triangle in ξ . By [11,Corollary 4.12], there exists a long exact sequence of "ξxt" functor. If has enough ξ -injective objects and N is an object in , then there exists a long exact sequence Following Remark 2.10, we usually use the strategy of "dimension shifting," which is an important tool in relative homological theory of triangulated categories. Now, we set For two subcategories and of , we say ⊥ if ⊆ ⊥ (equivalently, ⊆ ⊥ ). 3 Resolution dimensions with respect to a resolving subcategory res.dim inf 0 there exists a exact complex 0 0 in with all objects in . [13] as ξ -Gorenstein projective dimension. We use  to denote the full subcategory of whose objects have finite -resolution dimension. 1. Consider the following triangle: As a similar argument to that of [11,Proposition 4.11], we get the following ξ -exact complex Similarly, we have the following ξ -exact complex Since is resolving, we have that X and Y are objects in . Consider the following triangles: . But from the following triangles in ξ . Then, the following statements are equivalent: Proof. Apply Lemma 3.2. □ Now we can compare resolution dimensions in a given triangle in ξ as follows. Proposition 3.4. Let be a resolving subcategory of , and let be a triangle in ξ . Then, we have the following statements: and -= C n res.dim . We proceed it by induction on m and n. The case = = m n 0 is trivial. Without loss of generality, we assume ≤ m n, then we can let As a similar argument to that of [11,Proposition 4.11], we get the following ξ -exact complex: and the desired assertion are obtained. (2) Assume -= B m res.dim and -= C n res.dim . We proceed it by induction on m and n. The case = = m n 0 is trivial. Without loss of generality, we assume ≤ − m n 1, then we can let and a triangle in ξ , it follows that ∈ ( ) K ξ by Remark 2.7. Thus, -≤ − A n res.dim 1 and the desired assertion is obtained. (3) Assume -= A m res.dim and -= B n res.dim . We proceed it by induction on m and n. The case = = m n 0 is trivial. Without loss of generality, we assume + ≤ m n 1 , then we can let = P 0 i A for > i m. By [18, Theorem 3.8], we have the following ξ -exact complex and the desired assertion is obtained. □ As direct results, we have the following closure properties for the subcategory  . (1)   ⊆ . (2) If is resolving, then for any res.dim if and only if  ∩ = . In particular, if ⊥ , and is closed under hokernels of ξ -proper epimorphisms or closed under direct summands, then  ∩ = . Proof. . By the assumption, we have - . Clearly, ≤ m n. Consider the following ξ -exact complexes: Then, , and thus, -≤ M m res.dim and the desired equality is obtained. Now, we assume that ⊥ and is closed under hokernels of ξ -proper epimorphisms or closed under direct summands. Clearly, We need the following easy and useful observation. (1) If ⊥ , then  ⊥ . In particular, if ⊥ , then  ⊥ . Proposition 3.10. Let be a subcategory of closed under ξ -extensions, and let be a subcategory of such that is a ξ -cogenerator of . Then, for each ∈ M with -= <∞ M n res.dim , there exist two triangles and In particular, if ⊥ , then the ξ -proper epimorphism ⟶ X M is a right -approximation of M. Proof. We proceed by induction on n. 
The case for = n 0 is trivial. If = n 1, there exists a triangle in ξ with ∈ H and ′ ∈ X 1 . Applying cobase change for the triangle (4) along the morphism ⟶ X H 1 , we get the following commutative diagram: Since ξ is closed under cobase changes, we obtain that the triangle is in ξ with -= H res.dim 0. Note that ′ = α u α is ξ -proper epic, so we have that ′ α is ξ -proper epic by [16, Proposition 2.7]; hence, the triangle and ″ ∈ X 0 . Applying cobase change for the triangle (5) along the morphism ′ ⟶ X H 0 0 , we get the following commutative diagram: . For ′ K , by the induction hypothesis, we get a triangle . Applying cobase change for the triangle (7) along the morphism ′ ⟶ K K , we get the following commutative diagram: is in ξ . It follows that ∈ X from the assumption that is closed under ξ -extensions. Since ξ is closed under cobase changes, we obtain the first desired triangle in ξ with -≤ − K n res.dim 1 and ∈ X . For X, since is a ξ -cogenerator of , we get the following triangle and ′ ∈ X . Applying cobase change for the triangle (8) along the morphism ⟶ X H 1 , we get the following commutative diagram: As a similar argument to that of the diagram (6), we obtain that the triangles are in ξ . Thus, (9) is the second desired triangle in ξ with -≤ W n res.dim and ′ ∈ X . In particular, suppose ⊥ , by Lemma 3.9, we have (2) and (3), we have -= − K n res.dim 1 and - and is resolving, then there is a triangle Proof. (1) Suppose is resolving. Applying Corollary 3.6(2) to the triangle (2) (2) Since ⊥ , we have  ⊥ by Lemma 3.9, and so the result immediately follows from (1). (10) in ξ with ∈ 0 and -= − K n res.dim 1. By (2), there is a triangle . Applying cobase change for the triangle (10) along the morphism ⟶ ″ K K , we get the following commutative diagram: One can see that the triangle and so, Hom , 0 is exact. Thus, the ξ -proper epimorphism ′ ⟶ X Mis a right -approximation of M and -″ = − K n res.dim 1 in the triangle (11). Note that ″ ∈ ⊥ K , so we have - Let be a subcategory of with ⊥ . Assume that is closed under hokernels of ξ -proper epimorphisms or closed under direct summands. Then, . Consider the following ξ -exact complex: Our main result is the following. Theorem 3.14. Let be a resolving subcategory of and a ξxt-injective ξ -cogenerator of . Assume that is closed under hokernels of ξ -proper epimorphisms or closed under direct summands. For any , then the following statements are equivalent: , where φ is ξ -proper epic, such that = K φ Hoker satisfying - Additive quotient categories and ξ-cellular towers with respect to a resolving subcategory In this section, we will further study objects having finite resolution dimension with respect to a resolving subcategory . We first construct adjoint pairs for two kinds of inclusion functors. Then, we characterize objects having finite resolution dimension in terms of a notion of ξ -cellular towers. Adjoint pairs Suppose that and are two subcategories of . Denote by [ ] the ideal of consisting of morphisms factoring through some object in . Thus, we have a quotient category /[ ], which is also an additive category. is a morphism in with ∈ X and  ∈ M , then the following statements are equivalent: (1) f factors through an object in . (2) f factors through an object in  . Proof. It suffices to show that ( ) ⇒ ( ) 2 1. Suppose that f factors through an object and → g L M : . Consider the following triangle which we call the ξ -cellular tower of M with respect to . 
According to the above construction, one can obtain the following result by Proposition 3.3. Applications In this section, we will construct a new resolving subcategory from a given resolving subcategory, which generalizes the notion of ξ -Gorenstein projective objects given by Asadollahi and Salarian [13]. By applying the previous results to this subcategory, we obtain some known results in [13][14][15]. Proof. Let P be a ξ -projective object. Consider the following ξ -exact complex: , -exact. In particular, ⟶ ⟶ ⟶ ⟶ ⟶ ⟶ P P P P P 0 0 a n d 0 Σ 0 i d 0 i d 0 0 P P are corresponding triangles in ξ . Since ∈ ∩ ⊥ P by Remark 5.2(1). we have ( ) ⊆ ( ) ξ ξ. As a similar argument to the proof of [18, Theorem 4.3(1)], we obtain that ( ) ξ is closed under ξ -extensions and hokernels of ξ -proper epimorphisms. Thus, ( ) ξ is a resolving subcategory of . □
3,838.8
2021-01-01T00:00:00.000
[ "Mathematics" ]
On non-abelian U-duality of 11D backgrounds In this letter we generalised the procedure of non-abelian T-duality based on a B-shift and a sequence of formal abelian T-dualities in non-isometric directions to 11-dimensional backgrounds. This consists of a C-shift followed by either a formal U-duality transformation or taking a IIB section. We investigate restrictions and applicability of the procedure and find that it can provide supergravity solutions for the SL(5) exceptional Drinfeld algebra only when a spectator field is present, which is consistent with examples known in the literature. Introduction String theory is known to respect a rich set of various symmetries, among which those that transform target space-time keeping physics the same are of special interest. The most known example of such duality symmetries is the perturbative T-duality symmetry of Type II string theory, that acts along toroidal directions of target-space according to the so-called Buscher rules [1,2]. The procedure for recovering background fields transformations from the string partition function is well known. One starts with the string partition function defined by the action S 0 [θ] symmetric under global θ → θ + α with θ corresponding to a circular direction. The symmetry is then gauged by introducing a 1-form field dθ → Dθ = dθ + A and the corresponding Lagrange termθF with F = dA to keep the 1-form pure gauge. The resulting partition function defined by the action S 1 [θ, A,θ] can then be reduced to the initial one, integrating outθ, that sets A = dα. Alternatively, one integrates out the 1-form field A obtaining a string action S 2 [θ] defined on a different background related to the initial one by Buscher rules. The scalar field θ(σ, τ ) gets replaced by the fieldθ(σ, τ ) representing dual string coordinates corresponding to winding modes [3,4]. Transformation of the dilaton ensures that measure in the partition function is invariant at one loop. One can be more general and consider backgrounds of the form M × T d in which case T-duality group will be O(d, d; Z). A natural question is whether one may consider backgrounds with isometries represented by more involved groups than abelian U(1) d , say a sphere or a non-abelian group manifold. The answer is positive and the corresponding dualisation procedure has been considered in [5]. Essentially nonabelian T-duality of string partition function goes along the same lines as the abelian one. The difference comes from more involved definition of the field strength F = dA + [A, A], that is now an element of the corresponding algebra and hence the Lagrange term reads Tr [θF ]. Hence, one dualises the whole set of group coordinates basically replacing left-invariant 1-forms σ a by dual forms dθ a . The original procedure for NS-NS fields has been complemented by transformation rules for RR fluxes in [6,7]. Explicit canonical formulation of non-abelian T-duality for principal sigma-model has been provided in [8]. Additionally, the work [7] provided a procedure of non-abelian T-dualisation for coset space geometries G/H based on fixing gauge degrees of freedom corresponding to action of the subgroup H. In contrast to abelian T-duality its non-abelian generalisation does not preserve isometries of the original background (in the usual sense) and hence has many in common with deformations of supergravity backgrounds. 
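Since the displayed action terms described in the Buscher procedure above are not typeset in this copy, the gauging steps can be summarised schematically; signs and normalisations are convention dependent and the notation is only meant to mirror the verbal description:

\[
S_{0}[\theta]\ \longrightarrow\ S_{1}[\theta,A,\tilde\theta]
= S_{0}\bigl[d\theta\to D\theta=d\theta+A\bigr]\;+\;\int_{\Sigma}\operatorname{Tr}\bigl(\tilde\theta\,F\bigr),
\qquad F=dA\ \ (\text{or}\ F=dA+[A,A]\ \text{in the non-abelian case}).
\]

Integrating out \(\tilde\theta\) forces \(F=0\), so \(A\) is pure gauge and \(S_{0}\) is recovered; integrating out \(A\) instead produces the dual model \(S_{2}[\tilde\theta]\) on the T-dual background, with \(\tilde\theta\) playing the role of the dual (winding) coordinates.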
In particular, NATD techniques have been widely used to generate new supergravity backgrounds interesting from the point of view of holography, and in [9] some explicit examples of such relation have been provided. Breaking of the initial background isometries by a non-abelian T-duality transformation is in severe contrast with mechanics of the standard abelian T-duality transformations, where preservation of isometries allows to perform T-duality twice making it an involutive symmetry. For a way out of this problem, one looks at Noether currents of the two-dimensional string sigma-model and their Bianchi identities. Starting with sigma-model on a background with isometry algebra defined by structure constants f ab c one is able to construct conserved Noether currents J a , that satisfy dJ a = 0. (1.1) Non-abelian T-dualising along the isometry directions one ends up with sigma-model on a background with no initial isometries, which however still allows to define Noether currents J a , that satisfy [10] dJ a =f a bc J b ∧ J c . Here the algebras g andg defined by the structure constants f ab c andf a bc form the so-called Drinfeld double D. This is defined as a Manin triple (D, g,g) with the non-degenerate form given by the O(d, d) invariant metric η. Such algebraic construction allows to reverse the NATD transformation applying a Poisson-Lie T-duality transformation, that basically means solving consistency constraints for the Drinfeld double and constructing a background with such isometries (dressing the generalised vielbein in DFT terms). More details on Poisson-Lie T-duality and NATD can be found in the original works [11,12] and in review papers [13][14][15]. For developments from the generalised geometry side one refers to [16][17][18][19]. Explicit examples of backgrounds resulting from PLTD and/or NATD can be found in [7,[20][21][22][23][24]. Representation of Yang-Baxter bi-vector deformations as a B-shift followed by an NATD transformation has been considered in [25]. To some extent the above constructions generalise to M-theory in the sense of membrane dynamics and 11-dimensional supergravity. From the membrane point of view non-abelian U-duality have been addressed in [26], where in particular an analogue of Bianchi identities for currents of 2-dimensional sigma-model have been derived and implemented into the SL(5) exceptional field theory. The notion of Drinfeld double (Manin triple) have been generalised to the so-called exceptional Drinfeld algebra in the series of works [27,28], which however does not carry the structure of a bi-algebra. Instead, the algebrag dual to the isometry algebra g is defined via tri-algebra structure constantsf a bcd , that is in consistency with the current algebra of [26]. Finally, certain explicit results for non-abelian U-dualised backgrounds and their relation to non-abelian T-duality have been presented recently in [29]. This letter considers a generalisation of the approach of [25] to non-abelian T-duality in the formalism of exceptional field theory. In [25] explicit Buscher rules for non-abelian T-duality transformation have been provided written in terms of undressed fields, that can be represented as O(d, d) transformations of the corresponding generalised metric of double field theory [18,30]. Dependence on parametersx a enters in the final expression that finally get interpreted as dual coordinates. 
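For reference, the Drinfeld double relations alluded to around (1.1) and (1.2) are, schematically (index placement and signs differ between references, so this is only an orientation aid):

\[
[T_a,T_b]=f_{ab}{}^{c}\,T_c,\qquad
[\tilde T^a,\tilde T^b]=\tilde f^{ab}{}_{c}\,\tilde T^c,\qquad
[T_a,\tilde T^b]=\tilde f^{bc}{}_{a}\,T_c-f_{ac}{}^{b}\,\tilde T^c,
\]

with the O(d,d)-invariant pairing \(\eta(T_a,\tilde T^b)=\delta_a^{\,b}\), \(\eta(T_a,T_b)=\eta(\tilde T^a,\tilde T^b)=0\), so that \(\mathfrak g\) and \(\tilde{\mathfrak g}\) are maximally isotropic subalgebras of the double.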
Given the embedding into DFT the procedure can be generalised to M-theory backgrounds in terms of exceptional field theory generalised metrics and dual coordinatesx ab corresponding to winding modes of membranes. The text is structured as follows. In Section 2 we review the NATD procedure as an O(D,D) rotation for group manifolds. As an explicit example Bianchi II space-time with vanishing dilaton is considered. In Section 3 we generalise the approach to non-abelian U-duality transformations of 11-dimensional backgrounds. In Section 4 we analyse the suggested procedure for ExFT's based on U-duality groups SL(5) and SO (5,5) and derive conditions upon which a solution of 11-dimensional supergravity can be generated. Sigma-model perspective Non-abelian T-duality transformations generalise standard T-duality Buscher rules and can be written in a very similar form [25]. The case of our interest here is backgrounds of the form M × G where G is a group manifolds, however the sigma-model procedure can be generalised to coset spaces. To set up the notations we briefly discuss the procedure of [25] here. One starts with the sigma model action of the form where the vielbein 1-form Eα is defined as usual as Unpacking these notations on may write for the first term in the sigma-model action where one defines metric components (2.5) The 2-form Kalb-Ramond field B is defined as usual as pullback of the corresponding target space- The fields G ab , B ab are usually referred to as undressed fields as these are free of dependence on group coordinates y m , which has all been left in the 1-forms σ a . The procedure of NATD of the sigma-model action then proceeds with replacing (g −1 dg) a → A a and adding a Lagrange multiplierỹ a F a . Performing integration overỹ a one recovers the initial action, while integrating over A a one turns to a dual action, that now has no dependence on y m since the 1-forms σ a no longer present. Instead, a dependence onỹ a enters the dual background originating where f ab c encode structure constants of g. This procedure can be summarised nicely by presenting a generalisation of Buscher rules, explicitly providing dual background fields. For that one defines a matrix The transformation rules are then written as follows (2.9) These have been shown in [18] to be upliftable to the double field theory formalism where the transformation of the fields becomes an O(d, d) matrix with d = dim G as expected. Double field theory perspective Non-abelian T-duality transformation of a 10-dimensional (group manifold) background as described above is known to be equivalent to a sequence of a B-shift and T-duality transformations, equivalently, O(d,d) reflections [30]. The procedure can be generalised to coset spaces as well, where one chooses d Killing vectors in a d-dimensional space and makes basically the same steps. Crucial is that the symmetry group acts without isotropy. In the present text we focus at the case of group manifolds to illustrate the procedure and to make further analysis of its restrictions simpler. Given the results of [30], generalisation to coset spaces must be straightforward. One starts with noticing, that to generalise the NATD transformation rules written in the form (2.9) to 11d backgrounds these can be conveniently rewritten in terms of O(d,d) rotation of a DFT background. Following [30] the algorithm is as follows • undress background fields; • perform B-shift B ab → B ab +ỹ c f ab c , withỹ a understood as coordinates dual to y m . 
• perform formal abelian T-dualities along all directions of the group manifold to turnỹ a into geometric coordinates. Schematically the procedure is depicted on Fig.1. For further reference and to setup notations let us consider the procedure in more details. The first step splits coordinate dependence to external coordinates x µ and group manifold coordinates Further B-shift introduces additional dependence dual coordinatesỹ a that is not obvious to check against section constraint. However, one notices that the dependence on y m is of very restricted form hidden in the 1-forms σ a . For this reason working with undressed fields allows to overcome this issue. Below we show that on explicit examples for both DFT and ExFT, while here we will try to develop some intuition allowing to work with such transformation. One starts with an abelian T-duality transformation in the DFT formalism that corresponds to replacing x m byx m , or better to say, to switching their roles as geometric and non-geometric coordinates. Most transparently this is seen when considering doubled pseudo-interval 1 Here and in what follows capital Latin indices M, N, . . . label directions of the extended space and in case of DFT run 1, . . . , 2dim G. Assigning to y m andỹ m the roles of geometric and dual coordinates respectively, one thus fixes H mn = g mn . To perform T-duality transformation one keeps the pseudointerval the same, switching instead roles of coordinates. Say y 1 now becomes dual, whileỹ 1 becomes geometric. This implies, that H 11 =g 11 is now component of the transformed metric. This procedure has been employed to generate exotic brane solutions and to unify them into a single DFT/ExFT solution in [31][32][33][34]. For the case in question it is tempting to writes instead where dependence on y m has been recollected into the 1-forms σ a = σ a m dx m . For now, the dual coordinates are still represented by exact forms dỹ a . The procedure described guarantees, that one ends up with a solution of supergravity equations of motion if started with a solution. Indeed, let us start for simplicity with a background, that depends purely on group manifold coordinates, i.e. G ab = const, B ab = const. Hence, one may encode the where G ab is simply the inverse of G ab . Turning to flux formulation of DFT [35] For the NATD procedure one starts with the undressed fields packed into the "flat" generalised metric H AB and first preforms a B-shift, that can be encoded as (2.14) Instead of T-dualising all coordinates and checking equations of motion of supergravity, double field theory allows to check H ′ AB explicitly, which is much simpler due to linear dependence on the dual coordinatesỹ a . Indeed, in the above expression the matrix O A B can be understood as a generalised vielbein, and the corresponding generalised flux precisely has the same non-vanishing components F ′ ab c = f ab c as that for E M A . Since the "flat" generalised metric H AB is the same, the background encoded in H ′ AB satisfies equations of motion of double field theory. Note, that the B-shift is arranged is such a way as to generate a background with the same generalised flux F ′ ABC = F ABC , which however depends only on dual coordinates. Finally, performing T-dualities along all ofỹ a 's one obtains a supergravity solution, since replacingỹ ↔ y with the corresponding transformation of fields (Buscher rules) is a symmetry of DFT. 
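The displayed expressions in this passage (the generalised metric built from the undressed fields and the B-shift matrix, around (2.13) and (2.14)) are lost in this copy; in standard DFT conventions, up to index placement and signs, they take the form

\[
\mathcal H_{AB}=
\begin{pmatrix}
G_{ab}-B_{ac}\,G^{cd}B_{db} & B_{ac}\,G^{cb}\\[2pt]
-\,G^{ac}B_{cb} & G^{ab}
\end{pmatrix},
\qquad
O^{A}{}_{B}=
\begin{pmatrix}
\delta^{a}{}_{b} & 0\\[2pt]
\Delta B_{ab} & \delta_{a}{}^{b}
\end{pmatrix},
\quad
\Delta B_{ab}=\tilde y_{c}\,f_{ab}{}^{c},
\]

so that the shifted generalised metric is \(\mathcal H'_{AB}=O^{C}{}_{A}\,\mathcal H_{CD}\,O^{D}{}_{B}\). This is a standard parametrisation rather than a quotation of the paper's own equations.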
It is worth mentioning, that after Tdualities the flux components change and one finds non-vanishing F c ab components, since T-duality along each direction replaces a ↔ a [36]. Before turning to an illustrating example, one observes, that the last step where all directions of the group manifold get T-dualised is crucial for ending up with a supergravity solution. It is clear, that one is always able to perform the necessary set of T-dualities to turn allỹ a into geometric coordinates. Picture however gets more complicated in the case of non-abelian U-dualities and such a set may not exist. We discuss this important point in more details in Section 4. Bianchi II example As an explicit illustration of the above procedure, consider the standard examples of Bianchi II cosmological space-time embedded into 10 dimensions. The metric is can be chosen to be where the 1-forms σ a and the functions a a read 16) and the constants are constrained by p 2 p 3 = p 2 1 . In what follows we set p a = 1 to avoid the dilaton. Note, that the 1-forms only depend on the coordinates y 1 , y 2 , y 3 on the group manifold generated by the Heisenberg-Weyl algebra The undressed metric is then Since the time direction x 0 is not dualised and the metric does not have mixed g 0a components, it is enough to focus only at the block 1, 2, 3 and consider O(3, 3) double field theory. The corresponding generalised metric is simply given by with ∆B ab =ỹ c f ab c whose only non-vanishing components are Next one is supposed to perform abelian T-dualities along all directionsỹ a . T-dualising along all three directions renders all x a non-geometric as well as the corresponding forms, and one reproduces the well known result for the dual background [21] ds ′2 = ds 6 − a 1 2 a 2 2 a 3 3 (dx 0 ) 2 , Note thatx a are now proper physical coordinates. The dilaton is recovered from the invariant dilaton where g = det ||g ab || is determinant of the undressed metric. 3 Non-abelian U-duality in SL(5) ExFT Let us now try to generalise the above algorithm of NATD to the case of exceptional field theory. As the very first example one may take SL(5) exceptional field theory, that is a 7 + 10-dimensional field theory, local coordinate transformations include U-dualities of D = 7 maximal supergravity [37,38] (for a review on exceptional field theories see [39][40][41] Field content of the theory can be written in irreps of the duality group SL(5) as follows Here d is the invariant dilaton of double field theory,h ab is the 3-dimensional block of the full 10-dimensional metric and the matrix M ij encodes the degrees of freedom of the axion-dilaton The pair of vectors V i a encode internal parts of the NS-NS Kalb-Ramond 2-form B ab and RR field where ǫ abc is the Levi-Civita symbol ǫ 123 = 1 It is important to notice, that the parametrisation used here differs from that of [29] by rescaling of the metric and 2-form fields by certain power of e φ . More precisely, the parametrisation of [29] provides formulation of IIB supergravity explicitly covariant under the SL(2) duality symmetry, that is reflected in the fact, that all dependence on the dilaton is hidden inside the SL(2)/SO(2) matrix. In contrast, the parametrisation given above provides fields T-dual to the IIA fields, that can be obtained from the standard 11D parametrisation. For the purpose of this paper, the latter is more convenient. Now, following the analogy between DFT and ExFT extended spaces one proposes the following non-abelian U-duality scheme for 11D backgrounds 1. 
undress the metric and the C-field g mn = σ m a σ n b g ab , C mnk = σ m a σ n b σ k c C abc and compose As before, C-shift turning m AB to m ′ AB can be understood as a generalised vielbein with the following generalised fluxes Although here we restrict ourselves to the case of SL(5) ExFT for simplicity, the first two steps of the procedure have straightforward generalisation to higher U-duality group simply by including more winding coordinates. In contrast, the last step appears to be much more restricted for the 10D: Figure 2: Relationship between backgrounds with spectator fields upon the non-abelian U-duality procedure. Here taking a IIB section represents an uplift of three T-dualities with further reduction to 10 dimensions. In this case the bottom line represents the usual non-abelian T-duality. SL(5) theory than for theories with more winding directions. As we show below, at least for group manifolds full dualisation of all four coordinates is possible only when at least one of the coordinates is an abelian isometry, i.e. corresponds to a spectator field. We will conclude that the described procedure for the SL (5) ExFT is always an uplift of an NATD transformation. Similar observations based on the construction of exceptional Drinfled algebras for the SL(5) theory have been made in [29]. Schematically, this is illustrated on Fig. 2. Geometrically such defined Drinfeld double can be realised by choosing a maximally isotropic subalgebra say g to be a "physical" subalgebra. Group element g = exp x a T a of the corresponding Lie group G defined by generators of the physical subalgebra will define left-invariant 1-forms σ = g −1 dg on the group manifold. In this setup a non-abelian T-duality corresponds to transfer the rope of the physical subalgebra to the dual algebrag and constructing space-time 1-forms from group element g = exp x aT a . Note, that here x a is a physical coordinate and not further T-duality is required. [30] for more details) One notices, that according to the B-shift+T-dualities procedure, one has to replace all winding coordinates by their geometric partners, which can be done in a unique way for O(d, d) theory (for groups-manifolds that are not a product of Lie groups). This seems to be in tension with the Poisson-Lie T-plurality picture, where a given Drinfeld double can be decomposed into a set of more than two Manin triples [44]. Backgrounds corresponding to such Manin triples generate the same Drinfeld double and hence are indistinguishable from the point of view of the two-dimensional sigma model. Examples of such backgrounds can be found in [22]. In the O(d, d) language Poisson-Lie T-plurality corresponds to performing a rotation by an O(d, d) matrix C A B , preserving the Drinfeld double, which in particular can be a set of d reflections [30]. This latter case is precisely the transformation, that turns all winding coordinates into geometric ones. Hence, in all other cases one would expect backgrounds, which do not solve equations of motion of normal supergravity due to remaining dependence on winding coordinates. Indeed, as has been shown on explicit examples in [22] such procedure in particular gives solutions of generalised supergravity equations. More generally, one always ends up with a DFT background. Simply speakin, equivalent gl(d) embeddings into o(d, d) can be obtained from a given one by O(d, d) rotations and by the external automorphism of the algebra. 
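As a schematic summary of the scheme just outlined (conventions only as far as they can be inferred here; the precise weight assignments are spelled out in the next part of the text), in the SL(5) theory the extended coordinates sit in the 10, which under the geometric GL(4) splits into four geometric coordinates and six membrane-winding coordinates, and the C-shift is the direct analogue of the B-shift:

\[
\mathbf{10}\ \longrightarrow\ \mathbf{4}\oplus\mathbf{6},
\qquad
X^{M}\sim\bigl(y^{m},\,\tilde y_{mn}\bigr),
\qquad
\Delta C_{abc}=-3\,\tilde y_{d[a}\,f_{bc]}{}^{d}.
\]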
More generally a Poisson-Lie T-duality is a constant transformation of the generators T Only the latter turns the fundamental of a given embedding of gl(d) into the antifundamental of the dual embedding. Crucial here is that no weight belongs to both these representations, which is apparent for the o(d, d) algebra but is not always true for symmetry algebras of exceptional field theories. To conclude, one starts with an irrep R 1 of the abelian T(U)-duality group in which extended coordinates transform. Upon an embedding of the geometric GL(d) subgroup this decomposes into where d corresponds to geometric coordinates and ellipses denotes irreps under which winding coordinates transform. Now, one considers a different embedding of the geometric GL(d) where d ′ is the fundamental of GL(d) none of whose weights inside R 1 coincide with any of the weights of d. Let us provide more details for U-duality groups SL (5), where this cannot be done, and SO (5,5), that can be shown to allow 11-dimensional NAUD. U-duality and exceptional Drinfeld algebras We start with the set of the simple roots of the Lie algebra sl(5) in the canonical ω-basis of fundamental weights are given by the following In addition, one has the same number of negative roots and four Cartan generators. Weight diagram of the fundamental representation 5 of sl(5) is depicted on Fig. 4, where µ 1 , . . . , µ 5 denote basis vectors. Notations for the simple root of the algebra are chosen in such a way that, say the root α 12 sends the weight vector µ 1 to µ 2 . Or, equivalently, the exponent exp(ωα 12 ) acts by SL(2) rotations on the plane (µ 1 , µ 2 ). It is important to note, that the weights µ 2 , µ 3 , µ 4 belong to a 4 for both of the decompositions, while one of µ 1 , µ 5 becomes a singlet. Following the analogy with the NATD one is interested in embeddings of the physical gl (4) subalgebra related by the external automorphism. In particular for the SL(5) theory we are interested in decomposing the 10 of sl(5) upon two embeddings of gl (4), that are shown in Fig. 5. Consider first the decomposition corresponding to deleting the root α 45 (cutting blue arrows). In this case weight vectors X 5a with a = 1, . . . , 4 belong to the 4 of gl(4) while the rest X ab belong to the 6. In the ExFT language, the former get identified with geometric coordinates, while the latter represent winding modes. Now, according to the procedure of NAUD described above, one need to find such a different embedding of gl (4), that all weights contributed to the irrep governing geometric coordinates of the Figure 5: Weight diagram of the 10 of sl(5) with two possible embeddings of the gl(4) subalgebra. first embedding belong to that governing winding modes. Explicitly, all weights from the old 4 must belong to the new 6, which is impossible, according to Fig. 5. Indeed, suppose one starts with four left-invariant 1-forms σ a , that depend on four coordinates on the (unimodular) group manifold x 1 , x 2 , x 3 , x 4 . Next one constructs a background with flat metric and C-field given by C abc = −3ỹ d[a f bc] d withỹ ab = 1/2ǫ abcd X cd being coordinates along winding directions. This has been shown to solve equations of motion of ExFT, however to end up with an ordinary supergravity solution one has introduce such turn allỹ ab into geometric coordinates such that all X 5a become non-geometric. From Fig. 
5 one concludes that the automorphism acts by interchanging indices 1 ←→ 5 upon which the directions X 25 , X 35 , X 45 become non-geometric since belong to the new 6, while X 15 belongs to the new 4 and hence must be thought of as a geometric direction. According to the speculative discussion around (2.12) these directions correspond to 1-forms, rather than coordinates and hence the forms σ 2 , σ 3 , σ 4 must be thought of as "non-geometric" while σ 1 as a "geometric". It is suggestive to understand a (non-)geometric 1-form components as those, which depend on (non-)geometric coordinates. Unless dσ 1 = 0, one ends up with a contradiction, when the same set of coordinates on which σ a depend should be understood as non-geometric and as geometric at the same time. The above conditions can be fulfilled when T 1 commutes trivially with the rest three generators. In this case dσ 1 = 0 and it can be chosen to depend say on x 1 one which the other 1-forms σ α with α = 2, 3, 4 do not depend. Indeed, otherwise dσ α would give σ 1 on the RHS generating non-vanishing f 1a α . One concludes, that the described procedure applied to a 4-dimensional group manifold provides a solution of supergravity equations of motion only when at least one spectator field is included. This is in consistency with observations made in [29]. Another option would be to generalise the notion of T-plurality to the case of non-abelian U-duality. From the DFT point of view T-plurality generates backgrounds with dependence on dual coordinates, which in particular cases solve generalised supergravity equations. However, no generalised supergravity extension to 11 dimensions is known, and moreover this is widely accepted to not exist. More strict and rigorous formulation of these points is required and it is tempting to believe that this can be achieved in the formalism of DFT WZW [16,45,46].More detailed investigation of such formulations is reserved for future work. Consider now more fruitful case of five dimensions and U-duality algebra so (5,5). Its Dynkin diagrams with two possible deletions of simple roots giving gl (5) is depicted on Fig. 6. This has three simple roots generating vector representation, antisymmetric tensor of second and third rang representations and two spinorial representations. Now, from the diagram it is clear that upon the first embedding the geometric coordinates (equivalently, "physical" generators of the SO(5,5) EDA) correspond to the weights (X 1 , . . . , X 5 ), while the rest correspond to winding modes. Upon the second embedding the "physical" subaglebra of EDA is spanned by generators corresponding to the weights (X 12 , . . . , X 15 ). One notices, that the two sets of physical coordinates do not intersect and one is able to perform such an SO (5,5) transformation as to shift all 1-forms σ a into the non-geometric set. Equivalently, this demonstrates existence of two possible choices of the "physical" subalgebra inside exceptional Drinfeld algebra with SO(5,5) symmetry, which do not conflict. Discussion In this letter a generalisation of the non-abelian T-duality Buscher rules for 10D supergravity backgrounds to 11D backgrounds has been proposed. suggests to understand NAUD transformation as a switch between two "physical" algebras gl(d) by external automorphism of the corresponding exceptional symmetry algebra. 
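For comparison with the SL(5) case, the relevant branching here is, schematically and with conventions depending on the chirality choice,

\[
\mathbf{16}\ \longrightarrow\ \mathbf{5}\oplus\mathbf{10}\oplus\mathbf{1}
\quad\text{under }\ \mathfrak{gl}(5)\subset\mathfrak{so}(5,5),
\qquad
X^{M}\sim\bigl(y^{m},\ \tilde y_{mn},\ \tilde y_{(5)}\bigr),
\]

with the singlet usually associated to the fivebrane winding; the five geometric coordinates of one embedding can then be mapped entirely into the winding sector of the other, which is the statement made above.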
We show that for the algebra sl(5) such a procedure can generate solutions of conventional supergravity only when a spectator field is present, which is consistent with the observation made in [29]. Investigating the example of the algebra so(5,5), one concludes that larger U-duality symmetry groups allow such non-abelian U-dualisation, and a solution of the equations of motion of 11-dimensional supergravity can be constructed. Investigation of explicit examples based on the SO(5,5) and E6 exceptional Drinfeld algebras is reserved for future work. One is naturally interested in generalising the obtained results to exceptional field theories on general manifolds with isometries along the lines of [27,28,30]. In this case symmetries manifest themselves in the algebra of Killing vectors, which can be used to organise a tri-vector shift, in contrast to the 3-form shift in the present paper [47,48]. This provides tri-vector deformations of 11-dimensional backgrounds, which in certain cases follow the same scheme as in Figure 2. For example, one considers a tri-vector deformation of Minkowski space-time, which in the IIB frame is again Minkowski space-time, while it solves the equations of motion of generalised supergravity in the IIA frame [47]. A more detailed analysis of the relations between deformations and non-abelian dualities is required. Acknowledgements The author thanks I. Bakhmatov, K. Gubarev, E. Malek and N. Sadik Deger for vivid discussions that motivated this project. The author thanks Yuho Sakatani for useful comments and suggestions. This work was supported by the Foundation for the Advancement of Theoretical Physics and Mathematics "BASIS" and by the Russian Ministry of Education and Science (Project 5-100). In part the work was funded by the Russian Government program of competitive growth of Kazan Federal University.
6,885.2
2020-07-02T00:00:00.000
[ "Mathematics" ]
Bootstrapping fermionic rational CFTs with three characters Recently, the modular linear differential equation (MLDE) for level-two congruence subgroups Γθ, Γ0(2) and Γ0(2) of SL2(ℤ) was developed and used to classify the fermionic rational conformal field theories (RCFT). Two character solutions of the second-order fermionic MLDE without poles were found and their corresponding CFTs are identified. Here we extend this analysis to explore the landscape of three character fermionic RCFTs obtained from the third-order fermionic MLDE without poles. Especially, we focus on a class of the fermionic RCFTs whose Neveu-Schwarz sector vacuum character has no free-fermion currents and Ramond sector saturates the bound hR ≥ C24\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$ \frac{C}{24} $$\end{document}, which is the unitarity bound for the supersymmetric case. Most of the solutions can be mapped to characters of the fermionized WZW models. We find the pairs of fermionic CFTs whose characters can be combined to produce K(τ), the character of the c = 12 fermionic CFT for Co0 sporadic group. Introduction and summary The classification of two-dimensional rational conformal field theory(RCFT) has been a longstanding problem. Often, two-dimensional RCFTs are known to possess an extended chiral algebra, beyond the Virasoro algebra, which has been widely used to explored various observables such as the torus partition function and correlation functions. Altogether with the modular invariance of the partition function or the crossing symmetry of the correlation function, the unitary RCFT with c < 1 has been completely classified [1]. However, a full classification of RCFT with c > 1 is still far-fetched. A promising approach to classify RCFT is to utilize the modular property of the characters. Due to the fact that the characters, whose number is assumed to be N , can be regarded as components of a vector-valued modular form of weight-zero, the characters are identified with the independent solutions of an N -th order differential equation which is invariant under SL 2 (Z) [2][3][4]. Based on this observation, significant progress has been made for the classification of two-character and three-character RCFTs [5][6][7][8][9][10][11][12]. JHEP01(2022)089 It is natural to extend the classification problem to the fermionic RCFT. This problem has been recently initiated by present authors with help of the fermionic modular linear differential equation (MLDE) [13]. To introduce a fermionic RCFT, we start by putting the theory on a torus with a spin structure. The torus entails four distinct spin structures that are characterized by the anti-periodic(A) and periodic(P) boundary conditions along the two cycles of torus. In this paper, we denote four spin structures associated with (A,A), (P,A), (A,P) and (P,P) as NS, NS, R and R-sectors. Specifically, the characters of the NS, NS and R sectors are known to form a vector-valued modular form under the congruence subgroups Γ θ , Γ 0 (2) and Γ 0 (2), respectively. Again, N -characters are expected to be identified with the solutions of an N -th order MLDE associated with the congruence subgroups Γ θ , Γ 0 (2) and Γ 0 (2). The main purpose of this paper is to explore the theory space of three-character fermionic RCFT by analyzing the fermionic third-order MLDE. 
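Since the three level-two congruence subgroups govern everything that follows, it may help to recall their standard definitions (the assignment to the NS, NS-tilde and R sectors is the one quoted above):

\[
\Gamma_{0}(2)=\Bigl\{\begin{pmatrix}a&b\\ c&d\end{pmatrix}\in SL_{2}(\mathbb Z)\ \Big|\ c\equiv 0\ (\mathrm{mod}\ 2)\Bigr\},
\qquad
\Gamma^{0}(2)=\Bigl\{\begin{pmatrix}a&b\\ c&d\end{pmatrix}\in SL_{2}(\mathbb Z)\ \Big|\ b\equiv 0\ (\mathrm{mod}\ 2)\Bigr\},
\]
\[
\Gamma_{\theta}=\Bigl\{\gamma\in SL_{2}(\mathbb Z)\ \Big|\ \gamma\equiv\begin{pmatrix}1&0\\ 0&1\end{pmatrix}\ \text{or}\ \begin{pmatrix}0&1\\ 1&0\end{pmatrix}\ (\mathrm{mod}\ 2)\Bigr\}.
\]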
The classification program is closely related to the modular tensor categories (MTC) which encode the algebraic structures of both RCFT and three-dimensional topological quantum field theory(TQFT) [14][15][16]. Apart from the low rank MTCs [17,18], the classification of the low rank fermionic MTCs have been recently studied in [19]. By analyzing the fermionic RCFT with MLDE, we aim to see if our classification arrives at the same consequences as the fermionic MTCs. As the first step, we restrict to the cases where the modular forms involved in the MLDE do not have poles inside the fundamental domain of congruence subgroups. In addition, we focus on the two special types of solutions. The first type solutions can be identified with the fermionic characters constructed from the characters of three-character bosonic RCFTs [9,12,20]. More precisely, we take the tensor product of N -copies of the Majorana-Weyl free fermion and arbitrary three-character bosonic RCFT. Regardless of N , we show that characters of the above fermionic RCFT can be regarded as the solutions of a thirdorder fermionic MLDE. On the other hand, the feature of the second type solution is that it saturates the Ramond sector supersymmetric unitarity bound, i.e., h R ≥ c 24 . For this reason, we refer them as to the BPS solutions and they are listed in table 1. We claim that most of the BPS solutions have relations with the WZW model or its Z 2 orbifold theory. More concretely, the BPS solutions turn out to realize the characters of the fermionized WZW models. Those fermionic RCFTs arose as a consequence of the generalized Jordan-Wigner transformation, which was employed in recent studies [21][22][23]. One exception is the BPS solution with c = 66 5 . It is not clear if this solution is related to any WZW model or its orbifold theory via fermionization. The characters of RCFT are the main ingredients of the partition function. Nevertheless, the method of MLDE does not explicitly tell us how to combine the solutions to construct a modular invariant partition function. In addition to solving the MLDE, we need to know their S-matrix to construct a partition function. To this end, we rewrite the BPS third-order MLDE in terms of the modular λ function and show that the BPS solutions can be expressed in terms of the hypergeometric function in λ. With the closedform solutions, we determine the S-matrix using the monodromy matrix. Such S-matrix enables us to study the fusion rule algebra via the Verlinde formula. JHEP01(2022)089 Class (c, h 1 Sometimes, it has been known that the characters satisfy a novel bilinear relation. For instance, the characters of the Ising model and babymonster CFT are combined to produce J(τ ) [9]. The bilinear relation has been utilized to analyze the monstralizing commutant pair in [24]. It turns out that the solutions of the fermionic MLDE also exhibit bilinear relations for some cases. Especially, we find that the fermionized WZW models with SO(m) 3 for m = 2, 3, 4, 5 and their bilinear pairs span the BPS-type solutions of the fermionic second and third-order MLDEs. Based on these observations, we conjecture that the fermionized WZW models with SO(6) 3 and SO(7) 3 can be considered as the BPS-type solutions of the fermionic fourth-order MLDE. The fermionic RCFT involves the superconformal field theories. Thus, the fermionic MLDE provides a way of classifying the supersymmetric RCFT. The classification of the supersymmetric RCFT based on the WZW algebra has been discussed in [25,26]. 
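Since fusion rules extracted from the S-matrix are used repeatedly below to test candidate solutions, we recall the Verlinde formula in its standard form (quoted here for convenience; the index 0 labels the vacuum character):
$$ N_{ij}{}^{k}=\sum_{m}\frac{S_{im}\,S_{jm}\,\bar{S}_{km}}{S_{0m}}\,. $$
Consistency of a putative S-matrix then amounts to all $N_{ij}{}^{k}$ being non-negative integers, which is the criterion invoked when solutions without a consistent fusion rule algebra are set aside in section 3.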
It would be plausible to explore the supersymmetric RCFT that cannot be mapped to the WZW models via bosonization. To promote fermionic RCFT to the superconformal field theories, their NS-sector vacuum character should possess chiral primary of weight h = 3/2 which can be interpreted as a supersymmetric partner of the stress-energy tensor. Furthermore, the primaries of the R-sector ought to satisfy the unitarity bound h r ≥ c/24 and the R-JHEP01(2022)089 sector partition function would be constant due to the boson-fermion cancellation. The solutions listed in table 1 turn out to fulfill the supersymmetry conditions. 1 Some of them are not listed in [26] since they are related to the WZW models with product groups. This paper is organized as follows. In section 2 we review the construction of MLDE, the bosonization and fermionization of two-dimensional field theory, and the Rademacher expansion applied to the vector-valued modular form. The classification of solutions of the BPS third-order MLDE is provided in section 3. We further discuss the closed-form expression of the BPS solutions and their S-matrix. We focus on the bilinear relation of the BPS solutions in section 4, adopting a view of the deconstruction. The technical details are presented in appendices. Note added. While this work is completed, [27,28] appeared which contain some overlap in the classification of bosonic RCFT with three characters. In this paper, we use the analytic expressions of the solutions to establish the classification. Review on the modular linear differential equations In this subsection, we review an approach of classifying bosonic RCFTs via MLDE [3]. We also discuss an extension of MLDE to the fermionic RCFTs [13] using the congruence subgroup of level two. To illustrate the main idea of the classification scheme, we start by putting a RCFT on a torus. The torus partition function can be regarded as a trace over the Hilbert space of states on S 1 , which can be further decomposed in terms of conformal characters f i (τ ) andf j (τ ), In the second line of (2.1), we decompose H S 1 into irreducible representations V h i , Vh j of the left and right copies of an extended chiral algebra with multiplicity M h,h . The characteristic feature of RCFT is that its partition function consists of finitely many characters f i (τ ). Each character represents an irreducible representations V h i (Vh j ) and is characterized by the conformal weight h i (h j ). It is well-established that the torus has a global diffeomorphism group known as the modular group SL 2 (Z). Thus the torus partition function ought to be modular invariant and the characters should transform linearly under the SL 2 (Z). More concretely, the S and T transformation rules of the characters are given by JHEP01(2022)089 where N denotes the number of characters. The modular matrices S ab and T ab satisfy the relations, where C is the charge conjugation matrix, as well as where h a means the conformal weight of the primary associated to f a (τ ). In other words, the set of characters f a (τ ) forms a vector-valued modular form of weight zero. It has been known that the characters f a (τ ) can be identified with the solutions of a modular linear differential equation (MLDE) of order N [4]. An explicit form of the MLDE is given by where the Ramanujan-Serre derivative is defined as and φ s (τ ) = (−1) N −s W s /W N are modular forms of weight 2N − 2s. 
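For readability, the general MLDE referred to above can be written, in the normalization commonly used in this literature (the overall conventions are an assumption here, chosen to match the stated weights of the coefficient functions), as
$$ \Big[\mathcal{D}^{N}+\sum_{s=0}^{N-1}\phi_{s}(\tau)\,\mathcal{D}^{s}\Big]f(\tau)=0,\qquad \mathcal{D}_{(k)}=\frac{1}{2\pi i}\frac{d}{d\tau}-\frac{k}{12}\,E_{2}(\tau), $$
where $\mathcal{D}^{s}=\mathcal{D}_{(2s-2)}\circ\cdots\circ\mathcal{D}_{(2)}\circ\mathcal{D}_{(0)}$ is the $s$-fold Ramanujan-Serre derivative acting on the weight-zero characters and $E_2$ is the quasi-modular Eisenstein series; in this form $\phi_s(\tau)=(-1)^{N-s}W_s/W_N$ indeed carries weight $2N-2s$.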
Here we define the Wronskian W s as therefore the coefficient function φ s (τ ) can have the pole when W N becomes zero. A useful way to understand the order of poles in φ s is to utilize the valence formula. To phrase it more precisely, let us suppose g(τ ) is a modular form of weight k. We define the order ν p (g) as the leading exponent of the Laurent expansion of modular form g(τ ) near τ = p. For instance, if the Laurent expansion of g(τ ) is given by g(τ ) ∼ (τ − p) n then ν p (g) = n. The valence formula relates the order of g at the various points p in the fundamental domain. An explicit relation among ν p (g) is given by [29,30] where ω = exp( 2πi 3 ). Let us take a modular form g as the Wronskian W N with modular weight k = N (N − 1). 2 Since the leading behavior of the Wronskian in q-expansion is given it is straightforward to see ν i∞ (g) = − N c 24 + a h a . Therefore, the valence formula reads Note that runs for 2, 3, 4, · · · . Since the ring of holomorphic modular forms is generated by Eisenstein series E 4 and E 6 , that vanish at τ = i and τ = ω respectively, one can often deduce the form of W N in terms of E 4 and E 6 once is specified. Next, we turn to constructing the MLDE whose solution can be identified with the characters of fermionic RCFT. We start from the fact that defining fermionic RCFT needs the presence of spin structure. The spin structure arose since one can assign periodic (R) or anti-periodic (NS) boundary conditions along the non-trivial one-cycles. On the torus, there are four possible spin structures (NS,NS), (R,NS), (NS,R) and (R,R) sectors. For the notational convenience, we denote them as NS, NS, R and R, respectively. The first three sectors are transformed into each other by the modular transformation, therefore the partition function of each sector is not invariant under the SL 2 (Z). In fact, the partition functions of NS, NS and R sector are invariant under the level-two congruence subgroups Γ θ , Γ 0 (2) and Γ 0 (2), respectively. The level-two congruence subgroups are defined as follows, (2.12) Their fundamental domains can be chosen as in figure 1. It is necessary to understand the space of modular forms of each congruence subgroup. The modular forms of each subgroup are known to be generated by the Jacobi theta functions as presented below. (2.13) Having classified the basis of modular forms of each congruence subgroups, one can utilize the MLDE (2.5) to search the characters of fermionic RCFT. Let us first restrict our interest to the NS-sector partition function. The relevant MLDE again has a form of However, now the coefficient function φ s (τ ) should be a meromorphic modular form of Γ θ instead of SL 2 (Z). The fermionic MLDE can be written in terms of the basis of modular form, i.e., (2.13). For instance, the structure of the second-order NS-sector MLDE reads where we assumed that the coefficient function does not involve any pole in the fundamental domain. The MLDE for the NS sector or R-sector is immediately obtained by applying T or ST transformation to (2.15). Let us briefly discuss the valence formula and its consequence. For the details, we refer readers to [13]. The valence formula for Γ θ is known to have a form of We now apply (2.16) to the Wronskian W N constructed from f NS i (τ ). 
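As a quick numerical illustration of the theta-function building blocks invoked above, the following plain-Python sketch (using mpmath; the test point τ is an arbitrary choice, not a value from the paper) verifies Jacobi's identity θ_3^4 = θ_2^4 + θ_4^4 and the S-transformation θ_3(−1/τ) = √(−iτ) θ_3(τ), the elementary covariance underlying the Γ_θ-covariance of the NS-sector characters:

import mpmath as mp

mp.mp.dps = 30
tau = mp.mpc(0.31, 1.17)          # arbitrary point in the upper half-plane
nome = mp.exp(1j * mp.pi * tau)   # mpmath's jtheta uses the nome q_theta = e^{i pi tau}

th2 = mp.jtheta(2, 0, nome)
th3 = mp.jtheta(3, 0, nome)
th4 = mp.jtheta(4, 0, nome)

# Jacobi's identity: theta_3^4 = theta_2^4 + theta_4^4
assert abs(th3**4 - th2**4 - th4**4) < mp.mpf('1e-20')

# S-transformation of theta_3: theta_3(-1/tau) = sqrt(-i tau) * theta_3(tau)
tauS = -1 / tau
th3S = mp.jtheta(3, 0, mp.exp(1j * mp.pi * tauS))
assert abs(th3S - mp.sqrt(-1j * tau) * th3) < mp.mpf('1e-18')

print("theta identities verified at tau =", tau)

Analogous checks for θ_2 and θ_4, which are exchanged under S, follow the same pattern.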
One can show that the order of Wronskian at τ = i∞ and τ = 1 is given by Therefore the valence formula leads to the following relation, JHEP01(2022)089 Bosonization and fermionization We briefly review the modern perspective of bosonization and fermionization in twodimensional spacetime. The bosonization is known as the GSO projection and we will refer to fermionization as the generalized Jordan-Wigner transformation following [22,31]. Let us denote a bosonic theory with a non-anomalous Z 2 symmetry as B. To construct a fermionic theory F from B, we first introduce a non-trivial two-dimensional invertible fermionic topological order on the Riemann surface Σ g with spin structure ρ. This theory is known as the Kitaev Majorana chain. Its partition function is described by the mod 2 index that is often referred to as the Arf invariant. The Arf invariant is 1 for even ρ and 0 for odd ρ. An explicit form for the partition function of the Kitaev Majorana chain is, (2.20) When the Kitaev Majorana chain is coupled to the background gauge field T of the fermionic parity (−1) F , the partition function becomes e iπArf[T +ρ] . The idea of fermionization is to couple the bosonic theory B with the Kitaev Majorana chain. Then the corresponding partition function has a form of [S+ρ] , (2.21) where S is the background field for the non-anomalous Z 2 symmetry of B. The next step is gauging the diagonal Z 2 , which amounts to promoting the background field S to a dynamical field s. Then one obtains the partition function of F , where the sum s counts all distinct gauge fields for the Z 2 and the product ∪ denotes the cup product on cohomology classes. The map (2.22) will be called as the generalized Jordan-Wigner transformation. To bosonize a given fermionic theory, we gauge the fermion parity (−1) F so the resulting theory does not depend on the spin structure. After this procedure, the partition function takes the form of where t is a dynamical gauge field for the fermion parity. On the other hand, it is known that gauging the fermion parity yields a Z 2 symmetry in the gauged theory. Thus the partition function Z B should be associated with the background field S. A full expression of the partition function with the background field S is given by One can show that Z B [S] is independent of the spin structure ρ, as desired. JHEP01(2022)089 Let us comment on the Z 2 orbifold theory B = B/Z 2 . We use s for the dynamical gauge field corresponding to the non-anomalous Z 2 symmetry of B. We also use the known fact that the orbifold theory B possesses a quantum Z Q 2 symmetry and denote T as a background field for Z Q 2 . Then, the partition function of B reads The above relation is often called the Kramers-Wannier duality. We also note that the new fermionic theoryF can be obtained from the fermionic theory F , by attaching Z KM to the partition function of F . More precisely, the partition function ofF is given by (2.26) Let us illustrate an explicit application of the generalized Jordan-Wigner transformation on Σ g = T 2 . We start with the bosonic theory B having a non-anomalous Z 2 symmetry. Its Hilbert space can be decomposed into the untwisted sector(H u ) and twisted sector(H t ). Since we choose Σ g = T 2 , there are four distinct gauge fields for the discrete symmetry. Let us define the partition functions for four distinct configurations as follows, Here g is a group element of the discrete group of our interest. 
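For orientation, on T^2 the map (2.22) reduces to finite linear combinations of the four twisted partition functions just introduced. In one commonly used convention (the assignment of the individual combinations to the NS, ÑS and R sectors depends on which cycle is taken as time, so this should be read as a schematic template rather than as the paper's precise expressions below):
$$ Z_F[\mathrm{NS}]=\tfrac12\big(Z_{(1,1)}+Z_{(g,1)}+Z_{(1,g)}-Z_{(g,g)}\big),\qquad Z_F[\widetilde{\mathrm{NS}}]=\tfrac12\big(Z_{(1,1)}+Z_{(g,1)}-Z_{(1,g)}+Z_{(g,g)}\big), $$
$$ Z_F[\mathrm{R}]=\tfrac12\big(Z_{(1,1)}-Z_{(g,1)}+Z_{(1,g)}+Z_{(g,g)}\big), $$
i.e., each spin-structure partition function is half the sum of the four twisted partition functions with a single relative sign dictated by the Arf invariant.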
Z (g,h) denote the partition function with a topological line defect L g and L g inserted along the spatial direction and time direction, respectively. From (2.25), it is straightforward to check that the partition function of an orbifold theory B can be expressed as a sum of the partition functions listed in (2.27). Explicitly, we have After applying the Jordan-Wigner transformation to the partition function of B, the transformation formula (2.22) suggests that the partition function of each spin structure can be expressed as, , (2.29) JHEP01(2022)089 Once we compute Z (g, 1) , the other partition functions Z (1,g) and Z (g,g) are followed by applying modular transformations to Z (g, 1) . Therefore, the main challenge here is to compute the Z (g,1) with insertion of a certain line defect L g . In a rational CFT, a topological line defect L g can be realized as a Verlinde line operator. The Verlinde line operators are in one-to-one correspondence with chiral primaries, and preserve the left and right chiral algebra separately We denote by L h i a Verlinde line associated with a primary |φ i of conformal weight h i . Its action on a primary state φ k is then given by where S i,k denotes the (i, k) entry of the S-matrix. When the eigenvalues of L h are either 1 or −1, it can be regarded as a Z 2 symmetry generator. In the case of solvable models such as the WZW models, it is easy to compute Z (g,1) since the S-matrix is well-known. Rademacher expansion Let us introduce an effective technical tool to compute the characters of a given bosonic RCFT. The main idea is originated from the work of the Rademacher [32], who showed that the exact Fourier coefficients of any modular form of non-positive weight can be determined by the singular terms and modular covariance. Here we present the Fourier coefficients of the vector-valued modular form by exploiting Rademacher's idea. An application of the Rademacher expansion to the vector-valued modular form has been discussed in the literature, e.g., [24,33]. Let us consider a weight-zero vector-valued modular form f µ (q), whose series expansion is given by where q = e 2πiτ = e −β+2πi r s and p µ denote the leading exponent of f µ (q). 3 The characters of bosonic RCFT f µ (q) transform under the SL 2 (Z) with a unitary n-dimensional representation M (γ) : SL 2 (Z) → GL(n, C). More explicitly, where γ denotes a group element of SL 2 (Z). We also use the notation q = e 2πiτ = e − 4π 2 βs 2 +2πi a s . The Fourier coefficient F µ (n) can be read off from the contour integral. An explicit formula for the Fourier coefficient F µ (n) is given by (2.33) 3 Here, our notation for an action of the modular group on τ is τ → aτ +b sτ −r , ar + bs = −1, where a, b, s, r are integers and (r, s) = 1. JHEP01(2022)089 In the above formula, K (s) denotes the Kloosterman sum, which is defined as e −2πi r s (pµ+m) e 2πi a s (n+pµ) , (2.34) and I α (z) is the modified Bessel function of the first kind, (2.35) Clearly, the input data are the singular part of f µ (q) satisfying m + p µ < 0 and the representation M (γ). Classification The main goal of this section is to explore the solution space of the fermionic third-order MLDEs and identify the solutions with the characters of a certain fermionic RCFT. The general fermionic 3rd order MLDE for each spin structure is given as following: where e 2 , e 2 and e 2 are weight two modular forms of Γ θ , Γ 0 (2), and Γ 0 (2), respectively and given below: The solutions of the above fermionic MLDE can be expressed in q-series. 
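To make the Rademacher machinery above more concrete, the sketch below evaluates the classical Kloosterman sum S(m, n; c) directly from its definition; the phase-decorated sum K^(s) in (2.34) is the vector-valued refinement of this object, so this illustrates the basic ingredient rather than the paper's exact formula. The particular arguments (2, 5; 13) are arbitrary test values.

from math import gcd
import cmath

def kloosterman(m, n, c):
    """Classical Kloosterman sum S(m, n; c) = sum over d coprime to c of
    exp(2*pi*i*(m*d + n*d_inv)/c), with d_inv the inverse of d mod c."""
    total = 0.0 + 0.0j
    for d in range(1, c):
        if gcd(d, c) != 1:
            continue
        d_inv = pow(d, -1, c)
        total += cmath.exp(2j * cmath.pi * (m * d + n * d_inv) / c)
    return total

# Basic sanity checks: the sum is real and symmetric in (m, n).
s1 = kloosterman(2, 5, 13)
s2 = kloosterman(5, 2, 13)
assert abs(s1.imag) < 1e-9 and abs(s1 - s2) < 1e-9
# Weil bound |S(m, n; p)| <= 2*sqrt(p) for prime p not dividing m, n
assert abs(s1) <= 2 * 13 ** 0.5 + 1e-9
print("S(2,5;13) =", round(s1.real, 6))

Combined with the modified Bessel factor I_α(z), such sums reproduce the Fourier coefficients from the polar terms alone, which is how the first few coefficients of the dual characters are obtained for m ≥ 6 in section 4.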
For example, the characters on NS sector of fermionic CFT with central charge c and conformal weights h ns 1 , h ns 2 of NS-sector primaries other than the vacuum would have the following expansion; We will focus on the two special cases. First, we find that the infinitely many solutions of (3.1) can be constructed from the characters of the three-character bosonic RCFTs. More precisely, we consider the tensor product between the three characters of bosonic RCFT and N -copies of the Majorana-Weyl fermion. Based on the known classification of the bosonic RCFT with three-characters [9,12,20], we claim that there are infinitely many solutions of (3.1). JHEP01(2022)089 Second, we consider the special case of µ 5 = − 1 4 µ 4 . We refer to this case as the BPS equation since there is an R-sector primary saturating the unitarity bound h R = c 24 . In addition, we require that there is no free fermion contribition and so the coefficient a 1 of q 1 2 in the vacuum character (3.3) of the NS sector vanishes. We provide a full classification of this "BPS fermionic MLDE" and identify them with the WZW models via the generalized Jordan-Wigner transformation. Fermionic solutions from the bosonic MLDE Bosonic third-order MLDE. Let us make a few comments on the classification of bosonic RCFT with three characters. It has been known that there are finitely many bosonic RCFTs with = 0 having no Kac-Moody algebra [7,9]. More recently, the classification has been extended to the case where the theory involves Kac-Moody currents [12,20]. In addition to the solutions listed in [12,20], we further find a few more unitary solutions that have positive integer coefficients in the q-expansion. Let us start with the third-order MLDE with = 0 whose general form is given by By introducing a parameter L = 12 3 j and θ L = L d dL , the third-order differential equation (3.4) can be recast as [12,34], with help of the identity q dL dq = E 6 E 4 L. Utilizing the indicial equation of (3.4), one can express the coefficients µ 1 and µ 2 by c, h 1 , h 2 as follows, 4 (3.6) After plugging in (3.6) into (3.5), the differential equation becomes (3.7) which takes the form of a hypergeometric equation given by [35]. As far as the difference between any two of conformal weights is not integer, three independent solutions of (3.7) JHEP01(2022)089 are given by The overall normalization is tuned to provide integer coefficients in q-expansion. Now the classification can be done by exploiting the fact that any characters of unitary RCFT should possess non-negative integer coefficients in q-expansion. 5 Let us note that the coefficient of the linear term of the vacuum character, which will be denoted as a 1 , can be expressed in terms of c and h 1 . The idea is to consider rational variable c = m 1 /m 2 and non-negative integer a 1 as two free parameters. By running the non-negative integers m 1 , m 2 and a 1 in the following range, we search for the case of the Fourier coefficients of the characters exhibit non-negative integers. The list of solutions are presented in table 4 of appendix B. In addition to the solutions reported in [9,12,20], we further find 11 unitary solutions of (3.4) that admit non-negative integer coefficients in q-expansion. The new solutions are can be found in table 5. We remark that some of the new solutions can be identified with characters of the WZW models. For instance, the solution number 84 of table 5 corresponds to characters of the WZW model for (SU(3) 1 ) 2 . 
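The change of variable L = 12^3/j used above hinges on the identity q dL/dq = (E_6/E_4) L. The short power-series check below verifies it order by order from the standard q-expansions of E_4, E_6 and Δ (plain Python with exact rational arithmetic; the truncation order N is an arbitrary illustrative choice):

from fractions import Fraction

N = 16  # number of q-coefficients kept; arbitrary truncation

def sigma(k, n):
    return sum(d**k for d in range(1, n + 1) if n % d == 0)

def mul(a, b):
    c = [Fraction(0)] * N
    for i, ai in enumerate(a):
        if ai == 0:
            continue
        for j, bj in enumerate(b):
            if i + j < N:
                c[i + j] += ai * bj
    return c

def inv(a):
    # multiplicative inverse of a power series with a[0] != 0
    b = [Fraction(0)] * N
    b[0] = Fraction(1) / a[0]
    for n in range(1, N):
        b[n] = -b[0] * sum(a[k] * b[n - k] for k in range(1, n + 1))
    return b

E4 = [Fraction(1)] + [Fraction(240 * sigma(3, n)) for n in range(1, N)]
E6 = [Fraction(1)] + [Fraction(-504 * sigma(5, n)) for n in range(1, N)]
E4cubed = mul(mul(E4, E4), E4)
Delta = [(x - y) / 1728 for x, y in zip(E4cubed, mul(E6, E6))]
assert Delta[0] == 0 and Delta[1] == 1   # Delta = q - 24 q^2 + ...

# L = 1728/j = 1728 * Delta / E4^3 = q * M, with M a series of constant term 1728
M = mul([Fraction(1728) * d for d in Delta[1:]] + [Fraction(0)], inv(E4cubed))

lhs = [n * (M[n - 1] if n >= 1 else Fraction(0)) for n in range(N)]   # q dL/dq
rhs_core = mul(mul(E6, inv(E4)), M)                                   # (E6/E4) * (L/q)
rhs = [Fraction(0)] + rhs_core[:N - 1]                                # (E6/E4) * L
assert lhs == rhs
print("q dL/dq = (E6/E4) L verified to order q^%d" % (N - 1))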
One can show that the solutions number 84 and 85 satisfy a bilinear relation of the form and use it to guess an identification of the solution number 85. The right-hand side of the above bilinear relation can be interpreted as the partition function of c = 24 self-dual CFT [36]. More precisely, three self-dual theories number 24, 26, 27 of Schelleken's list that are associated with the algebra A 12 2,1 , A 2 5,2 C 2,1 A 2 2,1 , A 2 8,3 C 2,1 A 2 2,1 can admit the WZW model for (SU(3) 1 ) 2 in the bilinear relation. We propose that the seventh solution has a relation with the algebra A 10 2,1 . As a consistent check, we find that the vacuum solution f It would be interesting to check if the Z N orbifold applied to the (SU(3) 1 ) 10 WZW model can leads to the three-character fermionic RCFT with f JHEP01(2022)089 We next focus on the solution number 87 with (c, h 1 , h 2 ) = (12, 2/3, 4/3). It is noteworthy that the solution number 87 forms a self-dual relation, namely (3.12) We interpret j(τ ) + 312 as the partition function of self-dual theory number 58 of the Schelleken's list. Since an algebra of it is given by (E 6,1 ) 4 , the solution f Fermionic solutions. Our next target is to analyze the solutions of (3.1) that are constructed from the characters of the above bosonic RCFTs. To this end, we take the tensor product of N copies of the Majorana-Weyl fermions to the three characters of bosonic RCFT. The characters of the individual spin structures take the form of where the characters of the Majorana-Weyl fermions are given by (3.14) As an illustrative example, let us consider the characters of the babyMonster CFT with c = 47/2. The three bosonic characters are known to have the following q-expansion [9], 6 f B 0 (q) = q −47/48 + 96256q 49/48 + 9646891q 97/48 + 366845011q 145/48 + · · · , f B 1 (q) = 4371q 25/48 + 1143745q 73/48 + 64680601q 121/48 + 1829005611q 169/48 + · · · , f B 2 (q) = 96256q 23/24 + 10602496q 47/24 + 420831232q 71/24 + · · · , (3.15) and tensoring them with the one Majorana-Weyl fermion yields the following characters, (3.16) It is easy to see that the characters (3.16) solve the fermionic MLDE (3.1) with We further remark that tensoring the bosonic theory with the arbitrary number N of the Majorana-Weyl fermion provides the solutions of (3.1). To see this point, let us revisit the valence formula. From (3.13), one can see that the central charge c F and conformal weights of the tensored theory are given by JHEP01(2022)089 and therefore the valence formula reads In the last equation, we use the Valence formula of the bosonic RCFT with three characters and = 0. Equation (3.19) shows that the tensored theory with an arbitrary number of Majorana-Weyl fermions arose as the solutions of a holomorphic fermionic MLDE with = 0. The third-order MLDE of BPS type For this type of solutions with µ 5 = − 1 4 µ 4 , (3.1) becomes (3.20) The differential equations for NS and R-sector are obtained by taking T and T S transformation to the NS-sector MLDE. In other words, the solutions of both NS and R-sectors can be obtained from the NS-sector solutions. For this reason, we mainly focus on the NS-sector solutions with series expansion (3.3). 
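For concreteness, the NS-sector character of a single Majorana-Weyl fermion entering the tensor products discussed above is, in the standard normalization (this explicit product form is the textbook expression and is assumed here rather than copied from (3.14)), ψ_NS(q) = q^(−1/48) ∏_{n≥1}(1 + q^(n−1/2)). The sketch below generates its expansion coefficients, which are what multiply bosonic characters such as (3.15); the cutoff K is arbitrary:

K = 8                 # keep terms up to q^K; with x = q^(1/2) this means degree 2K
deg = 2 * K
coeff = [0] * (deg + 1)
coeff[0] = 1
# multiply out prod_{n>=1} (1 + x^(2n-1)), i.e. distinct odd parts in the x-variable
for n in range(1, deg + 1):
    e = 2 * n - 1
    if e > deg:
        break
    for i in range(deg, e - 1, -1):
        coeff[i] += coeff[i - e]

# coeff[i] multiplies q^(i/2) inside psi_NS(q) = q^(-1/48) * sum_i coeff[i] q^(i/2)
print([(i / 2, c) for i, c in enumerate(coeff) if c])

Tensoring N copies of the free fermion simply raises this series to the N-th power before it multiplies the bosonic characters.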
By using the MLDE in the NS-sector, we can fix the coefficients µ i of (3.20) in terms of c, h ns 1 and h ns 2 and the condition a 1 = 0 of the NS vacuum character (3.3) as follows; , (3.21) Since the differential equation (3.20) is invariant under the action of the congruence subgroup Γ θ , the NS-sector conformal characters f NS i (τ ) form a vector-valued modular form under Γ θ . Although the most generic third-order MLDE (3.1) for Γ θ with˜ = 0 needs two parameters µ 4 , µ 5 for the weight-six coefficient function, we require the characters of the R-sector primaries have conformal weights satisfy the unitarity bound. To see this, we note that the saturation of the unitarity bound requires the MLDE for the R-sector, which is associated with the principal Hecke subgroup of level two, Γ 0 (2), to have weight-six coefficient functions vanishing at the cusp at infinity which happens when µ 5 = − 1 4 µ 4 . It JHEP01(2022)089 is easy to show that the weight-six coefficient function of (3.20) maps to the Γ 0 (2) modular form that vanishes at τ = i∞. More concretely, we have and actually the image under T S transformation is the only weight-six modular form of Γ 0 (2) that vanishes at τ = i∞. As h r 0 = c 24 and h r 1,2 > c 24 , we refer to (3.20) as the BPS third-order MLDE, in what follows. With help of the R-sector MLDE (3.20), we can find the conformal weights h r 1,2 of two other primaries in the R-sector in terms of c, h ns 1,2 . Explicitly, we find that h R i have the form of and of course they are consistent with the valence formula. further. The main goal of this section is to classify the solutions of the BPS third-order MLDE. To this end, let us first find the closed-form expression for the solutions of (3.20). We start with recasting the BPS third-order MLDE (3.20) in terms of the modular λ(τ ) function as follows. Here, the modular lambda function is defined in terms of the Jacobi theta functions by . (3.25) In addition, the superscript α in (3.24) denote the spin structure, namely α ∈ {NS, NS, R}. For α = NS, we choose the coefficients as to identify (3.24) and NS-sector differential equation of (3.20). For the other spin structures, the coefficients are given by Note that the coefficients (3.27) and (3.28) are obtained from (3.26) by applying aforementioned T and T S transformations of the λ variable Therefore, it is sufficient to focus on the analytic solution of the NS-sector. For the technical convenience, let us write the characters as and introduce a new variable z := 4λ(1 − λ). In terms of the z variable, one can rewrite (3.24) as the following ordinary differential equation, The solution of (3.31) can be qualified as the hypergeometric function of order three, namely 3 F 2 function. It turns out that the behavior of the ODE (3.31) depends on whether one of the NS-sector conformal weight is equal to 1/2 or not. If none of the conformal weights of primaries equals 1/2, the solutions of (3.31) have the form of where the hat meaning omission of the 1 ≡ 1 + β i − β i . The coefficients of the analytic solutions (3.33) are expressed in terms of the central charge and NS-sector conformal weights as follows. (3.34) JHEP01(2022)089 The S-matrix of the characters can be obtained with help of the monodromy of the hypergeometric function. We find that an element of S-matrix is given by where detailed derivation is presented in the appendix A. Suppose the NS-sector involves the primary of weight h NS = 1/2. 
In that case, the characters are expressed as the regular hypergeometric function, and the S-matrix for the above characters is given by (3.37) Having constructed the closed-form solutions (3.33) and (3.36), we are now ready to make a classification on the BPS third-order MLDE. The strategy is simple: we find the rational parameters c, h NS 1 and h NS 2 that allow the characters (3.33) and (3.36) to have non-negative coefficients in q-expansion. To this end, the free parameters are set to be the rational numbers of non-negative integers m i , n i , l i , and we search if there are integers (m i , n i , l i ) in the range of that allow characters to have non-negative integer coefficients in the q-expansion. The exhaustive list of solutions is given in table 2, where we divide them into four classes for illustration purposes. We refer the reader to the appendix C for the complete list of characters of the NS and R-sector. We present below the solutions of classes I, II, and III with details. We will not make comments on class IV separately, since their S-matrix cannot provide a consistent fusion rule algebra. 3 16 , 15 16 , 9 16 ) , (5, 1 3 , 1 2 , 5 24 , 7 8 , 5 8 ), 5 16 , 15 16 , 9 16 ), ( 42 5 , 3 5 , 7 10 , 7 20 , 19 20 , 3 4 ) , (10, 2 3 , 5 6 , 5 12 , 13 12 , 3 4 ) 13 16 , 33 16 , 23 16 ), ( 66 5 , 4 5 , 11 10 , 11 20 , 27 20 , 3 4 ), (22, 5 6 , 5 3 , 11 12 , 9 4 , 19 12 ) [13,26]. The solutions in class IV cannot have a consistent fusion rule algebra. Identification of Class I solutions Having classified the solutions of the BPS third-order MLDE, our next goal is to verify the solutions listed in table 2. Especially, it turns out that the class I solutions of table 2 are related to the (SO(m) 1 ) 3 WZW model, after the generalized Jordan-Wigner transformation is applied. and their S-transformation rule is given by JHEP01(2022)089 for m odd and (3.42) for m even. From the above S-matrices, it is straightforward to read off the form of diagonal modular invariant partition functions, for positive integer m ≥ 2. When m = 1, (3.43) reproduces partition function of the Ising model. We remark that the SO(m) 1 WZW models possess the Verlinde line L h= 1 2 in common, which is associated with Z 2 symmetry. Then, an application of the generalized Jordan-Wigner transformation yields a tensor product of m copies of the Majorana-Weyl free fermions. This is the reason why characters of the SO(m) 1 WZW models (3.41) and (3.42) can be decomposed by the holomorphic partition function of the Majorana-Weyl free fermions. Now let us consider the triple product of the level-one WZW model for SO(m). A diagonal partition function of triple product theory is readily followed from (3.43). The triple product theory involves the Verlinde lines L h=1/2 , L h=1 and L h=3/2 that are associated with the Z 2 symmetry. The action of Z 2 is easy to obtain from the S-matrix, see the formula (2.30). Of particular interest here is an application of fermionization with the Verlinde line L h=3/2 . After some computation, we find that the NS-sector partition function of the fermionized theory is given by correspond to the characters of primaries with h = 0, 1 2 , m 8 . In terms of the characters of SO(m) 1 WZW model, they are expressed as follows, for m ≥ 2 and their S-transformation rule reads On the one hand, one can read off the partition functions of different spin structures using the generalized Jordan-Wigner transformation. 
The R-sector partition function is contributed by the following three characters, JHEP01(2022)089 while theR-sector partition function is constant. Based on the fact that the characters (3.45) and (3.47) agree with the class I solutions of the BPS second-order modular differential equations, we conclude that the class I solutions are identified with the fermionized (SO(m) 1 ) 3 theory. Let us make side comments on the fusion rule algebra of the NS-sector. It turns out that the NS-sector of the fermionized (SO(m) 1 ) 3 theories has the consistent fusion rule algebra. To see this, we consider an extended S-matrix which acts on the vector-valued modular form One can show that the above extended S-matrix provides a consistent fusion rule algebra. As a second remark, we note that the fermionized (SO(m) 1 ) 3 WZW models with c ≤ 24 satisfies a special relations of the form (3.50) The degenerated case with m = 8 is the (SO(8) 1 ) 3 theory of c = 12. In this case, the bilinear relation (3.50) simply reduced to f (8) , which indicate that two characters of the fermionized (SO(8) 1 ) 3 WZW models are combined to produce K(τ ). We observe that similar happens for m = 16. More precisely, the linear combinations of two characters f (16) Comparison with simple current. Let us now make some comments on the connection to the so-called simple current (or bonus symmetry in the terminology of [38]). Readers can see [38][39][40] for more detailed discussions. A simple current, by definition, means a primary field J whose fusion with any other primaries φ i takes a simple form: (3.51) On the right-hand side, there is only a single primary field denoted by φ J(i) . Therefore the fusion coefficient reads N k J,i = δ k,J(i) . For the purpose of this paper, we further assume that [J] fusing with itself gives the identity. In other words, the fusion product with [J] is JHEP01(2022)089 a Z 2 automorphism of all conformal primaries by virtue of the associativity of the OPE. [J] organizes the primaries into orbits of length l i = 2/p where p is 1 or 2, (3.52) For simplicity, we abbreviate the fields φ J α (i) in this orbit as (α, i). We can also define a monodromy charge associated with φ i in terms of their weights, It is not difficult to show that all the elements in a given orbit share the same monodromy charge, and as a corollary, the conformal weight of J is either an integer or a half-integer. The existence of a simple current enables us to construct a non-diagonal partition function out of the diagonal one [40], It can be shown that the above partition function is always invariant under the modular S transformation. Moreover, when h(J) ∈ Z, since the monodromy charge Q(φ i ) is integral, the difference between h(φ i ) and h(φ J(i) ) is always an integer. So (3.54) is further invariant under T transformation hence under the full SL 2 (Z). As an example, this corresponds to the orbifold construction using the Z 2 symmetry generator with h = 1 in the (SO(m) 1 ) 3 theory. On the other hand, if h(J) is a half-integer which we may call a fermionic simple current, the difference between h(φ i ) and h(φ J(i) ) is always half-integral. In this case (3.54) is only invariant under T 2 transformation, which altogether generates the symmetry group Γ θ . Namely, we actually construct the partition function in the NS sector. The other sectors can also be obtained after S and T transformations. In section 2.2, we already introduce the Verlinde line operator L associated with a conformal primary. 
Using identities of the S-matrix, one can show that each simple current [J] gives rise to an L J that generates a Z 2 symmetry of the underlying CFT. 7 As an example, the fermionization of the (SO(m) 1 ) 3 theory shown above can be partially understood from the simple current perspective. Class I solutions: one-parameter family Among the fermionized (SO(m) 1 ) 3 theory, We find that the one-parameter solutions arise for c = 12, 18, 24 (m = 8, 12, 16). For these three cases, the NS sector involves a primary of h = m 8 , which become half-integers and integers. Therefore, one can combine two independent solutions of h = 1 2 and h = m 8 to produce a vector-valued modular form of two components. JHEP01(2022)089 Let us illustrate the above explicitly. For c = 12, the analytic form of solution involves two free parameters a 2 and b 1 , where the S-transformation of the above characters are governed by following S-matrix, (3.56) To have a consistent fusion rule algebra and non-negative integer coefficients in q-series, two free parameters a 2 and b 1 ought to be restricted as follows, (3.58) Here the q-expansion of g 0 (τ ), g 1 (τ ), g 2 (τ ) are given by (3.59) One can show that the S-matrix of the characters f NS i (τ ) is given by 60) 8 We pause to remark that the vacuum character with a2 = 20 reproduces the Mckay-Thompson series for class 4C. However, a2 = 20 cannot provide the consistent fusion rule algebra. Furthermore, the vacuum character involves negative integer coefficients. JHEP01(2022)089 The parameters m, n are constrained in order to have an extended S-matrix as follows: n < 4096, 12m < n + 4096. (3.62) Unless n ≥ 33044, the vacuum solution g NS 0 (τ ) have the negative coefficients in higher order of q. For n = 49428 and m = 1024, the above solutions can be identified with the characters of the fermionized SO(16) 3 1 WZW model. Identification of Class II solutions In this subsection, we find the relations between class II solutions and the WZW models with help of the generalized Jordan-Wigner transformation. After all, we find that characters of the certain WZW models can be used to express the class II solutions. (N = 2 minimal model) 2 . The NS-sector and R-sector characters of the first unitary N = 2 supersymmetric minimal model are known to solve the BPS second-order MLDE [13]. The NS-sector partition function of the N = 2 minimal model of our interest consists of the two functions where f NS 0 (τ ) and f NS 1 (τ ) correspond to the conformal characters of the vacuum and primary of h = 1 6 , respectively. Let us take a tensor product of the above N = 2 supersymmetric minimal models. The central charge of product theory is two, and the NS-sector partition function involves three characters. Explicitly, the NS-sector partition function has a form of where individual characters g i (τ ) read JHEP01(2022)089 The above three characters solve the NS-sector BPS third-order MLDE. It is easy to check that similar holds for the R-sector solutions. For this reason, we claim that the class II solution of c = 2 can be identified to the tensor product of the first unitary N = 2 supersymmetric minimal model. Fermionization of (SU(2) 3 ) 2 WZW model. Let us discuss the tensor product of two SU(2) 3 WZW models. This product theory is equivalent to the level-three WZW model for SO (4). The central charge of product theory is c = 18/5 and it involves ten primaries of conformal weights Since the center symmetry of SU(2) is Z 2 , the global symmetry of product theory is given by Z 2 × Z 2 . 
We take the Z 2 subgroup of it which is generated by the Verlinde line L h=3/2 . An action of the Verlinde line L h=3/2 can be obtained from the S-matrix, as discussed in the previous section. To analyze the partition function of fermionized theory, we apply the generalized Jordan-Wigner transformation using the Verlinde line L h=3/2 . After some computation, we find that the NS-sector characters are given by Fermionization of (SU(2) 6 ) 2 /Z 2 . Let us first consider the tensor product of two SU(2) 6 WZW models. It is RCFT with c = 9/2 and exhibits non-anomalous Z 2 symmetry that is generated by the Verlinde line L h=3 . The representation of h = 3 has the form of [6; 6] ⊗ [6; 6], which is presented in terms of the Dynkin labels of SU(2) 6 . As a first step, let us take an Z 2 orbifold via the Verlinde line L h=3 . The resulting orbifold partition function is contributed by 13 primaries and especially it involves the representation of conformal weight h = 3/2. In terms of the Dynkin labels of SU(2) 6 , the above representation with h = 3/2 can be written as JHEP01(2022)089 next step is to apply the generalized Jordan-Wigner transformation to find the partition functions of each spin structure. After some computation, we find that the NS-sector characters are given by where χ a 1,6 h (τ ) denote characters of the SU(2) 6 WZW model for the representation of weight h. The characters (3.70) agree with the NS-sector solution of the BPS third-order fermionic MLDE with c = 9/2, therefore we conclude that those solutions are related to the (SU(2) 6 ) 2 theory via fermionization. The Ramond sector characters can be obtained from the fermionization. In terms of the characters of the SU(2) 6 WZW model, it is given by and they are matched with the R-sector solution listed in table 9. We further remark that the R sector partition function becomes a constant, therefore satisfies the SUSY criterion discussed in [26]. Fermionization of Sp(4) 3 WZW model. Here we discuss a fermionization of the Sp(4) 3 WZW model. The central charge of this theory is five and there are ten primaries involving [0; 3, 0] which is a primary of conformal weight h = 3/2. To fermionize the theory, we introduce the Verlinde line L h=3/2 that is associated with a non-anomalous Z 2 symmetry. Applying the generalized Jordan-Wigner transformation, we find that the NS-sector partition function consists of the following three characters, and the above fermionic characters turn out to solve the NS-sector BPS third-order MLDE. Similarly, the R-sector characters read JHEP01(2022)089 Fermionization of Sp(6) 2 WZW model. We repeatedly apply the fermionization to the Sp(6) 2 WZW model. It is an RCFT with c = 7 and includes ten primaries. Especially, we focus on the primary [0; 0, 0, 2] whose conformal weight is h = 3/2. The Verlinde line L h=3/2 has a role of the generator of non-anomalous Z 2 symmetry and this line defect provides us the characters of fermionic WZW models. Utilizing the Jordan-Wigner transformation (2.22), we obtain the following NS-sector characters and R-sector characters The q-expansion of above NS-sector and R-sector characters can be identified with the solutions of the fermionic BPS third-order MLDE with c = 7. Thus we claim the solutions with c = 7 describe the fermionized Sp(6) 2 WZW model. Fermionization of SU(4) 4 /Z 2 . Let us now discuss the identification of solutions with c = 15/2. 
Our goal is to apply fermionization to an orbifold theory SU(4) 4 /Z 2 and show that its characters can be identified with the class II solutions with c = 15/2. The first step is to apply (2.25) to the partition function of SU(4) 4 WZW model, which will be denoted as B. We note that the WZW model of interest involves 35 primary and especially we will pay attention to a primary An orbifold theory B/Z 2 possesses the Verlinde line L h=3/2 and it generates a nonanomalous Z 2 symmetry. We use it to fermionize B/Z 2 . By applying the Jordan-Wigner transformation to (3.76) with L h=3/2 , we find that the characters of NS-sector are given by (τ ), JHEP01(2022)089 Fermionization of (Sp(6) 1 ) 2 . We propose to interpret the solutions with c = 42 5 as the characters of fermionized (Sp(6) 1 ) 2 WZW model. To this end, let us fermionize the tensor product of two Sp(6) 1 WZW models. The product theory possesses Z 2 symmetry generated by the Verlinde line L h= 3 2 . We find that a fermionization yield the NS-sector characters of the form (3.79) and the above characters perfectly matched to the NS-sector BPS solution with c = 42 5 . In a similar way, one can show that the R-sector characters of (Sp(6) 1 ) 2 WZW model agree with the R-sector BPS solution with c = 42 5 . An explicit expression of the R-sector characters are given by Fermionization of (SU(6) 1 ) 2 . Our next goal is to show that the partition function of fermionized (SU(6) 1 ) 2 WZW model can be described by the BPS solution with c = 10. We first note that the level-one WZW model for SU (6) involves six primaries of weights h = 0, 5 12 , 2 3 , 3 4 , 2 3 , 5 12 . Fortunately, in this case, the characters of each representation can be distinguished by their conformal weights. Therefore we denote the characters of SU(6) 1 WZW model as χ a 5,1 h (τ ) where h denote the conformal weight of certain representation. The tensor product theory has 36 primaries and especially it involves a primary of h = 3/2. Indeed, one can show that the Verlinde line for a primary of h = 3/2 is related to the non-anomalous Z 2 symmetry. It turns out that a fermionization with L h=3/2 provides the NS-sector partition function of the form where the q-expansions of three characters appeared in the above partition function are given by JHEP01(2022)089 On the one hand, it is straightforward to check that the generalized Jordan-Wigner transformation provides the following R-sector partition function. Bilinear pairs. Note that the class II solutions involve four pairs whose sum of the central charge is 12. We remark that four bilinear relations of the form Comments on the Class III solutions The class III involves three solutions with c = 39/2, c = 22 and c = 66/5. We remark that the first two solutions can be understood from solutions of the BPS second-order MLDE [13]. The BPS second-order MLDE is known to have solutions with (c = 39/4, h = 3/4) and (c = 11, h = 5/6), it is very natural to expect that their tensor product theories appear as the solutions of the BPS third-order MLDE. Let us discuss with more details. To analyze the solution with c = 39/2, we start with the (Sp(6) 1 ) 2 WZW model. The product theory possesses non-anomalous Z 2 symmetry that is inherited from the center symmetry of Sp (6). Especially, we use the Verlinde line L h=3 to construct the Z 2 orbifold theory. The partition function of orbifold theory consists of 13 We next consider the fermionization of an orbifold theory using the Verlinde line L h=3/2 associated with Z 2 symmetry. 
The NS-sector partition function is now given by We next focus on the solution with c = 22. To identify it, we consider the tensor product of two SU(12) 1 WZW models. The product theory has the Verlinde line L h=3 that generates Z 2 symmetry. With help of the formula (2.25) and the above Verlinde line, it is straightforward to compute the orbifold partition function. We find that partition function of an orbifold theory (SU(12) 1 ) 2 /Z 2 is contributed by 36 primaries involving a primary of weight h = 3/2. Indeed, we check that the Verlinde line L h=3/2 has a role of Z 2 generator. After applying (2.22) to the partition function of an orbifold theory, one can show that the NS-sector partition function is given by where the characters are expressed as follows. Bilinear relations of fermionic RCFT The main idea of Monster deconstruction is to decompose the stress-energy tensor T (z) of Monster CFT into the sum of two stress-energy tensors t 1 (z) and t 2 (z). The Monster deconstruction produces two disjoint CFTs M 1 and M 2 that are associated with t 1 (z) and t 2 (z). By disjoint CFTs we mean every operator in M 1 has regular OPE with any operator in M 2 . It has been known that the process of the above decomposition can be formulated by using the mathematical notion of the commutant subalgebra of Monster VOA. The explicit examples of Monster deconstruction have been discussed in [24,41] with choosing t 1 (z) as the stress-energy tensor of the minimal models or Z k parafermion model. As a consequence, the Monster deconstruction leads us to specific RCFTs which exhibit sporadic groups in their modular invariant partition function. In the previous section, we faced examples of fermionic RCFTs that establish a bilinear relation. Such bilinear relation implies that deconstruction could happen even for the fermionic RCFT. To explore the bilinear relation more, we start from the N = 1 SCFT JHEP01(2022)089 with c = 12. This theory has been constructed with eight bosons compactified on E 8 root lattice and eight free fermions [42]. The free-fermion currents are removed by considering a Z 2 orbifold. From the lattice construction, one can show that the NS sector partition function is given by a modular form of Γ θ which we denote by K(τ ) in what follows, We explore the deconstruction of the above N = 1 SCFT using the level-three WZW models for SO(m) as a basic building block. More precisely, we consider the fermionized SO(m) 3 WZW models that can be obtained by applying (2.22). For notational convenience, we write the fermionized SO(m) 3 WZW model as F A (m) where the superscript A stand for the spin-structure. The next step is to find a dual theory whose partition function deconstructs K(τ ) with the NS-sector partition function of F N S (m). A necessary condition for having a dual pair is c +c = 12, We constrain the dual charactersf N S i (q) of F N S (m) to satisfy a bilinear relation with the characters f N S i (q) of F N S (m). Explicitly, a bilinear relation of our interest is expected to take the form This bilinear relation enables us to explore the S-matrix and characters of dual theory. The central charge of dual theory is obtained by a relation (4.2) and the conformal weights of each primary of dual theory can be deduced from the above bilinear relation. Once the central charge, weights of primaries and S-matrix are known, there are several ways to find the dual charactersf N S i (q). For m ≤ 5, we use the MLDE to determine the dual characters. 
For m ≥ 6, we apply the Rademacher expansion to obtain the first few coefficients off i (q). We summarize the details of F N S (m) and F N S (m) in table 3. The characters of the bilinear pairs F A (m) and F A (m) should be identified with the solutions of the fermionic MLDE. Specifically, the characters of pair theories (c,c) = (1, 11) and (9/4, 39/4) have been discussed in [13] and they can be obtained from the BPS secondorder MLDE. Furthermore, all the class II solutions of the BPS third-order MLDE can be understood with the theories listed in table 3. It is natural to expect that the solutions for m = 6 and m = 7 are associated with the BPS fourth-order MLDE. As a side remark, we comment on the relation between F A (m) and the N = 1 supersymmetric minimal models which describe the unitary supersymmetric RCFTs of c ≤ where m, n ∈ Z and 1 ≤ m < k, 1 ≤ n < k + 2. The NS and R sectors have even and odd values of m − n, respectively. The character formulas for the NS and R algebra are given by [43] χ N S m,n (τ ) = ζ k m,n (q) (4.6) One can use the characters of the product theory m−1 i=1 M(2i + 2) to express the partition functions of the theories F A (m). For instance, we need to consider the tensor product theory M(4) ⊗ M(6) ⊗ M(8) to describe the partition function of the fermionized SO(4) 3 WZW model. As a consistent check, the central charge of the above product theory c = 18 5 agree with that of the SO(4) 3 WZW model. F (2). The level-k U(1) theories are constructed by compactifying the free boson to the circle of radius R = √ k. The partition function is known to be contributed by finitely many theta functions Θ m,k (τ ) for even k. More precisely, the partition function has the form of U(1) 3 and where The partition function cannot be invariant under the T -transformation when k is an odd integer. Instead, the partition function of U(1) theory with odd level is invariant under T 2 : τ → τ + 2 and S : τ → −1/τ , which is the characteristic property of the NS-sector partition function. The partition function of level-three U(1) theory is described by the following functions, where their S-matrix is given by We remark that two functions (4.9) can be expressed by the NS-sector characters of the N = 1 supersymmetric minimal model with c = 1. We refer the reader to [13] for the details. Let us now consider the NS sector partition function of the dual theory F N S (2). This theory involves two primaries of conformal weights h = 0, 5 6 in the NS sector and their characters are given by [26] f N S 0 (q) = q − 11 24 1 + 143q + 924q 3/2 + 4499q 2 + 18084q 5/2 + · · · , f N S 1 (q) = q 3 8 66 + 495q 1/2 + 2718q + 11649q 3/2 + 42174q 2 + · · · . (4.11) One can show that the charactersf N S 0 (q) andf N S 1 (q) satisfy a duality relation of the form The above duality relation implies that the NS sector partition function is given by We notice that the Fourier coefficients of the dual charactersf 0 (q) andf 1 (q) can be decomposed into the dimension of irreducible representations of 3.Suz, the triple cover of (4.14) Using the number decomposition in (4.14), one can compute the twined character referring to the character table of 3.Suz. For instance, the 2A twined characters read (4.15) It turns out that the above 2A twined characters are combined with the characters of U(1) 3 to produce a duality relation We suggest that the right-hand side of duality relation (4.16) can be identified with the modular form 17) and is known as the Mckay-Thompson series of class 4C. 
Now we turn to the characters of NS and R-sector. For the superconformal minimal model with c = 1, the NS and R-sector characters are related to the NS-sector character as follow. SO(3) 3 WZW and F (3). Here we comment on the fermionic theory which is obtained from the SO(3) 3 WZW model, equivalently SU(2) 6 WZW model. An explicit illustration of the generalized Jordan-Wigner transformation applied to the SU(2) 6 WZW model is presented in [26]. As a consequence of the fermionization, the NS-sector partition function is described by the following two functions (4.24) It is known that the above two functions solve the BPS fermionic second-order MLDE with c = 9/4 and h = 1/4 [13]. JHEP01(2022)089 SO(6) 3 WZW and F (6). So far, we discuss the fermionized level-three SO(m) WZW models up to m ≤ 5 and their dual pairs. The characters of the above theories turn out to correspond to the solutions of the BPS second and third-order MLDE. From now on, we consider the level-three SO(m) WZW model with a higher rank. We will show that the partition functions of each spin structure involve four characters for m = 6 and m = 7, and they solve the BPS fourth-order MLDE. We first note that the SO(6) 3 WZW model is equivalent to the SU(4) 3 WZW model, a RCFT with c = 45/7. It involves a primary of h = 3/2 and a line defect L h=3/2 is associated with non-anomalous Z 2 symmetry. Now the fermionization can be done via the generalized Jordan-Wigner transformation, (2.22). We find that the NS-sector partition function consists of the following four functions, where the above functions correspond to the NS-sector characters of the primaries of weight h = 0, 5/14, 4/7, 9/14, respectively. The NS-sector characters (4.33) form a Γ θ invariant object of the form 34) and Z N S can be considered as the NS-sector partition function. Here, the NS-primary of weight h = 9/14 has degeneracy two, therefore 4 × 4 sized S-matrix of (4.33) cannot be the symmetric one. Instead, one can find that the following extended S-matrix acting on the vector-valued modular form 7 is a symmetric matrix, thus can provide a consistent fusion rule algebra. We further notice that the four characters (4.33) can be identified with the solution of a below fourth-order MLDE, = 16α + 4β + γ + q(1536α + 1152β + 480γ) + · · · . (4.37) Therefore, the condition of saturating R-sector unitary bound becomes 16α + 4β + γ = 0. Indeed, it is easy to see that (4.36) satisfy a constraint 16α + 4β + γ = 0. It is straightforward to obtain an expression for the R-sector partition function with help of the (2.22). Likewise the NS-sector, the R-sector partition function involves four primaries. Their characters take the form of h=55/56 = q 5 7 72 + 760q + 4920q 2 + 24120q 3 + · · · , h=39/56 = q 3 7 40 + 600q + 4320q 2 + 22600q 3 + · · · . (4.38) One can see that the conformal weight h R 1 indeed saturates the unitarity bound, as desired. Our next goal is to compute the characters of a dual theory F N S (6) that are combined with (4.33) to produce K(τ ). Therefore, the NS-sector of a dual theory is expected to be contributed from four primaries of weight h N S = 5 14 , 3 7 , 9 14 . The strategy is to look for a bosonic RCFT B that is related to F N S (6) via (2.22 Furthermore, we demand that characters of the above 13 primaries share the same Smatrix with the SU(4) 3 WZW model. Now we demand the characters of dual theory paired with (4.33) to form a bilinear relation We combine the above bilinear relation with (4.29) to find the NS-sector characters of F N S (6). 
As a consequence, we obtaiñ (4.43) SO(7) 3 WZW and F (7). We now focus on the level-three SO (7) Specifically, the Verlinde line with primary of h = 3/2 is associated with the Z 2 symmetry. After applying the generalized Jordan-Wigner transformation, one can find that the NSsector partition function consists of the below four characters, JHEP01(2022)089 On the one hand, the R-sector partition function of the fermionized SO(7) 3 WZW model reads with four characters having the following q-expansion. A fermionic RCFT with c = 33/8 indeed can be considered as the dual theory of the fermionized SO(7) 3 WZW model in that their characters satisfy following bilinear relations, We finally remark that the characters (4.52) and (4.53) arose as the solutions of the fourth-order MLDE with˜ = 2, due to the valence formula. JHEP01(2022)089 where the expressions of coefficients are given in (3.32). Let us multiply the factor 1 − z to the above differential equation, to express it in the form of Beukers-Heckman [35]: with θ = z d dz and Here we set h ns 0 = 0 and h r 0 = c/24. Furthermore, one can plug (3.23) into (A.3) so that the coefficients α i and β i are written in terms of c, h NS 1 , h NS 2 only. The solution of (A.2) is known to be the generalized hypergeometric function of order three, and an explicit expression is given below, In the above expression, the hat means omission of the 1 + β i − β i parameter. Tracking back the various changes of variables, and fixing the normalization, we obtain as the exact expression for the NS-sector characters. In this singular case, the situation is slightly different. We see that the coefficient b 0 of (3.32) vanishes since h ns 1 = 1 2 . Let us denote the independent solutions of (3.31) byg ns i and further introduce a new variable, one can reduce the order of differential equation by one as follows, Now we multiply (A.7) by 1 − z to get JHEP01(2022)089 Case 1: h ns 1 = 1 2 and h ns 2 = 1 2 . Let us suppose that one is given a basis of local solutions in the solution space V of the hypergeometric equation in a small neighborhood around a point z 0 ∈ P 1 \{0, 1, ∞}. When analytically continuing the solution along a loop γ z encircling one of the regular singularities z ∈ {0, 1, ∞} in the anti-clockwise direction, the solution only comes back to itself up to a linear transformation M(γ z ) ∈ GL(V ): The fundamental group π 1 (P 1 \{0, 1, ∞}) is generated by γ 0 , γ 1 and γ ∞ , subject to the relation γ 0 γ 1 γ ∞ = 1 . t − e −2iπα k = t 3 + A 1 t 2 + A 2 t + A 3 , where we recall that the hypergeometric parameters are fixed in terms of the RCFT data by (3.34). From (A.16) we then deduce that: Following the loop around z = 0 counter-clockwise once corresponds to the transformation τ → τ + 2 since z = q 1/2 (1 + O(q 1/2 )), and therefore the monodromy matrix around z = 0 corresponds to the representation of T 2 ∈ Γ θ on the solution space. We also know that z = 1 ⇔ λ = 1 2 ⇔ τ = i, which is stabilized by S transformations, and therefore the 10 We refer the reader to some unpublished notes of Beukers on differential equations [44]. Levelt in his Ph.D. thesis [45] already provided this form of the matrices, but didn't construct explicitly the basis, hence constructed the monodromy matrices only up to conjugation. JHEP01(2022)089 monodromy matrix around z = 1 is conjugate to the representation of S ∈ Γ θ . Moreover, one can show from (A.20) that rank(M(γ 1 ) − id) = 1, therefore M(γ 1 ) must have two eigenvalues equal to 1, as a pseudo-reflection. 
In our CFT case we actually know a little bit more from representation theory of SL 2 (Z), namely that the T and S matrices are subject to the relations (2.3). One should also recall that in order to obtain the hypergeometric equation, we performed a rescaling of the solutions by a factor proportional to z −c/12 , which contributes a phase e −iπc/6 to the monodromy at 0, and e iπc/6 to the monodromy at ∞. These contributions compensate in the monodromy at 1. Our basis of solutions is such that T 2 is diagonal, and is obtained from the Mellin-Barnes basis by diagonalizing the monodromy matrix around z = 0: The matrix P 1 simply turns out to be a Vandermonde-like matrix: which corresponds to the S-matrix up to conjugation by a matrix commuting with (A.23). the extra conjugation matrix P 2 needed to reach our Frobenius basis of solutions nearby z = 0 is simply diagonal, and is given by the proposition 2.7 of [46]. The k th diagonal entry reads: JHEP01(2022)089 To summarize, the representation or the S-matrix on our basis of characters therefore reads: S = P −1 3 P −1 2 P −1 1 M(γ 1 )P 1 P 2 P 3 , (A. 27) which more explicitly takes the form (the indices m, n ∈ {0, 1, 2}): , where we recall that the hypergeometric parameters are fixed by the CFT data in (3.34). 11 This concludes the non-degenerate case, namely the case of RCFTs for which none of the NS conformal weight is equal to 1 2 . One can of course check that the modular relations (2.3) are well-satisfied. Case 2: h NS = 1 2 . Let us now turn to the degenerate case in which one of the NS conformal weight is equal to 1 2 . We recall that h ns 0 = 0, and that we denote by simply by h the remaining NS conformal weight. We already saw that in this case, one can effectively reduce the MLDE down to a second order ODE, so one may expect that a more direct, computational way of obtaining the S-matrix may be available in that case, in the spirit what was done in the second order MLDE case. After adding to the second order ODE solutions the constant solution and tracking back the various change of variables, we obtained the set of solutions (A.12). We perform on it the following change of basis: and we recognize of course immediately our z-variable on the left-hand-side. This identity allows us to the rewrite the characters in a way that breaks the symmetry λ ↔ 1 − λ, hence allowing us to read off the S-matrix by following the same direct method as we did for the second order MLDE, by using Gauss identity and Euler transformation formulae. We compute: (A.31) 11 We correct a typo of theorem 2.8 of [46]. We therefore obtain as a final result for the S-matrix in the degenerate case: Examples and comments. The S-matrices that we derived above correspond to our Frobenius basis of solutions with the three characters being normalized so that their qexpansion starts with coefficient 1. Accommodating for integer degeneracies m 1 and m 2 in the non-vacuum characters f ns 1 and f ns 2 simply corresponds to performing an extra conjugation to the S-matrix by the matrix diag(1, m 1 , m 2 ). Let us now consider for illustration a non-degenerate theory with both NS conformal weights different from 1 2 . We take for instance the theory (c, h ns 1 , h ns 2 ) = (18/5, 3/10, 2/5), identified in section 3.2.3 as a tensor product of N = 1 minimal models, with degeneracies (m 1 , m 2 ) = (4, 3). Our generic expression (A.27) gives the following result: which is indeed the correct result.
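A minimal illustration of the degeneracy bookkeeping mentioned in the closing example: with D = diag(1, m_1, m_2), the S-matrix on the normalized Frobenius basis is conjugated into the one with integer degeneracies; for the (c, h^ns_1, h^ns_2) = (18/5, 3/10, 2/5) theory the stated degeneracies are (m_1, m_2) = (4, 3). The direction of the conjugation written here is an assumption, since the text only says "conjugation by diag(1, m_1, m_2)".

```latex
% Hedged sketch of the degeneracy conjugation; the orientation (D^{-1} S D vs. D S D^{-1})
% is not fixed by the text above.
S_{\text{deg}} \;=\; D^{-1}\, S_{\text{Frobenius}}\, D ,
\qquad D = \operatorname{diag}(1, m_1, m_2) = \operatorname{diag}(1, 4, 3)
\quad \text{for } (c, h^{ns}_1, h^{ns}_2) = \big(\tfrac{18}{5}, \tfrac{3}{10}, \tfrac{2}{5}\big).
```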
16,800
2022-01-01T00:00:00.000
[ "Physics" ]
Color Routing via Cross-Polarized Detuned Plasmonic Nanoantennas in Large-Area Metasurfaces Bidirectional nanoantennas are of key relevance for advanced functionalities to be implemented at the nanoscale and, in particular, for color routing in an ultracompact flat-optics configuration. Here we demonstrate a novel approach avoiding complex collective geometries and/or restrictive morphological parameters, based on cross-polarized detuned plasmonic nanoantennas in a uniaxial (quasi-1D) bimetallic configuration. The nanofabrication of such a flat-optics system is controlled over a large area (cm^2) by a novel self-organized technique exploiting an ion-induced nanoscale wrinkling instability on glass templates to engineer tilted bimetallic nanostrip dimers. These nanoantennas feature broadband color routing with superior light scattering directivity figures, which are well described by numerical simulations and turn out to be competitive with the response of lithographic nanoantennas. These results demonstrate that our large-area self-organized metasurfaces can be implemented in real-world applications of flat-optics color routing from telecom photonics to optical nanosensing. TILTED OPTICAL NANOANTENNAS FABRICATION A soda-lime glass substrate (20 × 20 × 2 mm) is repeatedly rinsed in ethanol and acetone. The sample is then placed in a custom-made vacuum chamber and irradiated with an 800 eV low-energy defocused Ar+ ion beam (gas purity N5.0). A biased tungsten filament avoids charge build-up through thermionic electron emission. The ion beam illuminates the glass surface at an incident angle of θ = 30° with respect to its normal. The ion fluence corresponds to 1.4 × 10^19 ions/cm^2 and the glass temperature is fixed at about 680 K during the Ion Beam Sputtering (IBS) process. After the rippled pattern is formed on the glass surface, thermal Au deposition is performed on the rippled facets at a glancing angle θ = 55° with respect to the flat sample normal. The Au beam directly illuminates the glass facets tilted at +35°, while the opposite facets are completely shadowed. By means of a calibrated quartz microbalance, the thickness h of the Au stripes can be evaluated by basic geometrical arguments, given the Au thickness (h0) deposited on a flat surface facing the crucible at normal incidence and the average slope of the illuminated facet measured with AFM, as h = h0 · cos(55° − 35°). The sample is then put in a custom-made RF sputtering chamber where a layer of SiO2 is conformally grown all over the surface using a 2" fused silica target. The silica layer thickness was monitored by means of a calibrated quartz microbalance. The RF sputtering experiment is run in an argon atmosphere at a power P = 60 W, a sample-target distance d = 8.5 cm and a total pressure of about 7 × 10^-2 mbar. Finally, Ag stripes are confined on the rippled facets tilted at -50°, now coated with a conformal SiOx layer, by using the same strategy and arguments already described for the Au ones. MORPHOLOGICAL CHARACTERIZATION The rippled glass template morphology was characterized by means of an atomic force microscope. The template average periodicity λ is estimated from the real-space distance between the maximum and the secondary neighboring peaks in the 2D self-correlation function (Figs. S1b and S1c). It is worth noting that the 2D self-correlation function of the rippled patterns rapidly decays to negligible values away from the central maxima, as the pattern loses its morphological coherence within 2-3 unit cells.
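The geometric thickness estimate quoted above, h = h0 · cos(55° − 35°), can be reproduced with a few lines of Python; the 10 nm input in the example is an arbitrary illustration, not a parameter taken from the paper.

```python
import math

def strip_thickness(h0_nm: float, deposition_deg: float = 55.0, facet_deg: float = 35.0) -> float:
    """Effective metal thickness on a tilted ripple facet, h = h0 * cos(deposition - facet),
    with h0 the thickness calibrated on a flat surface at normal incidence."""
    return h0_nm * math.cos(math.radians(deposition_deg - facet_deg))

# Illustrative value only: a nominal 10 nm flat-surface deposition.
print(f"h = {strip_thickness(10.0):.2f} nm")  # cos(20 deg) ~ 0.94, so ~9.40 nm
```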
This prevents the rippled glass template, and consequently the nanoantenna array confined on it, from showing grating optical effects, which would lead to a more complex engineering of the color routing properties of our self-organized large-area platform. NUMERICAL OPTICAL MODEL For the numerical analysis of the nanostrip antennas we employed commercial software (Comsol Multiphysics 5.3), implementing the full-vectorial finite element method in the scattered-field formalism in two dimensions. We assumed a circular computational domain with a 500 nm radius, surrounded by perfectly matched layers (PML) with scattering boundary conditions. The effective environment approximation was assumed (in accord with, e.g., Refs. 1,2) by embedding the nanostrips into a homogeneous dielectric medium with non-dispersive and lossless permittivity. The sketch of the bimetallic nanoantenna is shown in Fig. S2a. To avoid numerical artifacts, the vertices of the Au and Ag nanostrips have been rounded with 5 nm and 15 nm radii of curvature, respectively. The FEM mesh was accordingly defined so as to resolve these radii with at least 5 elements. For the dielectric domain we set a correspondingly fine maximum mesh element size. Starting from the scattered electric E_S and magnetic H_S vector fields (numerically solved for as a function of the wavelength), the total extinction cross-section is computed as σ_E = σ_A + σ_S, with σ_A and σ_S the total absorption and scattering cross-section spectra, respectively. With the total extinction cross-section at hand, the transmittance of the sample at normal incidence is estimated as follows, where L = 200 nm is the measured average periodicity of the sample and a dimensionless fitting parameter of the order of 1 is used in the simulations. Concerning the far-field scattering patterns, we employed the FAR-FIELD procedure in Comsol, implementing the Stratton-Chu formulas (see, e.g., Ref. 7) using Σ as the aperture enclosing our 2D antennas (either the Ag or Au monomer, or the Ag-Au dimer). OPTICAL CHARACTERIZATION VIS-NIR extinction measurements were performed at normal incidence using a halogen-deuterium lamp. In Fig. S4 we show the measured directivity data for the Au, Ag and Au/Ag NSA reported in the main manuscript in Fig. 4a-b-c, respectively, but using the same dB scale for all the panels. Fig. S4 shows that the signal-to-noise ratio is similar for all three considered configurations when the directivity is not steeply changing with wavelength.
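The absorption and scattering cross-sections referred to above can be taken, in a standard far-field treatment of a 2D scatterer, as the per-unit-length expressions below; these are textbook definitions and are assumed, not stated in the text, to coincide with the ones used in the paper. Σ is the closed contour enclosing the antenna, n̂ its outward normal and I_inc the incident intensity.

```latex
% Standard 2D (per-unit-length) cross-sections, assumed to match the paper's definitions:
\sigma_S = \frac{1}{I_{\mathrm{inc}}}\oint_{\Sigma}
           \tfrac{1}{2}\,\mathrm{Re}\!\left(\mathbf{E}_S\times\mathbf{H}_S^{*}\right)\cdot\hat{\mathbf{n}}\;\mathrm{d}l ,
\qquad
\sigma_A = -\frac{1}{I_{\mathrm{inc}}}\oint_{\Sigma}
           \tfrac{1}{2}\,\mathrm{Re}\!\left(\mathbf{E}\times\mathbf{H}^{*}\right)\cdot\hat{\mathbf{n}}\;\mathrm{d}l ,
\qquad
\sigma_E = \sigma_A + \sigma_S .
```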
1,304.8
2020-05-13T00:00:00.000
[ "Physics", "Materials Science", "Engineering" ]
GrapeMOTS: UAV vineyard dataset with MOTS grape bunch annotations recorded from multiple perspectives for enhanced object detection and tracking Object Detection and Tracking have provided a valuable tool for many tasks, mostly time-consuming and error-prone jobs, including fruit counting in the field, among others. Fruit counting can be a challenging assignment for humans due to the large quantity of fruit available, which turns it into a mentally taxing operation. Hence, it is relevant to use technology to ease the task of farmers by implementing Object Detection and Tracking algorithms to facilitate fruit counting. However, those algorithms suffer from undercounting due to occlusion, which means that the fruit is hidden behind a leaf or a branch, complicating the detection task. Consequently, gathering datasets from multiple viewing angles is essential to boost the likelihood of recording the images and videos from the most visible point of view. Furthermore, the most important open-source datasets do not include labels for certain fruits, such as grape bunches. This study aims to address the scarcity of public datasets, including labels, for training algorithms for grape bunch Detection and Tracking by considering multiple angles acquired with a UAV to overcome fruit occlusion challenges. Value of the Data • Datasets, along with annotations, are helpful for researchers and professionals working with Computer Vision techniques to perform grape bunch detection and tracking [4]. • Datasets with multiple-perspective videos are crucial to avoid occlusion, which may lead to underestimation of the number of grape bunches in each row. • Grape bunch tracking allows for counting the number of grape bunches on each side of a vineyard row, which is relevant to estimating yield. Additionally, when coupled with ground truth information in the annotations, phenotypic traits can be extracted [5], further contributing to yield estimation. • The dataset is helpful for winegrowers and field technicians as it provides high-quality videos for visual inspection of bunch monitoring and disease development, eliminating the need to be physically present in the field. • Datasets, together with annotations, address the lack of public agricultural datasets.
• This dataset can be integrated with other datasets from the same vineyard that contain key information such as the position of the plant trunks or lidar point clouds [ 4 , 6 , 7 ], enabling researchers to go further and achieve a more precise understanding of the vineyard. Background In agricultural research, the importance of datasets cannot be underestimated, and their applications in vineyards are particularly notable.They help in the identification and classification of diseases [8] , as well as in the detailed analysis of yield factors [9] .Following the idea of [10] , where they introduced the concept of different angles with a handheld camera to avoid occlusions and provided 110 0 0 + images, this dataset offers Unmanned Aerial Vehicles (UAV) videos with grape bunch annotations recorded in a commercial vineyard under challenging conditions, such as occlusion.This endeavour aims not just at enriching the repository of data available for precision agriculture but also at overcoming specific hurdles not only for object detection within viticulture, similar to [11] where they provided instances to locate the bunches in the images but including tracking, by adding the same ID of each grape bunch along frames.By capturing footage from multiple vantage points around the vineyard rows, this dataset allows for a depth analysis, enabling algorithms to count bunches more accurately despite the frequent obstructions caused by foliage.Moreover, the significance of this dataset extends beyond its immediate utility.It serves as another piece that can be synergistically combined with other existing datasets from the same vineyard [ 4 , 6 , 7 ], which encompass a diverse range of data types, including videos, UAV orthoimages, and even LiDAR information.This diversity enhances the potential for data fusion and enables a multifaceted analysis of the vineyard ecosystem on the same dates but also across different years.Such comprehensive temporal and spatial coverage offers an unparalleled opportunity to study the dynamics of vineyard ecosystems in depth.Further, it empowers the available data lake of the vineyard to train models that are capable of generalizing under different operational conditions.This fusion of datasets opens up new avenues for research and application, allowing for a more detailed examination of bunch visibility, phenotypic trait extraction, and yield estimation under varying conditions, among other characteristics. Therefore, in order to obtain a complete perspective of the vineyard, recording the side of the row from multiple perspectives becomes essential.Consequently, this dataset aids Object Detection and Tracking algorithms training in real vineyard conditions, ensuring accurate bunch counting. Data Description The dataset was collected during the 2023 harvesting campaign between September 19th and 20th in a 1.06-hectare commercial vineyard ( Vitis vinifera cv.Loureiro) located in Tomiño, Spain (X: 516989.02,Y: 4644806.53;ETRS89 / UTM zone 29N) ( Fig. 1 ).The plants, managed in a vertical trellis system, were planted in 1990 with an NE-SW orientation.The distance between rows and plants is 3 × 2.5 meters, respectively, and no leaf removal was performed, resulting in a dataset marked by leaf occlusion. 
The dataset was collected by flying the UAV over the adjacent vineyard row, recording the side of the row of interest.Two types of videos were acquired: (1) a basic type that observed the canopy from a frontal point of view only, serving as control videos, and (2) videos following a path planning using the Ant Colony Optimization (ACO) [ 12 , 13 ] as optimizing algorithm with multiple-angle perspectives to address occlusion.Fig. 2 illustrates the perspectives obtained from the grape bunches when acquiring the data from multiple viewing points.The videos with names starting with NoPathPlanning_ * belong to the first category, while those starting with PathPlanning_ * were recorded using ACO. Experimental Design, Materials and Methods The UAV platform used in this study was the DJI Phantom4 RTK (DJI Sciences and Technologies Ltd., Shenzhen, Guangdong, China), equipped with an integrated RGB sensor.The flights were conducted under a clear sky at 3 m AGL above the vineyard rows, with wind below 0.5 m/s. Data annotation A total of 11 vineyard videos were annotated using CVAT software in MOTS style for grape bunch Detection and Tracking.The MOTS annotations were labelled with per-pixel accuracy, which ensured that each grape bunch instance remained coherent throughout the video sequence.Furthermore, even shaded grape bunches were annotated to ensure proper generalization to multiple illumination scenarios.The annotation focused on exclusively labelling grape bunches, excluding the peduncle and surrounding leaves.In the videos with multiple perspectives, grape bunches appear from different viewpoints, resulting in various shapes.The same ID was maintained for grape bunches seen from different perspectives to enhance Object Tracking.In order to increase the efficiency of the annotation task and due to the similarity of adjacent frames in the video, a frame step as 2 was selected in most of the videos, except PathPlanning_1, and the three videos without Path Planning. Table 1 summarizes the dataset, providing details on the videos, including the number of frames each video included, the number of annotated frames of each video, the frame step for each video annotation task and the size of both images and annotations.The dataset, totalling 78.8 GB excluding the original videos, includes 5958 labelled frames.Videos are available in MP4 format, while the images, along with the annotations, are provided in PNG.Moreover, the instances folder includes also a txt file, which contains the label(s) of the annotations. Table 1 Description of the videos and annotations provided, along with the number of annotated frames, and the size of the zip file containing frames and instances. Limitations None. Dataset link: MOTS-annotated UAV Vineyard Dataset captured using Multiple Perspectives to avoid Leaf Occlusion for Object Detection and Tracking (Original data) Fig. 2 . Fig. 2. Vineyard row acquired from multiple viewing points.The red masks represent the grape bunch annotations.(a) Videos recorded from the left.(b) Vineyard row observed from a frontal point of view.(c) UAV perspective when it was recording being rotated from the right. 
© 2024 The Author(s). Published by Elsevier Inc. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/). The videos named PathPlanning_* [2] provide a multiple-angle view, each from a different vine plant. The other three videos (named NoPathPlanning_*) offer a frontal view of the canopy's side. These record the same plants as those with multiple perspectives, allowing for comparison. Recording details: The videos were captured between September 19 and September 20, 2023, during the harvesting period. Both days had sunny conditions and a wind speed below 0.5 m/s. Annotation Information: All the videos have been annotated using CVAT software [1], employing the Multiple Object Tracking and Segmentation (MOTS) annotation style [2].
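A hedged sketch of how the per-pixel MOTS masks could be consumed downstream, for instance to count distinct grape-bunch track IDs across a video. The folder layout and the pixel encoding (the common KITTI-MOTS convention, pixel value = class_id * 1000 + instance_id) are assumptions to be checked against the dataset's own annotation txt files, not facts stated above.

```python
from pathlib import Path
import numpy as np
from PIL import Image

def count_track_ids(instances_dir: str) -> int:
    """Count distinct instance ids over all MOTS-style PNG masks in a folder.
    Assumes the KITTI-MOTS pixel encoding (value = class_id * 1000 + instance_id);
    verify against the txt label files shipped with the dataset."""
    ids = set()
    for png in sorted(Path(instances_dir).glob("*.png")):
        mask = np.array(Image.open(png))
        for value in np.unique(mask):
            if value > 0:                   # 0 is background
                ids.add(int(value) % 1000)  # strip the class prefix
    return len(ids)

# Hypothetical path; adapt to the layout of the downloaded archive.
print(count_track_ids("PathPlanning_1/instances"))
```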
2,219
2024-04-01T00:00:00.000
[ "Computer Science", "Agricultural and Food Sciences" ]
KNOWLEDGE AND JOB OPPORTUNITIES IN A GENDER PERSPECTIVE: INSIGHTS FROM ITALY By considering the case of Italy we show that, despite much rhetoric and expectation about women having gradually overtaken men in terms of educational attainment, they still lag behind in terms of the main skills and competencies that can profitably be used in the market. On the one hand, women lack both general and specific knowledge related to the labour market; on the other hand, the skills and competencies they acquire by carrying out unpaid work do not seem to be positively valued by the market. However, women also appear to exhibit higher returns to knowledge, both in terms of returns to education and of returns to work-related knowledge. Women's employment is determined to a larger degree by the joint impact of care burdens and knowledge-determined opportunities, and their wages are more significantly affected by our indicators of knowledge. More than for men, while specialisation improves "insider" women's wages, it reduces "outsider" women's ability to obtain a job. It is widely recognized that knowledge is central to the process of economic growth and job creation, and not only in Western countries. Human capital has been at center stage of economic theory both in mainstream microeconomics, in the wake of the works by Gary Becker (see for example Gary S. Becker 1964), and in mainstream macroeconomics (especially after the work by Paul Romer 1990). Even in its proponents' aims, the concept of human capital was conceived as a multidimensional concept, referring to the stock of competences, knowledge and personality attributes embodied in the ability to perform labor so as to produce economic value (Jacob Mincer 1974). However, it is nowadays current practice to look at best at only two dimensions: (i) formal education; and (ii) work-related skills. By contrast, the increasing diffusion of information networks has progressively increased the importance of tacit knowledge, in the form of general skills, abilities and comprehensive competencies, for labor market outcomes, by making it easier and less expensive to access and to efficiently use general information. Thus, in this paper we try to extend the traditional focus on education and labor market training to encompass a wider set of constituent variables of human capital. Sex differences have widely been considered in the literature on human capital. Ever since Becker posited different returns on different forms of human capital as the founding block of the sexual division of labor in the household (Becker 1985), the New Household Economics literature has sought to use the differences in the monetary benefits of education as a means of explaining practically the entire social construction of gender roles. Besides the issue that such a position may lack realism, in so far as millennial social structures such as patriarchy and gender-based discrimination are here reduced to a "simple" matter of financial expediency (and possibly limited to contemporary capitalist societies only), this strand of human capital theory is especially problematic from a feminist perspective since it assumes that (a-gendered) individuals decide on their education and training by rationally weighing the associated prospective benefits and costs (David Colander and Joanna Wayland Woos 1997).
Indeed, the feminist literature has frequently tout-court dismissed neoclassical explanations of gender roles as unrealistic and irrelevant (see for example the works collected in Drucilla K. Barker and Edith Kuiper 2003;or Marianne A. Ferber and Julie A. Nelson 2003).However, in this paper we argue that feminist scholars and activists should not throw out the baby with the bath water, as an opportunely extended notion of knowledge may convey relevant information on gender and gender roles.Indeed, a high level of education is more relevant for the career dynamics of women than of men (for the case of Italy see Angela Cipollone and Carlo D'Ippoliti 2011): women enjoy higher returns to education and training than do men, while men exhibit higher returns from their occupational status.However, recent studies show that women receive less training than men in terms of training hours; such a gender training gap may intensify the relative difficulties of women to enter and to remain in the labor market with better job conditions and better career prospects.Especially in the light of the ongoing process of population ageing, which itself is a gendered phenomenon, increasing and updating adults' skills and competencies will become increasingly crucial (Marcella Corsi and Manuela Samek Ludovici 2010).Moreover, the ability to efficiently use Information and Communication Technologies (ICT) could improve the likelihood of women to find a job after a career interruption, while knowledge in terms of financial literacy may raise their intra-household empowerment, which in turn positively affects the willingness to take up a job and to continuously participate in the labor market along the entire life-cycle. These facts strongly motivate the use of a more comprehensive indicator of knowledge rather than education only in order to discuss career dynamics under a gender perspective.We propose a modification and enlargement of the traditional concept of human capital, which we will refer to as "knowledge" in order to avoid unwarranted assumptions on the rational process of its accumulation.We specifically consider some formal and informal skills to complement the more traditional analyses of education and on-the-job training. By considering the case of Italy, we estimate the interrelation and joint impact of education, skills and labor market experience on men's and women's employment status and wages.Italy is an especially interesting case study because knowledge has been put at the center of the European Union's strategy for growth and social cohe- Gender and Knowledge We propose a multidimensional view of knowledge, including the following dimensions: education (i.e.schooling and continuing (or adult) education); job and labor market related skills (i.e.on-the-job and off-job training, experience, etc.); economic and financial literacy; ICT skills; general informal skills, such as basic household management skills.Most of these dimensions exhibit relevant gendered features. With respect to education, during the second half of the twentieth century (and in the twenty-first so far) the educational attainment of women has progressively increased in nearly all industrialized and in many developing countries.As noted by Anne M. Hill and Elizabeth M. 
King (1995): "education enhances labor market productivity and income growth for all, yet educating women has beneficial effects on social well-being not always measured by the market" (p.22).Yet, due to the nature of our data in the rest of the paper we do not consider such well-being effects.However, while women are now more often involved in university education, in most countries they constitute only a minority of the students involved in the highest educational (i.e.graduate) programs (see for example Jackie Stalker and Susan Prentice 1998;Diana Leonard 2001).What may be even more relevant, is that aggregate figures hide a very high gender segregation in education, which paves the way for subsequent segregation in the labour market.According to an elaboration by the European Commission (2006), while 60% of PhD students in education and pedagogy are women (72% in Italy), only 15% of PhD students in engineering are women (13% in Italy) and only 19% in computing (25% in Italy). The other dimensions of knowledge listed above have progressively shown to exhibit a relevant influence on gender inequality and power structures in contemporary societies.Financial literacy is key to a balanced smoothing of consumption over time, especially in the context of a general move of European pension systems towards pre-funded schemes based on individual decisions to save.In such an institutional environment, the unwillingness or inability to properly plan one's future resources may aggravate the already substantial gender gap in elderly persons' at-riskof-poverty rates (Corsi and Samek Ludovici 2010).It is thus worrying to note that, PANOECONOMICUS, 2011, 5, Special Issue, pp.735-757 for example, in the USA women exhibit substantially lower financial literacy than men, and that this is related to a lower propensity to plan (Annamaria Lusardi and Olivia S. Mitchell 2008).The issue is partly related to intra-household dynamics, for example Gordon L. Clark, Knox-Hayes Janelle, and Kenda Strauss (2009) show that women are more likely to rely upon others (i.e.their breadwinner spouses) for their expected welfare in old age.In the case of Italy, Elisabetta Addis (2008) shows that not only many women are unconcerned with financial planning in the family, but a considerable number does not even possess precise information on their family's resources (and on their husbands' income in particular).Financial illiteracy is especially diffused among women at a higher risk of poverty and thus the increasing presence of microcredit institutions has frequently served to provide a useful and widespread range of services -such as the joint supply of financial products and training-related facilities -to overcome this women-specific vulnerability.For example, a research focusing on Mediterranean countries showed that the impact of microcredit on women's empowerment (defined as women's ability to take decisions and affect outcomes of importance to themselves and their families: see Gita Sen and Srilatha Batliwala 2000) is associated to a greater participation in intra-household savings and investment decisions and enlarged capacity to undertake purchases in autonomy (Corsi et al. 2006). Information and Communication Technologies are at the core of the European strategy for an economic growth founded on knowledge.Setting the agenda for the coming decade, the European Commission writes: "The crisis has wiped out years of economic and social progress and exposed structural weaknesses in Europe's economy.[...] 
Faced with demographic ageing and global competition we have three options: work harder, work longer or work smarter.We will probably have to do all three, but the third option is the only way to guarantee increasing standards of life for Europeans.To achieve this, the Digital Agenda makes proposals for actions that need to be taken urgently [...]" (European Commission 2010a).The Digital Agenda is one of the main initiatives for Europe's economic policy in the coming decade and among its main goals it includes the objective of promoting a higher participation of young women and women returners (i.e.adult and relatively older women who enter the labour market after a long period of inactivity) in the ICT workforce.Such a focus on women is due to two concurrent causes: on the one hand, women's employment rates across European countries are still significantly lower than men's, and there is thus a greater potential for job growth for the female workforce; on the other hand, on top of the previously mentioned underrepresentation of women among graduate students in scientific and technological fields, there is also a more general gender gap in basic ICT skills.Accordingly, a report by the European Commission notices that among persons of working age there is a 6% difference in the diffusion of internet users between European men and women (61% as opposed to 55%) but the gap among "frequent" users (at least once a day) increases to almost 40% (European Commission 2010b).The poor endowment of basic ICT skills also explains the low participation of women in ICT-related tertiary education, which frequently reinforces the gender horizontal segregation and the exclusion of women from one of the few industries that was least affected by the economic crisis (for a recent review see PANOECONOMICUS, 2011, 5, Special Issue, pp.735-757 Neil Anderson et al. 2008).Moreover, the efficient and immediate use of ICT facilities may increase participatory relations in organizations and workplaces and allow for a greater flexibility of working places and times (Luc Soete 2001), thus possibly facilitating the conciliation of work and family life.However, it has also been shown (Corsi 2004) that, though women are more involved than men in the use of e-mail in top-down communication (that is within command-and-control hierarchies), their introduction does not seem to have brought about a greater participation of women in decision-making.As the works collected by Sylvia Walby et al. (2007) report, the introduction of ICTs stimulated a growth of non-standard employment forms beneficial to women's employment and, at the same time, led to a "re-gendering" of the ICT workforce by segregating women to the lower tail of the occupational hierarchy in ICT-using and ICT-producing industries. 
With respect to the set of job and labor market related skills more in general, a gender approach to adult training and lifelong learning has become increasingly relevant as it has been shown that, while women constitute the majority of workers and jobseekers enrolled in adult education programs, numerous gendered disadvantages still exist for women learners.On the one hand, research showed that women may struggle to continue or even quit formal education due to unpaid work burdens, such as caring for children and/or the elderly and housework (Richard Blundell 1992;Amy Shipley 1997;Joyce Stalker 2001).On the other hand, due to these genderspecific responsibilities women exhibit more irregular and fragmented careers and thus they are less likely to accumulate a profitable labor market experience and benefit from it.Conversely, women returners to the labor market may capitalize on training and lifelong learning opportunities in the transition from unpaid to paid work to a greater extent than men: see for example Anne Campbell (1993), Stalker (2001), Deirdre Heenan (2002).An issue on which further research is needed is the question raised by feminist scholars and pedagogues, on the extent to which gender segregation in education and training and the very content of learning act to reinforce gender roles and stereotypes (Sue Jackson 2003; Donna M. Sayman 2007). Mainstream theory interprets the distinction between job and labor market related skills by focusing on the differences between specific and general knowledge, whereby firm-specific knowledge produces an extra-productivity of workers that result in quasi-rents (Becker 1964).Given the limited availability of such data, in this paper we try to distinguish the two notions by referring to tenure, the time spent by workers working for their current employer, as job-specific skills, and to workers' effective age, that is the time passed since workers' entrance in the labor force, as labor market related skills.As it turns out, the two variables are highly correlated for men (88% for working age men in 2008, according to data from the Bank of Italy's Survey of Households' Income and Wealth), possibly due to Italy's low workers' mobility and very low turnover, but they are considerably less correlated for women (78% for working age women in 2008), mainly due to their more frequent career interruptions.However, as suggested by the feminist literature we do not consider housework and care as unproductive activities (irrespective of their being carried out within the family or in the market).Thus in the set of knowledge components we finally include general informal skills, such as basic household management skills, in PANOECONOMICUS, 2011, 5, Special Issue, pp.735-757 order to investigate the relevance of the home as a place of learning (Patricia A. Gouthro 2005).To do so, we separately consider what we called workers' effective age and the number of years spent in paid employment.As shown in the next section, while the two quantities tend to coincide for men (preventing their joint use in regression analyses, among other things), their difference is informative of women's work trajectories in a life-cycle perspective. 
Expanding the Notion of Knowledge To estimate the relevance of knowledge in determining men and women's work trajectories we use multivariate techniques to summarize the several dimensions described above into a few variables.We use the 2006 wave of the Bank of Italy's Survey of Households' Income and Wealth (SHIW) because on that occasion a special module on financial literacy and other dimensions of knowledge was included. 2The sample (representative of Italy's population) is composed of 9,730 persons of "working age", by which we denote, with some modification upon the common practice in EU, all individuals between 25 and 60 years old (included).Of these, 4,973 are women and 4,757 are men.We also defined a more restrictive sample of people of prime age, which we define as persons between 25 and 50 years old in order to prevent interference with widespread practices of early retirement.The restricted sample is composed of 3,468 women and 3,309 men. We mainly focus on the impact of knowledge on employment status and labor income.Thus, we take a binary approach to employment: individuals are considered to be employed or not employed.However, we recognise that important issues are also the engagement in part-time work or the distinction between unemployed and inactive population.Moreover, we specifically focus on women's employment rather than women's participation for several reasons.On the one hand, we maintain that among the key labour market indicators the employment rate constitutes the best index of labour market dynamics and functioning.On the other hand, in terms of the reciprocal influence of the key labour market indicators, the employment rate can play the major role.Finally, Italy lags well behind the Lisbon target in terms of women's employment rate and this index constitutes thus a major priority for economic policy. In the sample 82% of working-age men and 87% of prime aged men are employed, as opposed to 56% of working-age women and 61% of prime aged women.Mean hourly wages in the sample are 8.95€ for working age men (8.59€ for prime aged men) and 8.52€ for working age women (8.33€ for prime aged women). 
As described in the previous section, the first set of variables employed in explaining these gender gaps concerns formal education and schooling.We consider six levels of educational attainment, ranging from no education to postgraduate training, distinguishing between the two levels of secondary education in accordance with Italy's institutional setting (up to 1996 it was possible to quit school at age 14, that is PANOECONOMICUS, 2011, 5, Special Issue, pp.735-757 (usually) after a minimum of 8 years of education.In 1997 compulsory education was extended up to 16 years old, although in the form of an "individual right to statefinanced education".Being enrolled in formal education at least up to 16 years old became a binding obligation only in 2007).However, in order to better highlight the role of education in shaping individuals' job opportunities we also distinguish six broad fields of study: vocational, humanistic and social studies for secondary education degrees and scientific, humanistic and social studies for tertiary and upper educational levels (in Italy's educational system scientific studies in tertiary education are jointly classified with humanistic studies under the heading "liceo").The distributions of educational attainments, average number of years spent in education, and field of study are summarized in Table 1.As it is shown, the younger prime age individuals are better educated than the working age persons, and prime aged women are characterized by the highest average number of years of education (11.5).Women exhibit a significantly lower participation in vocational training at all ages, while they are overrepresented in the social sciences field (no significant differences emerge in the humanistic and scientific fields).Concerning job and labor market related skills, our sample does not allow us to account for workers' participation to formal training.However, we are able to capture three different measures of acquired general and specific skills and competencies.As already mentioned, we consider the difference between workers' age and their age at the time of first entry in the labor force as a measure of effective work-PANOECONOMICUS, 2011, 5, Special Issue, pp.735-757 ers' age and interpret it as an indicator of general work-related knowledge.As shown in Table 2, it is significantly higher for men than for women both of working and of prime age (between 17 and 23 for men and between 13 and 17 for women).In the case of men, this measure of labor market experience is highly correlated to our second dimension of work-related knowledge, i.e. job experience, the number of years (and months) spent in actual employment.The correlation between the two variables, as mentioned, is significantly lower for women, who exhibit much more frequent career interruptions (in 2006, the correlation was 78% for women as opposed to 87% for men).We also computed a final measure of work-related knowledge, arguably closer to the neoclassical notion of firm-specific knowledge, that is tenure, the number of years spent working for the current employer.As for the other two variables, women's mean value is significantly lower than men's (around 7 years for men and 5 years for women), as a consequence of both women's lower participation in the labor market and women's overrepresentation among the workers employed on flexible and fixed-term work arrangements (Cipollone and D'Ippoliti 2010). 
Finally, work-related knowledge may be acquired by means of unpaid work activities, such as productive activities carried out within the family, housework and care work.A relevant question is how tacit skills and competencies acquired at home are valued in the market, and if they may become useful (possibly in certain industries such as services to household).Information on unpaid work may partly be ascertained by investigating the differences between our effective workers' ages and their labor-market experiences.However, such differences may also imply either involuntary job loss or a (temporary or permanent) withdrawal from the labor force to enjoy leisure activities.In order to account for these eventualities, we add upon the differences between our effective workers' ages and their labor-market experiences with a measure of the demand for unpaid work within the household, proxied by (i) being in a long-term affective relationship implying cohabitation (that for reasons of simplicity we denote as "married" status); and (ii) co-living with an elderly person (above 75 years old) or having young children (below 3 years old). 3s suggested by Cipollone and D'Ippoliti (2011), the impact of being married on employment may be considered as a measure of traditional gender roles, while coliving with elderly people or children is a proxy of care work burdens.In our sample women appear to face a slightly higher demand for unpaid work in the household, since men more frequently live alone.By contrast women tend to leave alone in old age due to divorce or widowhood (Corsi and Samek Ludovici 2010).Due to the prevalence of heterosexual cohabitation in the working age population, however, such difference is very small (though statistically significant) and living arrangements on average tend to be equal for men and women: by attributing an equal weight (equal to 1) to all the mentioned sources of demand for unpaid labor and summing them up, on average men face a demand equal to 0.94 (that is on average each working age man lives with almost one person in need for care) and women 0.97.Next, exploiting a specific set of questions available in our dataset, we integrate information on education and work-related knowledge with two further dimensions of knowledge: economic and financial literacy and ability to use information and communication technologies.With respect to the former, six questions were asked4 to measure the respondents' ability to understand the working of inflation, the meaning of basic financial terms such as "bonds" and "shares", and their ability to solve basic financial arithmetic problems (all questions and answers are listed in the Annex).For each question we created synthetic dummy variables assuming a value of 1 if the individual responded correctly and 0 if otherwise (descriptive statistics are shown in Table 2).As it emerges from Table 2, women in Italy do not exhibit pronouncedly lower levels of financial literacy, differing from what was found for the United States.Such difference may be due to the different kind of questions included in the surveys.Accordingly, the questions asked in Italy's SHIW are relatively easier and more straightforwardly related to the actual knowledge of basic financial con-PANOECONOMICUS, 2011, 5, Special Issue, pp.735-757 cepts, requiring a more limited use of mathematic or logic skills (see Annex).Concerning ICT skills we selected four potential proxies by considering the following questions in the survey: (1) if the person uses a computer at home or 
at work; (2) if his/her family has a computer at home; (3) if the person uses the internet for emails or surfing the web; (4) if the person bought goods or services online.Similarly we created a dummy variable for each of these questions. In order to summarize the information contained in the former proxy variables for financial literacy and ICT ability, and to try to retain the relevant information on the person's skills into a few meaningful indicators abstracting from other sources of variance (such as the person's financial means), we carried on a factor analysis on the matrix obtained by computing tetrachoric correlation of all the mentioned dichotomous dummy variables (on the 9,187 observations of persons of working age).We followed the standard practice in selecting the (two) factors that exhibit an eigenvector greater than one and that contribute to the explanation of a reasonable share of variance, and then rotated the factors according to the varimax method.Results are shown in Table 3.As it emerges, the two factors clearly imply a cluster of financial skills (Factor 1) separated by a second factor summarizing ICT skills (Factor 2).Thus, the two factors are liable for straightforward economic interpretation and allow us to keep more than half the variance of the original variables with the exception of the financial problem-set questions which are more likely to also enclose other skills (mostly in the field of mathematics, such as the ability to read a graph or to make basic computations).In conclusion, we were able to gather variables measuring the number of years and the field of education, three dimensions of work-related experience, proxies of the unpaid work burden, and two indexes measuring ICT skills and financial literacy.We excluded the information on the field of study because it cannot be reduced to a quantitative measure and normalized all the quantitative variables by subtracting their (working age) population average and dividing by the standard deviation.These normalized variables were then collapsed by means of arithmetic average, to create a PANOECONOMICUS, 2011, 5, Special Issue, pp.735-757 synthetic index of Knowledge.We also created a second index, Extended Knowledge, constructed as the previous one with the addition of a further dimension related to care burdens, in order to measure the skills acquired by doing unpaid work.This last variable was created by summing the number of people in the household that assumedly imply a demand for care, as described above.This variable has been averaged jointly with the others by means of arithmetic average.In other words, both indexes of Knowledge and Extended Knowledge are constructed by attributing an equal weight to all the component variables (three for the labor market, one for education, one for ICT skills and one for financial literacy in the standard case, plus a further one for unpaid care work in the extended case). 
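The equal-weight index construction described above (z-scoring each component and averaging, with an optional care-burden component for the extended version) could be sketched as follows; the column names are placeholders rather than the actual SHIW variable names.

```python
import pandas as pd

# Placeholder column names; the real SHIW variables differ.
COMPONENTS = ["effective_age", "job_experience", "tenure",
              "years_education", "ict_factor", "financial_factor"]

def knowledge_index(df: pd.DataFrame, extended: bool = False) -> pd.Series:
    """Equal-weight average of z-scored components; with extended=True a
    care-demand count is added as a seventh component, as described in the text."""
    cols = COMPONENTS + (["care_demand"] if extended else [])
    z = (df[cols] - df[cols].mean()) / df[cols].std()
    return z.mean(axis=1)

# df["knowledge"] = knowledge_index(df)
# df["ext_knowledge"] = knowledge_index(df, extended=True)
```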
Descriptive statistics for the two indexes are reported in Table 4, distinguishing between the variance of the knowledge indicators in the population -withingroup Knowledge -and the variance between the several dimensions of knowledge for a single person (since the Table shows the average value of the variance of the Knowledge indicator across all individuals, it may be interpreted as the variance of the Knowledge of the "average individual").As shown in Table 4, women exhibit lower average values of both Knowledge and Extended Knowledge, with up to -0.15 points compared to men.Overall, the difference between the Knowledge and Extended Knowledge is not quantitatively relevant.Moreover, the men's group appears to be less heterogeneous in terms of Knowledge, as they exhibit a lower variance than women both in the working age population and in the prime age population.Women's lower concentration is graphically shown in Figures 1 to 4, whereby it is evident that a majority of women of working age exhibit values smaller than men's for both indicators of knowledge (since the distributions approximate Gaussian distributions, mode and median values coincide).Prime aged women partly filled the knowledge gap, but there is still a substantial number of women who cluster at substantially lower values than men's and women's mean values.As shown in the Annex, the knowledge gender gap is sub-stantially lower for prime aged individuals, especially when the knowledge indicators are constructed excluding firm-specific knowledge, i.e. excluding workers' tenure. Finally, as shown in Table 4, for each individual the indicator of Knowledge seems to be constructed by averaging more heterogeneous skill levels (across the several dimensions) for a single man than for a single woman.Indeed, at the individual level, both prime aged and working-age men exhibit higher (mean) standard deviation of the Knowledge and the Extended Knowledge indexes.In particular, women's experiences appear as more diverse through their life course, while men's high standard deviation is a consequence of single very high values, as evidenced by their higher (mean) kurtosis of the two indexes in both age brackets.In other words, men appear to specialize more (often in labor-market experience) than women. Employment We investigate the economic relevance of knowledge in the specific sense of the private returns to knowledge in terms of employment and of labor income.To do so, we first estimate a probit model of the probability of being employed separately for men and women.5 Marginal effects are reported in Table 5. A comparison of the sex-specific estimates highlights a number of significant differences in the impact of individual and household level variables (for example concerning the impact of the "care" variable).As a consequence, we may conclude that active policy interventions aimed at boosting employment should be very different according to their target of either men's or women's employment, and sometimes their effects may even be opposite (see Cipollone and D'Ippoliti 2011).More in general, from a simple comparison of the sex-specific estimates it emerges that a model based on an "average" a-gendered economic agent (i.e. on a representative agent) may fail to grasp relevant economic dynamics.Thus, men and women cannot simply be conceived of as heterogeneous.Moreover, sex-specific theoretical models are needed to understand their behavior and separate empirical models are necessary for empirical analysis (i.e. 
the notion of "diversity" introduced by D'Ippoliti 2011). From Table 5 it emerges that knowledge exerts a large and significant impact on individuals' probability of employment.For women, such an impact is significantly larger than for men. A unitary change in the indicator of knowledge, approximately correspondent to a shift from the mean value to the top 5% of the distribution, corresponds to an almost doubled probability of being employed for men (+80%) and almost tripled for women (+180%).For men, the impact is considerably higher for the population of prime age, while such a difference is less pronounced for women.Extending the notion of Knowledge by considering our indicator of Extended Knowledge increases the impact for men and lowers it for women.This result depends on the critical fact that care burdens are positively associated to men's employment and negatively to women's, thus reflecting the traditional division of labor in the household. Specialization (for example in the form of high imbalances between workrelated knowledge, education, and skills), as measured by the kurtosis of the knowledge indicators, appears to be negatively associated to both men's and women's employment (between -63.2% and -94% for men and between -72% and -93% for women).While no significant gender difference emerges, the detrimental impact of specialization on the likelihood of employment seems larger for prime age individuals.Marginal effects denote the mean variation in individuals' probability of being employed corresponding to an infinitesimal variation of the independent variable, estimated at the mean value of the independent variable; for dummy variables marginal effects denote the average variation in individuals' probability corresponding to the modification of the independent variable from 0 to 1.Control variables include real and financial wealth, age (squared), Regional fixed effects and a constant term. Source: SHIW (2007). In conclusion, Knowledge seems a crucial determinant of the likelihood of employment, especially for women.More importantly, for women the amount of skills and competencies acquired by practicing unpaid work at home does not seem to be valued by the market.On the contrary, the demand for care constitutes a constraint to women's employment, even when controlling for other variables such regional factors and real or financial wealth. When considering disaggregated variables (Table A1 in the Annex), it emerges that the largest gender differences occur with respect to the returns to the labor market components of Knowledge, namely effective worker's age and job experience.Specifically, both indicators are positively associated to the probability of being employed, though the second boosts the chances of employment more for women.Concerning formal education, the returns to secondary education would appear as higher for men than for women.Tertiary education, with the exception of the social sciences, appears instead to benefit women more than men. 
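A minimal sketch of the kind of probit-with-marginal-effects estimation reported in Table 5, using statsmodels; the variable names and the exact control set are illustrative assumptions, not the authors' specification.

```python
import statsmodels.api as sm

# `women` is a hypothetical DataFrame of working-age women with an `employed` dummy.
X = sm.add_constant(women[["knowledge", "knowledge_kurtosis",
                           "real_wealth", "fin_wealth", "age", "age_sq"]])
probit_fit = sm.Probit(women["employed"], X).fit(disp=False)
margeff = probit_fit.get_margeff(at="mean")   # marginal effects at the mean, as in Table 5
print(margeff.summary())
```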
The proxy variables for the specific skills considered here appear to exert ambiguous impacts.For women, ICT skills positively increase the probability of employment for the working age population, while financial literacy is never statistically significant.On the contrary, for men of working age financial literacy lowers the probability of employment (though not for prime aged individuals).This peculiar result may be due to an income effect especially in the case of older workers, given the high correlation between financial literacy and accumulated financial wealth.Al-PANOECONOMICUS, 2011, 5, Special Issue, pp.735-757 though in the estimations we control for households' real and financial wealth, a study from the Bank of Italy suggests that these are among the least reliable variables in the sample, given a certain reluctance in the population to uncover such private information in a survey (Claudia Biancotti, Giovani D'Alessio, and Angelo Neri 2004). Finally, concerning the set of care-related variables, gender differences are impressive.For women there are negative and significant impacts from having small children (less than 3 years old), from having a partner and from cohabiting with an old-aged person (above 80 years old).These same variables exert no significant impact on men's chances to be employed and cohabiting with a partner is even positively associated to a higher probability of men's employment.This may denote that women's unpaid work facilitates men's employment in the market by complementing it and making it easier (and in some cases may even be instrumental to it). Labour Income We next considered the returns to Knowledge in terms of labor income by estimating Heckman models of (the logarithm of) hourly wages, using the previous probit models as selection equations.As shown in Table 6, women appear to benefit from slightly higher returns to knowledge in prime age and slightly lower in working age.In particular, a unitary change in the indicator of knowledge corresponds to an increase of almost 19% of the log of hourly wage for men and between 15% and 17% of the log of hourly wage for women.Specialization is rarely statistically significant, but when it is women appear to benefit from it more than men.If we run the above estimations in the pooled sample (i.e.including both men and women), an unexplained residual, corresponding to a "woman" dummy variable, confirms previous estimations of the presence of a gender pay gap being not accountable for by other observable factors but gender (between 16% and 17% in all estimations). When considering disaggregated results (Table A2 in the Annex) it appears that ICT skills benefit men's and women's hourly wages approximately in the same measure (around +1%), while financial literacy is significantly associated to higher wages only for working age men (+3.5%).The returns to education are more similar between men and women, especially for the prime age group, suggesting that the skills distribution and the skills composition of women and men in older ages (between 50 and 60 years old) are less homogeneous compared to those in prime age. Finally, women benefit significantly more than men from firm-specific knowledge, as their return to tenure is on average 50% higher than men's. 
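The Heckman wage equations described above can be approximated with a two-step estimator (probit selection equation, inverse Mills ratio, then OLS on log hourly wages); this is a generic sketch with placeholder variable names and no claim to reproduce the authors' exclusion restrictions or controls.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

# Step 1: selection (employment) probit on the full sample.
Z = sm.add_constant(df[["knowledge", "care_demand", "real_wealth", "age", "age_sq"]])
selection = sm.Probit(df["employed"], Z).fit(disp=False)
df["imr"] = norm.pdf(selection.fittedvalues) / norm.cdf(selection.fittedvalues)

# Step 2: wage equation on workers only, with the inverse Mills ratio as regressor.
workers = df[df["employed"] == 1]
X = sm.add_constant(workers[["knowledge", "knowledge_kurtosis", "imr"]])
wage_ols = sm.OLS(np.log(workers["hourly_wage"]), X).fit()
print(wage_ols.summary())
```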
In conclusion, women appear to exhibit higher returns to knowledge, both in terms of returns to education and in terms of returns to work-related knowledge. Women's employment is determined to a larger degree by the joint impact of care burdens and knowledge-determined opportunities, and their wages are more significantly affected by our indicators of knowledge. More than for men, specialization improves employed women's wages while reducing the chances of obtaining a job for women currently excluded from the labor market. Note: Marginal effects denote the mean variation in individuals' probability of being employed corresponding to an infinitesimal variation of the independent variable, estimated at the mean value of the independent variable; for dummy variables, marginal effects denote the average variation in individuals' probability corresponding to the modification of the independent variable from 0 to 1. Control variables include real and financial wealth, age (squared), regional fixed effects, and a constant term. Conclusions After the Lisbon Agenda and the new Europe 2020 strategy, the relevance of knowledge as a driver of individuals' economic opportunities has become widely recognized in Europe. However, the gender dimension of knowledge is most frequently neglected, despite the fact that women represent the larger pool of the inactive work force. By considering the case of Italy, in this paper we showed that, despite much rhetoric and expectation about the fact that women have gradually overtaken men in terms of educational attainments, they still lag behind in terms of the main skills and competencies that can profitably be used in the market. Indeed, distinguishing the concept of knowledge from formal education alone seems to be crucial, and it is thus fundamental to focus on gender gaps in all the several dimensions of knowledge, beyond education. In particular, women lack both general and specific knowledge related to the labor market, as measured by tenure and labor market attachment. Women's accumulation of labor market experience is mostly constrained by unpaid work and care work burdens. These activities should be regarded, in our opinion, as a source of relevant knowledge in terms of social and interpersonal skills and managerial and organizational capacities. While from a feminist perspective these skills may be considered relevant in any work environment, even from a conservative viewpoint this knowledge should be valued at least in certain industries, such as health, long-term care, and services to households. By contrast, we find that in Italy the skills and competencies acquired by carrying out unpaid work do not seem to be positively valued by the market, either in terms of employability or in terms of wage.
Even considering just education, the picture seems to be more differentiated than is usually assumed. Despite the substantial growth of women's educational attainments, gender segregation in education is still a relevant issue. This phenomenon compresses both women's employment chances and women's wages, as evidenced by the fact that the returns to education (both in terms of employability and of wage) are significantly higher in the case of scientific disciplines than in the humanities. Gender segregation in education is especially problematic because it is very likely to be highly correlated with gender occupational segregation, which in turn is a major source of the gender pay gap. Thus, educational and cultural policies aimed at overcoming traditional gender roles and images among younger students seem a very sensible policy option. On the whole, a note of optimism may come from the evidence that gender differentials in the accumulation of knowledge are smaller for the younger population, although prime aged individuals appear to be as affected by traditional gender roles (as measured by the patriarchal sexual division of labor) as older cohorts are. 
Figures 
Figures A1-A4: Distribution of Knowledge and Extended Knowledge without Tenure, by Sex and Age. 
Table 1: Educational Attainment and Field of Study, by Sex and Age. Source: SHIW (2007). 
Table 2: ICT Skills, Financial Literacy and Labor Market Related Knowledge, by Sex and Age. Note: Year 2006; working age is defined as the [25-60] age bracket, prime age is [25-50]. Under the heading ICT skills, percentages denote the proportion of individuals satisfying the requirement. Under the heading Financial literacy, percentages denote the share of individuals selecting the correct answer; questions are listed in the Annex. Source: SHIW (2007). 
Table 3: Factor Analysis on ICT Skills and Financial Literacy, Rotated Factor Loadings. 
Table 4: Measures of Knowledge and Extended Knowledge, by Sex and Age. Note: Year 2006; working age is defined as the [25-60] age bracket, prime age is [25-50]. Standard deviations in parentheses and italics represent between-persons variability of the indexes of knowledge; individuals' std. dev. and kurtosis of the variables Knowledge and Extended Knowledge measure, for each person, the variability between his/her different dimensions of knowledge. Source: SHIW (2007). 
Table 5: The Employment Impact of Knowledge, Marginal Effects. Note: Year 2006; working age is defined as the [25-60] age bracket, prime age is [25-50]. Standard deviation in parentheses.
9,291.8
2011-02-04T00:00:00.000
[ "Economics", "Sociology" ]
Analysis of Secret Key Randomness Exploiting the Radio Channel Variability In the past few years, physical layer based techniques have started to be considered as a way to improve security in wireless communications. A well known problem is the management of ciphering keys, regarding both the generation and the distribution of these keys. A way to alleviate such difficulties is to use a common source of randomness for the legitimate terminals, not accessible to an eavesdropper. This is the case of the fading propagation channel, when exact or approximate reciprocity applies. Although this principle has long been known, not many works have evaluated the effect of radio channel properties in practical environments on the degree of randomness of the generated keys. To this end, we here investigate indoor radio channel measurements in different environments and settings, in either the 2.4625 GHz or the 5.4 GHz band, of particular interest for WIFI related standards. Key bits are extracted by quantizing the complex channel coefficients and their randomness is evaluated using the NIST test suite. We then look at the impact of the carrier frequency and of the channel variability in the space, time, and frequency degrees of freedom used to construct a long secret key, in relation to the nature of the radio environment such as the LOS/NLOS character. Introduction Traditionally, a set of cryptography based mechanisms and protocols provides communication security through data encryption. In symmetric encryption methods, the main drawback is key management, which includes key generation and distribution, since the same secret key is used for data encryption and data decryption [1]. While this issue is alleviated by asymmetric techniques using a pair of public and private keys [1], their high computational cost stresses the need for new security techniques, especially for wireless communications and the emerging Internet of Things, in which energy consumption is of major importance. The robustness of these widespread classical cryptography mechanisms relies on computational constraints on the attacker. However, with the continuous progress of high power computing, unconditionally secure systems are more and more required [2]. In this respect, information-theoretic security assumes unlimited computing power for the illegal user and claims that only the gathered information may help the eavesdropper to break the data privacy [2,3]. In this framework, a particular approach within the physical layer security (PhySec) field [2] intends to achieve wireless communications and data protection by exploiting the inherent properties of the wireless propagation channel such as multipath fading, interference, and noise. One of the main PhySec techniques is secret key generation (SKG) [4,5], which facilitates key management as opposed to conventional cryptosystems. Secret key distribution is largely avoided since each legitimate terminal (typically referred to as Alice and Bob) is assumed to generate the same secret key from the radio propagation channel, considered as a common source of randomness [4,5]. Indeed, when channel reciprocity applies, typically when Alice and Bob use the same frequency at the same time instant, they share the same wireless channel. Randomness is ensured through multipath fading, which results in decorrelation properties in the spatial, temporal, and frequency domains.
Consequently, an eavesdropper (Eve) is probably not able to efficiently exploit her own measured channel in order to crack the key (Figure 1). A robust shared key is characterized by its length and its randomness. In fact, channel estimation noise is a main factor limiting the number of shared bits extracted from a single channel observation (see [6][7][8][9][10][11] for an information-theoretic framework). Therefore, researchers attempt to access more randomness by exploiting various channel degrees of freedom such as the spatial diversity existing in multiple antenna systems [6], the frequency diversity in orthogonal frequency division multiplexing (OFDM) systems [12], and the time diversity in ultrawideband (UWB) channels [8,9]. In addition to key length constraint, the random character of the key is essential in making eavesdropping extremely difficult, which requires a small correlation between the channel samples seen by Alice/Bob and by Eve. Jana et al. [13] assessed security performance by investigating real measured channels in both indoor and outdoor conditions. The measurements using 802.11-based laptops exhibited the weakness of SKG behavior in nearly static environments, where the entropy of extracted key bits is very low. Security in such environments may be enhanced by creating channel fluctuations using beamforming technique [14,15]. However, when either terminals or scattering objects are moving, is the randomness of the key sufficiently guaranteed? Furthermore, how may the security performance be improved in static environments? The statistical National Institute of Standards and Technology (NIST) test suite [16] is usually used to assess the effectiveness of extracting randomness from the wireless channel [13,[17][18][19][20][21]. Furthermore, it is more effective to test the ability of the whole source of randomness to provide really random bit strings [20] rather than testing a unique key realization. It is noteworthy that this randomness evaluation occurs directly after the quantization phase and is improved by privacy amplification [22] in the last step of SKG. According to this brief analysis of the literature, it turns out that most papers evaluate theoretically and practically SKG techniques by emphasizing either key reliability between legitimate users or key vulnerability with respect to Eve. The randomness of keys as a function of their source (i.e., the characteristics of the radio channel) has not been extensively considered. In this context, the main contributions of the present paper are as follows: (1) Investigate real indoor measured channels in different environments and settings, considering varying separation distances between users on the one hand and LOS/NLOS propagation conditions on the other hand. (2) Analyze the quality of the generated keys from the randomness point of view, using the NIST test suite, in relation to the channel properties (coherence bandwidth, carrier frequency, and LOS/NLOS). (3) Compare the key quality for suitably long keys, when the key bits are derived from either space, time, frequency, or jointly space-frequency degrees of freedom. This is especially relevant when targeting WIFI for the implementation of SKG. The paper is organized as follows. Section 2 presents the measurement campaign carried out in different conditions. Section 3 describes the quantization algorithm used to transform the channel complex coefficients into a stream of key bits. 
The key randomness is then tested through NIST test suite introduced in Section 4. Section 5 explains how to construct a sufficient long secret key by exploiting the channel variability in the spatial (or time) and frequency domain. Results invoking the relation between the key randomness and the real channel features are discussed in Section 6. Finally, the conclusion is drawn in Section 7. Measuring Systems and Scenarios Measurements have been performed in the premises of Télécom ParisTech (TPT), which is a century-old engineering education building with highly heterogeneous internal structuring due to many refurbishing events over the years. The measurements were conducted on a school holiday in order to ensure the absence of detectable human movement in the area. A 4-port vector network analyzer (VNA, Agilent ENA E5071C) has been used to record channel coefficients over 4 GHz of bandwidth (2-6 GHz) with 2.5 MHz as frequency step. This step, which translates into a maximum channel response delay of 400 ns, is enough to avoid aliasing, given the instrument noise floor and the typical delay spread of multipaths in the concerned environments. Table 1 presents the VNA setup parameters. One port of the VNA has been devoted to Alice, as transmitter, whereas the three remaining ports have been devoted to Bob, as receivers. Each port was equipped with an identical UWB bicone antenna with 2 dBi gain, specifically designed for the frequency stability of the radiation pattern [23]. The VNA has been calibrated with a "full 4-port" method including the (highly phase stable) cables, resulting as output at each frequency in the full 4 × 4 matrix of the complex channel coefficients including all antennas. The measurements have been carried out in classrooms and in an auditorium, in order to have indoor scenarios of sufficiently different characteristics, including identical or different heights for the terminals; LOS or NLOS propagation condition; and also different room sizes. Figure 2 shows the floor plans of both classrooms and auditorium where the environment is mainly constituted of concrete, plywood, and partition walls. In the classroom scenario, the terminals have been placed at the same height (1.3 m from the ground), whereas in the auditorium they have been placed at different heights as seen in Figures 3 and 4. The location of Alice was fixed for each of the two environments whereas the remaining three antennas have been moved across the area in a set of irregular locations, mostly within the room but also in the adjacent corridor or in an adjacent room. More clearly, the antennas representing Bob have 51 different positions in the classrooms scenario and 42 positions in the auditorium scenario, where only 25 total positions are in NLOS condition with respect to Alice. The NLOS scenario encompasses either room-to-room or room-to-corridor propagation conditions, as shown in Figure 2. In each measurement run, the three receivers representing Bob are steady while the transmitter representing Alice is spatially scanned over a square grid of 11 × 11 points (30 cm side and 3 cm step) confined to a small area so as to capture fast fading. More clearly, since the grid step is about half a wavelength at 5 GHz, we can expect to achieve close to statistically independent channel coefficients owing to spatial fading. 
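As a quick numerical check of two of the design choices above, the snippet below verifies that a 2.5 MHz frequency step gives an alias-free delay range of 400 ns and that the 3 cm grid step is about half a wavelength at 5 GHz; only the numbers quoted in the text are used.

```python
# Quick check of the measurement-design numbers quoted above
c = 3.0e8                      # speed of light (m/s)
df = 2.5e6                     # VNA frequency step (Hz)
f = 5.0e9                      # frequency used for the grid-step argument (Hz)
grid_step = 0.03               # Alice grid step (m)

max_delay = 1.0 / df           # alias-free delay range of the sampled response
wavelength = c / f

print(max_delay * 1e9)         # 400.0 ns
print(grid_step / wavelength)  # 0.5 -> grid step ~ lambda/2 at 5 GHz
```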
The total 4 GHz bandwidth enables us to investigate in this paper the security performance of wideband (WB) channels centered at either 2.4625 GHz or 5.4 GHz (typical of the WIFI bands), with the bandwidths listed in Table 3. Channel Quantization In a Time Division Duplex (TDD) system, such as IEEE 802.11, Alice and Bob estimate their channel state information (CSI) by successively sending each other a known probe signal, using the same frequency band. Owing to the electromagnetic reciprocity law, the CSIs at Alice and Bob are very similar. Therefore, assuming they use a common quantization algorithm, they are able to jointly translate their continuous CSI into a shared string of cryptographic key bits which may be used by the upper-layer protocols in order to strengthen security. However, some mismatches between Alice's and Bob's keys may occur, especially for ordinary commercial wireless devices, because of imperfect TDD reciprocity, additive noise, or channel estimation inaccuracies. Fortunately, such key mismatches may be diminished by an efficient quantization algorithm employing suitable censoring schemes. While some algorithms increase Alice-Bob key agreement by dropping samples falling into a predefined guardband region [6,24], the time required to construct a long secret key then increases, which reduces the effectiveness of such algorithms. Alternatively, a more efficient protocol adapts the quantization scheme to the channel observation [6,21], for example, the channel quantization alternating (CQA) algorithm using two alternative maps [6]. Since key mismatches may still occur, a reconciliation step [3], using, for example, LDPC codes, is required to obtain exactly the same shared key bits between Alice and Bob. This part of the whole SKG mechanism is not considered in the present work, since it is not expected to specifically impact the SKG performance in relation to the radio channel characteristics. Although the most common channel metric used in SKG is the received signal strength [13,14,18,24], because this parameter is widely accessible in most radio receivers, it only partially exploits the channel information and the entropy of the generated keys is not very high. Alternatively, the channel phase information has been investigated and found to generate a more random and secure stream of bits, such as in [25,26]. Another candidate for SKG is the channel impulse response (CIR) of UWB channels, whose ability to support SKG techniques has been proved experimentally [8,27,28]. Nevertheless, we can efficiently establish sufficiently long and random key bits by exploiting more channel information at once, which is achieved by making use of the joint real and imaginary parts of the channel coefficients (complex CSI) [6]. In the present work, we chose this option and based the SKG mechanism on the CQA algorithm [6]. At each time instant, Alice chooses, and publicly sends to Bob, an adaptive map index for which the current channel observation is less sensitive to mismatches between Alice's and Bob's keys, without revealing any relevant information to Eve. To that aim, the quantization regions (QRs) of Alice's map are computed by quantizing the cumulative distribution functions (CDFs) of each of the aggregated real and imaginary parts of the CSI into √M statistically equal quantization intervals, resulting in M QRs. Then, each alternative map of Bob results from shifting the quantization thresholds with the same probability in a different direction (please refer to [6] for further insights).
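To make the CDF-based quantization step concrete, here is a minimal sketch in Python. It implements only the equiprobable-interval quantization of the real and imaginary parts (the adaptive-map exchange and the reconciliation step are omitted), and the synthetic Rayleigh coefficients are purely illustrative assumptions.

```python
import numpy as np

def quantize_csi(h, M=4):
    """Map complex channel coefficients to key bits via CDF-based quantization.

    Each of the real and imaginary parts is split into sqrt(M) equiprobable
    intervals (empirical quantiles), giving M quantization regions per sample.
    """
    q = int(np.sqrt(M))                      # intervals per axis
    bits_per_axis = int(np.log2(q))
    edges_i = np.quantile(h.real, np.linspace(0, 1, q + 1)[1:-1])
    edges_q = np.quantile(h.imag, np.linspace(0, 1, q + 1)[1:-1])
    sym_i = np.digitize(h.real, edges_i)     # interval index 0 .. q-1
    sym_q = np.digitize(h.imag, edges_q)
    bits = []
    for si, sq in zip(sym_i, sym_q):
        for s in (si, sq):
            bits.extend(int(b) for b in format(s, f"0{bits_per_axis}b"))
    return np.array(bits, dtype=np.uint8)

# Example: 64 synthetic Rayleigh-fading coefficients -> 128 key bits for M = 4
rng = np.random.default_rng(0)
h = (rng.standard_normal(64) + 1j * rng.standard_normal(64)) / np.sqrt(2)
key = quantize_csi(h, M=4)
print(len(key))                              # 128
```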
Figures 5(a) and 5(b) depict a particular channel realization in the complex plane and show Alice's map by presenting the correspondence between symbols and QRs, both for M = 4 and M = 16. The interest of increasing M is to establish a key of a given length with fewer required channel samples. However, as seen in [29], this yields an increase in the bit disagreement between the legitimate users' keys, which in turn complicates the key reconciliation step. Hence, increasing M requires an improvement in the signal-to-noise ratio (SNR) in order to alleviate the key reliability issue [29]. In [6], the effectiveness of the CQA algorithm for complex Gaussian channels is analyzed first in terms of bit disagreement between keys extracted by Alice, Bob, and Eve and secondly in terms of randomness, without testing, but by relating it to the independence between the real and imaginary parts of Gaussian channels. In a previous preliminary work [29], we studied the reliability and the confidentiality of the wireless data transmission by computing the disagreement between key bits, employing CQA on the channels whose measurements are explained in Section 2. The keys were extracted from spatially variant channels at 5.4 GHz and the results were discussed in terms of the impact of narrowband or WB channels on key randomness based security. In [11], the lack of spatial stationarity between Bob and Eve is addressed in terms of both theoretical bounds and key bit disagreement. In [30], a novel security mechanism combining SKG and "tag signals" has been proposed, without a deep analysis of key randomness quality. Here, we focus on a global understanding of the key randomness and security performance, according to the various features of the radio channel. Further, we do not consider the presence of Eve, which has already been considered in [10,11,29,30]. NIST Test Suite for Key Randomness Evaluation The fact that key bits are not statistically independent reduces the key quality, since in an information-theoretic framework Eve may exploit any useful information to collapse the key space. In this context, the source of randomness, in addition to the adopted quantization algorithm, is the most critical aspect affecting the key robustness. Hence, we aim to assess the security performance in terms of randomness, which can be achieved using the NIST test suite [16]. We note that these tests are not able to prove the perfect randomness of a key. However, each test shows whether the key bits follow a certain behavior expected of a random sequence, given the key generation process [20]. There are 16 statistical tests in total, but we are not able to apply all of them owing to the minimum key length required by each test. Table 2 shows the length limitations for some tests applied in this paper, where m is the length in bits of the bit strings used in each test and n is the key length in bits. The remaining tests require very long keys and do not apply to the physical layer based wireless security scheme we here target. Some tests try to show whether the sequence of bits has the statistical properties of a random sequence. Consistently, the "monobit frequency" and the "block frequency" tests investigate these randomness criteria on, respectively, the entire key and subblocks of it.
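As an illustration of how two of these tests operate, here is a minimal, self-contained sketch of the monobit frequency test and the runs test, written from the standard NIST SP 800-22 statistics (erfc-based p-values); the toy 128-bit key is only an example, and the usual pass criterion is a p-value of at least 0.01.

```python
import math

def monobit_pvalue(bits):
    """NIST SP 800-22 monobit (frequency) test: p-value from the bias of 1s vs 0s."""
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)
    return math.erfc(abs(s) / math.sqrt(2.0 * n))

def runs_pvalue(bits):
    """NIST SP 800-22 runs test: p-value from the number of 0/1 runs."""
    n = len(bits)
    pi = sum(bits) / n
    if abs(pi - 0.5) >= 2.0 / math.sqrt(n):   # prerequisite frequency check
        return 0.0
    v = 1 + sum(1 for i in range(n - 1) if bits[i] != bits[i + 1])
    num = abs(v - 2.0 * n * pi * (1.0 - pi))
    den = 2.0 * math.sqrt(2.0 * n) * pi * (1.0 - pi)
    return math.erfc(num / den)

key = [0, 1] * 64                             # a toy 128-bit alternating sequence
print(monobit_pvalue(key), runs_pvalue(key))  # passes monobit, fails runs (too many runs)
```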
For example, consider a sequence in which bits 0 and 1 occur equally often over the whole key but are strongly unbalanced within subblocks: it passes the monobit frequency test, since 0 and 1 are equiprobable in the whole sequence, whereas this is not the case in subblocks, where too many bits equal to either 0 or 1 may be present, leading to failure of the block frequency test. Another test, the "runs" test, checks whether the frequency of runs, that is, uninterrupted strings of identical bits either 0 or 1, is as expected for a random sequence. In other words, it determines whether the transition between bits 0 and 1 is too fast or too slow. Accordingly, the sequence in the above example is considered random in this respect, since its number of runs is very close to that expected for a random sequence (i.e., about n/2 runs). However, a sequence of the form 00⋯0 11⋯1 00⋯0 is not random, since only 3 runs are counted. Both the "approximate entropy" (ApEnt) test and the "serial" test focus on the frequency of occurrence of all possible overlapping 2^m strings of m-bit length each, across the entire key. Their purpose is to compare the frequency of overlapping strings of several consecutive lengths against the expected result for a random sequence. For that, the ApEnt test uses two consecutive lengths (m and m+1) while the serial test uses three consecutive lengths (m, m-1, and m-2). Moreover, the serial test differs from the ApEnt test in that longer bit strings can be used in the former for the same key length, as shown in Table 2. According to both the ApEnt test with m = 1 and the serial test with m = 2, the first example sequence is supposed to be random, since strings of 2 bits are almost equiprobable, whereas the second example fails these two tests. Furthermore, if we consider strings of greater length, the first sequence may fail the tests. More information about these statistical tests can be found in [16]. For a single key, each randomness test indicates whether the key is accordingly random or not. Furthermore, in order to relate the quality of the randomness to the features of the radio channel, a set of generated keys is tested by each randomness test, which returns a percentage of sequences passing the test. Then, we compute the "mean pass rate" by averaging the percentages of sequences passing each NIST test, and thus over all the applied statistical tests, which provides a global assessment of the randomness for each specific scenario. In the computation of the mean pass rate, we exclude the monobit frequency test for reasons explained in Section 6. Channel Variability in SKG In practice, a long secret key results from the concatenation of symbols derived from several estimated channel samples. Hence, channel variability is a crucial requirement to establish long random secret key bits. The quality of the key in part depends on the statistical independence between key bits, which to some extent can be reduced to the lack of correlation between channel samples. Such independence stems from sufficiently separated samples, in whatever domain sampling might be, which involves the physical propagation mechanisms and characteristics of the radio environment. In this part, we investigate the impact of the space, time, frequency, and joint space-frequency degrees of freedom on the SKG performance. Space versus Time Variability.
Space variability stems from several differing positions for a single antenna or (although coupling and other effects can disturb this simple picture) from several antennas (multiantenna channel). Time variability can simply result from one given antenna being moved over differing positions, in which case it is generally equivalent to spatial variability. This is valid insofar as the velocity is small enough that, when multiplied by the CIR delay spread, the result is much smaller than the wavelength (in other words, the channel can be assumed to be static over the CIR duration). Time variability can also come from the movement of scatterers (such as vehicles in outdoor scenarios or pedestrians in indoor scenarios [31]) in the surroundings of the transmitter and the receiver. This type of time variability is not equivalent to spatial variability. In the TPT measurements, spatial variability is provided by the movement of Alice over the 11 × 11 square grid as explained in Section 2, which is equivalent to the first type of time variability. These 121 antenna positions allow testing the SKG performance provided by spatial degrees of freedom, where Alice's antenna can take random positions over the grid, providing as many complex channel coefficients (each yielding log2 M key bits) in order to construct a key of the required length at a given frequency. Hence, we randomly construct 60 sets of random Alice positions for each Bob position and each available frequency in the 20 MHz bandwidth. A statistical distribution can then be computed over Bob's positions, over the frequencies in the 20 MHz bandwidth, and over the 60 random sets of Alice positions. Frequency Variability. In real world applications, a spatial degree of freedom may not always be available (e.g., in single-antenna links with very stable channels). In such a case, SKG is not applicable unless we find another source of channel variability, hence the need to exploit the frequency variability existing in WB channels. In order to investigate the SKG performance in frequency variant channels, the data have been processed consistently with the 802.11a/g/n/ac standard, that is, in order to obtain complex channel coefficients at the required number of subcarriers for each bandwidth BW, as shown in Table 3. For that purpose, the measurements were frequency interpolated. Moreover, for the same WIFI standard consistency, we discarded the channel coefficients at the frequencies used for transmitting pilot bits and kept only those at data transmitting frequencies. Table 3 shows some frequency channel characteristics for each bandwidth according to the same standard. Given these parameters, not all the subcarriers need to be used to generate keys of sufficient bit length; then comes the question of how to choose the subcarriers. Intuitively, more correlation is likely to occur when the frequency difference between two channel coefficients is reduced. Unless the ratio between the number of available and the number of required subcarriers is an integer, there is no unique and obvious way to choose the subcarriers used in the SKG process. Hence, Alice randomly chooses a set of frequency subcarriers, from which the key bits are extracted (log2 M bits per subcarrier), and she publicly sends this set to Bob. Although this information is also transmitted to Eve, it is not very relevant since it does not reveal any information about the key.
Finally, a set of secret keys is obtained over Bob's positions, over the 121 positions of Alice, and over the random sets of subcarriers (arbitrarily taken to be 10 sets). Joint Space-Frequency Variability. Intuitively, with a smaller coherence bandwidth, the SKG will be able to more efficiently exploit frequency variability. Unfortunately, the coherence bandwidth changes from one environment to another and cannot be controlled. SKG performance should also be achievable in environments where the coherence bandwidth is large, which is a difficulty when no sufficient spatial variability is provided. As a way of mitigation, we here consider the possibility of jointly exploiting the space and frequency degrees of freedom, so as to relax the requirements on each of them individually. A potential use case is that of MIMO systems (such as IEEE 802.11n/ac), providing spatial variability, together with OFDM technology providing frequency variability. Based on the features of the TPT campaign, spatial variability is provided by considering either every two consecutive Alice positions on each row of the grid, or every four consecutive Alice positions forming a square over two consecutive rows of the grid, as an antenna array, resulting, respectively, in 110 sets of 2-element arrays or in 100 sets of 4-element square arrays. The key bits are then extracted from the vector gathering the channel coefficients of the array elements at the selected subcarriers. Results In the following, we use a fixed key length of 128 bits in the key randomness quality evaluation, with the exception of the pure spatial variability case, where a comparison between different key lengths is carried out. For each channel variability type, a statistical distribution over the extracted keys is formed, as explained in Section 5, in order to compute a mean pass rate using the NIST tests. Table 4 shows the number of tested keys for each type of channel variability. Whatever the source of channel variability used to generate the key, our results show that all the keys pass the monobit frequency test. This is due to the statistically equal quantization intervals on each of the real and imaginary parts, used to transform channel coefficients into discrete sequences of bits through CQA. Consequently, all the strings (of length log2 √M) have the same probability of occurring and, equivalently, the probability of having either bit 0 or bit 1 is 1/2. Accordingly, we exclude the monobit frequency test when we compute the mean pass rate. Figure 6 represents the mean rate of key sequences passing the chosen selection of NIST tests, for key lengths of both 128 and 242 bits. The spatial channel variability is used here to construct the key bits with M = 4; 64 and 121 channel samples are needed to construct a 128-bit key and a 242-bit key, respectively. Whatever the frequency used, it is shown that shorter keys better profit from the channel randomness. While maintaining the same M, we need more channel samples in order to construct a longer secret key, and consequently the probability of having more correlated samples increases, yielding bits with more correlations. Figure 7 shows the mean pass rate for a 128-bit key, for both the 5.4 GHz and 2.4625 GHz bands, and with respect to the LOS/NLOS cases. The impact of the carrier frequency is not really meaningful in Figures 6 and 7, since the mean pass rates are very high, that is, nearly 1, in good part owing to the random positions taken by Alice over the grid.
Nonetheless, this impact may be shown for the worst-case scenario corresponding to consecutive Alice positions over the regular grid, whereby the 5.4 GHz band offers more random keys than the 2.4625 GHz band. Indeed, the distance between two adjacent Alice positions on the grid corresponds almost to λ/2 at 5.4 GHz and to λ/4 at 2.4625 GHz, while λ/2 typically corresponds to the coherence distance over which channels are statistically well decorrelated in omnidirectional scenarios, resulting in extracted bits with a good level of independence. LOS/NLOS Effect. The key randomness is enhanced in NLOS propagation conditions, as shown in Figure 7, due to the lack of a dominant path, yielding more fluctuation of the channel transfer function than in LOS cases. Briefly speaking, it is noteworthy that in all cases the mean pass rate is very high, indicating that the spatial degree of freedom is suitable for random key generation. As discussed above, spatial variability can be translated into time variability through a random movement of Alice in space, providing adequate key randomness. As an extra advantage, such a time variant scheme would make it difficult for Eve to accurately track Alice's positions, reducing her ability to gather deterministic information about the channel characteristics and to guess the sequence of bits. Frequency Variability. A quantitative measure of the key randomness behavior with respect to the frequency variability domain can be found from the analysis of the root mean square (RMS) delay spread τ_rms and consequently of the coherence bandwidth, which typically varies inversely with the RMS delay spread. For each position of Alice over the square grid, the CIR is computed by taking the inverse Fourier transform of the frequency responses recorded over a 500 MHz bandwidth centered on either the 2.4625 GHz or the 5.4 GHz band and filtered with a Hamming window. The power delay profile (PDP) P(τ) is then the average of the 121 squared CIR magnitudes computed over the grid: P(τ) = E_r{|h(r, τ)|^2}, where h(r, τ) and τ are, respectively, the space-varying complex CIR and the path delay, and E_r{·} denotes the expectation over the space domain r. Subsequently, τ_rms is calculated as τ_rms = sqrt[ ∫_0^{τ_max} (τ − τ_mean)^2 P(τ) dτ / ∫_0^{τ_max} P(τ) dτ ], where τ_max and τ_mean are, respectively, the maximum excess delay and the mean delay. The latter is defined as τ_mean = ∫_0^{τ_max} τ P(τ) dτ / ∫_0^{τ_max} P(τ) dτ. Only multipath components with amplitude within 20 dB of the peak of each PDP are included in the computation of τ_rms and τ_mean. Figure 8 shows two examples of normalized measured PDPs and their corresponding frequency responses for both LOS and NLOS cases. It is clear that the NLOS PDP is rich in multipath components and thereby exhibits a higher delay spread than the LOS one, which has a few dominant peaks at short delays. Figure 9 plots the variation of τ_rms as a function of the distance, both for LOS and NLOS cases. We assess key randomness exploiting frequency variability while maintaining the same key length, that is, 128 bits. To this end, we determine the number of subcarriers used for SKG according to M, that is, 64 subcarriers for M = 4 and 32 subcarriers for M = 16. Figure 10 shows the variation of the mean pass rate as a function of the distance between Alice and Bob in the 5.4 GHz band, for different bandwidths and for both LOS and NLOS conditions. We note that 128 key bits cannot be extracted by exploiting the frequency variability in BW = 20 MHz when M = 4. Figure 11 considers the impact of the carrier frequency on the key randomness behavior for BW = 40 MHz and M = 16.
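As a side note on the delay-spread computation defined above, a minimal numerical sketch could look as follows; the two-path profile is purely illustrative, and the 20 dB dynamic-range cut follows the text.

```python
import numpy as np

def rms_delay_spread(pdp, tau, dyn_range_db=20.0):
    """RMS delay spread of a power delay profile, keeping only multipath
    components within `dyn_range_db` of the PDP peak (as stated in the text)."""
    pdp = np.asarray(pdp, dtype=float)
    keep = pdp >= pdp.max() * 10.0 ** (-dyn_range_db / 10.0)
    p, t = pdp[keep], np.asarray(tau, dtype=float)[keep]
    tau_mean = np.sum(t * p) / np.sum(p)
    return np.sqrt(np.sum((t - tau_mean) ** 2 * p) / np.sum(p))

# Toy two-path profile: a strong path at 0 ns and a weaker one at 50 ns
tau = np.array([0e-9, 50e-9])
pdp = np.array([1.0, 0.1])
print(rms_delay_spread(pdp, tau))   # ~14 ns
```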
Figure 10 shows that the higher the separation distance between Alice and Bob, the higher the mean pass rate, especially for LOS channels or for small values of M. Moreover, NLOS channels provide statistically more random secure key bits, as seen in Figures 10 and 11. The same behaviors are noticed in Figure 9 with respect to the delay spread. Hence, the improvement of the mean pass rate is explained by an increase of τ_rms, indicating a reduction in the coherence bandwidth, which yields weaker channel correlations between close frequency responses. Furthermore, the advantage of NLOS channels over LOS ones in providing random keys comes from the multipath richness of the former: in LOS conditions, the lack of proper Rayleigh fading reduces the channel variability in the frequency domain and creates insufficient randomness for a satisfactory success in the NIST tests. Nonetheless, τ_rms takes relatively small values, ranging from 5 ns to 30 ns, due to the open and little cluttered environment of the investigated TPT locations. These values are consistent with typical ones for indoor environments; see, for example, [32]. An improvement in mean pass rate is thus expected for richer scattering environments. Bandwidth Effect. Larger available bandwidths yield a larger separation of the subcarriers used for SKG and consequently smaller correlations. This results in improved key randomness, as seen in Figure 10. Figures 12 and 13 show two examples of key generation from, respectively, very closely and very widely spaced channel responses, for M = 4 (Figure 12: example illustrating SKG from very close channel responses). It is clear that more randomness is provided in the case where the channel coefficients are widely spaced, where the SKG profits from the whole bandwidth, while the effective bandwidth is reduced in the other case, yielding a key with poor randomness. According to Figure 11, the carrier frequency affects the key randomness behavior only for LOS channels, where higher mean pass rates are seen for the lower carrier frequency (i.e., 2.4625 GHz). This is explained by the decrease in the coherence bandwidth, or equivalently by the increase in the RMS delay spread, when the frequency gets lower, as shown in Figure 9(a). Furthermore, as displayed in Figure 9(b), τ_rms does not change with the frequency for NLOS channels. The behavior of τ_rms with the carrier frequency is consistent with results obtained in [33,34]. However, the difference in mean pass rates is weak, implying that there is no strong preference between the low and high WIFI bands from this point of view. Still, the fact that the low band is limited to a 20 MHz bandwidth while the high band reaches 160 MHz provides a clear advantage to the high band for SKG, given the above results. Figure 14 compares the mean pass rate for the three types of channel variability in the 5.4 GHz band, for both M = 4 and M = 16. The full space variability provides the most robust keys and is thereby the most suitable source for SKG. However, such a scheme would either require terminal mobility over all the scanned positions before generating a key or need as many antennas in a static scenario. Therefore, we now assess the security provided by joint space-frequency variability, for both N_ant = 2 and N_ant = 4, which relaxes such requirements.
Indeed, this scheme provides more random keys, especially with N_ant = 4, than the pure frequency domain variability, which stems from the larger average separation between the selected frequency channels and the resulting reduced correlations between channel coefficients. Simply stated, for fully decorrelated antenna signals, the increase in N_ant reduces the bandwidth requirements. This is a very encouraging result for the effectiveness of SKG toward physical layer security. Since many wireless devices (for 3G, 4G, WIFI, etc.) tend to be multiantenna systems, such a solution will certainly become more and more feasible in the near future. We also stress the importance of increasing M, which markedly improves the key randomness despite the requirement of a high SNR. Conclusion In this paper, we presented a study of SKG and of key randomness exploiting different degrees of freedom in the channel characteristics, based on space, frequency, and joint space-frequency variability. The random character of the key has been evaluated using the statistical NIST tests, and the analysis has been targeted at relating this randomness to major channel features such as LOS/NLOS propagation and the nature of the radio environment. To this end, a set of indoor radio channel measurements has been carried out and the channel coefficients have been processed using the CQA algorithm in order to construct secure keys of suitable length. The results showed that spatial variability, in particular over a small area subject to spatial fading, is very efficient in ensuring random bits for the shared secret key. However, since sufficient spatial variability is difficult to obtain in real daily life, it is better to exploit another source of channel variability, such as frequency variability. In this work, we have specifically focused on the 802.11a/g/n/ac standards, which impose limitations on the bandwidth and on the set of usable subcarriers. Under this constraint, we showed that frequency variability by itself may not be enough to support high randomness SKG, especially when the coherence bandwidth is large, as at short distances in environments with little scattering/reverberation. An alternative approach is to jointly exploit the spatial and frequency degrees of freedom, which relaxes the constraints on the frequency variability of the channel or, said differently, on the nature of the radio environment. We noticed that randomness is improved for NLOS scenarios, but also that the presence of a strong or significant LOS component reduces the variability, especially in the frequency domain, and makes the extracted keys less random. While we have used omnidirectional antennas in this work, it can be expected that directional antennas will further impact the level of security brought about by SKG, although this will very much depend on the type of antennas used by Alice/Bob/Eve and will need further investigation. Finally, we stress the importance of applying a multibit extraction algorithm when the SNR and channel estimation errors allow, since this effectively improves key randomness.
8,381.8
2015-10-01T00:00:00.000
[ "Computer Science", "Engineering" ]
Conditions and phase shift of fluid resonance in narrow gaps of bottom mounted caissons This paper studies the viscid and inviscid fluid resonance in gaps of bottom mounted caissons on the basis of the plane wave hypothesis and the full wave model. The theoretical analysis and the numerical results demonstrate that the condition for the appearance of fluid resonance in narrow gaps is kh=(2n+1)π (n=0, 1, 2, 3, …), rather than kh=nπ (n=0, 1, 2, 3, …); the transmission peaks in viscid fluid are related to the resonance peaks in the gaps. k and h stand for the wave number and the gap length. The combination of the plane wave hypothesis or the full wave model with the local viscosity model can accurately determine the heights and the locations of the resonance peaks. The upper bound for the appearance of fluid resonance in gaps is 2b/L<1 (2b, grating constant; L, wave length) and the lower bound is h/b≤1. The main reason for the phase shift of the resonance peaks is the inductive factors. The number of resonance peaks in the spectrum curve is dependent on the ratio of the gap length to the grating constant. The heights and the positions of the resonance peaks predicted by the present models agree well with the experimental data. Introduction Coastal and marine structures may be composed of a large number of caissons, which are vertically situated in water of uniform depth. The interaction of water waves with an array of bottom mounted caissons has been studied by several researchers. Dalrymple and Martin (1990) investigated the wave diffraction of offshore detached breakwaters. Porter and Evans (1996) obtained two singular integral equations for the pressure and the velocity through a gap. Abul-Azm and Williams (1997) examined the oblique wave interaction with offshore breakwaters. Fernyhough and Evans (1995) studied the case of a periodic array of rectangular blocks. Peter and Meylan (2007) investigated wave scattering by a semi-infinite periodic array of arbitrary bodies. In the last two decades, following the development of offshore technology, the assembly of large caissons with small gaps has been commonly used in oil and gas off-loading facilities and oil storage. Side-by-side operations adopted in production must now consider environmental conditions, operation procedures, and so on. Amongst these topics, the wave resonance phenomenon in a narrow gap is one of the most important. The first analytical and experimental results on wave resonance in gaps of bottom mounted and floating caissons were probably obtained by Miao et al. (2000, 2001) using an asymptotic matching technique, and by Saitoh et al. (2002) and Iwata et al. (2007) using laboratory tests. Because floating structures are widely used in actual production, the fluid resonance in a narrow gap of floating caissons has received substantial attention. In general, numerical methods based on the linear potential theory are widely used to study fluid resonance in narrow gaps of multiple floating bodies (e.g., Hong et al., 2005, 2013; Lewandowski, 2008; Sun et al., 2010). However, discrepancies between linear potential results and measured free-surface elevations are significant. To overcome the inability of potential theory to model the viscous effects (skin friction and flow separation in the gaps), an external damping factor was introduced in the gaps. Thus, a number of very effective and powerful numerical simulation methods have been developed. Buchner et al. (2001) used the damping lid technique. Li et al.
(2005) developed a modified scaled boundary element method. Pauw et al. (2007) applied the damping lid method for the investigation of resonant effects. Faltinsen (2009, 2010) compared experimental data with results from a two-dimensional numerical analysis using a vortex tracking method. Sun et al. (2010) utilized the 3D program DIFFRACT to simulate first- and second-order resonant waves between adjacent barges. Lu et al. (2010) investigated the effect of adding an artificial damping term to the momentum equations. More recently, Lu et al. (2011a, 2011b) have modelled the fluid resonance in narrow gaps and the fluid forces on multi-bodies. Kristiansen and Faltinsen (2012) analysed the gap resonance by a new domain-decomposition method combining potential and viscous flow. Liu and Li (2014) obtained a semi-analytical solution for gap resonance by adding the artificial resistance force on the gap free surface and the boundary between the dissipative domain and the non-dissipative domain. Yeung and Seah (2007) studied Helmholtz and higher-order resonance in the gaps between twin floating bodies. Most recently, Sun et al. (2015) have investigated wave driven free surface motion in the gap. Pessoa et al. (2015) investigated the coupled motion responses in waves of side-by-side LNG floating systems by a numerical study. Perić and Swan (2015) conducted an experimental study of the wave excitation in the gap. Moradi et al. (2015) presented the effect of inlet configuration on wave resonance in the narrow gap of two fixed bodies. Watai et al. (2015) introduced an external damping factor and improved the numerical convergence in Rankine time-domain method simulations. Feng and Bai (2015) described fully nonlinear waves by separating the contributions from incident and scattered waves. The terminal gap problem has analogies to the moonpool problem, which has been treated in the frame of linear potential theory by Molin (2001) for infinite water depth and by Faltinsen et al. (2007) for finite water depth. Regarding the appearance conditions of fluid resonance in narrow gaps, theoretical analysis and physical model tests in the gaps of bottom mounted and floating caissons conducted by Miao et al. (2001), Saitoh et al. (2002, 2008), and Iwata et al. (2007) show that the condition for fluid resonance in the narrow gap can be approximated by sin(kh)=0, i.e., kh=nπ (n=1, 2, 3, …), with a fundamental mode near kh=2.9 or kh·tanh(kd)=1 (Iwata et al., 2007), where d is the water depth. The majority of the results mentioned above have concentrated on the prediction of free surface elevations in the gaps between floating bodies, and the results, obtained mainly by numerical methods, are very rich and effective. However, there is no a priori method of determining the coefficient of the damping term unless it is calibrated against experimental tests. Further investigation of the multi-body hydrodynamics within the narrow gaps is still necessary. It is concluded that the following questions remain to be solved. (1) The previous work suggests that linear diffraction can, without doubt, accurately predict the height and location of the resonance peaks, but a monochromatic wave field is surely not a good representation of the ocean surface; therefore, a suitable wave model needs to be used. (2) Theoretical arguments on the appearance conditions of fluid resonance in narrow gaps are not sufficient and need to be further investigated.
On the other hand, the number of resonance peaks in the spectrum curve and the reason for (and value of) the phase shift of the response peak are still not clear. What is the influence of the geometrical parameters (e.g., the width of a caisson and of a gap, the length of a caisson or of a gap) on the fluid resonance? (3) It is very important to identify a good viscosity model. A reasonable viscosity model can explain many physical phenomena of the fluid resonance in gaps, for example: What is the relationship between the maximum height and the position of the resonance peak in a spectrum curve? What causes the phase shift of the response peak? What is the influence of the phase shift on the height of the resonance peak? (4) Is there a difference in the number of resonance peaks calculated by the viscosity model and by the potential flow model? This paper presents more details on the theoretical derivation method and the numerical results for the inviscid and viscid fluid resonance in the gaps of bottom mounted caissons on the basis of the plane wave hypothesis and the full wave model. Theoretical formulae of fluid resonance in gaps The structures consist of equally spaced caissons. The width and thickness of each caisson are A and h; the gap width between two adjacent caissons is 2a. The caissons are vertically located in water of a uniform depth d, and Cartesian coordinates (x, y, and z) are employed with the origin located on the still-water level at the centre of the gap. The positive x-direction coincides with the wave direction, the y-axis is parallel with the seaward side of the caissons, and the z-axis is directed vertically upward, as denoted in Fig. 1. The row of caissons is normally subjected to plane waves with wave height H and frequency ω. The whole fluid domain is divided into three regions: the sea field outside the row of caissons (x≤0), the sea field inside the row of caissons (x≥h), and the gap region (0≤x≤h), where the influence of the fluid viscosity is present. We use "viscid fluid" or the "viscosity of the fluid" to denote the local effects of the fluid viscosity on wave transmission and fluid resonance. Because the viscous decay length is assumed to be much smaller than the wave length, the viscous effects are confined to a thin region at the two ends of the gaps. Our analysis will proceed under the assumptions that the fluid is incompressible and inviscid and that the motion is irrotational away from the gaps of the caissons. We further assume that the boundary conditions on the free surface can be linearized. The fluid motion can then be expressed in terms of a velocity potential. In the gap region between adjacent caissons, the velocities should match across the gap, and the pressures on each side should be equal in the gap to ensure a continuous pressure across the gap. Free surface elevation in gaps between the caissons The wave potential in a gap (0≤x≤h) can simplistically be denoted as Eq. (1), where A_g and B_g are unknown constants. The free surface elevation in the gaps can then be represented as Eq. (2). In the following, we will neglect the time factor e^(-jωt) for the sake of convenience. The wave pressure and the x-directed velocity in the gap follow from the potential. Hence, the wave pressure and the velocity at the two ends of the gap, x=0 and x=h, can be written in terms of u_a(0) and u_a(h), the non-dimensional velocities to be determined. The unknown constants A_g and B_g are easily obtained from Eqs.
(9) and (10). A_g and B_g are substituted into Eq. (2), and the free surface elevation in the gaps of the caissons can be denoted as Eq. (11). The free surface elevation at the midpoint location of the gaps of the caissons is given by Eq. (12). It has been confirmed from Eqs. (11) and (12) that the non-dimensional velocities u_a(0) and u_a(h) at both ends of the gaps are the sources of disturbance of the free surface motion in the gaps. They can be obtained from a chosen wave theory and viscosity model. Based on the plane wave hypothesis and the full wave model used in this paper, four cases exist: the inviscid and viscid fluid velocities at the two ends of the gap under the plane wave hypothesis, and the inviscid and viscid fluid velocities under the full wave model. Non-dimensional velocities under plane wave hypothesis We first discuss the non-dimensional velocities under the plane wave approximation. To obtain the non-dimensional velocities u_a(0) and u_a(h) at both ends of the gaps, we must use the wave fields of the seaward and landward regions of the caissons and the matching conditions at the two ends of the gaps. The potential of the incident and reflected waves in the seaward region can be denoted following Dean and Dalrymple (1991). The wave pressure and the x-directed velocity in the field on the left-hand side of the row of caissons have the forms of Eqs. (14) and (15). The free surface elevation in the sea field on the left-hand side of the row of caissons is written in terms of the reflection coefficient R_0; ρ and c are the fluid density and the wave celerity, respectively. The wave potential in the landward region of the row of caissons (x≥h) contains the transmission coefficient T_0. The wave pressure and the x-directed velocity in the field on the right-hand side of the row of caissons have analogous formulas. The free surface elevation in the landward region of the row of caissons is denoted as Eq. (20). Matching conditions and non-dimensional velocities on both ends of a gap in an inviscid fluid for plane wave approximation The continuity of the x-directed velocities of fluid particles across each end of the gap requires Eqs. (21) and (22), where the porosity e_1 is the ratio of the gap width (2a) to the grating constant (2b=2a+A). The continuity conditions of the wave pressures on both ends of the gap in an inviscid fluid are then imposed. Substituting Eqs. (7) and (15) into Eq. (21) gives the reflection coefficient, and inserting Eqs. (7) and (19) into Eq. (22) yields the transmission coefficient. Eqs. (5), (14) and (25) are substituted into Eq. (23) to obtain the following formula. From Eqs. (27) and (28), we can obtain the non-dimensional velocities u_a(0) and u_a(h) on both ends of a gap in an inviscid fluid under the plane wave approximation, respectively. Matching conditions and non-dimensional velocities at both ends of a gap in viscid fluid under plane wave approximation There is no doubt that the skin friction, flow separation, and vortex shedding around the gap and corners caused by the fluid viscosity will dissipate a large amount of wave energy, thereby reducing the height of the free surface and the severity of the fluid resonance in the gaps. The continuity condition of the x-directed velocities of fluid particles across each end of the gap still requires Eqs. (21) and (22). The continuity conditions of pressure on both ends of the gaps in viscid fluid can be represented with the terms ∆P_0 and ∆P_h, which denote the loss of fluid pressure across the two ends of the gaps.
We use the local viscosity model, which is analogous to the damping lid method in numerical simulations, and the effects of viscosity are located only around the gaps. The influence of the additional mass on the fluid resonance in the gaps is also considered in the local viscosity model. The losses of pressure ∆P_0 and ∆P_h can be denoted as Eqs. (35) and (36) (Zhu, 2011), where μ, ζ and l′ are the dynamic viscosity, the local resistance coefficient, and the modified length of the gap (Zhu, 2013), respectively; δ = 0-0.5 is the inductance coefficient, which is also referred to as the resonance participation coefficient. The first terms in Eqs. (35) and (36) are inductive reactances; they do not dissipate energy, but can store energy and lead to the phase shift. The second term is the energy loss caused by the skin friction, and the third term is a nonlinear energy dissipation. Another formula can be derived from Eq. (38) by employing the same procedure as above. Non-dimensional velocities for a full wave model The inviscid and viscid fluid non-dimensional velocities at both ends of a gap under the full wave model have been studied by Zhu (2013) and Zhu and Xie (2015), and their results are introduced in this section. Inviscid fluid non-dimensional velocities under full wave model The inviscid fluid non-dimensional velocities at both ends of the gap under the full wave model can be rewritten as Eqs. (43) and (44). Viscid fluid non-dimensional velocities under a full wave model The viscid fluid non-dimensional velocities at both ends of the gap under the full wave model can be obtained from a set of non-linear equations similar to Eqs. (41) and (42), but the parameters contained in these equations are not the same as in the plane wave approximation; they take the form of Eq. (45). Fluid resonance in gaps 2.4.1 Inviscid fluid resonance in gaps Eqs. (30) and (31) of the plane wave hypothesis are substituted into Eq. (12). The free surface elevation at the midpoint of the gaps of the caissons in an inviscid fluid can finally be denoted as Eq. (46). h/L=1/2 is the zeroth-order mode, and h/L=3/2, 5/2, … are the higher-order modes. kh=mπ (m=1, 2, 3, …) is the appearance condition for the fluid resonance obtained by Miao et al. (2001) and Saitoh et al. (2002, 2008). For an even number m, the fluid resonance in narrow gaps does not exist. Hence, Eq. (47) provides an accurate and efficient method for the prediction of the appearance conditions of the fluid resonance in narrow gaps. The root kh=(2n+1)π is inserted into Eq. (46); both the numerator and the denominator are observed to go to zero, and the limit value can only be obtained by L'Hôpital's rule, as given in Eq. (48). Eq. (48) shows that the ratio of the amplitude of the resonant wave in narrow gaps to the incident wave height can reach a very large value when the porosity e_1 is smaller than 1. Eqs. (43) and (44) of the full wave model are substituted into Eq. (12), and the free surface elevation at the midpoint of the gaps of the caissons in inviscid fluid can be denoted accordingly. Viscid fluid resonance in gaps For a viscid fluid, the non-dimensional velocities at both ends of the gap are found from Eqs. (41) and (42), and they are two implicit functions. The free surface elevation of the viscid fluid at the midpoint of the gaps cannot be represented by explicit functions. The viscid fluid non-dimensional velocities can be directly inserted into Eq. (12) to calculate the free surface elevation at the midpoint of the gaps.
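To illustrate the practical use of the condition kh = (2n+1)π derived above, here is a small numerical sketch that converts the resonant wave numbers into wave periods through the linear dispersion relation; the geometry values in the example are those of the Saitoh et al. (2008) tests cited in the next section, and the function name is ours.

```python
import numpy as np

def resonant_periods(h, d, n_modes=3, g=9.81):
    """Wave periods at which gap resonance is predicted by kh = (2n+1)*pi,
    using the linear dispersion relation omega^2 = g*k*tanh(k*d).
    h: gap length (m), d: water depth (m)."""
    periods = []
    for n in range(n_modes):
        k = (2 * n + 1) * np.pi / h          # resonant wave number
        omega = np.sqrt(g * k * np.tanh(k * d))
        periods.append(2 * np.pi / omega)
    return periods

# Example with the experimental geometry discussed below:
# gap length h = 0.77 m, water depth d = 0.20 m
print(resonant_periods(0.77, 0.20))
```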
Validation of theoretical formulae To examine the effectiveness and accuracy of the formulae for viscid and inviscid fluid resonance in the gaps, the free surface elevations in the gaps predicted by the present formulas are compared with the values measured by Saitoh et al. (2008). In the experiments of Saitoh et al. (2002, 2008), the width and the thickness of a square caisson were 2b−2a=77 cm and h=77 cm, respectively. The water depth was d=20 cm. The widths of the gap were 3, 2.6, and 2 cm, and the length was 77 cm. Comparison of the free surface elevations in gaps predicted by the present models with the measured values The viscid fluid free surface elevations in the gaps are predicted utilizing Eqs. (41) and (42) for the plane wave hypothesis and using Eqs. (41), (42) and (45) for the full wave model, and the measured values (Saitoh et al., 2008) are plotted together in Fig. 2. Fig. 2. Comparison of the predicted free surface elevation with test data (Saitoh et al., 2008); solid line: full wave model; dotted line: plane wave hypothesis; symbols: test data. Curves and symbols in the figure describe theoretical predictions and test data, respectively. The solid line in Fig. 2 denotes the results of the full wave model and the dotted line those of the plane wave hypothesis. The theoretical curves calculated by the two models are almost coincident. Fig. 2 indicates that the heights and the positions of the resonance peaks predicted by the present models are in agreement with the experimental ones (Saitoh et al., 2008). On both sides of the resonance peak, the theoretical results are smaller than the experimental data. In Fig. 2b the predicted height of the resonance peak is a little larger than the measured value. The phase shifts of the response peaks corresponding to Fig. 2 are shown in Fig. 3; the theoretical values are in good agreement with the experimental data. Comparisons of the present model with the previously measured results confirm that the viscosity model, based either on the full wave solution or on the plane wave hypothesis, provides satisfactory results from the viewpoint of practical applications. In particular, the viscosity model can accurately predict the phase shift of the response peak. A reasonable viscosity model can reduce the free surface height in the gaps, move the position of the resonance peak, and lead to the phase shift of the response peak. Wave transmission through narrow gaps The transmission of waves through gaps is closely related to the gap length. In consideration of the influences of gap length and porosity, the transmission coefficients under the plane wave hypothesis are derived in Eq. (32) for inviscid fluids and Eqs. (41) and (42) for viscid fluids. The transmission coefficients for the full wave model have been obtained by Zhu (2013) and Zhu and Xie (2015). The transmission coefficients calculated utilizing the two kinds of models are shown in Fig. 4. Fig. 4a is the transmission coefficient for plane wave conditions; Fig. 4b is for the full wave model. The horizontal coordinate is the ratio, 2b/L, of the grating constant (2b=2a+A) to the wavelength. The inductance coefficient or resonance participation coefficient in the viscosity model is taken as δ=0.1, i.e., the phase shift factor is considered. There are two curves in each figure; one curve is for the inviscid fluid, and the other for the viscid fluid. 
The transmission peaks (dotted line) for the plane wave hypothesis can be plotted without limit; only the first six peaks are shown in Fig. 4a. In the full wave model, only four transmission peaks (dotted line) are observed in Fig. 4b, and for 2b/L>1 the other transmission peaks vanish. It has been shown that the ratio 2b/L=1 is indeed a critical value because the transmission peaks are truncated at this value; this bound is the upper limit for the appearance of the transmission peaks. The solid curves in Figs. 4c and 4d are truncated at the same critical value, which is the upper limit for the appearance of the resonance peaks, and the remaining two peaks are included in the range 2b/L<1. It is thus reasonable to think that the ratio 2b/L<1 is the upper limit for the appearance of fluid resonance in the gaps. The comparison of Fig. 4c and Fig. 4d shows that although the plane wave hypothesis can predict the positions and the heights of the transmission peaks, it cannot determine the upper limit for the appearance of the transmission peaks. The positions of the transmission peaks (dotted line) of the inviscid fluid in Fig. 4d are a little different from Fig. 4c and are moved to the low frequency side. This phenomenon shows that the evanescent waves in the full wave model have a phase shift function (Zhu and Xie, 2015), while the plane wave model does not include the evanescent waves. However, the value of the phase shift caused by the evanescent waves is smaller than that obtained by the resonance participation coefficient. 4.2 Conditions for the appearance of the fluid resonance in narrow gaps and the reason of phase shift The condition of fluid resonance in the gaps of the caissons is usually taken as kh=mπ (m=1, 2, 3, …). However, this condition is very rough, and it is not complete. Moreover, the modes (number) of the resonance peaks seem to be infinite in the spectrum curve, and the relationships of the resonance peaks with the caisson geometry are not clear. In this section, we examine the appearance condition of fluid resonance in the gaps in detail. Upper and lower limit conditions for the appearance of fluid resonance in gaps From Section 4.1, a ratio of the grating constant to the wavelength 2b/L<1 is the upper bound for the appearance of fluid resonance in the gaps, i.e., the resonance can occur only when the condition 2b/L<1 is satisfied. Taking 2b=Ƹh, the upper bound can be written as h/L<1/Ƹ (Eq. (50)). When Ƹ=2, the length of the gaps h=b is half of the grating constant, and Eq. (50) gives h/L<1/2, which already lies below the zeroth-order mode h/L=1/2. Fig. 5b for the full wave model shows that the viscid fluid resonance then does not appear in the gaps. However, the viscid fluid resonance peaks predicted by the plane wave model still appear in Fig. 5a. If Ƹ=1.7, Eq. (50) gives h/L<1/1.7≈0.59>1/2, and the zeroth-order fluid resonance occurs in the gaps (Fig. 6). Thus, a gap length equal to half of the grating constant may be the lower limit for the appearance of the zeroth-order resonance in the gaps. In contrast, the resonance peaks calculated by the plane wave model are distributed over the whole frequency domain, and the upper and lower bounds for the appearance of the resonance peaks cannot be found by the plane wave model. When Ƹ=1/4, the length of the gaps h=4(2b) is four times the grating constant, and Eq. (50) gives h/L<4, which is already larger than 7/2. There are four resonance peaks (Fig. 7) at h/L=1/2, 3/2, 5/2 and 7/2 in the spectrum curve. The zeroth-order mode is h/L=1/2, and the third-order mode is h/L=7/2. 
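The counting rule illustrated by these cases can be stated compactly. The sketch below (not part of the original paper) enumerates the allowed resonant modes h/L=(2n+1)/2 under the bound 2b/L<1, i.e., h/L<h/(2b).

```python
def allowed_resonance_modes(h, two_b):
    """
    Resonant modes h/L = (2n+1)/2 satisfying the upper bound 2b/L < 1, i.e. h/L < h/(2b).
    Sketch of the counting rule described in the text, not the authors' code.
    """
    upper = h / two_b          # upper bound on h/L implied by 2b/L < 1
    modes, n = [], 0
    while (2 * n + 1) / 2 < upper:
        modes.append((2 * n + 1) / 2)
        n += 1
    return modes

# Cases discussed above, with the grating constant normalised to 2b = 1
for ratio in (0.5, 1.0, 2.0, 3.0, 4.0):      # h = 0.5, 1, 2, 3, 4 grating constants
    print(f"h = {ratio} x (2b): modes h/L = {allowed_resonance_modes(ratio, 1.0)}")
```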
Therefore, the upper limit of the resonance peaks completely depends on the ratio of the length of the gap to the grating constant; as the length of the gap increases, the upper bound is raised and has no fixed value. Number of resonance peaks in the spectrum curve The number of resonance peaks depends on the ratio of the gap length to the grating constant. When h≤b, there is no resonance in the gaps. As the length of the gap increases, the number of the resonance peaks also increases. When h=2b, 2b/L=h/L<1 allows only h/L=1/2, and there is a single resonance peak for the zeroth-order mode (Fig. 8). For h=2(2b), 2b/L=h/(2L)<1 gives h/L<2, so h/L can take 1/2 and 3/2, and two resonance peaks for the zeroth- and first-order modes are displayed on the spectrum curve in Fig. 9. Again, as in Fig. 4d, when the length of the gaps h=3(2b) is three times the grating constant, 2b/L=h/(3L)<1 gives h/L<3; h/L can then take 1/2, 3/2 and 5/2. Fig. 10 shows that there are three resonance peaks in the spectrum curve, for the zeroth-, first- and second-order modes. Therefore, the index 2b/L<1 determines the number of resonance peaks in the spectrum curve and the upper limit for the appearance of fluid resonance in the gaps. Value and reason of phase shift of resonance peaks in spectrum curve The previous analysis shows that there are two kinds of phase shift phenomena, i.e., the phase shift caused by the evanescent waves in the full wave model and the phase shift caused by the resonance participation coefficient in the viscous model. The influence of the former is small; the effect of the latter is obvious. The main task of this section is to discuss the influence of the resonance participation coefficient on the phase shift. Fig. 2 describes the theoretical predictions and test data, and the results for the full wave model and for the plane wave hypothesis are almost coincident. The theoretical height and location of the resonance peak for the zeroth-order mode agree well with the experimental data. In Fig. 11, we also plot the curves of the influence of the gap width on the resonance. In Fig. 11a the phase shift is set equal to zero, so the position of the resonance peak is h/L=1/2, i.e., kh=π. When the ratio of the gap width to the gap length is 2a/h<0.1, the viscosity of the fluid plays a very important role and the resonance peak is obviously reduced; for 2a/h≥0.1, the heights of the resonance peaks predicted by the viscid fluid models are the same as those of the inviscid fluid model. In Fig. 11b, δ=0.1. The values of the phase shift increase and the heights of the resonance peaks decrease with an increase of the gap width. When the gap width is larger than 10% of the gap length, the heights of the resonance peaks are smaller than 5 times the incident wave height. The average values at the three locations of the resonance peaks for the zeroth-order mode correspond to Fig. 3 (Saitoh et al., 2008). Figs. 7 and 10 also show that the phase shifts increase gradually with an increase of the resonance order. The relation of the phase shift of the resonance peaks for the zeroth-order mode with the resonance participation coefficient is shown in Fig. 12. Four resonance curves for the zeroth-order mode correspond to four resonance participation coefficients, δ=0, 0.1, 0.3 and 0.5. When δ=0, there is no phase shift, kh=3.1416 (the upper limit for the zeroth-order mode); for δ=0.5, the phase shift is the largest, and the locations of the resonance peaks for the zeroth-order mode are moved to the lower frequency side. 
Fig. 12 shows that the heights of the resonance peaks decrease with an increase in the resonance participation coefficient δ. The first term in Eqs. (35) and (36) for the viscosity model is an inductive reactance. It does not dissipate energy, but it can store energy and lead to the phase shift. According to the study of the perforated wall caisson breakwater (Zhu and Zhu, 2010), the first term in the viscosity model can be used to change the position of the minimum reflection coefficient on the curve of the reflection coefficient versus the relative width of the caisson, i.e., to move the minimum reflection coefficient to the low frequency side. The function of this term for the fluid resonance in the gap is similar to that for the perforated wall breakwater. The volume and mass of the resonant water are relatively easy to determine in the perforated wall caisson breakwater, but the water volume and mass for the fluid resonance in the gap are difficult to identify, so the resonance participation coefficient δ is used as an approximation. The fluid energy in the gaps can be divided into kinetic energy and potential energy. Kinetic energy is the energy carried by the water body (including the added mass) flowing in and out at both ends of the gap; potential energy can be expressed with the free surface elevation above the still water level. When the width of the gap is fixed, increasing the coefficient δ increases the volume of water involved in the resonance, the phase shift of the resonance peaks and the kinetic energy, thereby reducing the potential energy and the height of the resonance peaks. For a fixed coefficient δ, increasing the width of the gaps gives the same result (Fig. 11). Therefore, a reasonable selection of the coefficient δ is the key to determining the heights and the locations of the resonance peaks. Conclusions This paper presents details of the theoretical analysis and the numerical results for the inviscid and viscid fluid resonance in the gaps of bottom-mounted caissons by using a plane wave hypothesis and a full wave model. The main focus is to determine the upper and lower bounds for the appearance of fluid resonance in the gaps, and the heights, the number and the phase shift of the resonance peaks in the spectrum curve. On the basis of the comparisons of the present model with the existing theoretical and experimental results, the following conclusions can be drawn: (1) The theoretical analysis and the numerical results all show that the general conditions for the appearance of fluid resonance in narrow gaps are kh=(2n+1)π (n=0, 1, 2, 3, …), i.e., h/L=(2n+1)/2, rather than kh=nπ (n=1, 2, 3, …). (2) The plane wave hypothesis and the full wave model can accurately determine the heights and the locations of the resonance peaks in the viscid fluid, and reasonably explain the value of and the reason for the phase shift of the resonance peaks in the spectrum curve. However, the plane wave hypothesis does not give the upper and lower limits for the appearance of fluid resonance in the gaps or the number of the resonance peaks in the spectrum curve. (3) The upper bound, 2b/L<1, for the appearance of fluid resonance in the gaps can be obtained by the full wave model in an inviscid fluid, and the lower bound, h/b≤1 (below which no resonance appears), can be obtained by the full wave model in a viscid fluid. (4) The number of resonance peaks in the spectrum curve is completely dependent on the ratio of the length of the gaps to the grating constant. 
Increasing the length of the gaps yields a higher number of resonance peaks. (5) The main reason for the phase shift of the resonance peaks is the inductive factor (the resonance participation coefficient). With an increase in the phase shift, the potential energy contained in the gap fluid is gradually converted into kinetic energy, thereby reducing the resonance peak height and moving the resonance peak to the low-frequency side. A range of 0.0≤δ≤0.15 for the coefficient may be reasonable. The viscosity model can accurately predict the phase shift of the response peak; however, a suitable coefficient δ still needs to be chosen. The influence of the evanescent waves in the full wave model on the phase shift is smaller than that of the resonance participation coefficient. (6) When the gap width is larger than 10% of the gap length, the heights of the resonance peaks are smaller than 5 times the incident wave height. (7) It should be further studied whether the content of this paper can be applied to the fluid resonance in the gaps of multiple floating bodies.
7,746.2
2017-12-01T00:00:00.000
[ "Geology" ]
Preparation and Characterization of Polyurethanes with Cross-Linked Siloxane in the Side Chain by Sol-Gel Reactions A series of novel polyurethanes containing cross-linked siloxane in the side chain (SPU) were successfully synthesized through a sol-gel process. The SPU was composed of 0%–20% N-(n-butyl)-3-aminopropyltriethoxysilane-modified hexamethylene diisocyanate homopolymer (HDI-T). The effects of HDI-T content on both the structure and properties of SPU were investigated by Fourier transform infrared spectroscopy (FT-IR), X-ray photoelectron spectroscopy (XPS), X-ray diffraction (XRD), scanning electron microscopy (SEM), energy dispersive spectroscopy (EDS), differential scanning calorimetry (DSC), thermogravimetric analysis (TGA), mechanical properties tests, gel content test, water contact angle measurement and water absorption test. FT-IR, XPS and XRD results confirmed the successful incorporation of HDI-T onto polyurethanes and the formation of Si–O–Si. The surface roughness and the Si content of SPU changed with the increase of HDI-T content. Both crystallization and melting temperature shifted to a lower point after the incorporation of HDI-T. The hydrophobicity, tensile strength, Young’s modulus and pencil hardness overall increased with the increasing of HDI-T content, whereas the thermal stability and the elongation at break of SPU slightly decreased. However, there is still a need to improve properties of PUs with the emphasis on their hydrophobicity and mechanical properties. This can be achieved by varying the microstructures of PUs or incorporating inorganic fillers. For example, organic-inorganic nanocomposites were developed to combine the desirable properties of PUs and those of inorganic fillers [13][14][15][16] such as clay, silica and other nano-sized layered silicates. Consequently, significant improvement in performances such as mechanical properties, thermal stability and others was achieved. Moreover, various PU/polysiloxane hybrids, PU/alkoxysilane hybrids, PU/acrylic hybrids and PU/epoxy resin hybrids were prepared to offer synergetic properties through different methods, such as ultraviolet polymerization, miniemulsion polymerization, seeded emulsion polymerization and interpenetrating polymer networks [17][18][19][20]. Among the above methods, modification of PUs with polysiloxane is an important and effective way to prepare high performance materials [21]. The unique properties of polysiloxane, such as low surface energy, good thermal stability, and excellent flexibility, are mainly attributed to its intrinsic structure containing inorganic Si-O bonds. The combination of PU and polysiloxane could offer better heat resistance and low temperature flexibility than PU alone and better mechanical properties and abrasion characteristics than polysiloxane alone [22][23][24]. Therefore, the possibility of combining the advantages of polysiloxane and PU has triggered many investigations for a long time. Most methods for improving the water resistance, surface hydrophobicity and mechanical strength of PU introduce siloxane at both ends of the PU chain [25][26][27][28][29][30][31][32][33][34][35], as shown in Figure 1. The entrained Si-O-R will ensure the cross-linking of the modified PU, which may improve its water resistance, surface hydrophobicity and mechanical properties. However, the siloxane can only be introduced at the terminal groups of the backbone. Therefore, the amount of cross-linked units is low and consequently the improvement in the properties of PU may not be significant enough. This work aims to attach siloxane onto both the backbone and side chains of PU, as shown in Figure 2. In this way, the amount of siloxane in PU would increase greatly, which may further significantly improve the surface hydrophobicity and mechanical properties. Firstly, hexamethylene diisocyanate homopolymer (HDI-3) was grafted with N-(n-butyl)-3-aminopropyltriethoxysilane (NBAPTS) to generate HDI-T. A given amount of HDI-T was then mixed with 1,6-hexamethylene diisocyanate (HDI). The mixture reacted with polytetramethylene glycol (PTMG). The resulting product was chain-extended with 1,4-butanediol (BDO) and blocked with NBAPTS. This process offers a novel self-designing approach to develop novel PU with excellent properties. The structure and properties of the resultant materials were investigated by Fourier transform infrared spectroscopy (FT-IR), X-ray photoelectron spectroscopy (XPS), X-ray diffraction (XRD), scanning electron microscopy (SEM), differential scanning calorimetry (DSC), thermogravimetric analysis (TGA), and mechanical and gel content tests. Materials 1,6-hexamethylene diisocyanate (HDI), hexamethylene diisocyanate homopolymer (HDI-3), and polytetramethylene glycol (PTMG; number-average molecular weight Mn of 2000 g/mol) were obtained from Bayer Material Science (Pittsburgh, PA, USA). N-(n-butyl)-3-aminopropyltriethoxysilane (NBAPTS) was also used. Preparation of NBAPTS-Grafted HDI-3 (HDI-T) As shown in Figure 3, NBAPTS (13.85 g) and HDI-3 (27.39 g, -NCO = 23%) were added to anhydrous THF (41.24 g) in a 500 mL four-neck round-bottom flask filled with argon and equipped with a mechanical stirrer. Experiments were performed at 0 °C and the rotation speed of the mechanical stirrer was 300 rpm. The dibutylamine method was used to determine the content of the isocyanate groups in the reaction system. The reaction was stopped when the content of the isocyanate groups reached the theoretical value (-NCO = 5.09%) and the target HDI-T was obtained. 
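The dibutylamine end-point mentioned above is determined by back-titration; a minimal sketch of the standard calculation is given below. The titration volumes and acid normality are illustrative assumptions, not values reported in the paper.

```python
def nco_percent(m_sample_g, v_blank_ml, v_sample_ml, n_hcl):
    """
    Residual isocyanate content (wt%) from a di-n-butylamine back-titration.
    (v_blank - v_sample): HCl volume difference, equal to the amine consumed by NCO groups.
    4.202 = molar mass of the NCO group (42.02 g/mol) divided by 10 for the percent conversion.
    """
    return (v_blank_ml - v_sample_ml) * n_hcl * 4.202 / m_sample_g

# Illustrative numbers only: ~1 g of prepolymer titrated against a blank with 0.1 N HCl
print(f"%NCO = {nco_percent(1.0, 25.0, 12.9, 0.1):.2f}")  # about 5.09 %, the theoretical end point
```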
Synthesis of Silicone/Polyurethane (SPU) Hybrids As shown in Figure 4, the molar ratio between isocyanate and hydroxyl groups was fixed at 1.2. THF, PTMG, HDI and HDI-T were added to a four-neck round-bottom flask filled with argon. The mixture was maintained at 25 °C and stirred for 30 min. Two drops of dibutyltin dilaurate (DBTDL) as a catalyst were added and the mixture was heated up to 55 °C and maintained at that temperature until the theoretical -NCO value was reached. The resulting prepolymer was chain-extended with BDO at 45 °C. After the flask was cooled to room temperature, NBAPTS was added dropwise to the flask at 20 drops per minute until the isocyanate groups were used up. Finally, a homogeneous and transparent SPU hybrid was obtained. The compositions of all samples are given in Table 1. Preparation of Silicone/Polyurethane (SPU) Films All hybrids obtained in Section 2.2.2 were smeared evenly on a polytetrafluoroethylene mold at room temperature for 7 days for moisture-induced curing, and a series of cross-linked SPU films were obtained, as shown in Figure 5. 
Characterization FT-IR spectroscopy analysis: FT-IR spectroscopy was performed on a spectrometer (Thermo Fisher Scientific, Waltham, MA, USA) using the attenuated total reflectance technique. Data were collected in the range of 4000-500 cm −1 at 4 cm −1 resolution. XPS analysis: X-ray photoelectron spectroscopy (XPS) measurements were made on a Thermo VG ESCALAB 250 spectrometer (East Grinstead, UK) with a monochromatic Al-Kα X-ray source. Differential Scanning Calorimetry (DSC) analysis: Between 5 and 10 mg of pre-dried sample were analyzed on a DSC instrument (Q200, Newcastle, TA, USA). Samples were heated up from 30 to 100 °C at a rate of 20 °C·min −1 and held at 100 °C for 3 min to remove the thermal history under a dry helium purge. They were then cooled down to −50 °C at a rate of 20 °C·min −1 . Finally, they were heated up to 100 °C again at the same rate. Wide angle XRD analysis: Wide angle X-ray diffraction measurement was carried out with a Philips X'pert-PRO (PANalytical, Holland) using Cu-Kα radiation. The diffraction angle 2θ ranged from 5° to 60°. Surface morphology analysis: The surface morphology of the SPU samples was analyzed by scanning electron microscopy (SEM, JSM5900LV, JEOL, Tokyo, Japan) at an accelerating voltage of 25 kV. Samples were adhered to aluminum sample holders and sputter coated with an Au layer. The content of Si element on the surface of the SPU films was measured by energy dispersive spectroscopy (EDS, EMAX, and EX-450 JEOL, Tokyo, Japan). Thermogravimetric analysis (TGA): TGA was used to measure the weight loss of the SPU films under nitrogen atmosphere. Samples were heated from 100 to 700 °C at a heating rate of 10 °C·min −1 . Mechanical properties analysis: The tensile properties of the films were measured at 25 °C with a universal testing machine (CMT, SANS, Shenzhen Sans Material Test Instrument Co., Ltd., Shenzhen, China) at a crosshead speed of 300 mm·min −1 . The reported values were averages of five specimens. Pencil hardness was determined according to ISO 15184. The SPU were coated on a glass and the final thickness of the films was about 100 µm. 
Gel content analysis: Samples (approximately 1 g) were cut from the SPU films, weighed, and then put in a Soxhlet extractor (Wuhan, China) filled with THF for 24 h. After the extraction, the samples were dried and the gel content Wg (%) was calculated as Wg = (m2/m1) × 100%, where m1 and m2 are the masses of the dried SPU films before and after the extraction, respectively. Water contact angle analysis: Water contact angles (WCA) were measured through the sessile drop method on a Dataphysics OCA20 (Wuhan, China) contact angle meter. The reported WCA values were the averages of five measurements taken at five different surface locations. Water absorption analysis: Two grams of pre-weighed dry SPU film were immersed in de-ionized water at room temperature. After the excess water was wiped from the film surface with filter paper, the mass of the swollen film was measured immediately. The water absorption was calculated as the mass percentage of water in the swollen sample, i.e., (m3 − m4)/m3 × 100%, where m4 and m3 are the masses of the dry and swollen samples, respectively. 
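For clarity, the two gravimetric quantities defined above can be evaluated as in the short sketch below; the expressions follow the definitions given in the text, and the input masses are illustrative only, not measured values from the paper.

```python
def gel_content(m_before_g, m_after_g):
    """Gel fraction Wg (%): insoluble mass remaining after THF Soxhlet extraction / initial mass."""
    return m_after_g / m_before_g * 100.0

def water_absorption(m_dry_g, m_swollen_g):
    """Mass percentage of water in the swollen film, as defined in the text."""
    return (m_swollen_g - m_dry_g) / m_swollen_g * 100.0

# Illustrative inputs only
print(f"Wg = {gel_content(1.000, 0.912):.1f} %")
print(f"water uptake = {water_absorption(2.00, 2.35):.1f} %")
```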
Infrared Spectroscopy A series of SPU hybrids and films were synthesized on the basis of the HDI-T and NBAPTS. Stable hybrids and films were obtained by the addition of 0, 5, 10, 15 and 20 mol % HDI-T to SPU, respectively. The FT-IR spectra of the SPU films are shown in Figure 6. The weak absorption bands around 3310 cm −1 (N-H stretching) and 1530 cm −1 (N-H bending) and strong absorptions at 1700 cm −1 (free C=O stretching of urethane and carboxylic groups) and 1210-1240 cm −1 (stretching vibration of N-CO-O) confirm the formation of the urethane linkage. The peaks at 2938, 2857, and 2800 cm −1 (CH 2 and CH 3 stretching vibration); 1105 cm −1 (C-O-C stretching vibration of PTMG and Si-O-Si asymmetric stretching vibration); 1258 cm −1 (CH 3 - in Si-CH 3 symmetric bending); 1072 and 1021 cm −1 (Si-O stretching); and 800 cm −1 (Si-C stretching) can clearly be observed in the spectra. The peak at 2270 cm −1 (N=C=O stretching) has disappeared. This indicates that siloxane groups were successfully introduced into the SPU hybrid films. DSC Analysis of SPU Films DSC results showed that the crystallization temperatures of SPU0, SPU5, SPU10, SPU15 and SPU20 were −22.4, −23.9, −25.4, −31.1, and −31.5 °C, respectively. Their melting temperatures were 29.1, 26.3, 25.7, 23.7, and 19.7 °C, respectively, as shown in Figure 8 and Table 2. The chemical linkages resulting from the polycondensation reaction between -Si-O-C2H5 and H2O might have weakened the crystallization ability of the soft segments [36,37], decreased the melting temperature of the crystalline domains, restricted the movement of the molecular chains in the crystallized domains even at temperatures above the melting temperature, and provided the SPU films with an elastomer state at ambient temperature. This result indicated that the addition of HDI-T affected the crystal structure of the polymer and resulted in a lower crystallinity or a lower degree of segment order. X-ray Diffraction Analysis X-ray patterns of the SPU with different HDI-T contents are shown in Figure 9. All the diffractograms were similar and exhibited a broad diffraction halo around 22°. Moreover, the diffraction peak became weaker and broader with increasing HDI-T content, implying that the crystallinity of the SPU films gradually decreased with increasing HDI-T content. The hydrolysis and condensation reaction of the alkoxy silane in HDI-T and NBAPTS formed a Si-O-Si cross-linked network structure, which restricted the movement and ordered arrangement of chain segments, decreased the regularity of the soft segments, and therefore led to a decrease in the crystallinity of the soft segments [23,24,34]. 
Figure 10 shows the morphologies of the surfaces of the SPU films by SEM. The surface of the SPU0 was rough and contained white spots which might be siloxane particles. As the HDI-T content increased, the white spots vanished and the surface of the film became smoother. The surface of the SPU20 film was smooth and was significantly different from the films containing low HDI-T contents. It is known that the microstructures of PU block copolymers can be affected by the chemical compositions and lengths of the blocks, and by the miscibility between hard and soft segments [38,39]. In this work, the hard segments were composed of urethane and urea groups, and the soft ones of polyester carbonyls and Si-O-Si chains. Moreover, the soft segments were the matrix and the hard ones were dispersed in it. The surfaces of the films of such materials could range from strongly phase separated to nearly homogeneous, depending on the miscibility between their soft and hard segments. Figure 10. SEM micrographs of the surfaces of SPU films (scale bar = 10 µm). The EDS analysis was performed to identify the presence of atoms in the samples at a depth of 100-1000 nm from the surfaces. The expected elements (C, O and Si) can be observed in Figure 11 and Table 3. The percentage of Si element on the surface increased from 0.33% (SPU0) to 0.89% (SPU20) with increasing HDI-T content and consequently crosslinking degree. The theoretical and experimental results of EDS for the content of Si matched fairly well. Figure 12 shows the TGA and DTG curves of all SPU hybrid films. They all showed two-stage decomposition temperatures. 
The slight weight loss up to 250 °C was due to the evaporation of residual moisture and the presence of organic solvents in the films [40]. The weight loss between 250 and 350 °C was attributed to the dissociation of urethane bonds to form isocyanates, alcohols and amines [41]. The degradation above 400 °C was mainly due to the scission of the cross-linked structure. The major decomposition product was SiO2. The DTG curves of the SPUs shifted slightly to a lower temperature with an increase in HDI-T content, which was slightly different from some of the research works reported in the literature [39]. Two factors might be responsible for this phenomenon. On the one hand, as the HDI-T content increased, the gel content of SPU increased, as shown in Figure 13. That was beneficial for the thermal stability. On the other hand, the content of C-N bonds increased with increasing HDI-T content. However, the bond energy of C-N (305 kJ·mol −1 ) is lower than those of C-C (346.9 kJ·mol −1 ) and C-H (414 kJ·mol −1 ). That was unfavorable for the thermal stability. Therefore, the thermal stability of the SPU films was a trade-off between these two opposite factors. Figure 14 shows that the tensile strength of the SPU films increased with increasing HDI-T content, whereas their elongation at break showed an opposite trend because of an increase in the crosslinking degree of the SPU films with the formation of Si-O-Si linkages through the hydrolysis and condensation process. Table 4 summarizes the Young's modulus, tensile strength, elongation at break and pencil hardness values as a function of HDI-T content. When the HDI-T content was increased from 0 to 20 mol %, the Young's modulus and tensile strength increased from 0.41 MPa to 1.77 MPa and from 0.15 MPa to 0.55 MPa, respectively. Meanwhile, the elongation at break decreased from 1019.7% to 471.4%, suggesting an increased brittleness. All those changes resulted from the increased crosslinking degree. 
The crosslinking reduced the mobility of the chains during tensile deformation and consequently increased their mechanical properties [42,43]. The hardness increased from HB to 2H with increasing HDI-T content. Surface Property and Water Absorption of SPU Films The water contact angle test was used to characterize the surface properties of the SPU films. The results are shown in Figure 15. As the HDI-T content increased, the contact angle of water on the SPU film increased. It was 66.5°, 71.5°, 79.2°, 81.2° and 86.5° for SPU0, SPU5, SPU10, SPU15 and SPU20, respectively, indicating an obvious improvement in the hydrophobicity of the SPU films. The results indicated that the incorporation of alkoxysilane reduced the surface free energy of the SPU films because of the migration of Si atoms with a low polarity to the surfaces of the SPU films [44]. Therefore, the wettability of the SPU films decreased and their hydrophobicity increased with increasing alkoxysilane content. This phenomenon agreed with the results reported in the literature [45][46][47]. 
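As a rough way to quantify the trend of the reported contact angles, a least-squares line can be fitted to the five values listed above; this is an illustrative post-processing step, not an analysis performed in the paper.

```python
# Reported water contact angles (degrees) vs HDI-T content (mol %), taken from the text above
hdi_t = [0, 5, 10, 15, 20]
wca = [66.5, 71.5, 79.2, 81.2, 86.5]

# Simple least-squares slope and intercept
n = len(hdi_t)
mean_x = sum(hdi_t) / n
mean_y = sum(wca) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(hdi_t, wca)) / sum((x - mean_x) ** 2 for x in hdi_t)
intercept = mean_y - slope * mean_x
print(f"WCA ~ {intercept:.1f} + {slope:.2f} * (mol % HDI-T)")  # roughly +1 degree per mol % HDI-T
```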
In parallel with the improved surface hydrophobicity with increasing HDI-T content from 0% to 20%, the water absorption of the SPU films after seven days decreased from 27.2% to 7.6%, as shown in Figure 16. This indicates that the incorporation of HDI-T in SPU improved their water resistance. This may be ascribed to the formation of a hydrophobic layer of Si-O-Si chains enriched on the surface of the SPU film and that of a crosslinked siloxane network structure, both of which prevent water molecules from getting into and diffusing through the film. Conclusions In this study, we synthesized HDI-T and incorporated it into SPU chains. The HDI-T content was between 0 and 20 mol %. FT-IR, XPS and XRD results showed the successful incorporation of HDI-T into the polyurethanes and the formation of Si-O-Si. SEM images exhibited a much smoother surface and the EDS test found that the Si content increased with increasing HDI-T content, while DSC demonstrated that both the crystallization temperature and the melting temperature moved to a lower point as a result of the incorporation of HDI-T. The resulting SPU coating films exhibited an increased surface hydrophobicity and decreased water absorption. Meanwhile, the Young's modulus, tensile strength and pencil hardness of the films were also improved with increasing HDI-T content, but their elongation at break decreased. The thermal stability of the SPU films became poorer with the incorporation of more C-N bonds as a result of an increase in HDI-T content. This study provided a new effective way to prepare SPU coating materials with different performances.
7,711.2
2017-02-28T00:00:00.000
[ "Materials Science" ]
A thermoeconomic indicator for the sustainable development with social considerations The United Nations action plan Agenda 21 has represented a milestone toward Sustainable Development. On its 40th Chapter, it is introduced the requirement to dispose of an accurate and continuous collection of information, essential for decision-making. Besides bridging the data gap and improving the information availability, it is highlighted the need to dispose of sustainable development indicators, in order to assess and monitor the performances of countries toward sustainability. In this paper, we develop an improvement of a new indicator, recently introduced linking environmental anthropic footprint and social and industrial targets. Here, we suggest a link with the Income Index, in order to consider also a condition of people well-being. Our results consists in an improvement of the present approaches to sustainability; indeed, we link the socio-economic considerations, quantified by the Income Index and the Human Development Index, to the engineering approach to optimization, introducing the thermodynamic quantity entropy generation, related to irreversibility. In this way, two different new indicators are introduced, the Thermodynamic Income Index and the Thermodynamic Human Development Index, which quantitatively express a new viewpoint, which goes beyond the dichotomy between socio-economic considerations on one hand and engineering and scientific approach to sustainability on the other one. So, the result leads to a unified tool useful for the designing of new policies and interventions for a sustainable development for the next generations. Introduction The use of a continuous increasing amount of energy has been fundamental for the human development. Indeed, a key factor for socio-economic development of societies can be identified in the capability to manage flows of energy and materials (Cleveland et al. 1984). During the evolution of human history, specially from the industrial era up to now, our society has begun to need always more power and to deeply depend on fossil fuels. However, nowadays, there are main concerns linked to the use of fossil fuels, not last those related to environment and sustainability. The increase in greenhouse gasses (GHGs) and pollutant emissions on the one hand, and the depletion of fossil fuel resources on the other one, are driving the scientific research to find alternative sources of energy and technologic solutions to burn less fuel and to reduce pollutant emissions. So, in the last decades, increasing and optimizing energy efficiency has become a highpriority for all engineering areas, specially in relation to sustainability, sustainable development and to the rational use of resources. The key challenge of sustainability Sustainability and sustainable development represent a key challenge of our Century for all the disciplines. Since 1983, the World Commission on Environment and Development (WCED) started to work on problems related to environment and development, trying to advance tools to orient the international community to solve them. The goal of the Brundtland Commission was to try to optimize the process of development considering three different dimensions of it: the economical, the environmental and the social one (Spangenberg et al. 2002). Therefore, it has become common to describe Sustainable Development through the three interlinked pillars of sustainability (Purvis et al. 
2019): This concept is often presented graphically trough a Venn diagram, with three intersecting circles (the three main domains of sustainability) (Mensah 2019), where only the overlap area of the three domains implies sustainable development. The first literature work in which is presented this conceptualization is by Barbier in 1989Barbier (1987. Barbier also calls it systems approach (Barbier and Burgess 2017), where emerges the need not to maximize the single goals of each subsystem (economic, social and environmental) but to find a continuous balance of trade-offs among this different goals, without ignoring the consequences on the other subsystems. The most important aspect to take into account is that an action, which is in accordance with sustainable development, must consider at the same time the three main domains of sustainability. Brief history of sustainable development and the need to measure it Sustainable development started to be promoted in the early Seventies in order to reach suitable environment setting (Asr et al. 2019) in the course of the development both of societies and of technical progress. In this context, the work "The limits to Growth" 1 3 Meadows et al. (1972), commissioned by the Club of Rome, can be considered a precursor on this topic, where the authors have developed a model to realize a simulation of the interaction human-Earth in which are taken into account five main variables: consumption of non-renewable resources, industrialization, food production, pollution and population. Being the natural resources upper limited (finite quantities) and assuming an exponential growth of the main variables (according to the previous historical data) (Basiago 1999), the authors concluded that there exists an upper limit of time, persevering this kind of behavior (Norman 2009). The term Sustainable development first appeared officially in 1980, in the World Conservation Strategy Report (IUCN 1980), where emerged the need for a global approach to the administration of resources on which anthropic activities and development rely on. Indeed, the focus of this report was on two main issues: • Conservation, referred to the necessity for conservation of our living ecosystem, limiting ecosystems degradation and all the problems related with negative human beings taking into account the needs of the future generations; • Interrelation of actions at a global level which highlights the responsibility of local actions, which all have a rebound at a global scale. This document was addressed to government policy makers, to the people who works in strictly contact with natural resources and development practitioners (IUCN 1980) with the aim to reach sustainable development stimulating an approach based on conservation and awareness of human actions. So, fifty years ago, clearly emerged the request of a more sustainable way of management of Earth resources use, since it was noticed the failure of integrating conservation with development. In 1986, the International Union for Conservation of Nature (IUCN), identified five requirements to realize sustainable development, which can be summarized as Jacobs et al. The most often cited definition of Sustainable development (Schaefer and Crane 2005) as the development which responds to the present requirements without compromising the potentiality of the next generations (WCED 1987). 
This report put the spotlight on sustainable development both in the scientific community (Castro 2004) and in the international policy framework (Johnston et al. 2007). The Brundtland Report is divided into three main parts: common concerns, common challenges and common endeavors; in all of them emerges the requirement of linking economic growth to social equity and environmental concerns. The WCED put the spotlight on sustainable development (Castro 2004) and, in the wake of the Brundtland Report's contents, in 1992 the largest world leaders' meeting ever recorded (Basiago 1999) was held in Brazil: it is known as the "Rio Earth Summit." The aim of this event was to create an international partnership in order to implement strategies for sustainable development for the entire global population (Cicin-Sain 1996). It was followed by the creation of the Commission on Sustainable Development to pursue the progress of Agenda 21. Some of the mechanisms introduced to implement the targets of Agenda 21 were: • the Program for the Further Implementation of Agenda 21, in 1997; • the United Nations Millennium Development Goals (MDGs), or International Development Goals, in 2000. With Agenda 21 emerged the need for tools to measure sustainable development: indicators were identified as the suitable tool for the assessment and continuous improvement of the development of nations (Strezov et al. 2016). In 2002, the Johannesburg World Conference on Sustainable Development was held, identified as the first international gathering in which factors concerning economy, society and environment were proposed to underpin sustainable development. Nine years later, the concept of the triple helix was introduced in the UN Environment Programme (UNEP). This idea encloses the three factors as being intertwined in a helical shape (Haines et al. 2012), highlighting the complexity of the topic. Last, in 2015, the Sustainable Development Goals (SDGs) were introduced by the United Nations General Assembly, with the objective of fulfilling them by 2030. They are presented within the Agenda 2030 and to each of them corresponds a list of targets to achieve (169 overall), which are measurable by means of at least one indicator (232 approved indicators overall). The principal areas on which the Agenda 2030 is focused are (United Nations 2015): • People: the first SDG is "No poverty," to be achieved in all its forms and for all people; • Planet: protecting the ecosystem and preventing irreversible damage to it, acting against climate change; • Peace: in order to have peaceful societies in all corners of the world; • Prosperity: progress and well-being from an economic and social point of view, both for humans and for nature; • Partnership: to fulfil all the goals of the agenda, mutual aid and agreements are needed. Indicators of sustainability In the literature there are several international studies and reviews on indicators of sustainability, as deeply analyzed in Refs. (Munda 2005; Santagata et al. 2020; Rossi et al. 2020; Pascale et al. 2021), on industrial supply chains (Neri et al. 2021), but also on the sustainability of biofuel production (Mayer et al. 2020), biomass-based carbon chemicals (Horváth et al. 2017), agriculture (de Olde et al. 2017; Janker and Mann 2020), emerging technologies (Açıkkalp and Ahmadi 2018; Meramo-Hurtado and González-Delgado 2020), etc. Moreover, the need to assess sustainability must be coupled with the requirement of decent standards of quality of life (Eras et al. 2013) and human well-being.
Therefore, in order to support decision-making activities toward sustainability, researchers and international organizations have been working on proposing new indicators. In 1989, Cobb introduced the Index of Sustainable Economic Welfare (ISEW) as an alternative to the Gross Domestic Product (GDP) (Cobb 1989). Then Cobb himself extended his indicator (Cobb and Cobb 1994; Cobb et al. 1999), developing the Genuine Progress Indicator (GPI), which contains aspects of all three domains of sustainable development. Another indicator of sustainability, introduced in the 1990s, was the Ecological Footprint (EF), which considers the surface of productive land required to support a given population at its actual level of consumption (Rees 1992; Moldan et al. 2012). In order to measure the annual total capital stock of a country, including also wealth accounting, the Genuine Savings Indicator (GSI) has been presented (Hamilton and Naikal 2014; Hamilton and Hepburn 2014). The Environmental Sustainability Index (ESI) is a composite index to assess sustainability by using environmental and socio-economic indicators (Esty et al. 2005). This index encloses 21 different indicators, each combining from two to eight variables (76 variables overall) (Wilson et al. 2007). Then, this composite index was modified by adding some indicators regarding human health and environmental issues, yielding the Environmental Performance Index (EPI). The latter identifies economic and social driving forces and environmental pressures, and assesses the impacts on human health and on the environment (Hsu et al. 2013). The Sustainable Society Index (SSI) is a composite index which encompasses indicators of all three main domains of sustainability; it has been introduced to measure the level of sustainability of a country, including the most important aspects of sustainability and quality of life of a national society (de Kerk and Manuel 2008). Another index built as an alternative to the GDP is the Happy Planet Index (HPI), which measures the trade-off between ecological footprint data and life quality (Tausch 2011), with a subjective measure of well-being (Campus and Porcu 2010). One of the landmarks among the indicators of sustainable development is the Human Development Index (HDI), which was proposed in the early 1990s by the United Nations Development Programme (United Nations Development Programme 1990; UNDP 1990; Sagar and Najam 1998). It is a multidimensional index which measures the development of a country from a socio-economic standpoint, focusing on human well-being by considering key parameters of social development (Sagar and Najam 1998; Hickel 2020). During the last thirty years, this indicator has been updated and improved (Liu et al. 2017; Hickel 2020; UNDP 2010, 2015). In Table 1, the main indicators presented in the literature are summarized in relation to their chronological introduction and, in Table 2, the main dimensions of sustainability encompassed by each of them are highlighted. Despite the big efforts made in the last decades in defining new indicators of sustainability and sustainable development, there are still open problems with their definition and acceptance. In order to overcome these issues, it is fundamental to have an interdisciplinary approach based on policy-making and the sciences, concerning all the relevant aspects of sustainability (Strezov et al. 2016).
Table 1 (fragment): 1989: Index of sustainable economic welfare (ISEW) (Cobb 1989); 1990: Human development index (HDI), first version (UNDP 1990; Sagar and Najam 1998); 1992: Ecological footprint (EF) (Rees 1992; Moldan et al. 2012; Wackernagel and Rees 1997; Fiala 2008; Kissinger et al. 2013; Ghita et al. 2018; Chen et al. 2019; Shi et al. 2020; Guo et al. 2020); 1994: Index of sustainable economic welfare (ISEW), updated version: Green National Product (Cobb and Cobb 1994; Neumayer 1999); 1995: Genuine progress indicator (GPI). The focus of this paper In this paper, a thermodynamic approach recently developed (Grisolia et al. 2020; Lucia et al. 2020) is improved, with particular regard to its link with the Income Index. The aim of this paper is to develop a new viewpoint based on the fundamentals of sustainable development, considered both from a socio-economic and an engineering viewpoint. To do so, in Section Theory we develop an analytical approach to improve the present socio-economic indicators, by introducing the entropy generation in order to take into account the engineering quantity used in the optimization approach to design. In Section Discussion, we highlight how the new indicators obtained can become part of the present context on sustainability, improving the present approaches by going beyond the present dichotomy between the socio-economic and the engineering and scientific approaches to sustainability. Theory and methodology Usually, indicators and indexes are the main tools used by policy-makers, statisticians and economists to assess performances. Following the definition given by the Organisation for Economic Co-operation and Development (OECD) (OECD 2008), an indicator is an instrument used to measure, quantitatively or qualitatively, based on the observation of reality, focused on the aspects that it can disclose and on enabling comparisons. Indicators are used by policy-makers for analyzing and comparing trends of different countries, for drawing attention to some specific topics, for determining policy priorities, and for surveying performances. In this section, the link between a new indicator (Lucia and Grisolia 2019) and wealth and purchasing power is developed. To do so, we first consider some indicators already accepted and used in the literature and by the international organizations; then we introduce the thermodynamic analysis of irreversibility, as usually developed in engineering optimization; last, we introduce new indicators in order to improve the present treatment of sustainability in relation to well-being and ecological impact. So, the first step in our approach is to consider the following indicators: • The exergy intensity (Grisolia et al. 2020), ExI = Ex_in / GDP (Eq. 1), where Ex_in is the input exergy. This indicator is analogous to the ECO2 presented in IAEA (2005), but it considers exergy instead of energy, which means introducing the irreversibility and the energy quality; • The labor productivity, LP, defined as (OECD 2019; Blain 1996; Zhang and Dornfeld 2007) LP = GDP / n_wh (Eq. 2), where n_wh = n_w ⋅ n_h is the total number of worked hours needed to obtain the GDP, n_h is the number of worked hours, and n_w is the number of workers; • The Second Law Inefficiency, defined in Lucia and Grisolia (2019), where W is the work lost due to irreversibility and friction (Bejan and Lorente 2004). In accordance with the Gouy-Stodola theorem (Bejan 2006), the work lost due to irreversibility can be evaluated by multiplying the environmental temperature T_0 by the entropy variation due to irreversibility, S_g.
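To make the quantities just listed concrete, the following minimal Python sketch (not taken from the original paper) evaluates the exergy intensity, the labor productivity, the Gouy-Stodola lost work and the normalized Income Index. The logarithmic form and the 100 $ / 75,000 $ bounds of the Income Index follow the standard UNDP definition cited in the text; all numerical inputs are hypothetical.

import math

# Hypothetical national-scale inputs (illustrative values only)
Ex_in = 5.0e18      # input exergy [J/year], assumed
GDP = 2.0e12        # gross domestic product [$/year], assumed
n_w = 2.5e7         # number of workers, assumed
n_h = 1.7e3         # worked hours per worker per year, assumed
T0 = 298.15         # environmental (dead-state) temperature [K]
S_g = 1.0e15        # entropy generation [J/(K year)], assumed
GNI_pc = 32000.0    # gross national income per capita [$], assumed

ExI = Ex_in / GDP                 # Eq. (1): exergy intensity
n_wh = n_w * n_h                  # total number of worked hours
LP = GDP / n_wh                   # Eq. (2): labor productivity
W_lost = T0 * S_g                 # Gouy-Stodola theorem: work lost to irreversibility

# Income Index, assuming the standard UNDP logarithmic normalization
II = (math.log(GNI_pc) - math.log(100.0)) / (math.log(75000.0) - math.log(100.0))

print(f"ExI = {ExI:.2f} J/$, LP = {LP:.1f} $/h, W_lost = {W_lost:.2e} J/yr, II = {II:.3f}")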
So, we introduce a quantitative expression of the new indicator by considering well-being and purchasing power, where n_wh is strictly related both to the result of the work and to the Gross National Income per capita. So, we modify our indicator of Eq. (4) by considering the useful work and the GNI_pc, where Ẇ is the useful power obtained by the process considered, and S_g/W = Ṡ_g/Ẇ. Now, it is possible to relate the indicator to the normalized Income Index (II), used in the United Nations Development Programme, which is defined as follows (Pinar et al. 2017; Kahneman and Deaton 2014): II is a function of the gross national income per capita GNI_pc, whose minimum and maximum reference values are 100.00 $ and 75,000.00 $, respectively. So, the indicator useful to link the environmental impact and the economic purchasing power is obtained in terms of these quantities, where ṁ_CO2 is the carbon dioxide flux due to anthropic activities and s is its specific entropy. Consequently, this result introduces the carbon dioxide fluxes, linked to environmental impact, and the entropy generation, linked to the technological level, into the economic analysis based on the Income Index. This result improves the Income Index indicator by considering also the technological and ecological level of a country. Consequently, it appears interesting also in relation to the United Nations (UN) approach to evaluating the well-being of countries; indeed, the UN has introduced the Human Development Index, HDI, in order to consider the development level of any country, in relation to education, health and income conditions. The index HDI is a composite indicator, focused on three keys of a country's development: • The possibility to lead a long and healthy life, quantified by the life expectancy at birth; • The possibility to achieve a good level of knowledge, quantified by the mean and the expected years of schooling; • The possibility to achieve a decent standard of living, quantified by the gross national income per capita, directly related to the Income Index (II) and also to the Thermodynamic Income Index (I_T). The HDI is analytically defined in United Nations Development Programme (2020); its income dimension can be expressed through Eq. (7), in order to take into account also the ecological impact of human activities, considering the environmental conditions a fundamental key of human well-being; so, we can redefine the HDI as the Thermodynamic Human Development Index (THDI). Results and discussion In Chapter 40 of Agenda 21, the requirement is introduced to have available information and data about the conditions of each single State, and the need emerges to develop indicators of sustainable development that provide a basis for decision-makers, in order to allow them to support sustainability through their policies (United Nations General Assembly 1992). So, in order to evaluate the progress in the implementation of Agenda 21, a set of indicators of sustainable development was developed by the Commission on Sustainable Development in 1995, adding the institutional dimension to the common three (social, economic and environmental). In the first work of the CSD with the UN Department of Economic and Social Affairs (UNDESA), 134 indicators of sustainable development were presented, with the relative methodologies to adopt in using them. To classify them, the CSD initially used the "Pressure -State -Response" framework (Levrel et al.
2009), introduced by the OECD (OECD 1993) for environmental indicators; this approach, however, was more suitable for the environment and not complete enough to include also the social and economic aspects. Thus, it was modified by substituting "Pressure" with "Driving force." After the national testing of the proposed indicators, the "Driving force -State -Response" framework was reorganized into "Policy issues" or, better, into main themes and sub-themes (IAEA 2005). A core set of Energy Indicators for Sustainable Development (EISD) has been developed by different institutions: the International Atomic Energy Agency (IAEA), the United Nations Department of Economic and Social Affairs (UNDESA), the International Energy Agency (IEA), Eurostat, and the European Environment Agency (EEA). Here, we summarize the properties of the fundamental indicators involved in the usual analyses of the sustainability of energy use: • Social, which is related to accessibility, affordability, disparities, and safety; • Economic, which is related to the energy use per capita, the energy per GDP, the productivity, the efficiency, the use of energy by economic sector, the import/export of energy resources, and the strategic market of reference; • Environmental, which is related to atmosphere, water, and land, with particular regard to climate change, air, water and soil quality, deforestation prevention, and energy generation and management. Energy need, use and consumption have a central role in our daily life and society. So, the management of energy fluxes, and of all its production and consumption chains, is fundamental to ensure a correct approach to sustainability and sustainable development. This aspect cannot be neglected when considering an effective indicator of sustainability and, using a thermodynamic approach, it can be a powerful tool to improve existing indicators. Our results can be considered an improvement of the approach to sustainability as summarized here, because we link the socio-economic considerations to the engineering approach to optimization, through the introduction of the thermodynamic quantity entropy generation. So, the new indicators I_T and THDI represent a new viewpoint, which goes beyond the dichotomy between socio-economic considerations and the engineering and scientific approach to sustainability, obtaining a unified tool to design new policies and activities for the near future. Conclusions The result obtained here consists in an improvement of an indicator recently introduced in Lucia and Grisolia (2019, 2017) and Lucia (2016). Our previous indicator allows us to analyze technological processes by using a holistic approach based on thermodynamics: it considers all the interactions internal to the process (Lucia and Grisolia 2019; Lucia 2016). Moreover, it also takes into account the related consumption rate of the available resources (Sciubba and Zullo 2011). Up to now, the community and the environment have been considered separately, even if it is clear that they are interacting systems. The indicators proposed here introduce the entropy approach into the economic analysis of sustainability, and the improvement proposed here allows us to consider also the needs of people in relation to well-being and purchasing power. Indeed, our indicator assumes lower values if the anthropic consequences are low and if the well-being and purchasing power are high, as represented in Fig. 1.
In this figure, it is possible to highlight that a process is considered sustainable if the indicators I_T and T_0·Ṡ_g/Ẇ are as low as possible and the indicator II is as high as possible. Consequently, a process is considered sustainable if the indicator I_T is as low as possible and the indicator THDI is as high as possible. Author Contributions UL and GG were involved in conceptualization; UL in methodology; GG developed the software; UL, DF and GG validated the manuscript; UL and GG were involved in formal analysis; UL, DF and GG investigated the study; UL and DF were involved in resources; GG participated in data curation; UL, DF and GG were involved in writing (original draft preparation); UL and GG in writing (review and editing); GG visualized the study; UL and DF supervised the study; UL and DF were involved in project administration; UL and DF participated in funding acquisition. All authors have read and agreed to the published version of the manuscript. Funding Open access funding provided by Politecnico di Torino within the CRUI-CARE Agreement. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
5,644.8
2021-05-20T00:00:00.000
[ "Environmental Science", "Economics" ]
PARAMETRIZATION OF THE OPTICAL FLOW CAR TRACKER WITHIN MATLAB COMPUTER VISION SYSTEM TOOLBOX FOR VISUAL STATISTICAL SURVEILLANCE OF ONE-DIRECTION ROAD TRAFFIC A computer vision problem is considered. The prototype is the optical flow car tracker within the MATLAB Computer Vision System Toolbox, tracking cars in one-direction road traffic. To adapt the tracker to other problems of moving-cars stationary-camera detection, which have different properties (video length, resolution, velocity of the cars, camera disposition, prospect), it is parametrized. Altogether there are 19 parameters in the created MATLAB function fulfilling the tracking. Eight of them are influential with regard to the tracking results. These influential parameters are therefore ranked into a non-strict order by a testing-experience-based criterion, where other videos are used. The preference means that the parameter shall be varied before all the rest to the right of it in the ranking order. The scope of the developed MATLAB tool is unbounded when objects of interest move near-perpendicularly and the camera is stationary. For cases when the camera is vibrating or unfixed, the parametrized tracker can fit itself if the vibrations are not wide. Under those restrictions, the tracker is effective for visual statistical surveillance of one-direction road traffic. NOMENCLATURE CAMS is continuously adaptive mean shift; CVST is the Computer Vision System Toolbox™; KLT is Kanade-Lucas-Tomasi; MCSCD is moving-cars stationary-camera detection; MNF are motion numerical features; OFCT is the optical flow car tracker; VSS is visual statistical surveillance; A is the algorithm used to compute optical flow; F is the narrowed and ranked set of relevant OFCT parameters; g_motion is a motion vectors gain; N_moment(t) is the number of cars intersecting an appropriate region at a moment t; N(t) is the total number of cars that have intersected the line over the first t video frames; r_blob is the marginal ratio in classifying a blob as a car; r_factor is a frame scaling percentage; T is the total number of frames; t is a moment (a frame); v_th is a velocity threshold, computed from the matrix of complex velocities; w is the width (in pixels) of a square structuring element; γ is a binary classification factor; θ is a time unit. INTRODUCTION Computer vision is an inseparable and highly promising part of automation. It is a very large scientific field, including methods for acquiring, processing, analyzing, and understanding multi-dimensional data from the real world in order to produce decisions as numerical or symbolic information [1,2]. In particular, these data are images and frames from video sequences, views from cameras, or plane projections from scanners. Computer vision efficiently uses the utilities and facilities of applied mathematics, machine learning and artificial intelligence, and image and signal processing [1,3,4]. Being a scientific-technological discipline, computer vision renders its theories and models to the construction of computer vision systems. Such systems are mainly designed for controlling industrial processes, autonomous vehicle navigation, detecting events for VSS, organizing image and database information, analyzing and modeling topographical environments, and computer-human interaction [1,5,6].
The being described application areas employ a few contemporary general problems of computer vision, whose resolution depends on the application requirements and approaches in solving.Typically, these problems are recognition, motion analysis, scene reconstruction, image restoration.Computer vision system methods for solving them issue from multi-dimensional data acquisition, preprocessing, feature extraction, detection, segmentation, high-level processing.Eventually, the final decision required for the application is made.Before computer vision system projection, using hardware (power sources, multi-dimensional data acquisition devices, processors, control and communication cables, wireless interconnectors, monitors, illuminators) anyway, its work must be modeled in order to heed of the application area unpredictable specificities.Up-to-date MATLAB ® environment grants a powerful CVST, providing algorithms and tools for the design and simulation of computer vision and video processing systems [4,7,8].CVST proposes a lot of MATLAB ® functions, MATLAB System objects™, and Simulink ® blocks for feature extraction, motion detection, object detection, object tracking, stereo vision, video processing, and video analysis.Its tools include video file input/output, video display, drawing graphics, and compositing.For rapid prototyping and embedded system design, CVST supports fixed-point arithmetic and C code generation.Also there are demos, showing advantages of CVST.Some of those demos are a good basis for projecting real computer vision systems.However, for doing that there sometimes are not enough evident parameters, whose values might have been adjusted for other tasks within the regarded computer vision problem class.One of the classes, demonstrated in CVST, is the optical flow object tracking [2,3,9,10]. When studying methods of tracking the object and motion estimation, one of the key demos in CVST is OFCT.This demo tracks cars in a one-direction road traffic video by detecting motion using the optical flow methods [2,11,12].These methods, trying to calculate the motion between two image frames which are taken at neighboring times at every voxel position, are based on local Taylor series approximations [2,3,13] of the image signal.They use partial derivatives with respect to the spatial and temporal coordinates.The cars are segmented from the background by thresholding the motion vector magnitudes.Then, blob analysis is used to identify the cars [1,14,15]. A blob is an image region in which some properties are constant or vary within a prescribed range of values.All the points in a blob can be considered in some sense to be similar to each other.Blob detection refers to mathematical methods that are aimed at detecting image regions that differ in properties, such as brightness or color, compared to areas surrounding those regions.Given some property of interest expressed as a function of position on the digital image, there are two main classes of blob detectors [14,16,17].The first class is differential methods, which are based on derivatives of the function with respect to position.The second class is methods based on finding the local maxima and minima of the function. 
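To illustrate the pipeline just described (motion-magnitude thresholding followed by blob analysis), here is a small Python/NumPy sketch. It is not the MATLAB CVST implementation discussed in this paper; the flow components u and v, the velocity threshold and the blob-area bounds are assumed inputs, and SciPy's ndimage module stands in for the CVST morphological and blob-analysis objects.

import numpy as np
from scipy import ndimage

def count_car_blobs(u, v, vel_threshold, min_area, max_area):
    # u, v: 2-D arrays with the horizontal/vertical optical flow per pixel
    # (the flow could equivalently be stored as the complex field u + 1j*v)
    magnitude = np.hypot(u, v)

    # Threshold the motion magnitudes to separate moving foreground from background
    foreground = magnitude > vel_threshold

    # Morphological clean-up: remove speckles and fill small holes
    foreground = ndimage.binary_opening(foreground, structure=np.ones((3, 3)))
    foreground = ndimage.binary_closing(foreground, structure=np.ones((5, 5)))

    # Blob analysis: connected components and their areas
    labels, n_blobs = ndimage.label(foreground)
    areas = ndimage.sum(foreground, labels, index=range(1, n_blobs + 1))

    # Keep only blobs whose area is plausible for a car
    car_labels = [i + 1 for i, area in enumerate(areas) if min_area <= area <= max_area]
    return len(car_labels), labels, car_labels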
CVST algorithms for video tracking are the CAMS and KLT ones [2,4,18,19]. CAMS uses a moving rectangular window that traverses the back projection of an object's color histogram to track the location, size, and orientation of the object from frame to frame. KLT tracks a set of feature points from frame to frame and can be used in video stabilization, camera motion estimation, and object tracking applications. CVST also provides an extensible framework to track multiple objects in a video stream. It includes Kalman filtering to predict a physical object's future location, reduce noise in the detected location, and help associate multiple objects with their corresponding tracks [2,3,19]. The Hungarian algorithm is used for assigning object detections to tracks [20]. Blob analysis and foreground detection are used for moving object detection. Additionally, there are annotation capabilities to visualize object locations and to add object labels. Motion estimation is the process of determining the movement of blocks between adjacent video frames. CVST provides a variety of motion estimation algorithms: optical flow, block matching, and template matching. These algorithms create motion vectors, which relate to the whole image, blocks, arbitrary patches, or individual pixels [21,22]. The evaluation metrics for finding the best match in block and template matching include, in particular, the mean-square error principle [2,3,21,23,24]. OFCT within CVST shows how moving objects are detected with a stationary camera. In a series of video frames, optical flow is calculated and the detected motion is shown by overlaying the flow field on top of each frame. However, OFCT takes a specified series of 121 video frames, and so this demo cannot be applied outright to other moving-cars videos with a different number of frames or a distinct frame size. Besides, OFCT does not offer a numerical feature of the motion results in the video frame series, except the instant calculation of objects intersecting an early horizontal line at a moment. Therefore, OFCT should be parametrized for obtaining some needful numerical features of the motion results in the video frame series, and for resolving at least slightly different tasks of MCSCD. PROBLEM STATEMENT Our goal is to view and rank the key parameters in OFCT for parametrizing it within MATLAB CVST, so that it can be adapted to work with other MCSCD problems having different properties (video length, resolution, velocity of the cars, camera disposition, prospect). Nominally, from the given set F_0 of OFCT attributes, we must yield a set of relevant OFCT parameters, whereupon this set is narrowed and ranked to F. Formally, this is a map F_0 → F ensuring true MNF of videos. Parametrization of OFCT within MATLAB CVST by adding the MNF will allow projecting a computer vision system for VSS of one-direction road traffic. This is a very important problem in organizing and optimizing road traffic for its safety. The successive components of the said goal are the following. Firstly, there must be structuring and algorithmization of the information processing stages when one-direction road traffic is video-analyzed. Then, for VSS, MNF should be added to the OFCT viewer windows. And, eventually, the parametrized OFCT is going to be tested on another video.
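As a complement to the problem statement, one natural MNF for VSS of one-direction road traffic (the number of cars crossing the counting line per time unit θ, as detailed in the next sections) can be sketched as follows. This is an illustrative Python calculation, not the «ofct» function itself; the total crossing count, the frame count and the frame rate are assumed inputs.

def cars_per_time_unit(total_crossed, total_frames, fps, theta_seconds=60.0):
    # total_crossed: number of cars that intersected the line over the whole video
    # total_frames:  number of processed frames T
    # fps:           video frame rate [frames per second]
    # theta_seconds: length of the time unit theta (60 s gives cars per minute)
    video_seconds = total_frames / fps
    return total_crossed * theta_seconds / video_seconds

# Example: 7 cars crossed the line during 121 frames captured at 15 frames per second
print(cars_per_time_unit(total_crossed=7, total_frames=121, fps=15.0))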
REVIEW OF THE LITERATURE Structurally, video information processing is divided into four items [2,3,8]: 1) extraction of the foreground; 2) extraction and classification of moving objects; 3) tracking trajectories of the revealed objects; 4) recognition and description of objects-of-interest activity. Conventionally, the video foreground is of moving objects or regions.So, extraction of the foreground consists in separating moving fragments of the view from the motionless ones.These ones, being stationary objects or regions, are background of the view.Accuracy at this stage predetermines whether a computer vision problem is going to be satisfactorily solved.And nearly the best accuracy in selecting moving objects can be ensured with the optical flow methods [2,3,9].The foreground extraction stage predetermines also the requirements to computational resources that may be needed at the rest three stages. At the second stage, the extracted foreground is segmented.Each segment is a compact region whose pixels move at approximately equal velocities.Before segmentation the image is filtered for reducing noise, including impulse noise [1,4,25,26].Median filter as nonlinear digital filtering technique is usually invoked for noise reduction, running through the image entry by entry, and replacing each entry with the median of neighboring entries [26,27].For removing image defects (non-compactness), morphological dilatation and erosion over segments are fulfilled [1].Subsequently, contours of the selected segments become smoother and they contain minimal quantity of spaces (gaps) within the object.Then those segmented regions, being moving objects, are classified.The classification is rough, meaning that its result is the moving object's type -a man, a car, an animal, etc. At the third stage, the revealed objects' trajectories are tracked.For the tracking fulfillment, the one-to-one correspondence between the revealed objects on successive frames must be determined. Finally, there are recognition and description of the revealed-and-tracked objects.In particular, for a task of MCSCD, it is VSS.Here, major MNF are number implying how many cars intersect the line on average in the time unit θ (second, minute, hour, etc). MATERIALS AND METHODS The video in MATLAB OFCT demo has resolution 120by-160, where the one-direction road traffic runs approximately vertically.Stages of this video processing are the said above within the processing loop to track cars along the series of 121 video frames.Initially the optical flow estimates direction and speed of motion.The optical flow vectors are stored as complex numbers, and the velocity threshold from the matrix of complex velocities is computed.Then median filter removes speckle noise introduced during thresholding and segmentation.For thinning out the parts of the road and other unwanted objects and filling holes in the blobs, there are applied morphological erosion and closing methods.After that the blob analysis method estimates the area and bounding box of the blobs, filtering out objects which cannot be cars with binary classification factor CVST objects to display the original video, motion vector video, the thresholded video and the final result. ( ) moment N t is displayed in the viewer named «Results» (Figure 1) in its left upper corner.2. 
Having been made as a MATLAB function, it works with other MCSCD problems. Primarily, however, it should be adjusted to the problem by slightly changing 19 parameters (the name of the file is not reckoned in) in sets (4) and (5). These parameters are input in the following order; the result is returned as well (Figure 3). And there are no missed or defectively tracked cars anymore. Hereinafter, we test the OFCT parametrized under (4) and (5) to see how the MATLAB function «ofct» performs on other MCSCD problems for VSS of one-direction road traffic. The part concerning empirical adjustment is omitted, because the adjustment is not routine. EXPERIMENTS For testing the parametrized OFCT, diverse videos containing one-direction road traffic have been explored. It is noteworthy that the road view is not always straight perpendicular. Figure 4 shows that cars are successfully tracked when they move non-perpendicularly, having different velocities and accelerations. At that, the input arguments of the invoked parametrized OFCT are just slightly different from invocation to invocation. VSS of bigger vehicles causes a new problem. Trucks having long trailers may be split into a few blobs, and thus the big long vehicle is tracked as two or more. Another great problem is that the rectangular bounding box sometimes disappears for a frame or two and then appears again. This effect may cause a failure of the MNF calculation. For cases when the camera is vibrating or unfixed, the parametrized OFCT can fit itself if the vibrations are not wide. However, the influential parameters (6) and their order (7) may become incomplete. Wider ranges of camera vibration will require either re-ranking the elements of the set (6) or re-selecting the influential parameters. CONCLUSIONS The testing-experience-based criterion of ranking OFCT parameters has allowed reducing the set of 19 non-arranged elements down to eight ones (6) ordered as (7). The advantage (preference) means that the parameter (of the couple) shall be varied before all the rest (to the right side of the ranking order). At that, there is no preference inside the couples (e.g. {x_max, y_max}) when adjusting to an MCSCD problem. This is possible owing to parametrization of the OFCT within MATLAB CVST, whose corollary is the ranking (7). Consequently, MCSCD problems are solved via the MATLAB function «ofct». These primitives are indispensable for projecting a computer vision system for VSS of one-direction road traffic and ensuring its safety. For general VSS of one-direction road traffic, the parametrized MATLAB function «ofct» is not going to be used straight off. The explanation lies in that the objects-of-interest activity is the cars' movement, which must be almost perpendicular, and the camera disposition ought to hang over the road (hanging not low). Hence, the promising research is in adapting the developed MATLAB tool for tracking vehicles of any form and size, moving in one direction under an arbitrarily disposed camera.
[Nomenclature, continued] a_blob is the ratio between the area of the blob and the area of the bounding box; b_max is the maximum blob area in pixels; b_min is the minimum blob area in pixels; b_offset is the border offset in plotting motion vectors; c_max is the maximum number of blobs in the input image; d_col is the step through the horizontal axis when coordinates are generated for plotting motion vectors; d_frame is the number of frames between the reference frame and the current frame; d_line is the distance between centers of structuring element members at opposite ends of the line; d_row is the step through the vertical axis when coordinates are generated for plotting motion vectors; F_0 is the set of OFCT attributes; x_max is the maximal deviation ratio of the bounding box through the horizontal axis; y_end is the value of the ordinate in the frame where tracking ends. [Figure 1: Four viewers visualizing the running MATLAB OFCT demo.] [Figure 2: The modified MATLAB OFCT code, made as the MATLAB function «ofct» with 19 adjustable parameters.] [Figure 4: Snapshots of the viewer «Results», visualizing the running OFCT by the MATLAB function «ofct» on other MCSCD problems for VSS of one-direction road traffic.] [Table 2: Adjustable parameters attached to the MATLAB OFCT demo; the available optical flow algorithms are those of Horn-Schunck and of Lucas-Kanade.] Nonetheless, adjustment even of the foremost couple {x_max, y_max} by (7) may take substantial time, and tracking arbitrary vehicles is handled harder. Wider ranges of camera vibration will require either re-ranking the elements of the set (6) or re-selecting the influential parameters, whose elements are likely to be varied simultaneously; the adjustment is then a naive heuristic optimization of the values in (6) giving true MNF. The flaw concerning the 73rd frame of the video in the original MATLAB OFCT demo is remedied by launching the modified MATLAB OFCT code as follows (from the MATLAB Command Window prompt): function [hVideo4] = ofct(filename, OpticalFlowMethod, ReferenceFrameDelay, ResizeFactor, w, d_line, alpha_line,...
4,001.2
2015-01-01T00:00:00.000
[ "Computer Science" ]
Zika virus: mapping and reprogramming the entry The Flaviviridae family comprises single-stranded RNA viruses that enter cells via clathrin-mediated, pH-dependent endocytosis. Although the initial events of virus entry have already been identified, data regarding intracellular virus trafficking and delivery to the replication site are limited. The purpose of this study was to map the transport route of Zika virus and to identify the fusion site within the endosomal compartment. Tracking of viral particles in the cell was carried out with confocal microscopy. Immunostaining of two structural proteins of Zika virus enabled precise mapping of the route of the ribonucleocapsid and the envelope and, consequently, mapping of the fusion site in the endosomal compartment. The results were verified using RNAi silencing and chemical inhibitors. After endocytic internalization, Zika virus is trafficked through the endosomal compartment to fuse in late endosomes. Inhibition of endosome acidification using bafilomycin A1 hampers the infection, as the fusion is inhibited; instead, the virus is transported to late compartments where it undergoes proteolytic degradation. The degradation products are ejected from the cell via slow recycling vesicles. Surprisingly, NH4Cl, which is also believed to block endosome acidification, shows a very different mode of action. In the presence of this basic compound, the endocytic hub is reprogrammed. Zika virus-containing vesicles never reach the late stage, but are rapidly trafficked to the plasma membrane via a fast recycling pathway after the clathrin-mediated endocytosis. Further, we also noted that, similarly to other members of the Flaviviridae family, Zika virus undergoes furin- or furin-like-dependent activation during late steps of infection, while serine or cysteine proteases are not required for Zika virus maturation or entry. Zika virus fusion occurs in late endosomes and is pH-dependent. These results broaden our understanding of Zika virus intracellular trafficking and may in the future allow for the development of novel treatment strategies. Further, we identified a novel mode of action for agents commonly used in studies of virus entry. [Graphical abstract: schematic representation of differences in ZIKV trafficking in the presence of Baf A1 and NH4Cl.] The severe complications associated with ZIKV infection (microcephaly in newborns and Guillain-Barré syndrome in adults [7][8][9]) indicate an urgent need for research into the biology of the pathogen. The initial events of ZIKV entry have been identified [10][11][12]; but data regarding its fate thereafter are limited. Upon attachment to a permissive cell, ZIKV crosses the plasma membrane via clathrin- and mucolipin-2-dependent endocytosis, accompanied by formation of LY6E tubules [13,14]. The dependence of ZIKV on endocytosis has been confirmed in a variety of cell models [15]. Clathrin-mediated endocytosis (CME) is initiated by activation of receptor proteins, followed by recruitment of the AP2 adaptor complex, which induces assembly of the clathrin coat and formation of membrane niches of ~100 nm [16]. As the invagination deepens, dynamin (a GTPase) oligomerizes around the bud neck, cleaves it from the cell surface, and creates an intracellular vesicle [17,18], which at first is translocated through the actin cortex and then trafficked along microtubules [19]. As the vesicle travels across the cell, it matures.
First, the clathrin coat is removed; the uncoated vesicles may then fuse with each other or be delivered to the first (and major) sorting station, i.e., early endosomes. Vesicle trafficking is directed by small membrane GTPases belonging to the Rab family [20]. Early endosomes are characterized by the presence of early endosome antigen 1 (EEA1) and Rab5 proteins. During vesicle maturation, the pH gradually decreases due to the activity of proton pumps and fusion with other acidic vesicles. Early endosomes are moderately acidic (pH ∼ 6.3-6.8) [21], and their cargo can be sorted either for degradation via multivesicular bodies and late endosomes to lysosomes, or for recycling to the cell surface, exosomes, or the trans-Golgi network [22,23]. Transport along the degradative pathway is associated with a gradual decrease in pH (from 6.0 to 4.8 in Rab7-positive late endosomes and further to 4.5 in lysosomes [24]). Lysosomes act as a storage site for hydrolases and other proteolytic enzymes; they are the final destination on this pathway. In recycling endosomes, the pH is maintained at ∼6.5 and vesicles may be targeted to the outer membrane by Rab35 via fast recycling endosomes (~5 min), or by Rab11 via slow recycling endosomes (15-30 min) [21]. Alternatively, cargo may be transported from multivesicular bodies to intraluminal vesicles, which may recycle to the cell surface via a Rab27a/b-mediated pathway, leading to release of cargo-loaded exosomes [25] (30-100 nm, often used by viruses during assembly and egress [26,27]). Finally, at any point, cargos may enter the trans-Golgi network and follow the retrograde transport pathway guided by Rab9 [28]. The microenvironment within the vesicle during its travel is precisely controlled, and viruses usually fuse with the vesicular membrane at a certain time, i.e., when the pH, membrane composition, and activity of cellular proteases are optimal for fusion [22]. Virus dependence on an acidic environment is often treated as a requirement for endocytosis prior to fusion. Consequently, agents such as ammonium chloride (NH 4 Cl) or bafilomycin A1 (Baf A1), which increase intravesicular pH, are used to determine whether certain viruses are able to fuse to the cell surface or whether endocytic internalization is required. Here, we complement and expand the knowledge about cell entry and intracellular trafficking of ZIKV. Tracking of single virions using confocal microscopy and separate labeling of the viral capsid and envelope proteins revealed that virions that enter cells via CME travel to late endosomal compartments and subsequently fuse with the membrane. Blocking endosome acidification using Baf A1 inhibited viruscell fusion, leading to trafficking of virus either along the degradative pathway to the lysosomal compartments or its slow recycling to the cell surface. Similar results were expected for NH 4 Cl-treated cells; however, in this case, virions localized to the cell surface, suggesting a very different mechanism of action. Surprisingly, it appeared that NH 4 Cl "rewired" the endosomal hub and altered virus trafficking within the endocytic labyrinth. ZIKV H/PF/2013 strain acquired from European Virus Archive [29] was propagated in Vero cells under standard medium. After 3 days of infection at 37°C, virus-containing medium was collected and titrated. As a control, mock-infected Vero cells were subjected to the same procedure. Virus and mock aliquots were stored at − 80°C. 
Virus titration was performed on confluent Vero cells on a 96-well plate, according to the Reed-Muench method [30]. Immunostaining Vero cells were seeded on glass slides in a cell culture plate and cultured in standard medium for two days at 37°C. Upon the experimental procedure, the cells were fixed with ice-cold 4% formaldehyde in PBS for 20 min at room temperature, washed with PBS and permeabilized with 0.5% TWEEN-20 for 10 min at room temperature. Afterwards, non-specific binding sites were blocked overnight at 4°C with 5% BSA and slides were incubated for 2 h at room temperature with primary anti-ZIKV antibodies (specific to the envelope protein (Merck Millipore, Poland) or the capsid protein (kind gift from prof. Jassoy, Institut für Virologie, Leipzig, Germany)) diluted 1:100 in 3% BSA in PBS. To visualize host cell proteins, slides were incubated with primary antibodies against clathrin, EEA1, Rab7, LAMP1, Rab11, Rab27 and Rab35 [goat anti-clathrin HC (RRID:AB_2083170), rabbit anti-EEA1 (RRID:AB_2277714) and rabbit anti-Rab7 (RRID:AB_2175483) polyclonal antibodies from Santa Cruz Biotechnology, Poland; rabbit anti-Rab11A (RRID:AB_2173458) polyclonal antibody from Proteintech, UK; rabbit anti-Rab27A monoclonal antibodies from Cell Signaling Technology, Poland; rabbit anti-Rab35 polyclonal antibody from Novus Biologicals, Poland; rabbit anti-LAMP1 (RRID:AB_2134611) polyclonal antibody from Thermofisher Scientific, Poland] diluted 1:100 in 3% BSA in PBS, together with anti-ZIKV antibodies. Next, the cells were incubated for another 1 h with Alexa Fluor 488-labeled goat anti-mouse IgG (RRID:AB_2534069, Thermofisher Scientific, Poland) or Alexa Fluor 488-labeled rabbit anti-mouse IgG (RRID:AB_2534106, Thermofisher Scientific, Poland) diluted 1:200 in 3% BSA in PBS. For staining of host cell proteins, Alexa Fluor 546 goat anti-rabbit secondary antibodies (RRID:AB_2534077, Thermofisher Scientific, Poland) or Alexa Fluor 546 donkey anti-goat secondary antibodies (RRID:AB_2534103, Thermofisher Scientific, Poland) diluted 1:200 were also added to the mix. [Figure legend fragment: Control denotes mock-infected cells, stained with anti-ZIKV envelope antibodies and rabbit isotype antibodies (control for staining of cellular proteins). The virus is visualized in green, clathrin and EEA1 are shown in red, and nuclei are presented in blue. Scale bar = 10 μm. Co-localization parameters: r, Pearson's coefficient; M2, Manders' coefficient M2 (the virus overlapping with clathrin/EEA1).] In experiments focused on siRNA silencing and the inhibitors' influence on virus internalization, the cell surface was labelled with Atto 633-phalloidin (Thermofisher Scientific, Poland) diluted 1:50 in PBS for 1 h at room temperature. Nuclei were stained with DAPI (RRID:AB_2629482, Thermofisher Scientific, Poland) diluted 1:10000 in PBS. At the end of the immunostaining procedure, slides were thoroughly washed with 0.5% TWEEN-20 in PBS. Stained slides were mounted in ProLong Diamond antifade medium (Thermofisher Scientific, Poland) and stored at 4°C. Staining of living cells Vero cells were seeded on 35 mm glass-bottom dishes and cultured in standard medium for two days at 37°C. Afterwards, the cells were washed with PBS and incubated in standard medium containing 50 nM LysoTracker™ Red DND-99 (Thermofisher Scientific, Poland) for 90 min at 37°C to visualize acidic organelles. Next, the medium was discarded, and cells were overlaid with Baf A1/NH4Cl-containing or control standard medium and observed for 30 min with a fluorescence microscope.
The first images ("0 min") were acquired in < 1 min upon treatment of the cells with the two above-mentioned agents. Fluorescence and confocal microscopy Images of living cells were acquired using the EVOS FL Imaging System (Thermofisher Scientific, Poland) with a 60× oil immersion lens. Images of fixed cells were taken under a ZEISS LSM 710 (version 8.1) confocal microscope with a 40× oil immersion lens and ZEN 2012 SP1 (black edition, version 8.1.0.484). Image processing was performed with ImageJ FIJI (RRID:SCR_002285, National Institutes of Health, Bethesda, Maryland, USA). [Figure legend fragment: Co-localization between ZIKV envelope and clathrin presented as Manders' coefficient M2 for control and clathrin-depleted Vero cells inoculated with ZIKV. The data is presented as mean ± SD. To determine the significance of differences between compared groups, single-factor analysis of variance (ANOVA) was applied. P values < 0.05 were considered significant. One asterisk (*) identifies adjusted P values between 0.01 and 0.05, two asterisks (**) identify adjusted P values between 0.01 and 0.001, three asterisks (***) identify adjusted P values between 0.001 and 0.0001. (c) Western blot analysis of the efficiency of siRNA-dependent clathrin silencing (clathrin expression in Vero cells compared to GAPDH expression in these cells). M, BlueStar prestained protein marker; Ø, normal non-transfected Vero cells.] Co-localization parameters (Pearson's and Manders' coefficients) were calculated using the JaCoP plugin [40]. Flow cytometry Vero cells were seeded in a 6-well cell culture plate and cultured in standard medium for two days at 37°C. Upon the experiment, the cells were fixed, permeabilized, blocked and immunostained with primary antibodies specific to the viral envelope protein (Merck Millipore, Poland) and secondary rabbit anti-mouse antibodies labeled with Alexa Fluor 488 (RRID:AB_2534106, Thermofisher Scientific, Poland), as indicated in the Immunostaining section. The proportion of ZIKV-infected cells (corresponding to the median fluorescence of the analyzed cell population) was evaluated with flow cytometry using a FACSCalibur (RRID:SCR_000401, Becton Dickinson, Poland). Cell Quest software (RRID:SCR_014489, Becton Dickinson, Poland) was used for data processing and analysis. Cell viability Cells were seeded on 96-well plates and cultured in standard medium for two days at 37°C. Afterwards, the cells were washed with PBS, overlaid with standard medium supplemented with inhibitor or control and further incubated for 3 days at 37°C. Cell viability was examined using the XTT Cell Viability Assay (Biological Industries, Poland), according to the manufacturer's protocol. Briefly, the medium was discarded and 50 μl of fresh standard medium with 50 μl of the activated XTT solution was added to each well. After 2 h of incubation at 37°C, the supernatant was transferred onto a new, transparent 96-well plate and the signal from the formazan derivative of the tetrazolium dye was read at λ = 490 nm using a colorimeter (Tecan i-control Infinite 200 Microplate Reader, 1.5.14.0). The obtained results were further normalized to the control, where cell viability was set to 100%. Virus yield Virus detection and quantification were performed using reverse transcription (RT) followed by quantitative real-time PCR (qPCR). Viral RNA was isolated from cell culture supernatant 3 days post-infection (p.i.)
using the Viral DNA/RNA Kit (A&A Biotechnology, Poland), while reverse transcription was carried out with the High Capacity cDNA Reverse Transcription Kit (Thermofisher Scientific, Poland), according to the manufacturers' protocols. To assess virus yield, DNA standards were subjected to qPCR along with the cDNA acquired from the isolated samples. qPCR was performed using KAPA PROBE FAST qPCR Master Mix (Kapa Biosystem, Poland), ZIKV-specific primers (5′-TTG GTC ATG ATA CTG CTG ATT GC-3′ and 5′-CCT TCC ACA AAG TCC CTA TTG C-3′) and a probe (5′-CGG CAT ACA GCA TCA GGT GCA TAG GAG-3′) labelled with FAM (6-carboxyfluorescein) and TAMRA (6-carboxytetramethylrhodamine). Rox was used as a reference dye. The signal was recorded and analysed using the 7500 Fast Real-Time PCR System (Thermofisher Scientific, Poland). [Fig. 3 legend: Inhibition of ZIKV infection in Vero cells by chemical agents blocking clathrin-mediated endocytosis. Vero cells pre-treated with CME inhibitors were infected with ZIKV and viral yield was assessed 3 days p.i. Virus yield (RT-qPCR) is presented on the left side of the graph, while on the right side the toxicity of the compounds is visualized (XTT assay). mock, mock-infected cells; vØ, ZIKV-infected cells; AMTD, 400 μM amantadine; Dyn, 100 μM dynasore; MM, 20 μM MitMab; PtS, 50 μM PitStop; control, inhibitor-untreated, non-infected cells. The data is presented as mean ± SD. To determine the significance of differences between compared groups, single-factor analysis of variance (ANOVA) was applied. P values < 0.05 were considered significant. One asterisk (*) identifies adjusted P values between 0.01 and 0.05, two asterisks (**) identify adjusted P values between 0.01 and 0.001, three asterisks (***) identify adjusted P values between 0.001 and 0.0001.] Vero cells cultured for 1 day in antibiotic- and serum-depleted standard medium on a 6-well plate were transfected with appropriate siRNAs using Lipofectamine RNAiMAX (Thermofisher Scientific, Poland). The procedure was performed according to the manufacturer's instructions and repeated 24 h later to enhance the silencing effect. The efficiency of the procedure was assessed by Western blotting 24 h later (at the same time as the microscopic studies on virus subcellular localization). Western blot analysis Cells were harvested in RIPA buffer (1 h, 4°C; Thermofisher Scientific, Poland) supplemented with 0.5 M EDTA and a protease inhibitor cocktail (cOmplete Tablets, Roche, Poland). Protein concentration was assessed with the Pierce BCA Protein Assay Kit (Thermofisher Scientific, Poland), according to the manufacturer's protocol. Samples containing equal amounts of protein were mixed with SDS-PAGE Sample Buffer (0.5 M Tris, pH 6.8, 10% SDS, 50 mg/ml DTT), denatured for 10 min at 95°C and separated by SDS-PAGE electrophoresis. Subsequently, proteins were electrotransferred onto a PVDF membrane (1.5 h, 100 V; Amersham, Poland).
The non-specific binding sites on the membrane were blocked for 1 h at room temperature with 10% milk (Bioshop) in Tris-buffered saline supplemented with 0.25% TWEEN-20 (Bioshop, Poland) (TBST) and incubated with primary antibodies specific to clathrin or Rab35 [rabbit anti-clathrin heavy chain polyclonal antibody, RRI-D:AB_10695306, Cell Signaling Technology, Poland; rabbit anti-Rab35 polyclonal antibody, Novus Biologicals, Poland] diluted 1:500 or 1:1000 (for clathrin and Rab35, respectively) in 3% BSA in TBST overnight at 4°C and additionally for 1 h at room temperature; or with primary antibodies specific to GAPDH (rabbit anti-GAPDH antibodies, RRID:AB_561053, Cell Signaling Technology, Poland) diluted 1:5000 in 3% BSA in TBST for 1 h at room temperature. After being washed in TBST, the membrane was incubated with HRP-labelled anti-rabbit IgG antibody (RRI-D:AB_257896, Sigma Aldrich, Poland) diluted 1:20,000 in 3% BSA in TBST for 1 h at room temperature. Finally, the proteins were visualized with chemiluminescence, using the ECL system (Amersham, Poland). Co-localization assay Vero cells were seeded in standard medium on glass slides in 12-well plates. After 2 days, 2 h prior to infection, cell culture medium was replaced with serum-depleted standard medium. Next, cells were cooled down to 4°C, overlaid with 100 μl of non-diluted ZIKV stock (TCID 50 ranging from 1000,000 to 10,000,000/ml, which approximately corresponds to MOI = 1.75-17.5 and 7 × 10 5 -7 × 10 6 PFU/ml) and incubated for 30 min at 4°C to synchronize cargo particles entry from the cell surface. Subsequently, after incubation at 37°C (exact times indicated for each experiment) the cells were fixed, permeabilized, blocked and immunostained for viral and cellular proteins, as indicated in Immunostaining section. i. was assessed using RT-qPCR (left side of the graph); cytotoxity of inhibitors is presented on the right side of the graph (XTT assay). mockmock infected cells; vØ -ZIKV-infected cells; cam -100 μM serine protease inhibitor camostat; E64-100 μM cysteine protease inhibitor E64; CMK -50 μM furin inhibitor CMK; controlinhibitor-untreated, non-infected cells. The data is presented as mean ± SD. To determine the significance of differences between compared groups, single-factor analysis of variance (ANOVA) was applied. P values < 0.05 were considered significant. One asterisk (*) identifies adjusted P values between 0.01 and 0.05, two asterisks (**) identify adjusted P values between 0.01 and 0.001, three asterisks (***) identify adjusted P values between 0.001 and 0.0001 Virus inhibition assays For visualization of virus subcellular localization during experiments with agents interfering with virus trafficking, Vero cells were cultured in standard medium on glass slides in 12-well plates for 2 days and pre-treated with a particular inhibitor 1 h prior to infection. Afterwards, the cells were cooled down to 4°C, overlaid with 100 μl of non-diluted ZIKV stock (TCID 50 ranging from 1000,000 to 10,000,000/ml, which approximately corresponds to MOI = 1.75-17.5 and 7 × 10 5 -7 × 10 6 PFU/ml) in the presence of inhibitory agents and incubated for another 30 min at 4°C to synchronize cargo internalization. Subsequently, the virus-overlaid cells were warmed up to 37°C. At indicated for each experiment time points, cells were washed with PBS, fixed, permeabilized, blocked and immunostained for viral and actin cytoskeleton proteins as indicated in Immunostaining section. 
Identical procedure was carried out for visualization of virus particles in cells depleted of certain proteins with siRNAs. To test the influence of compounds on virus adhesion, Vero cells cultured in standard medium in a 6-well plate for 2 days were cooled down to 4°C, overlaid with 100 μl of ice-cold non-diluted ZIKV stock (TCID50 ranging from 1,000,000 to 10,000,000/ml, which approximately corresponds to MOI = 1.75–17.5 and 7 × 10⁵–7 × 10⁶ PFU/ml) and incubated for another 2 h at 4°C. Further, cells were rinsed twice with ice-cold PBS. The cells were fixed, permeabilized, blocked, immunostained and analyzed with flow cytometry, as described in the Flow cytometry section. For assessment of the inhibitors' influence on viral replication, Vero cells were cultured in standard medium in 96-well plates for 2 days and pre-treated with a selected agent 1 h prior to infection. Virus at TCID50 of 800/ml (which approximately corresponds to MOI = 0.0014 and 550 PFU/ml) was overlaid on the cells in the presence of inhibitors and samples were incubated for 2 h at 37°C. Wells were washed thrice with PBS and incubated at 37°C in standard medium supplemented with inhibitors. At 3 days p.i., culture supernatants were collected, viral RNA was isolated and its yield was quantified with RT-qPCR. Any modifications of this procedure are described in the Results section.

Statistical analyses
Each experiment was performed at least twice in triplicate. Chart bars represent mean ± SD. The significance of differences between compared groups was determined by single-factor analysis of variance (ANOVA); p values < 0.05 were considered significant.

ZIKV enters Vero cells via clathrin-dependent endocytosis
First, we asked whether ZIKV enters Vero cells via CME, as reported for other in vitro systems. We examined co-localization of virions and cellular proteins (clathrin, caveolin, endophilin, EEA1) at several time points post-infection (p.i.). Co-localization of ZIKV with clathrin was observed at 2–10 min p.i. and coincided with co-localization of virus with the early endosomal marker EEA1 (Fig. 1). No co-localization with caveolin or endophilin was observed (data not shown). As co-localization rates were not very high (Pearson's coefficient ranging from 0.103 to 0.216 and Manders' coefficients up to 0.384), we carried out a complementary study to validate the results. Subcellular localization of ZIKV was examined in Vero cells in which clathrin expression had been transiently silenced. In this model, virions were retained on the surface of clathrin-depleted cells even at 10 min p.i.; by contrast, control cells showed normal expression of clathrin and were permissive for ZIKV entry (Fig. 2).
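The Pearson's and Manders' coefficients quoted above measure, respectively, the correlation of pixel intensities between the two fluorescence channels and the fraction of each channel's intensity that overlaps the other. The text does not state which software computed them; the sketch below merely illustrates the definitions on toy images.

```python
import numpy as np

def pearson_coloc(ch1, ch2):
    """Pearson correlation coefficient between two equally sized intensity images."""
    a = ch1.astype(float).ravel()
    b = ch2.astype(float).ravel()
    a -= a.mean()
    b -= b.mean()
    return float(np.sum(a * b) / np.sqrt(np.sum(a ** 2) * np.sum(b ** 2)))

def manders_coefficients(ch1, ch2, thr1=0.0, thr2=0.0):
    """M1: fraction of channel-1 intensity overlapping channel 2; M2: the converse."""
    a = ch1.astype(float)
    b = ch2.astype(float)
    m1 = a[b > thr2].sum() / a.sum()
    m2 = b[a > thr1].sum() / b.sum()
    return float(m1), float(m2)

# Toy two-channel image; in practice these would be the ZIKV and clathrin channels.
rng = np.random.default_rng(0)
virus = rng.integers(0, 255, (64, 64))
clathrin = rng.integers(0, 255, (64, 64))
print(pearson_coloc(virus, clathrin))
print(manders_coefficients(virus, clathrin, thr1=50, thr2=50))
```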
To confirm that clathrin-dependent endocytosis is the main route of ZIKV internalization in Vero cells, we examined the effects of CME inhibitors on virus replication. We selected compounds that target multiple steps of the vesicle formation process. These included PitStop (PtS), which inhibits the association of ligands with the terminal domain of clathrin [39]; amantadine (AMTD), which stabilizes clathrin-coated pits [31]; MitMAB (MM), which targets the dynamin–phospholipid interaction [36]; and dynasore (Dyn), which inhibits the GTPase activity of dynamin and therefore impairs scission of loaded vesicles from the cell surface [35]. We assessed the impact of each inhibitor on ZIKV infection by measuring the viral yield released from infected cells into the medium at 3 days p.i. All compounds reduced the ZIKV yield significantly (Fig. 3), suggesting that ZIKV enters Vero cells via the clathrin-dependent route, as reported for human cells [10].

Fig. 11 Influence of NH 4 Cl on ZIKV adhesion to the host cells. Flow cytometry analysis of viral adhesion to cells in the presence of NH 4 Cl was carried out. Vero cells pre-treated with 50 mM NH 4 Cl were overlaid with ZIKV stock and, following 2 h incubation at 4°C, they were fixed and ZIKV was immunostained with Alexa Fluor 488. The graph shows median fluorescence normalized to control, which corresponds to the proportion of ZIKV-positive cells in the total cell population. vØ – inhibitor-untreated cells overlaid with ZIKV; NH 4 Cl – NH 4 Cl-treated cells overlaid with ZIKV; mock – mock-overlaid, inhibitor-untreated cells. The data is presented as mean ± SD; significance markings as in Fig. 3.

ZIKV fusion occurs in the late endosomal compartment
Once in the endosomal hub, ZIKV has a plethora of pathways by which it can reach the site of fusion with the host cell membrane. Conditions within endosomal compartments differ with respect to lipid/protein content and pH. To find out how the virus is trafficked to reach the site of fusion, we used confocal microscopy to track two ZIKV structural proteins, the capsid protein (virus core) and the membrane-bound envelope protein. Co-localization of both viral proteins with cellular proteins (Rab7 in late endosomes, Rab11 in slow recycling endosomes, and LAMP1 in lysosomes) was analyzed at 5–20 min p.i. (the full set of images is available in Additional file 1: Figure S1 and Additional file 2: Figure S2). The most evident co-localization of the ZIKV capsid was found with Rab7, peaking at 10–15 min p.i. (Fig. 4). However, the envelope protein showed increased co-localization with both Rab7- and Rab11-positive structures at 15 min p.i. (Fig. 4), suggesting that slowly recycling endosomes may carry viral proteins to the cell surface upon delivery of RNA to the cytoplasm from late endosomes. Finally, no co-localization with LAMP1 was found (Additional file 1: Figure S1 and Additional file 2: Figure S2).

Role of cellular proteases during ZIKV infection
Upon identification of the fusion site, we next identified host factors taking part in this process.
Virus escape from the endosomal compartment usually occurs upon activation of viral fusion proteins, which may be triggered by environmental conditions and/or host proteases [41]. Therefore, we examined the importance of different host cell proteases. Vero cells were treated with a furin inhibitor (decanoyl-Arg-Val-Lys-Arg-chloromethylketone, CMK), a serine protease inhibitor (camostat) or a cysteine protease inhibitor (E64) during different time windows relative to infection: … and for 72 h p.i.; and (4) from 2 h p.i. to 72 h p.i. Culture supernatants were collected at 72 h p.i., and viral RNA was isolated and quantified by RT-qPCR. As shown in Fig. 5, infection was not affected by serine and cysteine protease inhibitors; however, the furin inhibitor led to a significant decrease in virus yield when administered p.i., suggesting that furin or furin-like enzymes play an important role during ZIKV replication, assembly or egress.

Agents that increase endosomal pH hamper ZIKV entry and infection
We know that for some flaviviruses cell entry is sensitive to pH changes; therefore, we used two compounds that increase intravesicular pH (Fig. 6) to check the pH dependence of ZIKV entry. Vero cells were treated with NH 4 Cl or Baf A1 1 h prior to infection. Next, cells were infected with ZIKV in the presence of NH 4 Cl or Baf A1 for 3 days at 37°C. RT-qPCR analysis revealed strong inhibition of infection (Fig. 7). Next, we used confocal microscopy to test whether the compounds indeed inhibit virus entry. Cells were treated with either of the inhibitors for 1 h and then incubated with ZIKV for 40 min in the presence of the inhibitors. Confocal microscopy revealed that viral particles in Baf A1-treated cells were visible in the cytoplasm, probably trapped in the endosomal hub and unable to undergo fusion (Fig. 8). Interestingly, we observed a different intracellular virus distribution in cells treated with NH 4 Cl. Only a small number of ZIKV virions was visible in the cytoplasm, while ZIKV particles localized mainly to the cell surface.

Baf A1 blocks virus-cell fusion
Baf A1 is thought to prevent endosome acidification, thereby preventing activation of the viral fusion protein. Therefore, we tracked the viral capsid and envelope proteins separately to visualize their fate during cell entry. To do this, we examined co-localization of these two viral components with markers of different parts of the endosomal hub (Rab7, Rab11, and LAMP1) in Baf A1-treated cells at 5–20 min p.i. (Additional file 3: Figure S3 and Additional file 4: Figure S4). As described above, in non-treated cells the ZIKV capsid and envelope proteins travelled together to late endosomes, where fusion occurs. While capsid proteins
entered the cytoplasm, envelope proteins were slowly re-trafficked to the cell surface. In the presence of Baf A1, fusion was blocked, and both components tended to co-localize with Rab11- and LAMP1-positive structures at 15 min p.i. (Fig. 9), suggesting that in the presence of Baf A1 viral particles do not fuse with the membrane of the vesicle. Rather, they are destined to undergo degradation in lysosomes. However, some are transported back to the cell surface via the slow recycling pathway.

NH 4 Cl hampers infection by inducing fast recycling of virions back to the cell surface
A very different confocal image was observed when cells were treated with NH 4 Cl. In this case, few virus particles were observed in the cytoplasm (in contrast to Baf A1-treated cells, in which the number of internalized virions was similar to that in control cells) (Fig. 10). First, we used flow cytometry analysis to test whether NH 4 Cl affects binding of ZIKV to the cell surface, which would explain this phenomenon. As shown in Fig. 11, NH 4 Cl had no significant impact on ZIKV adhesion to cells. Next, despite the low number of internalized viral particles, we examined their co-localization with cellular markers (Additional file 5: Figure S5 and Additional file 6: Figure S6). In NH 4 Cl-treated cells, both the capsid and envelope proteins co-localized with Rab7 (Fig. 12), suggesting that internalized virions follow their normal entry route. Moreover, slightly increased co-localization of Rab11 with the envelope protein, but not the capsid protein, at 20 min p.i. (Fig. 12) may suggest that, at least for these single virus particles, fusion actually occurs. We observed that, in the presence of NH 4 Cl, although the virus may enter the cell, the number of internalized viruses was very small. Because the inhibitor did not affect the virus–cell interaction, we hypothesized that the observed phenomenon may result from extensive anterograde transport. We did not observe co-localization with Rab27; therefore, we excluded the role of exosomes in NH 4 Cl-redirected virus trafficking (data not shown). To verify the role of fast recycling endosomes, we used a different approach. Because it was very difficult to visualize this rapid process, we examined subcellular localization of ZIKV upon NH 4 Cl treatment of Vero cells in which expression of Rab35, a marker protein that guides endosomes to the fast recycling track, was transiently silenced. In the presence of NH 4 Cl, the majority of virions localized to the cell surface of control and scrambled siRNA-transfected cells, whereas those in cells transfected with Rab35-specific siRNA were retained within the cell (trapped near the cell surface) until 1 h p.i. (Fig. 13). This observation led us to conclude that NH 4 Cl impairs ZIKV infection at an early stage by redirecting virions back to the cell surface.

NH 4 Cl-induced re-modelling of intracellular trafficking: effects on viral replication, assembly, and egress
Intracellular trafficking is important for virus replication, not only at the early stages but also during virus assembly and egress. The data regarding NH 4 Cl-mediated remodeling of the endosomal hub led us to hypothesize that the compound may also interfere with the late stages of infection.
To confirm this, Vero cells were infected for 2 h with ZIKV under normal conditions (i.e., in the absence of pH-modifying agents). Afterwards, cells were rinsed thrice with PBS and then incubated at 37°C for 3 days in standard medium containing Baf A1 or NH 4 Cl. Although Baf A1 did not affect the viral yield, NH 4 Cl led to a 6.5 log reduction (Fig. 14), highlighting differences in the mode of action between these two agents and identifying a role for NH 4 Cl during the late stages of infection. Discussion During infection, viruses hijack the inward transport machinery of the cell [42][43][44][45]. While some viruses are able to fuse with the cell membrane and initiate infection almost immediately after entry, others need to be ferried long distances, e.g., during infection of neural cells [46][47][48][49]. This study focused on events that occur after the initial interaction between ZIKV and its cellular receptor. As observed for other flaviviruses and for ZIKV in different in vitro models [10,12,14], our findings demonstrate that ZIKV enters Vero cells by clathrin-dependent endocytosis. Co-localization of viral particles with clathrin was observed 2-10 min p.i., as expected for CME. However, synchronization of virus entry was not ideal due to the fact that the process was regulated by temperature; the numerical co-localization rates were significant but moderate ( Fig. 1; Pearson's coefficient ranging from 0.103 to 0.216 and Manders' coefficients up to 0.384). To ensure that the observed co-localization is not an artifact, complementary studies, including clathrin silencing (Fig. 2) or CME-specific inhibitors (Fig. 3), were carried out and confirmed our initial observations. Subsequent to clathrin-dependent internalization, the virus is encapsulated within the endosome prior to delivery to a precisely defined location. During transport, the microenvironment within the maturing endosome changes gradually; the falling pH, cellular proteases, alterations in the vesicle's membrane content, and fluctuations in redox potential affect the cargo. Viruses are fine-tuned to become active only under conditions that maximize the chances of a productive infection; therefore in most cases this event takes place at a precisely defined site within the endosomal hub [50][51][52][53][54][55][56][57][58]. Tracking single dengue virus particles revealed that they pass the early endosomes and fuse predominantly with vesicle membranes as they mature into late endosomes at 10-13 min p.i. [42]. Our observations of ZIKV trafficking are congruent with these findings; we noticed separation of the ZIKV envelope and capsid protein trafficking routes between 10 and 15 min p.i. These two viral proteins were seen together for the last time in Rab7-positive structures. These results are also consistent with the data obtained with a novel a novel surrogate-receptor approach described by Rawle et al. [59]. Later the ZIKV envelope appeared to be transported back to the cell surface via slow recycling endosomes. The latter observation is striking, because it is commonly believed that, after endocytosis, viruses avoid leaving evidence of their presence on the plasma membrane as this delays detection by immunosurveillance system [46]. Multiple studies on flaviviruses showed that to acquire fusion competence the envelope proteins need to be primed proteolytically at two sites [50,[60][61][62]. 
First cleavage occurs during transport through the trans--Golgi network, where a tight complex of prM and E proteins on the surface of newly formed immature virion undergoes a low pH-induced conformational change, followed by cleavage by furin or a furin-like protease [60,61]. Consequently, mature infectious virions that carry the dimeric E protein in a metastable conformation are released from the cell [62]. The second event occurs during endocytosis into a permissive cell; when the E protein is exposed to low pH, it undergoes rearrangment and enters a fusion-competent state [50]. To identify the factors that activate the ZIKV fusion protein, we used two classes of protease inhibitors. Inactivation of cysteine proteases (e.g., cathepsins) and serine proteases did not affect the infection process. However, furin inhibitors hampered the replication cycle, especially when present during the late stages. This observation is consistent with a common belief that furin in essential for maturation of ZIKV, similarly as for other flaviviruses. Despite the fact that ZIKV E and M proteins structures have been resolved [63,64] and furin-specific cleavage site in ZIKV sequences is present [65], no report showing the role of furin during ZIKV replication is available (Fig. 5) [60,[65][66][67][68][69][70][71][72]. One may conclude that furin or a furin-like enzyme activates progeny viruses, while no second protease is required during entry to susceptible cell (as reported for other members of Flaviviridae family) [60,61]. By contrast, some results advocating a role for furin during virus maturation may be due to an artifact, linked to the low specificity of the protease inhibitors used [73]. Here, we show that entry of ZIKV depends on the endosomal pH, and that endosome acidification is a prerequisite for fusion. Because we wanted to map the entry of a single virion into the cell, we used two agents commonly used to assess virus dependence on acidic environments. NH 4 Cl is a water-soluble salt of ammonia that diffuses into the endosome and acts as a proton sink, thereby inhibiting acidification of the endosome [74]. The second compound, Baf A1, is a vacuolar type H + -ATPase inhibitor that binds to the V0 sector subunit c of the ATPase complex and inhibits H + translocation, thereby blocking endosome acidification [75]. Even though the outcome of both treatments was similar, the mechanism of action appeared to be very different. In the presence of Baf A1, ZIKV particles were internalized and trafficked to the late endosomal compartment. However, due to altered pH in the vesicles, virions were not able to enter a fusogenic state and remained trapped in the endosomes, which seemed to progress slowly to lysosomes. Partial degradation and, to a lesser extent, slow recycling to the cell surface are probably responsible for removal of the degradation products, so that whole virions share the fate of the envelope protein. By contrast, in the presence of NH 4 Cl the majority of virions are retained on the cell surface, which suggests limited virus attachment/entry. First, we confirmed that the virus-receptor interaction was not affected by the basic compound; no alterations were observed. Consequently, potential causes of such a phenomenon include limited internalization or re-trafficking to the cell surface. A limited number of virions localizing inside the cells meant the we could not obtain credible results from co-localization assays. 
Because no significant increase in co-localization with the slow recycling endosome marker Rab11 was observed in NH 4 Cl-treated cells, we tried to inhibit the fast recycling machinery by transiently transfecting Vero cells with siRNAs targeting the fast recycling endosome marker Rab35. Depleting Rab35 resulted in retention of virions inside the cell, showing that the NH 4 Cl-mediated inhibition results from rewiring of the endosomal hub rather than a simple increase in endosomal pH. Conclusions To summarize, we mapped the entry route of ZIKV using Vero cells as the research model. The results are consistent with data on ZIKV entry into other cell types including primary cells, suggesting that the virus uses a universal entry mechanism. First, the virus uses CME to enter the cell, and then travels through the endosomal compartment to reach the late endosomes prior to fusion. Subsequently, the viral envelope tends to recycle to the cell surface. While it is essential that progeny viruses are primed by furin-like enzymes during assembly, entry seems to be protease-independent. Interestingly, we noted that NH 4 Cl (believed to simply buffer the endosomal microenvironment) is in fact re-directing the cargo and ejecting it into the extracellular space. In our previous study we reported
8,951.6
2019-05-03T00:00:00.000
[ "Biology" ]
Observation of 1D Fermi arc states in Weyl semimetal TaAs
Abstract
Fermi arcs on Weyl semimetals exhibit many exotic quantum phenomena. Usually found on atomically flat surfaces with approximate translation symmetry, Fermi arcs are rooted in the peculiar topology of bulk Bloch bands of 3D crystals. The fundamental question of whether a 1D Fermi arc can be probed remains unanswered. Such an answer could significantly broaden potential applications of Weyl semimetals. Here, we report a direct observation of robust edge states on atomic-scale ledges in TaAs using low-temperature scanning tunneling microscopy/spectroscopy. Spectroscopic signatures and theoretical calculations reveal that the 1D Fermi arcs arise from the chiral Weyl points of bulk crystals. The crossover from 2D Fermi arcs to eventual complete localization on 1D edges was captured experimentally on a sequence of surfaces. Our results demonstrate the extreme robustness of the bulk-boundary correspondence, which offers topological protection for Fermi arcs, even in cases in which the boundaries are at the atomic scale. The persistent 1D Fermi arcs can be profitably exploited in miniaturized quantum devices.
INTRODUCTION
Exploring the exotic properties of quasiparticles in topological matter is of great interest in condensed-matter physics [1][2][3]. Weyl semimetals have been theoretically proposed [4][5][6] and experimentally confirmed [7][8][9][10][11][12][13] as an important gapless topological system, which harbors pairs of Weyl points with opposite chirality. As a periodic cross section of the Brillouin zone (BZ) moves across a Weyl point, the Chern number changes from 0 to 1, and back to 0 as it continues across the other Weyl point of the opposite topological charge [6,14]. Consequently, the 2D cross sections between a pair of Weyl points can be viewed as a continuous k-space stack of Chern insulators, leaving a streak of chiral edge states on the surface, forming Fermi arc states [5,15]. In a number of natural gapless crystals, Fermi arc surface states have been identified on surfaces with connecting chiral Weyl points [4][5][6][7][8][9][10][11][12][16][17][18][19][20][21]. The (001) face in TaAs is a representative example of what is termed an arc-allowed surface (AAS) [6–8,13,20,21], as schematically shown in Fig. 1A. On the other hand, on an achiral surface on which the projections of the Weyl points in the 2D surface BZ coincide, topological Fermi arcs are not expected to exist [6], and this is referred to as an arc-forbidden surface (AFS). TaAs surfaces with Miller indices (100) and (110) are AFSs, whereas the (112) and (114) surfaces (see Fig. 1A), deviating from the (110) AFS, can host the projection of chiral Weyl points. These surfaces have not yet been experimentally investigated from a Weyl physics perspective. Evidently, the Fermi arcs of a Weyl semimetal, and their presence or absence, are derived from the peculiar topology of the Bloch bands of the bulk crystal, which inherently assumes an ideal, infinite crystalline system [14]. Although, in practice, one expects the Bloch band picture to hold for finite crystals with finite surfaces comprised of a large number of unit cells, the fate of Fermi arcs in structures down to the atomic scale, such as a step ledge, has yet to be examined experimentally. Here, we report a direct observation of 1D edge states associated with Fermi arcs residing at the step edges on an AFS of a TaAs crystal, as well as on AASs with weak Fermi arc surface states.
Fig. 1 caption (panels D and E): (D) dI/dV spectra captured near the step (labeled with color points in (B)) and 2D surface states far from the edge. A peak just above the Fermi level can be seen near the steps; (E) dI/dV mappings of the same region as presented in (B), with selected bias voltages, show the trivial edge states right on the edge atoms at −200 meV and 250 meV, while the uniform edge states at 55 meV spread into the surfaces from both sides of the steps with a width over 1 nm.
These edge states can be viewed as topological Fermi arcs that survive persistently on atomic-scale 1D step ledges, at which the Bloch theorem is not expected to apply. Spectroscopic signatures from a sequence of surfaces gradually deviating from AAS (001) to approach AFS (110) show that the Fermi arcs undergo a continuous crossover from 2D surface states to eventual complete localization on 1D step edges. Our results indicate that the bulk-boundary correspondence that protects the Fermi arc states is more ubiquitous than previously recognized. Indeed, these topologically protected states exist not only on 2D surfaces, but also on 1D step edges, the latter of which can be used to create interesting 1D quantum devices or a Weyl semimetal single crystal with contiguously covered topological surface states.
RESULTS AND DISCUSSION
A sequence of atomically flat surfaces with different Miller indices in high-quality TaAs single crystals was measured comprehensively at 4.2 K in a commercial STM system (UNISOKU-1300) (see Materials and Methods). Tunneling spectra and differential conductance (dI/dV) mappings reveal uniform 1D edge states at the step edges of the (110) and (112) surfaces. However, no signature of such edge states was observed at the steps on the (001) and (114) surfaces. The particular spatial and energy distributions distinguish these unique states from common trivial edge states originating from defects, such as dangling bonds. The correspondence between the experimental results and the theoretical calculations shows that the observed edge states originate from the localization of Fermi arcs at the step edges.
Discovery of 1D edge states on a cleaved (112) surface
We started from an atomically flat, pristine (112) surface prepared by in-situ cleaving. A 3D topographic STM image, exhibiting long and straight step edges, is shown in Fig. 1B. Each terrace step is found to be one atom high, i.e. ∼2.4 Å. Figure 1C shows an atomically resolved topographic image of the top terrace, which agrees precisely with the configuration of (112). The step edge runs along the [131] direction, which is the intersection between the (112) and (114) crystal facets, and has an inclination angle of ∼52° from [110] (right panel in Fig. 1C). Tunneling spectroscopy was performed on selected points near a step edge, as marked in Fig. 1B. As shown in Fig. 1D, a clear peak near the Fermi level appears in the spectra proximal to the step edge, in comparison with the spectra taken far from the line step (the local density of states (LDOS) on the 2D surface), suggesting localized electronic states near the step edge. In the tunneling conductance mapping presented in Fig. 1E, data were collected on the same area as shown in Fig. 1B with various bias voltages. Trivial dangling bond states can be observed, although mainly confined on the edge atoms around −200 meV and 250 meV.
Of special interest are those uniform states next to the step edges at the energy of 55 meV, which correspond to nearzero energy peaks, as indicated by the dotted green line in Fig. 1D. Distinguished from the trivial edge states, these localized edge states disperse from the step to the surface with a width over 1 nm in real space. Further analyses will demonstrate that these are, in fact, remnants of Fermi arcs, although these measurements are taken on an AFS. The above findings are highly interesting, since extant theoretical works have predicted that Fermi arcs can appear as localized states at 1D step edges in a 3D Weyl system [22,23]. We will focus on spectroscopic evidence for these 1D Fermi arc states in this paper. 1D edge states originate from topological Fermi arcs To elucidate the origin of 1D edge states and their possible connection to 2D Fermi arcs, we systematically investigate surface states on a sequence of planes (1, 1, 2n), of which n = 0 is AFS (110), n = ∞ is AAS (001) and n = 1, 2 denotes the nearest surfaces that deviate from (110), i.e. (112) and (114), as shown in Fig. 1A. The STM topographic image captured on the surfaces reveal a number of atomically flat facets. By comparing it with the structural models shown in Fig. 2A-C, a number of highly crystalline facets are identified to be (110), (112) and (114) planes, all containing [110] step edges, as shown in Fig. 2D-F. The [110] step edge is parallel to the (001) AAS surface, and has a height of several atoms. In the following, we will examine the signal of Fermi arcs on these facets. Figure 3B shows the selected tunneling conductance (dI/dV) spectra along the dashed arrow line in Fig. 3A on the AFS (110) facet as the tip approaches the [110] step edge. Non-zero LDOS at the Fermi level indicates the (semi-) metallic nature of the surface. The tunneling spectrum taken far away from the step, representing the 2D surface states, exhibits a small dip near the Fermi level. As the probe is moved toward the step edge, the dip in the tunneling spectrum gradually disappears while a peak grows steadily near the Fermi level. Figure 3C Natl Sci Rev, 2022, Vol. 9, nwab191 This result further suggests the topological origin of the edge states [24,25]. To understand the origin of the emergent peak in STS spectra near the step edge, electronic structure calculations were carried out to describe the lowenergy excitations of the step edge in question. A slab model with terraced surfaces on the top and bottom was used, with Miller indices (n, n, 2) (Materials and Methods, Supplementary Data). The flat region on the terrace has a width of n| c |, with the (001) plane (As-terminated for top and Ta-terminated for bottom) exposed at the step ledges with a height of unit cell | a + b|. The Ta-As chains propagate along the [110] direction (cf. Fig. 2A). Based on a tight-binding Hamiltonian obtained from densityfunctional theoretic calculations, surface Green's functions were obtained with an iterative technique to yield surface spectral function for direct comparison with the tunneling spectra [26]. For the model described above, the flat region is the (110) AFS, and consequently one expects to see topological states only near the step edges, which would correspond precisely with the experimental finding. The spectral functions A(k, ε) on (n, n, 2) (n = 6 in Fig. 3E) for ε = −20 meV on the top surface are displayed in Fig. 3F. It can be seen that both Fermi arcs and trivial Fermi surfaces are present in the surface BZ. 
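The surface spectral functions A(k, ε) discussed here come from the iterative surface Green's function technique cited as ref. [26], applied to a Wannier-derived tight-binding Hamiltonian. The sketch below is a generic version of that scheme for a semi-infinite stack of principal layers; the two-band blocks are placeholders, not the actual TaAs Hamiltonian used in the calculations.

```python
import numpy as np

def surface_green_function(H00, H01, energy, eta=1e-4, max_iter=100, tol=1e-10):
    """
    Iterative surface Green's function for a semi-infinite stack of principal
    layers with intra-layer block H00 and inter-layer coupling H01
    (Lopez Sancho-style decimation).
    """
    E = (energy + 1j * eta) * np.eye(H00.shape[0])
    eps_s = H00.copy()        # effective surface Hamiltonian
    eps = H00.copy()          # effective bulk Hamiltonian
    alpha = H01.copy()
    beta = H01.conj().T.copy()
    for _ in range(max_iter):
        g = np.linalg.inv(E - eps)
        eps_s = eps_s + alpha @ g @ beta
        eps = eps + alpha @ g @ beta + beta @ g @ alpha
        alpha = alpha @ g @ alpha
        beta = beta @ g @ beta
        if np.max(np.abs(alpha)) < tol:
            break
    return np.linalg.inv(E - eps_s)

def spectral_function(H00, H01, energy):
    """A(E) = -Im Tr G_s(E) / pi: the surface-projected density of states."""
    Gs = surface_green_function(H00, H01, energy)
    return -np.trace(Gs).imag / np.pi

# Toy two-band principal layer (placeholder matrices, for illustration only)
H00 = np.array([[0.0, 0.5], [0.5, 0.0]])
H01 = np.array([[0.2, 0.0], [0.0, -0.2]])
print(spectral_function(H00, H01, 0.1))
```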
Moreover, the Fermi arcs can be observed in the energy range between −5 meV and −30 meV with their maximum intensity at −20 meV, which corresponds to the peaked DOS in the spectra signatures (see Fig. S6). To determine the spatial location of the Fermi arcs on the terraced surface with a width over 7 nm (12 Ta-As chains), we chose one of the Fermi arcs that is clearly isolated from other surface states, and computed its projection on different Ta-As chains as labeled in Fig. 3E on the terraced slab model, and the projection weights have been depicted in Fig. 3F and G. Other visible arcs on the top and the bottom surfaces at ε = −20 meV are shown in Fig. S5. It is evident that the spectral weight of the Fermi arcs is most pronounced at the step edge, and decreases steadily as the distance from the step increases ( Fig. 3G and H). The calculation results support the existence of the 1D Fermi arc edge states at the step edge on the (110) surface, which disperse from the edge to the surface with a width of ∼ 1 nm in real space. Crossover from 2D Fermi arc states to 1D Fermi arc edge states The atomically resolved STM topography of (112) in Fig. 2E shows the relaxation of surface atoms induced by the annealing process (Fig. S3). Figure 4A presents the STM topographic surface plates (112) with steps. The height profile across the surface shows that the height of the small step ledge is ∼0.85 nm (four atomic layers). In addition, the small ledge is along [110] and terminates on the (001) surface, which permits the existence of Fermi arc surface states. For the tunneling spectra acquired far from the step edge, a small shoulder is observed in the curves at a bias energy of −20 meV. When the STM tip approaches the step edge, the shoulder evolves into a pronounced peak in the dI/dV spectra in the energy range between −20 meV and 20 meV (Fig. 4B). The near-zero energy peaks are distinguishable from the trivial edge states, which are precisely confined on the edge atoms (Fig. 4B). They also spread over several atoms on both sides of the step (Fig. 4B), and even disperse across the narrow terrace R2 (∼1.3 nm) (Fig. 4B). The 1D edge states revealed by the near-zero energy peaks can be clearly seen in the spatial-resolved dI/dV along the step edges (Fig. 4C). The peak positions near the steps are spatially steady and essentially energy independent (i.e. without interference patterns), as shown in Fig. 4C. The lack of interference rules out the possibility that the standing wave originated from the scattered electrons on the steps [24]. A slab model with terraced (n, n, 2n + 1) surfaces on the top (As-terminated) and bottom (Taterminated) was also constructed to illustrate the 1D edge states on the (112) terrace. The level region on the terrace has a width of n| c − a − b| (n = 4, width ∼5 nm), with the (001) surface exposed at the atomic-thick step ledge with a height of unit cell | b|, as schematically illustrated in Fig. 4D. The spectral functions A( k, ε) of the terraced surface for ε = −20 meV on the top surface (bottom surface in Fig. S8) are displayed in Fig. 4E. The visible Fermi arcs were selected (in Fig. 4E) to calculate the projection weight on Ta-As chains as labeled in Fig. 4D. The spectral weight exhibits pronounced localization at the step edge (Fig. 4F), which accounts for the observed 1D edge states in Fig. 4B and C. 
Noticeably, it decreases more slowly than that on the (110) terrace as the distance from the step increases, indicating the existence of 2D topological Fermi arc surface states. Remarkably, the 1D Fermi arc edge states appear to coexist with the 2D Fermi arc surface states in this case. The Fermi arc states in Weyl semimetals, protected by the peculiar topology of the Bloch bands of the bulk crystal, are robust against weak surface perturbations [14,20,27]. Here, we examine how the near-zero energy edge states respond to local perturbation. In Fig. 4A, the whole region is divided into three areas, R1, R2 and R3, which have widths of R1 (∼8 nm) > R3 (∼2 nm) > R2 (∼1.3 nm). The metallicity (surface DOS intensity) of each area revealed by spectroscopic signatures follows the relationship R1 > R3 > R2 (Fig. 4G). The peak of the 1D edge states is robust and scarcely affected when the changes of surface size and metallicity are considered as weak perturbations. Moreover, the peak also shows protection against the weak disorder of local defects. Two kinks induced by parallel translation of the edge can be discerned, as shown in Fig. 4A. We took dI/dV curves spatially at each point, as numbered in Fig. 4H.
Fig. 4H (caption fragment): … where dI/dV curves were measured. Two kinks can be observed along the step edge. Right panel: schematic illustration showing that the low-energy peaks in the dI/dV spectra are robust, without significant change, even at the kinks.
Fig. 4H presents the possible atomic configuration of the edge in accordance with that in Fig. 4A. The configuration of the dangling bonds in the kinks is different (position Nos. 5–8 and Nos. 16–19), in the sense that the kinks can be considered as disorders or point defects that disturb the LDOS. If the 1D edge states were of trivial origin, the corresponding peaks should have been changed by local defects. However, no substantial changes in the spatially resolved dI/dV spectra at each numbered point (right panel in Fig. 4H) were found, which lends further support to their topological nature. The surface plane of (114) with a step edge along [110] (Fig. 5A) is the last member of the sequence that we prepared. The (114) surface exhibits a decreased inclination angle (∼50°) with respect to the AAS (001). Surface states on (114) have been detected. Although the LDOS at the Fermi level increases slightly as the STM tip approaches the step edge, no extra peaked STS features can be seen near the step edge (Fig. 5B). In the spatial spectra results (Fig. 5C), the dangling bond states can be observed on the Ta-As chains at an energy that is above the Fermi level, and an increased DOS is also seen on the inclined ledge. However, no uniform 1D edge states with possible topological origin can be discerned. Our theoretical calculations on (114) demonstrate that Fermi arcs distribute all over the surface, and the projections of Fermi arcs on all Ta-As chains are of comparable weight (Fig. 5E-G). This suggests that the 2D Fermi arc surface states dominate the topological information on the (114) surface, which is similar to the (001) surface with steps, where no topological edge states exist owing to the 2D Fermi arcs all over the surface (Fig. S2). This confirms the expectation that, as the surface becomes closer to AAS (001), the spectroscopic signature of the surface states increasingly resembles that obtained on the (001) surface.
The above results also verify that, when the surface indices are gradually deviating from AFS (110) and approaching AAS (001), the Fermi arcs undergo a continuous crossover from 1D edge states to complete 2D topological surface states. In Fig. 1, it can be seen that plane (112) has the step edge along [131]. Since [131] can be viewed as a ledge of AAS (114) where Fermi arc surface states exist, the chiral Weyl points have a finite weight projected on the ledge, and consequently the 1D Fermi arc edge states appear at the step edge. In aggre-gate, the results further confirm that the 1D Fermi arc states exist ubiquitously in 3D Weyl crystal step edges. CONCLUSIONS In this work, we not only observed 1D Fermi arc edge states, but also explored the evolution of Fermi arc states. In TaAs crystal, the (110) and (001) facets are perpendicular to each other. When a step is formed by the (110) surface (AFS) and the (001) ledge (a finite AAS), the Fermi arcs can only appear on the AAS ledge with a certain penetration depth on the AFS, which therefore forms localized Fermi arc states on the edge. As the surface index changes, however, an evolution of the Fermi arc states can be observed from our calculations. To elucidate the evolution of Fermi arc states, we examined the steps' form by the (1, 1, 2n) surface with the (001) ledge. For n = 0, the surface is the AFS (110), and for n = ∞ the surface is the AAS (001). As shown in Fig. 4 where n = 1, the step is formed by two AASs (112) and (001), and the Fermi arcs can survive on both facets. Therefore, on the (112) facet of the step, the coexistence of 2D Fermi arc surface states and 1D Fermi arc edge states can be observed simultaneously in the calculated results in Fig. 4F. By increasing the index n, the surface (1, 1, 2n) approaches the surface (001), and the 2D Fermi arc surface states on the (1, 1, 2n) facet become more prominent. In the case of n = 2 (Fig. 5), the step is formed by two AASs (114) and (001), and the 2D Fermi arc surface states dominate on the top (114) facet. Overall, the STM/STS measurements and the theoretical calculations performed in this work demonstrate that 1D topological Fermi arc states widely exist on atomic step edges, which can be conceptually viewed as the projection of chiral Weyl points in the bulk of Weyl semimetal TaAs. In addition, the 1D Fermi arc edge states undergo a continuous crossover to the 2D surface states as the surface gradually deviates from an AFS and approaches an AAS, and both the 1D and the 2D Fermi arc states may coexist in the process. The results reveal that the bulk-boundary correspondence in 3D Weyl semimetals remains at work even when the boundary is down to the atomic scale. Details of the sample preparation High-quality single crystals of TaAs were grown by the standard chemical vapor transport method, as described in [28]. For the processed TaAs samples: the surface was polished by abrasive papers after the (110) surface was demarcated by Laue diffraction. Then, the sample was transferred into an ultra-high vacuum chamber and repeatedly sputtered by Ar + ions with energy 500 eV. The annealing process was carried out on the sample by electron beam heating with a temperature of ∼950 o C for 30 min under a vacuum of 10 -9 Torr. For the cleaved samples: the thickness of synthetic TaAs crystals ((001) plane) was polished down to ∼300 μm with abrasive papers from both sides. It was fixed to the sample holder for cleavage on the (110) plane. 
Cleavage of the sample was carried out in situ in a high vacuum chamber (2.5 × 10 -10 Torr) at room temperature, with a cleaving knife equipped in the Unisoku-1300 STM/STS system. After the cleavage, the sample was transferred without interrupting the high vacuum into the STM chamber. After numerous rounds of trial and error, a region of the (112) plane was captured by STM measurement. STM/STS measurements STM/STS are performed at liquid helium temperature (4.2 K) in the Unisoku-1300 system with a Nanonis controller and the built-in lock-in amplifier. Tungsten tips were used in all of the STM/STS measurements. In the measurement of the topographic images, the constant current mode was used with the setting sample bias V bias = 100 mV and I setpoint = 500 pA. When performing the tunneling spectra (the dI/dV curves) and conduction maps, lock-in techniques were used with a modulation amplitude of 3-5 mV, frequency of 707 Hz, V bias = 200 mV and I setpoint = 500 pA to 1 nA. The difference of energy interval that arises on prominent 1D edge states on cleaved (112) and processed (112) in Fig. 1 and Fig. 4, respectively, may be ascribed to the details of doping in different batches of samples. Calculations The ab initio calculations were performed using the Vienna ab initio simulation package (VASP) [29] within the generalized gradient approximation (GGA) parametrized by Perdew, Burke and Ernzerhof (PBE) [30]. The Kohn-Sham single-particle wave functions were expanded in the plane wave basis set with a kinetic energy truncation at 400 eV. The crystal structure of the unit cell of TaAs was fully relaxed until Hellmann-Feynman forces on each atom were <0.001 eV/Å with a 12×12×3 k-mesh sampled in the BZ. To calculate the surface and bulk electronic structure, a tight-binding Hamiltonian was constructed using the VASP2WANNIER90 interface [31]. The surface states' electronic structures were calculated by the surface Green's function technique [26]. DATA AVAILABILITY All data needed to evaluate the conclusions in the paper are present in the paper and/or the Supplementary Data. Additional data related to this paper may be requested from the authors. SUPPLEMENTARY DATA Supplementary data are available at NSR online.
5,458.6
2021-10-25T00:00:00.000
[ "Physics" ]
Head orientation benefit to speech intelligibility in noise for cochlear implant users and in realistic listening conditions
Cochlear implant (CI) users suffer from elevated speech-reception thresholds and may rely on lip reading. Traditional measures of spatial release from masking quantify speech-reception-threshold improvement with azimuthal separation of target speaker and interferers and with the listener facing the target speaker. Substantial benefits of orienting the head away from the target speaker were predicted by a model of spatial release from masking. Audio-only and audio-visual speech-reception thresholds in normal-hearing (NH) listeners and bilateral and unilateral CI users confirmed model predictions of this head-orientation benefit. The benefit ranged from 2 to 5 dB for a modest 30° orientation that did not affect the lip-reading benefit. NH listeners' and CI users' lip-reading benefit measured 3 and 5 dB, respectively. A head-orientation benefit of ∼2 dB was also both predicted and observed in NH listeners in realistic simulations of a restaurant listening environment. Exploiting the benefit of head orientation is thus a robust hearing tactic that would benefit both NH listeners and CI users in noisy listening conditions.
I. INTRODUCTION
Difficulty understanding speech in background noise affects everyone from time to time, but is a particular problem for hearing-impaired listeners. Speech intelligibility is powerfully affected by the speech-to-noise ratio (SNR); just a few decibels can separate perfect comprehension from complete incomprehension. Speech intelligibility in noise can consequently be measured with some precision using a speech reception threshold (SRT), defined as the SNR at which 50% intelligibility is achieved. Hearing-impaired listeners often have SRTs only 4–6 dB higher (worse) than normal-hearing (NH) listeners (Plomp, 1986), but this difference is enough to make speech intelligibility in noise their most significant disability (Kramer et al., 1998). Amplification from hearing aids improves speech intelligibility in quiet, but it does not improve SNR and so makes no difference in noise unless the noise is inaudible (Plomp, 1986). Noise reduction algorithms improve SNR. Although they may reduce listening effort (Desjardins and Doherty, 2014), they provide little improvement in intelligibility for listeners with hearing aids, because the speech signal is distorted by the processing (Loizou and Kim, 2011). Cochlear implant (CI) users have even worse problems, with SRTs 10–20 dB higher than NH listeners. Some noise-reduction algorithms and the use of directional microphones have been shown to provide a benefit for CI users in limited conditions (Mauger et al., 2012). Any other method of improving SRTs in noise by just a few decibels would provide significant benefits to all listeners, but particularly for users of auditory prostheses. When speech and noise are spatially separated, there is an improvement in SRT called spatial release from masking (SRM).
This effect results from a combination of acoustic differences between the stimulus at each ear and processing of these interaural differences by the brain. It is generally assumed that listeners directly face their conversation partner, and it is thought by both researchers and clinicians that this behavior is most natural (Bronkhorst and Plomp, 1990), most frequently encountered (Koehnke and Besing, 1996), or necessary for lip-reading (Plomp, 1986). However, it would clearly be useful to increase the SRM when possible. We first noted the potential benefits of head orientation using a computer model of SRM in noise and reverberation (Jelfs et al., 2011;Lavandier and Culling, 2010). The Jelfs et al. version of the model is the one used here. The model computes an effective target-to-interferer ratio that is the sum of contributions from two mechanisms. The better-ear path computes the better ear SNR resulting from the headshadow effect. The binaural-unmasking path computes binaural-masking level differences in each channel from the interaural phase differences between target and masker and from the masker interaural coherence. Both contributions are weighted according to an importance function for speech, before being integrated across frequency bands, then summed. Head orientation affects both contributions to the model by changing target-to-interferer ratio at the ears as well as interaural time delays. The model uses binaural-room impulse responses in order to reflect the impact of reverberation, when present. The Jelfs et al. model has been validated against a wide variety of SRT data Jelfs et al., 2011;Lavandier et al., 2012), predicting the level of SRM in different spatial configurations with different numbers of masking noises and in different levels of reverberation. Increased SRM was predicted when listeners faced a location between the speech source and a single interfering noise source. This prediction is intuitive, because the head acts as an acoustic barrier, and the ear on the side of the speech is shielded from the interfering noise by the acoustic shadow of the head. In addition to this head-shadow effect, the ear on the side of the speech is more sensitive to sound coming from 30 to 60 because the head acts as a baffle and the pinnae increases sensitivity toward the front. Appropriate head orientation to place the speech source in this region of personal space may thus improve speech intelligibility. Existing quantitative studies of head orientation behavior in naturalistic settings have not been analyzed in such a way that they would identify a tendency to orient at 30 away from the target speaker (Ching et al., 2009;Ricketts and Galster, 2008). Most research on SRM assumes that the target speaker will be directly in front of the listener (Beutelmann and Brand, 2006;Bronkhorst and Plomp, 1992;Peissig and Kollmeier, 1997;Plomp, 1986). SRTs are rarely measured with the target speaker in any other location. The selection of target speech and noise positions can have a substantial impact on the magnitude of SRM. For CI users, SRM is almost always tested speech-facing (i.e., the listener facing the target speaker head on) and with a masker at 90 [see reviews in Van Hoesel (2011) and Culling et al. (2012)]. In this configuration and in a sound-treated room, SRM reaches only 3 to 5 dB (e.g., Litovsky et al., 2009). 
However, three studies have tested CI users in the symmetrical situation where speech and noise sources are placed at equal and opposite azimuths (±45° or ±60°) (Laske et al., 2009; Laszig et al., 2004). These studies demonstrated that with speech and noise sources separated by 90° or 120°, a head orientated midway between the sound sources could lead to a significant head-shadow benefit of bilateral over unilateral implantation (10 to 18 dB). This benefit was defined as the SRT improvement from the spatial configuration that acoustically penalized the better ear (or CI) to the mirror-imaged configuration which favored it. The maximum head-shadow benefit predicted by the Jelfs et al. model and experimentally confirmed in Culling et al. (2012) is 18 dB for this case. In a first study focused on the benefit of head orientation to speech intelligibility, Grange and Culling (2016) established a baseline for young NH listeners. In a sound-treated room, we demonstrated that a maximum head-orientation benefit (HOB) of 8 dB was predicted and confirmed to occur at a 60° head orientation when speech and noise were placed at 0° and 180° azimuth, respectively. With the noise placed between 150° and 90°, HOB peaked at 4 to 6 dB at head orientations in the 30° to 45° range. In all these configurations, with noise placed in the rear hemifield, most of the available HOB could be obtained at a 30° head orientation. The first experiment of the present report aims to show that in situations similar to those described in Grange and Culling (2016), CI users, too, can obtain a significant HOB. We also aim to demonstrate that HOB can be obtained at a modest 30° head orientation that does not detrimentally affect lip-reading, such that head orientation and lip-reading provide cumulative benefits. The second experiment addresses the potential criticism that such effects are limited to artificial laboratory situations. The effect, while more limited in reverberation, was shown to be robust in real-life situations by creating a very realistic simulation of a restaurant with a target talker sat at the same table as the listener and many other voices distributed around the room.
… Loizou et al. (2009). For a bilateral CI user, the model output the better-ear target-to-interferer ratio, assuming equal effectiveness of CIs for speech intelligibility in noise. For a unilateral CI user, the model output the target-to-interferer ratio at their only CI (assuming negligible hearing in the contralateral ear). Here, the Jelfs et al. model was used as per Culling et al. (2012), with the exception that we used as model input binaural room impulse responses acquired with a head-and-torso simulator in the test environment. Culling et al. (2012) argued that the position of a microphone on a processor has a very modest impact on SRM. Incorporating in the model unequal effectiveness of CIs was also found to be unnecessary, since it only marginally changed the high correlation between CI data from previous reports and corresponding model predictions. Given the above, no modification of the model was deemed necessary.
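As a rough illustration of how the model output described above is assembled, the sketch below combines a better-ear signal-to-noise ratio with a binaural-unmasking advantage, weights them by a speech importance function and sums across frequency bands. All numbers are placeholders: the real model derives band SNRs and unmasking terms from binaural room impulse responses, sets the unmasking contribution to zero for CI users, and uses only the implanted ear for unilateral users.

```python
import numpy as np

def effective_tir(snr_left_db, snr_right_db, bmld_db, band_importance):
    """
    Schematic combination in the spirit of the Jelfs et al. (2011) model:
    per band, take the better-ear SNR, add the binaural-unmasking advantage,
    weight by a normalized speech importance function, then sum across bands.
    """
    better_ear = np.maximum(snr_left_db, snr_right_db)
    w = np.asarray(band_importance, dtype=float)
    w = w / w.sum()
    return float(np.sum(w * (better_ear + bmld_db)))

# Three illustrative frequency bands (values are invented, not measured data)
snr_L = np.array([-2.0, 1.0, 3.0])     # left-ear SNR per band, dB
snr_R = np.array([-6.0, -4.0, -1.0])   # right-ear SNR per band, dB
bmld  = np.array([4.0, 2.0, 0.5])      # binaural unmasking per band, dB (0 for CI users)
importance = np.array([0.2, 0.5, 0.3]) # speech importance weights

print(effective_tir(snr_L, snr_R, bmld, importance))
```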
Selection of spatial configurations Four spatial configurations were selected: target and masker collocated and in front (T 0 M 0 ) served as a reference for SRM data computation; target in front and masker at the rear (T 0 M 180 ) was predicted to provide the maximum attainable HOB; target in front and masker at the side contralateral to the better ear (T 0 M 90 ) or on its ipsilateral side (T 0 M 270 ) were selected because these two configurations were utilized in most prior studies, as discussed in Culling et al. (2012). The three spatially separated configurations are illustrated within each panel of Fig. 1. Jelfs et al. model predictions for SRM as a function of head orientation away from the target speaker are shown in the panels of Fig. 1, as derived from binaural room impulse responses acquired in the test environment. These predictions illustrate the benefit of head orientation in each separated spatial configuration for NH listeners and for bilateral (BCI) and unilateral (UCI) CI users, when the left ear (or CI) is the better ear. Arrows highlight SRM for the favorable 30 head orientation at which, according to the model, a large proportion of SRM can be obtained. Where shown, the difference between BCI and NH predictions corresponds to the binaural unmasking contribution to SRM, assumed to be only available to NH listeners; the difference between UCI and BCI predictions corresponds to the predicted benefit of bilateral, over unilateral implantation (see Culling et al., 2012, for in-depth discussion). In this experiment, the listener either faced the target speaker or faced 30 away (typically favoring the better ear when sources were separated). A modest 30 head orientation was expected to provide a substantial HOB without detrimental impact on the lip-reading. All plots in the results section are transformed to present the left ear as the better ear for speech intelligibility in noise. When the better ear was the right ear, the data were mirrored about the median plane. NH listeners were tested assuming an arbitrary better ear (balanced across participants). Each BCI user's better performing CI in noise was established by comparison of SRTs obtained with speech in front and noise either to the right or to the left in initial practice runs. All CI users were tested in conditions favoring their better or only ear/CI. For UCI users, SRM was additionally measured with the masker at the side ipsilateral to their CI (T 0 M 270 ). Indeed, even in this worst-case scenario, UCI users were predicted to obtain a large HOB from a modest 30 head turn away from the speech direction. Participants Ten young NH (NH y ) participants, self-reported as normal hearing and aged 18-22 years (mean age 20 years), were recruited from the Cardiff University undergraduate population (through the School of Psychology's Experimental Management System). Eight BCI-and nine UCI-user volunteers were recruited from England and Wales through the National CI User Association (NCIUA) and the Cochlear Implant User Group 2004 (Yahoo! CIUG-2004). Table I details the specifics of our CI participants. All but one BCI user (B1) had had their last implant fitted at least a year prior to testing and had sequential implantation with the second implant fitted between 2 and 12 years after the first. Participant B1 was simultaneously implanted and had the implants switched on 3 months before testing. All UCI participants had had their implant fitted at least 3 years before testing. 
All CI users but one (U9) had hardware and software settings such that no microphone directionality was used during testing. Participant U9 used the Esprit 3G processor from Cochlear. This participant's data will be treated separately as an illustration of the effect of microphone directionality on HOB. An additional ten NH listeners were recruited from the local Cardiff population, age-matched to the CI users within ±5 years. All had normal hearing for their age, as confirmed via pure-tone audiometry screening (<20 dB hearing level from 500 Hz to 4 kHz). From the ten age-matched NH (NHam) listeners, a subset was age-matched to each CI user group within 0.5 years on average. All participants were briefed verbally and in writing prior to signing a consent form. All testing and forms were approved by the Ethics Committee of the Cardiff University School of Psychology.

Laboratory setup

Two sound-treated rooms were employed, one at Cardiff University (3.2 m × 4.3 m, 2.6 m ceiling height) and one at University College London (2.7 m × 4.3 m, 2.2 m ceiling height). Four Minx-10 speakers (Cambridge Audio, London, United Kingdom) fitted 1.3 m above the floor were arranged at cardinal points, at a distance of 1.5 m (Cardiff) and 1.3 m (UCL) from the center of the listener's head. The cross they formed was aligned with the walls and offset to one end of the room, such that the rear and side speakers were equidistant from the nearest walls and the cross was as remote from the access door as practicable.

FIG. 1. Jelfs et al. (2011) model predictions, from binaural room impulse responses acquired in the sound-treated Cardiff room, of spatial release from masking as a function of head orientation away from the target for normal-hearing listeners (NH, solid black line), bilateral (BCI, solid grey line) and unilateral (UCI, dashed black line) CI users at the three separated spatial configurations: target in front and masker at the rear (T0M180, center panel), target in front and masker on the side favoring the better ear (T0M90, right panel) and target in front and masker on the side ipsilateral to a UCI user's CI (T0M270, left panel). All graphs assume the better ear to be the left ear, and the arrows point to the prediction for a favorable 30° head orientation.

Each channel of the audio chain was judged to be sufficiently consistent for our purposes in level and spectral response via acquisition of impulse responses and comparison of the corresponding excitation patterns (Moore and Glasberg, 1983). The reverberation time (60 dB decay) of both rooms was measured to be approximately 100 ms from the impulse responses, using the reverse integration technique (Schroeder, 1965). The two rooms were acoustically matched as far as practicable with the use of twelve 30 cm × 30 cm foam panels placed where side reflections were most likely to occur. The acoustical matching was judged sufficient for our purpose when the Jelfs et al. model predictions in Fig. 1 did not differ by more than 1.2 dB at any point and typically differed by less than 0.5 dB; HOB predictions all differed by less than 0.5 dB. Since all NH listeners and most CI users were tested in the Cardiff room, predictions from binaural room impulse responses obtained in that room were used throughout this report. An adjustable swivel chair was positioned in each room such that, regardless of chair rotation, the listener's head was at the center of the loudspeaker array.
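For reference, the reverse-integration estimate of reverberation time cited above (Schroeder, 1965) can be sketched as follows; the fitting range and function name are illustrative choices, not the exact analysis settings used here:

```python
import numpy as np

def reverberation_time(ir, fs, decay_db=60.0, fit_range=(-5.0, -35.0)):
    """Estimate reverberation time from an impulse response by
    Schroeder reverse integration.

    A line is fitted to the backward-integrated decay curve between the
    fit_range levels (dB re: its start) and extrapolated to the requested
    decay (60 dB by default).
    """
    energy = np.asarray(ir, dtype=float) ** 2
    edc = np.cumsum(energy[::-1])[::-1]           # Schroeder energy-decay curve
    edc_db = 10.0 * np.log10(edc / edc[0])        # dB re: total energy
    t = np.arange(len(energy)) / fs
    hi, lo = fit_range
    mask = (edc_db <= hi) & (edc_db >= lo)
    slope, _ = np.polyfit(t[mask], edc_db[mask], 1)   # dB per second (negative)
    return -decay_db / slope
```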
The experimenter remained in the room at all times, outside the loudspeaker array and as far as practicable from it. This arrangement was essential to aid interaction with CI users and obtain prompt feedback from them. The speakers were powered by an Auna six-channel solid-state amplifier (Chal-Tec, Berlin, Germany) driven by a MAYA44USB+ digital-to-analogue converter (ESI AudioTechnik, Leonberg, Germany) connected to a laptop computer. All stimuli were controlled by custom-designed MATLAB (The MathWorks, Natick, MA) programs, making use of the Playrec toolbox (Humphrey, 2008-2014). For audio-visual presentations, the speech audio and video streams were synchronized by the VLC program (VideoLAN, Paris, France) and presented on a 17-in. video monitor placed immediately below the 0° azimuth loudspeaker.

Stimuli

Two SRT protocols were employed, each requiring its own set of stimuli. The first made use of Speech Perception in Noise (SPIN) sentences (Kalikow et al., 1977) recorded audio-visually, so that audio and audio-visual SRTs could be measured and compared. The second employed Institute of Electrical and Electronics Engineers (IEEE) sentences from the Harvard corpus (speakers DA and CW), as previously used in Grange and Culling (2016), in order to measure more accurate audio-only SRTs. For the first protocol, a set of 320 high-predictability SPIN sentences was audio-visually recorded with an English male speaker (from southeast England). To complete the required set, 120 new sentences were generated in addition to the 200 original SPIN sentences, following the rules established by Kalikow et al. (1977). In high-predictability SPIN sentences, the target word is the last word, which is rendered easier to identify by the contextual information that the previous words provide. The redundancy of these SPIN sentences was expected to assist CI users and help reduce the standard deviation of the SNRs used in the SRT computation. The audio-visual recordings were such that the speaker's face covered two-thirds of the video monitor height, delivering a near life-size face. The speaker faced the camera at all times, with his face well lit, for lip-reading purposes. The audio-visual files were batch-processed with FFmpeg (Bellard, 2013) to separate the audio and video streams and enable adaptive alteration of sound levels. For the second SRT protocol, a set of 360 IEEE sentences was employed. All audio files were equalized for root-mean-square power computed over the 3-4 s recordings. The voice associated with each test was used to synthesize a masking noise matched in long-term frequency spectrum to that voice. The speech-shaped noise was created using a 512-point finite-impulse-response filter based on the calculated excitation pattern of the speech material (Moore and Glasberg, 1983).

Audio and audio-visual SRT protocol

Changes were made to our "standard" adaptive threshold method described in Culling et al. (2012) in an effort to better adapt the test to CI users. High-predictability SPIN sentences (Kalikow et al., 1977) were used instead of IEEE sentences. Initial SNRs were set to −18 dB and −4 dB for NH listeners and CI users, respectively. For the pre-adaptive phase, the SNR increment for each repetition was +4 dB. In the event that the listener failed to recognize the target word after 4 presentations, a new sentence was presented at the previous presentation SNR.
The new sentence could be repeated a maximum of 3 times (with +4 dB increments) before being replaced with another sentence (again, with no SNR increment). In fact, none of the listeners required more than two sentences (i.e., more than seven presentations) before recognizing a target word, the trigger required to start the adaptive phase. Once the staircase commenced, the SNR was adaptively changed in ±2 dB increments, as per the standard protocol. However, each sentence was presented up to three times at increasing SNRs, rather than being renewed at each SNR, until the target word was identified. Repetition of sentences following unsuccessful trials was intended to make more economical use of the relatively small number of audio-visually recorded SPIN sentences. Following Culling et al. (2012), the overall sound level throughout an experiment was maintained at 65 dB(A) (as measured by a digital sound-level meter): an increase in SNR was achieved by a simultaneous increase of the target level and decrease of the masker level, such that the overall stimulus level was fixed and could not become uncomfortable. This new protocol is hereafter referred to as the "SPIN AV protocol." The measurement precision of the SPIN AV protocol was compared to that of the standard protocol (which used ten sentences) as a function of the number of sentences used, in an audio-only, collocated-source paradigm. The standard deviation of 40 T0M0 SRT measurements per protocol with four NHy listeners asymptoted with the SPIN AV protocol at the same level (1.9 dB) as the standard protocol when using nine SPIN sentences per run. Nine sentences were therefore used for each SRT measurement in this experiment. An SRT offset of −1 dB with the SPIN AV protocol compared to the standard protocol was judged inconsequential, given our interest in SRM (i.e., relative) measures. Because of the large number of conditions and to avoid excessively long testing sessions, only two adaptive tracks were performed per condition.

Audio-only SRT protocol

Given that only two adaptive tracks per condition in the SPIN AV protocol might give rise to substantial data variability, an additional, audio-only protocol was developed that would enable five or six SRT measurements per condition, thereby leading to more accurate SRM measures. The audio-only protocol made use of IEEE sentences, following Grange and Culling (2016), but used the same sentence-substitution regime as the SPIN AV protocol. The requirement for triggering the adaptive phase was also relaxed from the recognition of at least two to the recognition of at least one of the five key words. The remaining sentences in the list of ten were presented only once, following the standard protocol adaptive phase. Here too, the overall sound level was maintained at 65 dB(A). This audio-only protocol is hereafter referred to as the "IEEE A protocol."

Testing sessions and condition rotation

A first session of SRT measurements employed the SPIN AV protocol. The five selected configurations were H0M0, H0M180, H30M180, H0M90, and H30M90, where the subscripts denote the head (H) and masker (M) azimuths relative to the target speech. Audio and audio-visual SRTs were measured in separate blocks, each comprising the five spatial configurations. Half of the participants began with an audio-only block, the other half with an audio-visual block, and the sequence of spatial configurations was rotated. The order of the sentence lists remained constant for all participants.
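A simplified sketch of the adaptive-track logic described above (pre-adaptive +4 dB steps, then a ±2 dB staircase with sentence repetition) is given below; the score_fn callback, parameter names and the SRT estimator are assumptions for illustration, not the actual MATLAB test software:

```python
def run_srt_track(sentences, score_fn, start_snr=-4.0,
                  pre_step=4.0, step=2.0, max_presentations=3):
    """Minimal adaptive SRT track (a sketch, not the study's test code).

    score_fn(sentence, snr) -> True when the target word(s) are reported
    correctly at that SNR.  The SRT is taken as the mean SNR of the
    adaptive-phase trials, a common simplification.
    """
    snr = start_snr
    sentences = iter(sentences)

    # Pre-adaptive phase: raise the SNR until the first correct response;
    # after repeated failures, replace the sentence at the same SNR.
    sentence, presentations = next(sentences), 0
    while not score_fn(sentence, snr):
        presentations += 1
        if presentations > max_presentations:
            sentence, presentations = next(sentences), 0   # new sentence, same SNR
        else:
            snr += pre_step

    # Adaptive phase: 1-up/1-down staircase; each sentence may be re-presented
    # at progressively higher SNRs before the next staircase step is taken.
    adaptive_snrs = []
    for sentence in sentences:
        adaptive_snrs.append(snr)
        correct = score_fn(sentence, snr)
        for _ in range(max_presentations - 1):
            if correct:
                break
            snr += step                       # repeat the same sentence, easier SNR
            correct = score_fn(sentence, snr)
        snr += -step if correct else step     # staircase update for the next trial

    return sum(adaptive_snrs) / len(adaptive_snrs)
```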
Two adaptive tracks were performed and the SRTs were subsequently averaged between runs. A second session of SRT measurements in the same five spatial configurations later employed the IEEE A protocol. UCI users were also tested in the H0M270 and H30M270 configurations, so that we could explore the potential benefit of head orientation in the spatial configuration that is most detrimental to unilaterally implanted patients. Indeed, placing the masker on the same side as their CI was predicted to lead to negative SRM if they remained facing the speech. BCI users were also tested in the H0M0, H0M90, and H30M180 configurations with each of their implants disabled in turn, which would later enable computation of summation and squelch in these configurations. For NH listeners and UCI users, these configurations were rotated within a block of five and seven configurations, respectively, and the blocks were repeated six times. For the BCI users, the monaural conditions were run between binaural blocks and rotated within two dedicated blocks (right, then left CI disabled). All conditions were repeated five times.

C. Results

In each (separated) spatial configuration, for each participant and making use of SRTs measured with the IEEE A protocol, (1) speech-facing SRM was computed as the speech-facing SRT (condition H0Ma, a ≠ 0) subtracted from the collocated SRT (condition H0M0), and (2) HOB was computed as the 30° head-orientation SRT (condition H30Ma, a ≠ 0) subtracted from the speech-facing SRT (condition H0Ma, a ≠ 0). Consequently, the sum of speech-facing SRM and HOB is the SRM resulting from the concurrent spatial separation of sound sources and a 30° head orientation. As such, speech-facing SRM and HOB can be displayed as cumulative measures. Figure 2 displays speech-facing SRM (lower panels), HOB (middle panels) and their cumulative effect (upper panels) averaged within each listener group for all three separated spatial configurations. The standard error of the group means did not exceed 1 dB and averaged 0.65, 0.38, 0.55, and 0.63 dB for NHy and NHam listeners and BCI and UCI users, respectively. The isolated directional-microphone case (UCId) had a mean standard error of 1 dB (across five repeat runs). SRM and HOB outcomes are compared below to Jelfs et al. (2011) model predictions computed from binaural room impulse responses acquired in the Cardiff test room. Any concern relating to young NH listeners not having been specifically screened for hearing loss was alleviated by the standard deviation of audio-only SRTs averaged across spatial configurations being as low as 0.6 dB (1.7 dB range).

Speech-facing SRM

At T0M180 and for all groups, speech-facing SRM was large (1.6-2.6 dB) compared to the 0.5-0.7 dB predicted by the model. At T0M90, speech-facing SRM measured 3.1-5.1 dB and compared favorably with predictions for all groups (within 0.4-1.4 dB). Speech-facing SRM was increased by 1.5-10 dB with a directional microphone, depending on masker location. At T0M270, UCI users' speech-facing SRM measured −2.1 dB and was comparable to the prediction (−3.2 dB). Analyses of variance (ANOVAs) operated within each listener group on speech-facing SRTs confirmed a significant effect of masker separation [NHy F(2,18)].

Head-orientation benefit

At T0M180, HOB measured 1.9 to 5.0 dB across groups and was notably smaller than predicted by the model (5.0 to 7.6 dB). At T0M90, HOB measured 1.5 to 3.9 dB and was comparable to the prediction (4.1 dB), except for BCI users.
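Since speech-facing SRM and HOB are simple SRT differences, they can be computed as in the sketch below (the example numbers are hypothetical, not data from the study):

```python
def srm_and_hob(srt, masker_az):
    """Speech-facing SRM and head-orientation benefit from SRTs (dB).

    `srt` maps condition labels such as "H0M0", "H0M180", "H30M180"
    to measured SRTs; lower SRTs mean better intelligibility.
    """
    speech_facing_srm = srt["H0M0"] - srt[f"H0M{masker_az}"]
    hob = srt[f"H0M{masker_az}"] - srt[f"H30M{masker_az}"]
    cumulative_srm = speech_facing_srm + hob   # separation plus 30° head turn
    return speech_facing_srm, hob, cumulative_srm

# Hypothetical example values (not data from the study):
example = {"H0M0": -2.0, "H0M180": -4.5, "H30M180": -8.0}
print(srm_and_hob(example, 180))   # -> (2.5, 3.5, 6.0)
```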
Overall, BCI users obtained notably less HOB than predicted. At T0M270, UCI users' HOB measured 3.6 dB and was comparable to the prediction (4.3 dB). Across listener groups and configurations, the 30° HOB was confirmed significant by an ANOVA that compared SRM between head orientations [F(1,32) = 338.2, p < 0.001]. HOB was confirmed significant within each listener group by separate ANOVAs [NHy F(1,9) = 146.4; NHam F(1,9) = 141.0; BCI F(1,7) = 18.9; UCI F(1,7) = 129.2; p ≤ 0.005 for all groups].

Cumulative effect of masker separation and 30° head orientation on SRM

For NH listeners, adding speech-facing SRM and HOB led to SRM in reasonably good agreement with model predictions at T0M180 (6.4 and 7.6 dB for NHy and NHam listeners, respectively, versus 8.3 dB predicted) and at T0M90 (7.6 and 8.4 dB for NHy and NHam listeners, respectively, versus 10 dB predicted), but older NH adults obtained less SRM than their younger counterparts in both conditions. For UCI users, cumulative SRM was again in good agreement with predictions (1.5, 5.6, and 6.1 dB versus predicted 1.1, 5.5, and 7.6 dB at T0M270, T0M180, and T0M90, respectively). For BCI users, cumulative SRM was lower than predicted (4.8 and 4.6 dB versus 5.5 and 7.6 dB at T0M180 and T0M90, respectively), primarily due to their HOB being lower than that of the other listeners.

FIG. 2. Speech-facing SRM (bottom panels), head-orientation benefit (middle panels) from a beneficial 30° head orientation away from the speech, and SRM resulting from the combination of source separation with a 30° head orientation away from the speech, as measured in each of the three separated spatial configurations [T0M270 (left panels), T0M180 (center panels) and T0M90 (right panels)] and for each listener group [young NH adults (NHy); bilateral and unilateral CI users (BCI and UCI); a single unilateral CI user with a directional microphone enabled (UCId); NH adults age-matched to the CI users (NHam)]. Speech-facing SRM is the benefit of spatial separation of target and masker when the listener faces the target speaker. HOB is the additional benefit of a 30° head orientation with the same spatial separation. Consequently, the sum of speech-facing SRM and HOB is the SRM resulting from concurrent spatial separation and head orientation. Error bars denote the standard error of cross-participant means, except for the unilateral CI user with a directional microphone, where error bars denote the standard error of within-participant means.

The directional microphone case

As can be seen in Fig. 2, speech-facing SRM increased by 10 dB at T0M180 in our directional-microphone UCI user case, compared to the omnidirectional-microphone UCI user group mean. At T0M90, speech-facing SRM was also increased by nearly 1.5 dB. A significant HOB was found in all configurations, although it was reduced a little compared to that of omnidirectional UCI users.

BCI users' summation and squelch

Summation is defined here as the H0M0 SRT improvement found when activating the worse-performing CI in addition to the better-performing CI alone. Squelch is defined as the same benefit, but for spatially separated sound sources. Squelch is traditionally measured in the H0M90 configuration, where only the masker signal is subject to interaural level differences. We measured it also in the H30M180 configuration, where both speech and noise signals differ between the ears. Summation and squelch outcomes, extracted from SRTs acquired with the IEEE A protocol, are plotted in Fig. 3.
An average summation of 2.9 dB (1 dB standard error) was measured, while squelch was 2.0 and 2.6 dB (0.5 and 1 dB standard error) at H0M90 and H30M180, respectively. A within-subject t-test (2-tailed) comparing H0M0 SRTs with both CIs enabled to SRTs with the better CI alone showed the summation effect to be significant [t(7)].

FIG. 3. Measures of summation in the collocated configuration (H0M0_SUM label) and squelch in the separated configurations (H0M90_SQ and H30M180_SQ labels), averaged across bilateral CI users and defined as the benefit of activating the poorer CI in addition to the better CI (the CI that provides the better speech-in-noise intelligibility). Error bars are standard errors of the means.

Lip-reading benefit

In each spatial configuration, for each participant and making use of SRTs measured with the SPIN AV protocol, the lip-reading benefit was computed as the audio-visual SRT subtracted from the audio-only SRT. Figure 4 displays the lip-reading benefit averaged within each listener group for the five configurations common to all groups (H0M0, H0M180, H30M180, H0M90, and H30M90). The benefit of lip-reading typically measured 3 dB for NH listeners and 5 dB for CI users. Across listener groups and spatial configurations, an ANOVA for SRTs in the two presentation modalities confirmed a significant benefit of visual cues [F(1,32) = 368.9, p < 0.001]. An interaction between modality (audio or audio-visual) and listener type indicated that CI users are better lip-readers and/or more dependent on visual cues [F(3,32) = 7.45, p < 0.001]. The lack of interaction between modality and spatial configuration [F(4,128) = 0.56, p = 0.69] indicated that configuration had no impact on lip-reading. Most relevant to our study was that a 30° head turn had no detrimental effect on lip-reading within each group [NHy F(1,9)]. Thus, a sidelong regard, i.e., orienting the gaze to compensate for a modest head orientation away from the target speaker, facilitates a significant benefit of head orientation, additive to that of lip-reading.

III. EXPERIMENT 2

Experiment 1 demonstrated the effectiveness of head orientation in a sound-treated room with a single interfering sound source. It also showed that the benefit of lip-reading is robust to head rotations of at least 30°. In a real listening environment, such as a bar or restaurant, there are likely to be multiple interfering sound sources and there will certainly be reverberation. The second experiment addresses the question of whether the head-orientation benefit still occurs in such an environment. The approach taken was to simulate, as realistically as possible, a restaurant listening situation, using a methodology similar to that of Culling (2016). A virtual simulation of a real restaurant was created, and the effect of head orientation in this virtual environment was measured.

Participants

Sixteen young, self-reported NH adults, aged 18-21 years (mean age 20.2 years), were recruited in the same manner as the NHy participants of experiment 1 and participated in a 90-min session.

Stimuli and methods

The virtual simulated restaurant was created by convolving dry speech (i.e., without reverberation) with binaural room impulse responses. The 475-ms impulse responses were recorded in a Cardiff restaurant (Fig. 5) during its closing hours using the tone-sweep method (Farina, 2007; Müller and Massarani, 2001).
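The convolution-based rendering described above can be sketched as follows; this is a minimal auralization illustration with assumed array shapes and SciPy calls, not the actual stimulus-generation code:

```python
import numpy as np
from scipy.signal import fftconvolve

def render_scene(target_dry, target_brir, interferer_dry, interferer_brirs):
    """Render a binaural mixture by convolving dry signals with BRIRs.

    Each BRIR has shape (n_taps, 2); dry signals are 1-D arrays at the
    same sampling rate.  The output has shape (n_samples, 2).
    """
    def spatialize(dry, brir):
        return np.stack([fftconvolve(dry, brir[:, ch]) for ch in (0, 1)], axis=1)

    mix = spatialize(target_dry, target_brir)
    for dry, brir in zip(interferer_dry, interferer_brirs):
        rendered = spatialize(dry, brir)
        n = min(len(mix), len(rendered))      # trim to the shorter rendering
        mix = mix[:n] + rendered[:n]
    return mix
```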
Ten-second exponential tone sweeps were presented from a Minx-10 loudspeaker (Cambridge Audio, London, United Kingdom) to a B&K-4100 head and torso simulator (Brüel & Kjær, Nærum, Denmark). Source and receiver locations were chosen directly opposite each other at each of 18 tables in the restaurant. Impulse responses were recorded between every combination of source and receiver locations. The head of the B&K simulator was also oriented to each of three positions (−30°, 0°, +30°). Thus, a total of 18 source positions × 18 receiver positions × 3 head orientations = 972 impulse responses were recorded. A subset of 180 impulse responses was needed in this experiment. In the simulations, the listener was seated at one of six tables and adopted each of the three head orientations at each table. Target speech was presented from the seat opposite. Nine interfering voices (five female and four male) with British accents, or nine interfering speech-shaped noises, were distributed in a randomly selected but fixed configuration across the other tables (see Fig. 5).

FIG. 5. Plan view of the Mezzaluna restaurant (Cardiff) where impulse responses were acquired from 18 different listener seats and with 18 talker or interferer (opposite) seats. Black-filled circles highlight the listener positions tested, light-grey-filled circles the noise or female-voice interferers, dark-grey-filled circles the additional noise or male-voice interferers, and open circles the target male talkers facing the listener positions.

SRTs were measured with stimuli presented over headphones, using Harvard IEEE sentences and standard methods (Culling and Mansell, 2013; Plomp and Mimpen, 1979), except that the interfering sources produced continuous speech or noise. Ten sentences were used to obtain an SRT. The interfering speech was taken from book readings posted on librivox.org. The interfering noises were filtered to match the interfering voices in excitation pattern. SRTs were measured for 6 listener positions × 3 head orientations × 2 interferer types = 36 conditions, with 36 lists of ten sentences. Listeners were familiarized with the procedure by two practice runs with a single interfering noise, using spatial configurations different from those used in the experiment. Because of the large number of conditions, each participant received a random sequence of conditions, while the sentences were presented in a fixed order. Figure 6 shows the mean SRTs for each table, head orientation and interferer type (symbols). Also shown are predictions based on the Jelfs et al. (2011) model of speech reception in noise and reverberation (lines). It can be seen that SRTs are highest when the listener directly faces the speech source in the majority of cases.

FIG. 6. SRTs obtained with left (−30°)/front (0°)/right (+30°) head orientations (L/F/R labels on the lower horizontal axis) for each of the listener/talker pairs (at Tables 3, 6, 9, 12, 14, and 18; labels on the upper horizontal axis) and with speech (black-filled circles) or noise (open circles) interferers. Error bars are standard errors of the means. Black lines represent model predictions with their mean equalized to that of the noise-masker conditions.

An analysis of variance for SRT, with factors listener table number, head orientation, and interferer type, confirmed a significant benefit of head orientation [F(2,30) = 23.3, p < 0.001].
From Fig. 6, orienting 30° away from the target source improved speech reception in speech-shaped noise (open symbols) in each listening position, in line with the predictions of the Jelfs et al. model. When interfering speech was used (filled symbols), the picture was a little more mixed, but showed the same average pattern, and the interaction between head orientation and interferer type was not significant. SRTs in speech and noise did not differ significantly. A main effect of table number [F(5,75) = 53.7, p < 0.001] revealed that there were systematic differences between listening positions, with some seats in the restaurant allowing lower SRTs than others. Averaging the mean SRTs for speech and noise, a strong correlation between data and predictions [r(1,17) = 0.88, p < 0.001] confirmed that the model also predicts the variations across tables and head orientations accurately.

IV. DISCUSSION

SRTs measured in a sound-treated environment confirmed the predicted benefit to speech intelligibility in noise of a modest (30°) head orientation away from a talker when a single steady-noise interferer is azimuthally separated from the speech by 180° or 90°. This HOB was significant for normal-hearing listeners (3-5 dB) as well as for UCI users (2.5-5 dB) and BCI users (1.5-2.5 dB). The lip-reading benefit extracted from comparing audio-visual to audio-only outcomes was significant and somewhat larger in CI users (5 dB) than in NH listeners (3 dB). Crucially, lip-reading was not detrimentally affected by a 30° head orientation. The SRT data therefore showed that a significant HOB can be exploited by CI users, in addition to the lip-reading that non-blind hearing-impaired listeners rely on. Data from a UCI user who made use of a directional microphone suggest that a directional microphone does not remove this HOB.

A. Speech-facing SRM and HOB

The speech-facing SRMs for NHy listeners (2.6 dB at T0M180 and 4.4 dB at T0M90) were in reasonable agreement with those obtained by Plomp (1976), 3.0 and 5.4 dB, respectively. The SRM obtained with our CI participants at the typical H0M90 configuration (3-4 dB) falls within the range covered by previous reports and reviews, although BCI users' SRM is at the low end. The head-shadow effect measured from our UCI users (6 dB) also falls within the range covered by previous reports reviewed by Van Hoesel (2011) and is a very good match to that measured by Culling et al. (2012). Summation and squelch results are compared with the results from Litovsky et al. (2006) in the bilateral-CI-users section below.

Addressing the main discrepancy with model predictions

The T0M180 speech-facing SRM was higher across all listener groups than predicted by the model. Since the prediction was based on acoustic measurements of the sound-treated room itself, the result cannot be explained by modest reverberation in that room. When facing the speech, there is a sharp predicted improvement in SRT for any deviation from the correct head orientation. As a result, the measured SRTs should be reduced by any misalignment of the head. In contrast, for other head orientations the predicted SRT changes in different directions with head misalignment, so the SRT measurements are not biased by random misalignments. Misalignment of the head orientation during the SRT runs thus seems the most likely explanation for the high speech-facing SRM at T0M180 (see also Grange and Culling, 2016).
The fact that UCI users (the only listeners predicted not to gain HOB by turning either way; see Fig. 1) obtained by far the lowest T0M180 speech-facing SRM (see Fig. 2) reinforces the above interpretation of the data.

Group differences

The measures of SRM in configurations that facilitate binaural unmasking were lower for CI users than for NH listeners, which is consistent with the assumption made that CI users do not benefit from binaural unmasking. Both CI users and NHy listeners also had a lower HOB than predicted. If, as argued above, the T0M180 speech-facing SRM was inflated by head misalignment, 1-2 dB of the measured T0M180 speech-facing SRM may in fact have been HOB. This misattribution would account for a deflated measure of T0M180 HOB. However, it does not fully account for the reduced HOB in NHam listeners. These older NH adults may have suffered from a loss of binaural unmasking, consistent with recent reports of an age-related decline in the binaural processing of temporal envelope and fine structure (King et al., 2014; Moore et al., 2012; Hopkins and Moore, 2011), that reduced their HOB and their overall SRM. The case of the UCI user who used a directional microphone setting demonstrated how, by suppressing sound waves coming from the rear, the speech-facing SRM at T0M180 was increased by over 10 dB. However, the T0M90 and T0M270 speech-facing SRM values were increased by only 1.5 dB. Thus, if the masker were placed in the frontal hemifield, SRM was hardly affected by the sensitivity pattern of a directional microphone. Just as importantly, a significant 30° HOB remained in all three configurations, so microphone directionality does not remove HOB. This result is also predicted by the model, because the diffracting effects of the head alter the directional microphone sensitivity pattern to favor sounds 30°-40° away from the front. Figure 7 illustrates the effect of the head with the speech-weighted directional response of in situ directional microphones. These predictions were based on measurements of head-related impulse responses from the microphones of Oticon behind-the-ear hearing aids, placed on an acoustic manikin. The directional patterns in Fig. 7 represent only an illustrative example rather than the particular fixed directional pattern that would be produced by the Esprit 3G processor, or the directional pattern that would be produced by the Oticon hearing aid on which it is based. Nonetheless, they capture an asymmetry in the left- and right-ear responses that would be common to any two-port in situ directional microphone, which produces a stronger response to sounds from ±30°-50°. It should be noted that this "distortion" in the directional pattern is probably a desirable feature for bilaterally implanted patients, because it reflects the fact that interaural level differences are preserved.

Bilateral CI users

BCI users stood out in that their measured HOB was less than half of the model predictions. At T0M180 this outcome may again be explained by inaccuracies in head orientation during testing. However, at T0M90, the HOB shortfall clearly requires another explanation, because the overall SRM sits 3 dB lower than predicted. Additional measures of summation (2.9 dB at H0M0) and squelch (2.0 dB at H0M90 and 2.6 dB at H30M180) from BCI users were found to be significantly larger than previously reported in the literature. These correspond to the "diotic" and "binaural" benefits reviewed by Van Hoesel (2011).
Compared to the summation outcomes reported in the Litovsky et al. (2006) multi-center study (the effect they call binaural redundancy), our mean summation seems larger than their 1.5 dB, but their range, −6 to +9 dB, was comparable to ours, −3.5 to +6.5 dB. Given their much larger sample, and standard errors being large (1 dB) in both studies, the difference is probably not significant. Their measure of squelch matched ours, at 2 dB. Consistent with Litovsky et al. (2006), the binaural summation or squelch effect size in BCI users was much smaller than the T0M90 SRM of our BCI users or the T0M90 head-shadow effect of our UCI users. Assuming BCI users do not benefit from binaural unmasking, both summation and squelch are believed here to be due to the information provided by the two CIs differing in spectral content, in a complementary manner such that spectral summation occurs. Our middle-aged or older BCI users are unlikely to have equal nerve survival along their spiral ganglia, and some CI electrodes may be disabled, so as to prevent, for instance, unintended facial nerve excitation. It is therefore plausible that their two CIs deliver information from complementary spectral regions. The model ignores the SNR at the poorer ear, but the poorer ear could still be relevant to speech intelligibility if it contains such complementary spectral information. HOB may have been lower in BCI than in UCI users because BCI users already benefit from spectral summation when facing the speech, and turning away from the speech might reduce the summation effect. Indeed, spectral summation should be maximal when the SNRs at the two ears are similar. Orienting the head so as to bring the better ear closer to the target speech will not only improve the SNR at the better ear, as the model predicts, it will also reduce the SNR at the poorer ear, thereby reducing the benefit of providing the speech information from that ear to the brain. Even if summation occurred only as a result of a reduction of internal noise at a central auditory brain level, the same principle would apply. The fact that, with an additional CI, BCI users' SRM obtained with a 30° head turn is lower than UCI users' in both spatial configurations (by up to 1.5 dB at H30M90) further reinforces the above interpretation of the data. It therefore seems that BCI users' HOB can be reduced by a loss of summation in some spatial configurations.

B. Reliance on lip-reading

A sidelong regard with a head orientation of 30° maintained the benefit of lip-reading at the same level as when directly facing the speaker. A linear regression analysis of lip-reading benefit versus H0M0 audio-only SRTs showed a negative correlation between listeners' proficiency in recognizing speech in noise and the added benefit of visual cues (r = 0.66, t = 4.31, p < 0.001). This correlation is not surprising, since an elevation in listeners' audio-only SRT will increase their reliance on lip-reading and can also motivate individuals to improve their lip-reading skills (e.g., Strelnikov et al., 2009). Every 6 dB of SRT elevation was partially compensated for by a 1 dB improvement in lip-reading benefit. Since talkers differ in the ease with which they can be lip-read, the regression slope of data acquired with a different talker could be significantly different from the slope we found. One might expect that the easier the talker is to lip-read, the higher the slope. Thus, for more familiar talkers, lip-reading might go much further toward compensating for the threshold elevation CI users suffer from.
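The kind of regression analysis reported above can be reproduced along the following lines (the per-participant values below are hypothetical placeholders, not the study's data):

```python
import numpy as np
from scipy.stats import linregress

# Hypothetical per-participant values, for illustration only.
audio_only_srt = np.array([-6.0, -3.0, 0.0, 4.0, 8.0, 12.0])    # dB SNR at H0M0
lipreading_benefit = np.array([2.0, 2.4, 3.1, 3.9, 4.3, 5.2])   # AV minus A-only SRT, dB

fit = linregress(audio_only_srt, lipreading_benefit)
print(f"slope = {fit.slope:.2f} dB benefit per dB SRT elevation, "
      f"r = {fit.rvalue:.2f}, p = {fit.pvalue:.3f}")
# A slope near 1/6 would correspond to roughly 1 dB of extra lip-reading
# benefit per 6 dB of SRT elevation, as reported in the text.
```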
Previous studies also showed that the lip-reading benefit is highly dependent on the ease of lip-reading of the sentence material (MacLeod and Summerfield, 1987). To date, it has not been established whether the stimulus-material and talker contributions to the ease of lip-reading are independent or interact.

C. Realistic listening conditions

Experiment 2 examined HOB in realistic listening conditions, and showed that consistent benefits exist in the presence of multiple interferers and reverberation. One might imagine that the effect of such distributed interference would be to suppress any effects based on head-shadow and better-ear listening, because both ears would receive roughly the same level of noise. Indeed, Hawley et al. (2004) and others showed that if just two or three nearby interfering sources are located in different hemifields, effects attributable to better-ear listening become negligible. However, SNR depends on the levels of both the speech and the noise. While many of the interfering sound sources in a noisy room are in the reverberant field and consequently reach both ears at a similar sound level, the target speech is usually close by, in the direct field, and reaches the nearer ear at a higher sound level. Here, the benefit of "head-shadow" is not a shadowing effect at all, but the amplification of a target wave of near-normal incidence reflecting back on itself after bouncing off the surface of the head. By turning the head, one can place one ear into this amplified part of the target's sound field. This benefit should occur for practically any listening situation and practically any listener, provided the target source is close.

FIG. 7. Sensitivity patterns of in situ directional microphones, generated from a simple broadband delay-and-subtract operation on impulse responses acquired from the two microphones of an Oticon behind-the-ear hearing aid fitted on either side of an acoustic manikin. This figure aims to illustrate that a directional pattern is modified by the head shadow in such a way that the sensitivity maxima sit in the ±30°-50° regions.

The reader might consider the sidelong-regard posture unnatural or more effortful. Informal feedback from all CI users who participated in the study was that they did not perceive this strategy to be an issue for them or for familiar conversation partners; they actually welcomed it. In addition, it is not uncommon for listeners to instinctively use a sidelong regard in noisy situations. This strategy is commonplace in loud industrial settings, for instance. The human oculomotor range is limited to a ±55° eye-in-head lateral angle (Guitton and Volle, 1987). Although maintaining a lateral angle of up to 30° may be more effortful than viewing the speaker's face head-on, we feel that the HOB will outweigh the potential extra effort. This expectation remains to be confirmed.

D. Importance of our findings to the hearing impaired

CI users are known to struggle to understand speech in noisy social settings. Despite all the recent efforts made to restore access to interaural time delays at low frequencies, BCI users exhibit negligible binaural unmasking, and pitch cues are limited by the relatively sparse encoding of sound by CIs. As a result, CI users only benefit from head-shadow and lip-reading effects, binaural unmasking being inaccessible (Churchill et al., 2014; Van Hoesel et al., 2008) and discrimination of voice fundamental frequencies very limited (Carroll and Zeng, 2007; Geurts and Wouters, 2004).
Dip-listening is also much harder for CI users (Nelson et al., 2003). Given the limited cues available to CI users, any guidance about how to optimally combine head-orientation and lip-reading benefits could be highly valuable to them. Such guidance could make the difference between social isolation and active enjoyment of social interactions. While guidance may benefit interactions with a familiar, easier-to-lip-read conversation partner, it is even more critically important for unfamiliar, harder-to-lip-read conversation partners. While the research presented here focuses on CI users, it can equally well serve to help other hearing-impaired listeners, whether partially and/or unilaterally deaf. Since binaural unmasking represents a small part of an NH listener's SRM and hearing-impaired listeners often exhibit a reduction in binaural unmasking, the conclusions drawn from the present studies may transfer to hearing aid users as well as unaided hearing-impaired listeners.

V. CONCLUSION

The presented study has shown that there is a substantial head-orientation benefit available to CI users' speech understanding in noise. In sound-treated rooms, NH listeners obtained a large benefit, which was somewhat reduced by a loss of binaural unmasking in the older NH adults, who were age-matched to our CI user participants. Despite the absence of binaural unmasking in unilateral CI users, their head-orientation benefit matched that of young NH listeners (5 dB) with the masker initially at the rear. The benefit was reduced, but still significant, with the masker initially to the side contralateral to their CI (2.5 dB). Bilateral CI users exhibited the lowest benefit of head orientation, presumably because they already benefitted from substantial spectral summation. A modest 30° head orientation did not affect the lip-reading benefit measured in NH listeners (3 dB) and CI users (5 dB). Head orientation of up to 30° and lip-reading therefore provide cumulative benefits. In normal-hearing listeners, a head-orientation benefit of >1 dB was found to be robust in a realistic listening environment with multiple interfering sound sources (speech-shaped noises or voices) and reverberation. These findings with CI users and NH listeners may extend to other hearing-impaired listeners, so that all listeners can enjoy the benefits of the sidelong regard in noisy environments.
Deployment of small cells and a transport infrastructure concurrently for next-generation mobile access networks

The exponential growth of mobile traffic means that operators must upgrade their mobile networks to provide higher capacity to final users. A promising alternative is to deploy heterogeneous networks (HetNets) that combine macro Base Stations (BSs) and Small Cells (SCs), although this increases the complexity and cost of the transport segment (SCs to Fiber Access Point, FAP). Most of the planning strategies outlined in the literature are aimed at reducing the number of SCs and ignore the impact that the transport segment might have on the total cost of network deployment. In this paper, heuristics are used for the joint planning of radio (i.e., SCs) and transport resources (i.e., point-to-point fiber links). These were compared and examined to determine the advantages and disadvantages of each approach; in some cases, this led to a 50% reduction in total costs, although at the price of a non-scalable network.

Introduction

Some industrial and academic specialists predict that global IP traffic will increase nearly threefold over the next 5 years, and will have increased 127-fold in the period from 2005 to 2021. Overall, IP traffic will grow at a Compound Annual Growth Rate (CAGR) of 24 percent from 2016 to 2021 [1][2]. Allied to this growth, other new types of internet connection are emerging that have led to new networks like Vehicular Networks (VN) [3] and Wireless Sensor Networks (WSN) [4][5][6][7][8]; it is predicted that these will be combined with others, forming a new paradigm called the Internet of Things (IoT) [9]. These new technologies demand even more capacity, not only in terms of throughput but also latency, and this will require a considerable investment in the mobile network infrastructure. The Fifth Generation (5G) of mobile networks encompasses all the requirements of IoT and has the capacity to interconnect all existing and emerging technologies.

Related works

The deployment of HetNets comprising Small Cells with a fiber-based transport system is expected to be a very attractive means of providing coverage and capacity in densely populated areas. A fiber-based backhaul solution offers the high capacity needed to meet this requirement, but it is costly [2] and time-consuming to deploy when not readily available. Hence, when deploying the infrastructure of next-generation cellular systems, backhaul links should be included in combination with SCs to reduce network costs and optimize performance. Radio network planning (RNP) has been studied extensively in the literature because of its importance. For example, Guo et al. [34] established a theoretical framework to maximize the spectral efficiency of the network and avoid interference caused by SC deployment. Cheng et al. [35] and Shimodaira et al. [36] adopted the throughput of a system as a performance metric to find optimal locations for placing small static cells. Coletti et al. [37][38] devised outage deployment mechanisms in realistic metropolitan scenarios. In [39], a promising strategy was employed to offload a significant amount of data from a macro BS through an SC placement service. This approach was adopted as a means of dimensioning Long Term Evolution (LTE) cellular networks so that the number of BSs required to cover an area of interest could be determined. It had to take into account factors such as user density, service subscriptions, resource allocation, and interference mitigation.
In [40], the approach was extended to the use of simulated annealing for HetNets. In [41], a greedy micro-BS deployment strategy was employed over the existing macro cellular network with the aim of maximizing the energy efficiency of the network while meeting the growing demand for capacity. In [42], Xu et al. proposed a Q-learning-based network selection algorithm for a heterogeneous wireless network scenario and found a solution that achieves good performance in terms of blocking probability. In [43], Helou et al. recommended a network-assisted approach for radio (BS) selection, with the aim of improving network performance and user experience. Network-centric and user-centric strategies are set out in [44], where the authors examine the resource allocation problem by determining the number of resources that must be assigned to the users by each BS. Both strategies involve conducting an analysis of a multihoming approach. Although these previous works have made significant contributions to the deployment of SCs, in the authors' opinion this challenge has not been fully investigated. For this reason, this study supplements previous research by offering new strategies (considering multiple features such as interference, Signal-to-Interference-Plus-Noise Ratio, coverage and minimum QoS) and comparing them with others in the literature.

Basic features of SCs and transport deployment strategies

SC deployment traditionally calculates coverage on the basis of traffic density. This traffic is difficult to characterize, especially in view of its dynamic nature and the shifting trends in usage patterns and social mobility. Nonetheless, according to [45], a great deal of traffic information can be inferred and forecasts made on the basis of the following: i) demographics: the distribution of the residential and business community on the basis of demographic data; ii) the traffic system: vehicular data based on public transport and the movement patterns of private vehicles; iii) fixed-line data plans: based on a correlation with fixed-line phone call records (most mobile data traffic occurs indoors). Machine learning algorithms are described as learning a target function that best maps input variables to an output variable, and they are applied in several areas such as routing protocols for different types of users and networks [46][47][48][49][50], TCP/IP protocol optimization [51][52], energy efficiency [53], and data classification [54], among others. In the case of SC deployment, when there is a set of candidate cell-site locations, iterative techniques are usually used to search for the optimal locations. Optimization methods such as integer programming, simulated annealing, and genetic programming algorithms can be employed to search for optimal solutions. The mobile SC deployment problem is NP-hard, and this can be proved by a reduction from the SC facility location problem. The proof is not included here owing to a lack of space, but it was given in [55], where the authors formulated an optimized SC deployment problem with the aim of maximizing the service time provided by small mobile cells for all users, while taking account of a finite number of small mobile cells and inter-cell/cross-cell interference. Moreover, in the same study, it was proved that this is an NP-hard problem. The paper in [56] studied SC deployment in existing HetNets and stated that this is an NP-hard problem too.
A good solution was found in [33], where a commercially available CPLEX linear programming solver was used to establish an optimization framework. However, as pointed out in that paper, the computation time was very significant and depended on the scale of the dataset. If there is a need to plan an SC network for a large region, a heuristic approach may be required to achieve a satisfactory result within a reasonable period. The authors in [45] state that there is a temptation to deploy SCs without articulated radio planning and to rely on signal processing techniques to improve the performance. The danger of adopting this approach is that it is hampered by a lack of effective interference mitigation techniques and also involves a huge increase in the network deployment costs. For these reasons, this paper both uses and compares heuristics for SC and backhaul deployment, by including real-world factors such as the following: the existence of sparsely located fiber resources, interference, costs, coverage and QoS.

Network Parameters

The downlink Signal-to-Interference-Plus-Noise Ratio (SINR) over a given subcarrier n assigned to user k can be expressed as

SINR_k = P_{k,b(k)} / (σ² + I_k),   (1)

in which P_{k,b(k)} is the received power on subcarrier n assigned to user k by its serving BS b(k); σ² is the thermal noise power; and I_k is the inter-cell interference from neighboring SCs. It was assumed that all the SCs transmit with the maximum power P_S. The received power at user k from b(k) can be calculated by means of (2), which relates the received power at a node to the transmitted power and the fading of the signal calculated by the Stanford University Interim (SUI) path-loss model [57]:

P_{k,b(k)} [dB] = P_S [dB] − L_SUI [dB].   (2)

The value of L_SUI is calculated by the three equations of the SUI model given in [57], in which:

• d = distance from the antenna to the measured point, in meters;
• h_b = base station height, which can be between 10 and 80 meters;
• a, b and c = constants dependent on the terrain category, which can be seen in Table 1;
• S = shadowing effect, which can be between 8.2 and 10.6 dB.

By correctly assigning the input parameters, it is possible to simulate urban and suburban environments with shadowing [57]. Each user achieves Shannon's capacity limit [59], i.e., the data rate for user k is expressed in (6) as

R_k = B log2(1 + SINR_k),   (6)

in which B is the bandwidth.

Proposed heuristics solution procedure

The heuristics were divided into two groups with different spatial perspectives: one with pre-defined locations for the SCs and the other based on the users' locations. Two techniques were employed for the first group and one for the second. These are outlined below, together with their peculiarities, as well as their benefits and drawbacks, which will be shown in the results section.

A. Heuristics based on pre-defined SC locations

Consider a geographical area A in which a number of SCs must be deployed. The candidate location model is given by St = S ∪ Sf, where S contains the possible places to install an SC without a fiber-optic connection and Sf the locations with fiber backhaul connectivity. Note that these Fiber Access Points (FAPs) are also potential locations for SC deployment, since the SCs must be connected to the core network through some kind of backhaul solution. In the interests of simplicity, each Sf element is termed a node.
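A minimal sketch of this link budget is given below, assuming the standard SUI path-loss formulation of [57]; the reference distance d0 = 100 m, the carrier frequency, the terrain constants and the example values are illustrative assumptions, not parameters taken from this paper:

```python
import numpy as np

def sui_path_loss_db(d_m, h_b=30.0, a=4.6, b=0.0075, c=12.6,
                     freq_hz=3.5e9, d0=100.0, shadowing_db=9.0):
    """Standard SUI median path loss plus a fixed shadowing margin (dB).

    a, b, c are terrain-dependent constants (terrain-category-A values
    shown here as an example); h_b is the base-station height in meters.
    """
    wavelength = 3e8 / freq_hz
    A = 20 * np.log10(4 * np.pi * d0 / wavelength)   # free-space loss at d0
    gamma = a - b * h_b + c / h_b                    # path-loss exponent
    return A + 10 * gamma * np.log10(np.maximum(d_m, d0) / d0) + shadowing_db

def downlink_rate_bps(tx_power_dbm, d_m, interference_mw, noise_mw=1e-10,
                      bandwidth_hz=180e3):
    """SINR and Shannon rate on one subcarrier, in the spirit of Eqs. (1)-(6)."""
    rx_dbm = tx_power_dbm - sui_path_loss_db(d_m)    # Eq. (2): P_S - L_SUI
    rx_mw = 10 ** (rx_dbm / 10)
    sinr = rx_mw / (noise_mw + interference_mw)      # Eq. (1)
    return bandwidth_hz * np.log2(1 + sinr)          # Eq. (6)

# Example: a user 250 m from an SC transmitting at 30 dBm, light interference.
print(downlink_rate_bps(30.0, 250.0, interference_mw=1e-9))
```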
The deployment variable x_i can be defined as follows: x_i = 1 if the i-th node is selected to place an SC, and x_i = 0 otherwise. Additionally, since the 5G access schemes have not yet been defined, Orthogonal Frequency Division Multiple Access (OFDMA) was used as an alternative. OFDMA is based on the Orthogonal Frequency Division Multiplexing (OFDM) technique and thus inherits its immunity to Inter-Symbol Interference (ISI) in a frequency-selective fading channel, and it offers good flexibility and performance at reasonable complexity [60]. The users of the same cell are multiplexed in frequency, and the data of each user are transmitted on a subset of the sub-carriers of an OFDM symbol. Adaptive resource allocation and link adaptation techniques are essential to achieve the challenging spectral-efficiency and user-throughput targets. In OFDMA systems, resource allocation techniques can make use of the time and frequency variations of the system to optimize the use of the available resources. They exploit the available Channel State Information (CSI) at the transmitter side so that they can carry out power allocation and share the subcarriers among the users [60]. As stated in [32], it was assumed that N subcarriers were available for downlink transmission and that there was a predefined user distribution in an area of interest. A simple model was employed for the non-heterogeneous distribution of users, by randomly distributing them over the whole map (divided into quadrants). Four dense areas were created to characterize office spaces (this distribution will be illustrated in the results section). The objective of the heuristics is to find the minimum number of SCs that can still ensure coverage and meet the capacity requirements of all the users, while at the same time reducing the total cost of deployment, including that of both the wireless and the wired infrastructure. It can be assumed that each user can only be served by exactly one cell and, thus, user demand is indivisible. In formal terms, the problem can be formulated as a cost minimization over the deployment variables, where:

• x_i = binary variable that takes the value 1 if SCi is deployed;
• C_a = fixed cost of an SC deployment;
• C_i = total cost of SCi;
• C_Trn = trenching cost per unit of length;
• C_Fib = fiber cost per unit of length;
• p_i = position where SCi is deployed;
• q_i = position of the point of access (FAP) for SCi;
• d_{p,q} = distance between points p and q based on taxicab geometry;
• Users_i = number of users assigned to SCi;
• Z = binary variable that takes the value 1 if the transport costs are included in the optimization process.

In addition to the objective function, some constraints need to be noted. In Eq. 10, A_{k,i} = 1 means that user k is assigned to SCi, so this restriction guarantees that a minimum percentage (value of X) of users is covered. Eq. 11 ensures that the Shannon capacity received by user k from SCi has a minimum value, and Eq. 12 certifies that the number of PRBs delivered by SCi is not greater than the total number of available PRBs. The Type 1 (T1) heuristic is based only on the SC cost (Z = 0) and the Type 2 (T2) heuristic on the total deployment cost (SC and transport). In both cases, all the candidate SCs are tested one at a time; each SC is tentatively removed to find out whether it is dispensable or indispensable. The proposed heuristics, which aim to select the SCs necessary to provide a minimum coverage (coverage_min) and QoS (QoS_min) to all users (UEs), are presented below.
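A simplified sketch of the greedy elimination idea behind T1 and T2 (Algorithms 1 and 2) is shown below; assign_users, meets_requirements and cost_fn are assumed helper callbacks standing in for the assignment, interference, coverage and Shannon-capacity checks described above, not the authors' exact pseudocode:

```python
def greedy_sc_selection(candidates, users, coverage_min, qos_min,
                        cost_fn, assign_users, meets_requirements):
    """Greedy elimination behind T1/T2: test candidate SCs one at a time,
    most expensive first, and drop each SC that proves dispensable, i.e.,
    whose removal still satisfies coverage_min and qos_min.

    cost_fn(sc) returns the SC cost alone (T1, Z = 0) or the SC plus
    fiber/trenching cost to its FAP (T2, Z = 1).
    """
    selected = list(candidates)
    for sc in sorted(candidates, key=cost_fn, reverse=True):
        trial = [s for s in selected if s is not sc]
        assignment = assign_users(trial, users)   # nearest-SC assignment, even PRB split
        if meets_requirements(assignment, coverage_min, qos_min):
            selected = trial                       # the SC was dispensable
    return selected
```

The surviving SCs would then be interconnected with a minimum-spanning-tree routine (Prim's algorithm in the paper) to obtain the fiber and trenching costs.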
The initial part of Algorithm 1 describes the process of allocating UEs to the closest SCs (where St is the list of candidate SCs). After this phase, the deployment cost of each i ∈ St is calculated using Eqs. 8, 9 and 10 (Z is used as a binary variable to determine whether the cost of transport is considered in the cost function), and St is then sorted by cost in descending order; St will serve as input to Algorithm 2. At the end of this phase, the interference between SCs and UEs is calculated, and the maximum capacity of each k ∈ UE is computed, considering the resources available in each SC (i.e., PRBs), which are divided evenly (regardless of the channel quality of the UE). In Algorithm 2, in addition to the list St, a minimum percentage of coverage (coverage_min) and a minimum throughput (QoS_min) are given as input. In this algorithm, every i ∈ St is tested to determine whether that SC is indispensable: the test is done by taking the SC out and checking whether the minimum requirements for coverage and QoS are still met. If they are, the SC is removed; otherwise, SC i is kept for deployment. In the last phase, Algorithm 1 is called again to recalculate the UE assignments and Shannon capacities. After all tests, the remaining SCs are deployed and interconnected by means of Prim's algorithm [61], a greedy algorithm that finds a minimum spanning tree of a weighted undirected graph. This is a standard algorithm widely used in the literature for comparison purposes, although our framework is flexible enough to use any other algorithm.

B. Heuristic based on the users' location

Unlike the previous heuristics, the Type 3 (T3) heuristic does not include an initial set of SCs to be tested; instead, the users' positions are used so that semi-optimal locations to deploy SCs can be found. A K-means clustering algorithm was used to find these locations [62]. In the first stage, it is necessary to know the number of centroids (in this case, the number of SCs) that will be used. Before carrying out this task, it is essential to find out how many users each SC can serve. This number is obtained from the maximum capacity of an SC (in Physical Resource Blocks, PRBs) divided by PRBs_perUser (the number of PRBs, on average, that each user needs to meet the QoS requirements). The number of PRBs available at each SC is given by the Total_PRBsPerSC variable. The tasks of counting the number of users, determining the number of SCs and calculating the positions of the SCs (centroids) are carried out locally, in each quadrant. The algorithm checks all the users (one at a time) so that it can allocate them to a base station, a task that can only be undertaken if the SC has the number of PRBs required to meet the users' predetermined requirements. Line 12 checks whether the created SCs satisfy the minimum coverage and QoS of all users. If they do, Prim's algorithm is called to calculate the trenching costs, and with this information the total deployment cost of all SCs can be obtained. If they do not, Algorithm 3 is called recursively, an input value (PRBs_perUser) is increased, and a new SC deployment scheme with more SCs is formed. In this way, the capacity of the network is increased. This process is repeated until the values of QoS_min and coverage_min are satisfied.

Results

An example is given of an application of the heuristics in a typical urban area of Stockholm (Sweden).
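The first stage of T3 (K-means placement of SCs at user centroids) can be sketched as follows; the single-region treatment, parameter values and usage example are illustrative assumptions (the paper applies the procedure per quadrant and re-runs it with a larger PRBs_perUser when coverage or QoS is not met):

```python
import math
import numpy as np
from sklearn.cluster import KMeans

def t3_placement(user_xy, total_prbs_per_sc, prbs_per_user):
    """Place SCs at the K-means centroids of the user positions (the T3 idea).

    The number of SCs is the number of users divided by how many users one
    SC can serve (Total_PRBsPerSC / PRBs_perUser), rounded up.
    """
    users_per_sc = total_prbs_per_sc // prbs_per_user
    n_sc = max(1, math.ceil(len(user_xy) / users_per_sc))
    km = KMeans(n_clusters=n_sc, n_init=10, random_state=0).fit(user_xy)
    return km.cluster_centers_, km.labels_     # SC positions, user-to-SC assignment

# Hypothetical usage: 400 users uniformly spread over a 5 km x 5 km map.
rng = np.random.default_rng(1)
centers, assignment = t3_placement(rng.uniform(0, 5000, size=(400, 2)),
                                   total_prbs_per_sc=100, prbs_per_user=4)
print(len(centers), "SCs placed")
```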
Results

An example is given of an application of the heuristics in a typical urban area in Stockholm (Sweden). The scenario under examination was modeled as a Manhattan street grid, and no macro layer was included. As discussed in the earlier sections, areas with dense SC deployments were added to our simulated network to illustrate some of the key operations of 5G networks. Fig 1 shows an example where a number of users are spread over the area. The top left and bottom right districts represent sparse residential areas (i.e., low concentration of users), while the top right and bottom left districts are business areas (i.e., with a high concentration of features such as shopping malls, football stadiums, and locations with a high level of indoor traffic). The general simulation parameters are shown in Table 2 and Figs 1 and 2.

Road intersections were chosen as the candidate locations for SC deployments (for the T1 and T2 heuristics). 169 possible nodes were selected for SC deployments in this scenario, shown as the X dots in Fig 1. The FAPs were positioned in different places. The purpose of applying the heuristics (T1 and T2) is to identify the ideal locations for SCs from the network of roads, and to find the optimal routes for fiber planning that branch out from the existing fiber access points to these selected SC locations. Fig 1 also shows the quadrant divisions that were used in the T3 heuristic. Each quadrant is a block with a side 1.25 km long. Four of the sixteen quadrants have a large crowd of users, each accounting for 12.5% of the total number; the other 50% of the users are randomly distributed over the map, which means that these crowded areas may contain more than 12.5% of the users. These quadrants are the same as those used in the T3 heuristic to find the positions of the SCs. The results were obtained after thirty iterations, in each of which the positions of the users and FAPs were different. Another key factor is "user densification", which allowed scenarios to be created with four different numbers of users: 400, 600, 800 and 1000. The approaches were compared by means of the following performance measurements: the number of deployed SCs, total cost of deployment, resource distribution (Jain's Fairness Index) and scalability. The measurement used to calculate the resource distribution was the Shannon capacity formula described in Eq 6.

A. Number of SCs deployed

The results given in this section relate to the number of SCs that each heuristic needed to provide coverage and meet the QoS requirements of all the users. Fig 3 shows the (average) number of SCs deployed. The T1 heuristic needed roughly between 68 SCs (400 users) and 84 SCs (1000 users) to properly serve the network, which corresponds to a reduction of approximately 60% of SCs (400 users) and 50% (1000 users). This approach is based on concentrating users in the same SC. The T2 heuristic had a lower performance, and only achieved a reduction of 58% (400 users) and 11% (1000 users). This can be attributed to the choice of SCs, as well as the positioning of the FAPs and, hence, the trenching required to meet these backhaul requirements, which is traditionally the most expensive item, even if the number of SCs is higher. The T3 heuristic achieved better results, and between 20 and 32 SCs were needed to serve all the users. Freedom from street-network orientations and pre-defined positions dramatically improved the results. It should be noted that this approach depends a great deal on free areas, which limits its scope.
B. Total cost of deployment

Turning to the total cost of deployment (Fig 4), it is clear that the T2 heuristic performs better in scenarios with low densification, where similar numbers of SCs need to be installed. In the other cases (600, 800 and 1000 users), the difference in the number of points to be covered raises the total cost of SCs, fiber (per km) and, hence, trenching (per km), which makes the T1 heuristic more efficient. These results demonstrate that searching for optimized scenarios using RNP and transport at the same time may not lead to satisfactory results. As expected, the T3 heuristic incurred the lowest costs. Compared with T1, there is a reduction of approximately 50% in the total cost, rising to 55% in dense scenarios. More significant results were found when a comparison was made between T3 and T2: a reduction of 50% with 400 users, increasing in later scenarios and reaching 76% with 1000 users.

In the deployment of large networks, it is essential to know which resource will need more investment. With regard to this, Fig 5 shows the percentage of each item in terms of the total cost. As stated in [29], the trenching cost represents a significant proportion of the total cost. On average (over all densification scenarios), the total cost of the T1 heuristic consists of the trenching cost (69.6%), fiber cost (17.8%) and SC deployment cost (12.6%); in T2, 67.6% is made up of trenching, 17% of fiber and 15.4% of SCs; finally, in T3, the trenching cost is responsible for 71.4% of the total cost, with the rest divided between fiber (18.2%) and SCs (10.4%). Although these values are very close in percentage terms (e.g., in a scenario of 1000 users, the trenching cost is responsible for 69.6% of the total cost in T1 and 71.4% in T3), the absolute values differ considerably, since T3 deploys far fewer SCs and therefore requires much less trenching and fiber.

C. Resource distribution and scalability

The main purpose of the algorithms is to serve the users and meet the QoS requirements. Jain's Fairness Index is a coefficient that is used to determine whether users are being allocated a fair share of the resources (in this case, to evaluate whether the resources meet the minimum QoS and are well distributed among the users). In LTE mobile networks, the resources are represented by the PRBs and are affected by distance and signal path loss conditions [63]. Each user may need a few or even dozens of PRBs to achieve the established minimum capacity. In view of this, Jain's Fairness Index was calculated by means of the Shannon capacity of each user. Table 3 shows the results obtained from Jain's Fairness Index. The T3 approach obtained the best results and a minimal variation across the user densifications. This means that the algorithm was able to maintain the quality of this metric. On the other hand, the T1 and T2 techniques had wider variations, although they were not so significant. It should be noted that, in numerical terms, the T3 heuristic distributed the network resources approximately 2.5 times better than the other two. The fact that the distribution of the resources in T3 was better than in T1 and T2 can be attributed to the optimized allocation of the SCs, which allows a better distribution of the users. However, if there are significant changes in the number of users (the tidal effect), the T3 approach has already reached its limit, while the other approaches have resources that can be redistributed. The T2 approach in particular obtained a larger number of installed BSs and, thus, was able to have more PRBs for allocation. This factor is illustrated by Table 4 and supported by the previous results, where there was a considerable growth in the number of SCs deployed. As a result, there was a large number of PRBs (especially in crowded areas), and this created the most scalable network.
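As a reference for how this fairness metric is obtained, the fragment below computes Jain's Fairness Index over per-user Shannon capacities. The Shannon expression is the textbook B·log2(1 + SINR) form, which we assume matches the paper's Eq 6; the SINR values and the 180 kHz PRB bandwidth are illustrative figures, not parameters taken from the paper.

```python
import math

def shannon_capacity(sinr_linear, bandwidth_hz=180e3):
    """Per-user Shannon capacity (bit/s) over its allocated bandwidth."""
    return bandwidth_hz * math.log2(1.0 + sinr_linear)

def jain_fairness(values):
    """Jain's Fairness Index: (sum x)^2 / (n * sum x^2); 1.0 means perfectly fair."""
    n = len(values)
    return sum(values) ** 2 / (n * sum(v * v for v in values))

# Example: capacities of five users with different (hypothetical) SINRs.
sinrs = [0.5, 1.0, 2.0, 4.0, 8.0]
capacities = [shannon_capacity(s) for s in sinrs]
print(round(jain_fairness(capacities), 3))
```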
Conclusion

SCs provide an increase in coverage and capacity by offloading macrocells and are thus able to alleviate the congestion of the mobile network, which was not initially designed for data traffic. The optimal allocation of SCs is still an open problem and becomes even more complex when factors such as the transport network and minimum QoS are taken into account. By adopting a heuristic approach for their allocation, this study has sought to provide users with minimum levels of QoS while reducing the total cost of SC and transport deployment. The results showed a significant reduction in the total cost, which to a great extent relied on the user distribution and the position of the FAPs. The benefits resulting from this study are threefold: (a) the proposal of a new heuristic for SC deployment based on a clustering approach; (b) unlike other studies in the literature, the heuristic was formed by including cross-layer factors (throughput and SINR); (c) it allows a comparison to be made between different approaches based on SC and transport deployment heuristics for next-generation mobile networks.

It should be stressed that this work does not consider the concept of complex networks. This concept has attracted a great deal of attention among researchers and involves many important phenomena that cannot be encompassed by isolated networks. Since real-world complex systems are becoming increasingly dependent on each other, the study of interdependent networks has become a key issue in network science. The investigation into complex real-world systems (such as the relation between mobile access networks and power systems, for example) has shown that interdependent networks can lead to new discoveries that cannot be explained by a single-network framework [60]. Problems like vulnerability to attacks, the dynamics and synchronization behavior of scale-free onion-like networks, or the structural controllability of a network within complex networks are not addressed here. However, in future work, we intend to further study the optimization problem that arises from deploying radio networks while taking account of these factors. Finally, with regard to the limitations of this study, the following areas should be mentioned as subjects of future work: the changing position of users over a given period of time, the use of different radii for SCs, and an evaluation of the traffic requirements of different applications.
The development of an immunoassay to measure immunoglobulin A in Asian elephant feces, saliva, urine and serum as a potential biomarker of well-being

Identifying biomarkers of well-being will be beneficial to our understanding of animal welfare. We developed and validated an assay for measuring Asian elephant immunoglobulin A (IgA) in feces, saliva, urine and serum as a potential welfare measure, and show that longitudinal assessments are necessary because of high intra-individual variability.

Introduction

Modern zoos have a responsibility to maintain animals under the highest standards of care, the key to which is understanding species biology and natural history to ensure captive environments meet both physical and psychological needs. In recent years, a scientific approach to studying zoo elephant welfare has led to great strides in improving the care and management of African (Loxodonta africana) and Asian (Elephas maximus) elephants. Indeed, a recent epidemiological study in North America revealed a number of variables correlated with positive welfare outcomes (Carlstead et al., 2013; Brown et al., 2016; Miller et al., 2016; Morfeld et al., 2016; Prado-Oviedo et al., 2016; Greco et al., 2016a, 2016b; Holdgate et al., 2016a, 2016b; Meehan et al., 2016a, 2016b). Although several important factors were identified, these were primarily population-level results, making it difficult to assess individual well-being. Despite improved understanding of elephant physiology over the last three decades, significant health (Fowler and Mikota, 2006) and reproductive (Brown, 2014) issues remain, so additional measures to assess physiological state would be beneficial to species management. Traditional welfare assessment methods have focused primarily on negative states, such as the occurrence of abnormal behaviors, poor health and survival, the lack of reproductive function, or elevated stress hormones (glucocorticoids, GC) (Broom, 1991). Glucocorticoid measures can be a useful marker of physiological state, especially when assessed noninvasively (Schwarzenberger, 2007), but need to be interpreted correctly. Increases in concentrations are associated with acute (Scheiber et al., 2005; Viljoen et al., 2008; Voellmy et al., 2014) and chronic (Gobush et al., 2008; Blickley et al., 2012; Parry-Jones et al., 2016) stress, but can also occur in animals coping appropriately with day-to-day challenges, including positive stimuli such as pleasure, excitement and arousal (Ralph and Tilbrook, 2016). They may also reflect normal physiological states, e.g. during the estrous cycle (Fanson et al., 2014) and pregnancy (Kersey et al., 2011; Marciniak et al., 2011). Indeed, individuals may be more or less responsive to potential challenges due to different coping styles (Curley et al., 2008; Koolhaas, 2008). This normal variation must be taken into account when using GCs as a welfare measure, necessitating longitudinal analyses to reliably understand biological relevance. Although these measures are still of great importance, attention has turned more recently to finding additional markers of well-being, including those that indicate positive affect (Yeates and Main, 2008). Incorporating measures of both positive and negative states allows an evaluation of welfare as a continuum, assessing factors that are good for an individual, as opposed to just not being bad.
Biomarkers of immune function have previously been used to assess welfare, because stress can have immunosuppressive effects (Siegel, 1987). For example, cell-mediated and humoral immune responses were influenced by housing condition and stocking density of ewes (Caroprese et al., 2008), and alterations in biomarkers of the innate immune response and acute phase reaction were associated with potentially stressful changes in housing of pigs (Marco-Ramell et al., 2016). Another potential biomarker of well-being is immunoglobulin A (IgA) (Staley et al., 2018), an antibody that plays an important role in the immune defense against pathogens. There are typically two forms of IgA, which differ both in structure and in function (Kerr, 1990). Secretory IgA exists as a dimer that also contains a J-chain and a secretory component to protect against proteases. This form is produced at mucosal linings, and is present in saliva, tears, bile, milk and mucosal secretions of the reproductive, respiratory and gastrointestinal systems (Pihl and Hau, 2003), where it acts as the first defense against pathogens including viruses and bacteria. Monomeric IgA is found in serum, produced by plasma cells in the bone marrow, and acts as a secondary line of defense to eliminate pathogens that breach the mucosal surface (Woof and Kerr, 2004). Due to the abundance of IgA-secreting cells in normal mucosa, IgA comprises at least 70% of immunoglobulins produced in mammals (Macpherson et al., 2008). In addition to being an indicator of immune function, IgA has been shown to decrease during times of stress. Physical stressors such as intensive exercise (Gleeson et al., 1995; Skandakumar et al., 1995), psychological challenges (Deinzer and Schuller, 1998; Ng et al., 1999), metabolic demand (Royo et al., 2005), and relocation to a new environment (Bundgaard et al., 2012) have all been associated with decreased IgA. Interestingly, however, IgA has also been shown to increase in response to positive stimuli, such as relaxation and positive emotional states (Green et al., 1988), and so has been suggested to be a potential marker of positive well-being (Yeates and Main, 2008). A further advantage of the use of IgA is that it can be measured in multiple biological samples, including serum (Maes et al., 1997; Mishra et al., 2011; Moazzam et al., 2013), saliva (Kikkawa et al., 2003; Lucas et al., 2007; Kvietkauskaite et al., 2014), urine (Eriksson et al., 2004; Rehbinder and Hau, 2006; Paramastri et al., 2007), and feces (Rehbinder and Hau, 2006; Paramastri et al., 2007). Although IgA has been measured in a variety of species, including cats, dogs, humans, pigs, primates, reindeer and rodents, studies are often limited to either a single sample type or a limited number of samples over time. Immunoglobulin A production and secretion is tightly controlled at the local level, and influenced by physiological signals including those associated with immune and stress responses (see Staley et al. (2018) for a review), meaning concentrations may be variable both between sample types and over time. Past research has highlighted inconsistencies in the IgA response to acute stressors (Staley et al., 2018), perhaps because of differences in the prior state of the individual or in the type of response required to deal with the stressor involved. Furthermore, acute stressors may be associated with increases in IgA, whereas chronic stress may be associated with decreases (Staley et al., 2018).
For IgA to be a useful physiological biomarker of animal well-being, it is imperative to determine the degree of within- and between-individual variability, and to understand how acute or chronic challenges may impact IgA concentrations. The goal of this study was to develop an enzyme immunoassay (EIA) to measure IgA in multiple biological sample types, specifically feces, saliva, urine and serum, in Asian elephants. We then set out to compare concentrations of IgA and GCs concurrently to investigate relationships between these two biomarkers across multiple sample types and over time, as a first step to determining whether IgA can be a useful marker to assess well-being in this species.

Animals and sample collection

Samples were collected over a 6-month period from four female Asian elephants at the Smithsonian's National Zoological Park, designated A-D, which were 69, 42, 42 and 27 years of age, respectively. This research was approved by the Animal Care and Use Committee of the Smithsonian National Zoological Park and Conservation Biology Institute (NZP-ACUC #15-03). Blood was collected from an ear vein as part of the weekly management routine, allowed to clot at room temperature (RT), centrifuged, and the serum harvested. Saliva was collected on the same day as serum, using a Cortisol-Salivette® system (Sarstedt Inc., Newton, NC). Urine was collected opportunistically, generally free-catch, and typically on the same day as serum and saliva, or within 1-2 days. Feces were collected the day following serum and saliva collection, to allow for an estimated gut transit excretion rate of 24 h in this species (Fuller et al., 2011; Edwards et al., 2015). In addition, fecal samples collected surrounding a significant health event were analyzed in a fifth elephant (E; 39 years of age). All samples were frozen at −20°C until analysis.

Fecal extraction

For analysis of fecal IgA, feces were dried in a lyophilizer, sifted to remove fibrous material, and 0.1000 g (±0.0010 g) was weighed and added to 3 ml of phosphate buffered saline with Tween (PBS-T; 0.01 M phosphate buffer, 0.50 M NaCl, 0.1% Tween 20®, pH 7.2). Samples were vortexed thoroughly to ensure free mixing of the fecal powder, and agitated overnight on a multi-tube pulse vortexer (Glas-Col, Terre Haute, IN). Samples were then vortexed briefly and centrifuged at 1800 × g for 20 min at 4°C to pellet fibrous material. The supernatant was decanted into a clean tube and centrifuged again at 3500 × g for 10 min at 4°C to pellet the particulate. From this, 2.0 ml of supernatant was removed, evaporated to dryness under air, re-suspended in 0.5 ml ultra-purified water, and stored at −20°C until analysis. For analysis of fecal GC, fecal samples were processed using a dry-weight shaking extraction technique adapted from Scarlata et al. (2011). In brief, 0.1000 g (±0.0010 g) of lyophilized fecal powder was added to 5.0 ml of 80% methanol. Samples were vortexed and agitated on a multi-tube pulse vortexer for 30 min, before being centrifuged at 1500 × g for 20 min. Supernatants were decanted before a further 5.0 ml of 80% methanol was added to the original tubes containing the fecal pellets, vortexed, and centrifuged again (1500 × g for 15 min). Combined supernatants were evaporated to dryness before being re-suspended in 1.0 ml 100% methanol. Extracts were dried again before final resuspension in 1 ml phosphate buffer (0.039 M NaH2PO4, 0.061 M Na2HPO4, 0.15 M NaCl; pH 7.0), and stored frozen at −20°C until analysis.
The average extraction efficiency of this process was 86.3% (range 77.9-99.8%), based on the addition of 3H-corticosterone to each sample prior to extraction.

Glucocorticoids were measured using three different assays for the four sample types, according to assay validation results. Fecal GC metabolites were measured using a double-antibody EIA incorporating a secondary goat anti-rabbit IgG antibody (A009, Arbor Assays, Ann Arbor, MI) and a polyclonal rabbit anti-corticosterone antibody (CJM006, C. Munro, University of California, Davis, CA), adapted from Munro and Stabenfeldt (1984) and validated for Asian elephants by Watson et al. (2013). In brief, secondary antibody (150 μl; 10 μg/ml in coating buffer [X108, Arbor Assays]) was added to 96-well microtiter plates (Costar, Corning Life Sciences, Tewkesbury, MA), followed by incubation at RT for 15-24 h. After incubation, unbound antibody was washed from wells with wash buffer (X007, Arbor Assays). Blocking solution (250 μl; X109, Arbor Assays) was added to each well and left to incubate for 4-24 h at RT. Blocking solution was then removed and plates were dried at RT in a desiccator cabinet, packaged in vacuum-sealed bags, and stored at 4°C until use. Corticosterone standards (50 μl; 0.078-20 ng/ml), controls (50 μl), and samples (50 μl; diluted 1:10 in phosphate buffer [0.039 M NaH2PO4, 0.061 M Na2HPO4, 0.15 M NaCl; pH 7.0]) were added to plate wells in duplicate. Corticosterone-HRP (25 μl; 1:25 000; C. Munro, University of California, Davis, CA) was added to all wells. The primary anti-corticosterone antibody (25 μl; CJM006, 1:60 000) was added to all wells except for the non-specific binding (NSB) wells, followed by incubation for 2 h at RT. Unbound components were removed by washing five times with wash buffer (X007, Arbor Assays), followed immediately by the addition of a chromogen solution containing TMB (100 μl; X019, Arbor Assays) to each well. After incubation for 30 min at RT, the reaction was halted by the addition of stop solution (50 μl; X020, Arbor Assays) and optical densities were determined at 450 nm with a reference of 630 nm. Serum cortisol was measured using a solid-phase 125I radioimmunoassay (RIA) (Corti-Cote, MP Biomedicals, Santa Ana, CA) with some modifications. In brief, 25 μl of each calibrator, control, and sample were added in duplicate to pre-coated tubes containing cortisol antiserum. 250 μl of 125I-labeled cortisol tracer solution was added, tubes were mixed briefly and incubated for 45 min in a water bath at 37°C. Tubes were decanted thoroughly before being counted in a gamma counter (Iso Data 20/20 series).

Statistical analyses

Concentrations of IgA and GCs measured in each sample type were compared across the four elephants using generalized linear mixed models (GLMM) with individual as a random effect to account for non-independence of data. Potential relationships between IgA and GC concentrations within a sample type were compared using GLMMs, either by individual (sample date as random effect) or with all females combined (individual and sample date as random effects). Similarly, relationships between IgA concentrations measured in the four sample types were compared by individual (sample date as random effect) and with all females combined (individual and sample date as random effects). Data were log10 transformed where necessary to improve distribution, and GLMMs were performed in MLwiN version 2.02 (Rasbash et al., 2005) using a normal distribution.
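As a rough illustration of this type of analysis (not the authors' MLwiN models), the snippet below fits a linear mixed model relating log-transformed fecal IgA to fecal GC with elephant ID as a random intercept, using the statsmodels package; the data frame, column names and simulated values are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per fecal sample.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "elephant": np.repeat(["A", "B", "C", "D"], 25),
    "log_gc": rng.normal(1.0, 0.3, 100),
})
df["log_iga"] = 0.2 * df["log_gc"] + rng.normal(0, 0.5, 100)

# Mixed model: fixed effect of GC on IgA, random intercept per elephant,
# analogous to the GLMMs with individual as a random effect described above.
model = smf.mixedlm("log_iga ~ log_gc", data=df, groups=df["elephant"])
result = model.fit()
print(result.summary())
```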
Significance of the fixed effects was determined using a Wald test with chi-squared distribution (χ 2 ), with alpha set to 0.05. Results Immunoglobulin A was successfully quantified in all four sample types, although urinary IgA was only detected in 32-95% of samples within an individual; the remaining samples were below the detection limit of the assay. IgA and GC concentrations in feces, saliva, urine and serum are summarized in Table 1. There was considerable intra-individual variability in both IgA and GC concentrations across the four sample types (Figs 1-4). With the exception of urinary IgA and serum cortisol, concentrations also varied significantly among the four females (Table 1). Fecal IgA was higher in females A and C compared to D and B (Table 1, Fig. 1). Fecal GC metabolite concentrations were also higher in female A compared to the other three females. However, fecal IgA and GC metabolite concentrations were not related in any of the four females individually (P > 0.122), or when all data were combined (P = 0.229). Salivary IgA exhibited high inter-individual variability, being highest in female A, followed by B, then D, and was lowest in female C (Table 1, Fig. 2; all comparisons P ≤ 0.009). Salivary GC was highest in female B, significantly higher than females A and D (Table 1). IgA and GC concentrations in saliva were not related in any of the four females individually (P > 0.121), or combined (P = 0.111). Urinary IgA did not differ statistically among the four females, with concentrations reasonably stable over the study period (Table 1, Fig. 3). GC concentrations were also relatively low and stable in the urine of three of the females, but significantly higher in female C. There was no correlation between urinary IgA and GC when samples from all four females were combined (P = 0.460), or for three of the females individually (B-D, P > 0.155). However, in female A there was a positive relationship between IgA and GC (χ 2 = 4.378, df = 1, P = 0.036). Serum IgA was generally less variable within individuals over the study period, with the exception of female A (Fig. 4). This individual exhibited mild clinical signs of illness including lethargy and anorexia for a few days in mid-July. This illness was preceded by an increase in serum IgA concentrations, which subsequently decreased to below her typical concentrations at the time clinical signs were apparent. Female D, who exhibited no clinical signs during the study period, and Female A both had serum IgA concentrations around 3-fold higher than the other two females (Table 1). Serum cortisol did not differ among the four females, and IgA and cortisol were not significantly related, either when all data were combined (P = 0.350), or within each individual (P > 0.318). Temporal patterns in IgA across the four sample types were generally not well correlated within-individual females (Fig. 5). Although there were some significant relationships within individuals, these tended to be when concentrations were not very variable across the study period. In female B, serum IgA was a significant predictor of fecal IgA (χ 2 = 6.130, df = 1, P = 0.013), and in female C, urinary IgA was significantly related to salivary IgA (χ 2 = 7.780, df = 1, P = 0.005) and serum IgA (χ 2 = 10.111, df = 1, P = 0.001). It should be noted, however, that the number of urine samples with detectable concentrations of IgA was limited in this individual. 
In all other cases, IgA concentrations within one sample type were not correlated with other sample types. When samples from all females were combined, fecal IgA concentrations were positively correlated with both salivary IgA (χ 2 = 4.137, df = 1, P = 0.042) and urinary IgA (χ 2 = 8.174, df = 1, P = 0.004) concentrations. Fecal IgA and GCs from Female E during a severe health event are presented in Fig. 6. Although no definitive diagnosis was made, the female presented with lethargy, inappetence, abdominal distension and other idiopathic signs of discomfort. The episode, thought to be a systemic infection, lasted around 7 weeks in total, with three more severe bouts during the initial 3 weeks. Fecal GCs showed a 4-fold increase over baseline concentrations, beginning 15 days prior to the onset of clinical signs, and lasting until all clinical signs had resolved. Interestingly, fecal IgA also increased 30-fold, peaking around the end of the third more severe bout, and in association with more acute clinical signs. Although both fecal IgA and GCs increased during this period, the two were not significantly correlated (χ 2 = 1.048, df = 1, P = 0.306). Other species have shown high inter-and intra-individual variation in IgA excretion (Paramastri et al., 2007). Royo and colleagues (2004) found there was about a 10-fold difference in fecal IgA between rats with the highest and lowest concentrations. Similarly, mean IgA concentrations varied among individual chimpanzees, tending to be higher in mature compared to immature individuals (Lantz et al., 2016), and both age and sex-related differences were reported in reindeer (Yin et al., 2015). The four females in this study differed widely in age, and the oldest elephant had the highest mean IgA concentrations in feces, saliva and serum; however, patterns among the other three individuals were not as consistent and overall age-related differences were not apparent. This suggests that age alone cannot explain all of the inter-individual variability observed here. In a previous study by Bundgaard and colleagues (2012), fecal IgA concentrations in mice followed a bimodal distribution, with distinct groups of high and low excretion. In the 6 weeks following the transfer of mice to a novel environment, the majority of high-excreting animals had switched to being low-excreters, without any intermediate states. All animals were from the same age cohort, and the same distribution was true for both males and females, so there must be some other explanation for the two non-overlapping groups. In addition to inter-individual variability in IgA concentrations, within-individual variation can be influenced by external factors. Both diurnal and/or seasonal differences in IgA measures have been observed in chimpanzees (Lantz et al., 2016) and Sichuan golden monkeys (Huang et al., 2014), and so should be taken into consideration when exploring changes in IgA in relation to health and welfare status. Analysis of IgA in fecal extracts, saliva, urine and serum over the same period revealed that similar trends were not always apparent. Serum IgA has a different structure and function to that of secretory IgA (Kerr, 1990), as would be found in feces, saliva and urine, so a lack of similarity in profiles is perhaps not surprising. Serum IgA is produced by plasma cells in the bone marrow and acts as a secondary line of defense to 8 eliminate pathogens that breach the mucosal surface (Woof and Kerr, 2004). 
However, in contrast to secretory IgA, the role of serum IgA in health and welfare remains relatively unexplored (Leong and Ding, 2014). With the exception of the brief illness in female A, serum IgA was generally less variable within individuals over the course of this study, but did reveal inter-individual variation that warrants further investigation. By contrast, secretory IgA in feces, saliva and urine is produced locally by plasma cells at mucosal linings to prevent invasion of inhaled and ingested pathogens, and it is this form of IgA that has been previously proposed as a potential welfare measure. When secretory IgA data were compared using fecal, saliva and urine samples within the same week, both saliva and urine were predictive of fecal concentrations, suggesting there is some similarity in excretion rates among the three routes. However, this did not hold true within individuals, so perhaps this overall relationship is reflective of similarities in the relative concentrations across the three sample types for each female, rather than between repeated samples within an individual over time. These data suggest that urine may not be the best measure of IgA in elephants due to the relatively low concentrations observed, with only around a third of samples quantifiable in one individual. Feces and saliva on the other hand generally had both higher and more variable concentrations of IgA throughout the study. Further investigation is required to determine what measure may be the most reflective of biological state in elephants, including analyzing concentrations around specific events, to determine if acute and/or chronic changes are related to physiological or mental status. Considering the complexity of both the hypothalamic-pituitary-adrenal (HPA) axis and immune response to stressors, it is feasible that single time-points may not be fully reflective of underlying physiology, and longitudinal analyses will provide useful insight into the relationships between these two physiological biomarkers. In a study by Tress and colleagues (2006), it was determined that to gain a reasonable representation of individual IgA concentrations, four fecal samples per individual were required, collected on 2 consecutive days, 28 days apart. This allowed for identification of dogs with consistently low fecal IgA concentrations, despite high intra-individual variability. Based on the observed variability in the current study, particularly for measures of secretory IgA, single samples likely will not be sufficient to use IgA concentrations as a measure of overall well-being in elephants. 9 Previous research in other species has suggested a negative correlation exists between IgA and GCs, including salivary measures in humans (Hucklebridge et al., 1998) and dogs (Skandakumar et al., 1995), and fecal measures in reindeer (Yin et al., 2015). However, in many of these cases, data were obtained from single or duplicate samples per individual, as opposed to the longitudinal approach used here. Where repeated samples have been taken over a number of weeks in the past, for example during the acclimatization of mice to different cage types and social groupings, no correlation was apparent between fecal IgA and GC metabolites (Bundgaard et al., 2012). This may be a reflection of the duration of the stressor; it has been suggested that IgA may be a useful biomarker of long-term stress (Valdimarsdottir and Stone, 1997), whereas HPA activity may be more appropriate for acute stressors . 
Indeed, Tsujita and Morimoto (1999) suggested that salivary IgA can be a useful marker of welfare if the delayed effect of chronic stress is considered separately from the immediate effect of acute stress on this measure. However, with the exception of changes during cases of illness in two elephants, it should be noted that we did not assess the response to specific stressors in this study. Primarily an immune protein, IgA can be highly responsive to health status, typically with decreased concentrations reflective of chronic pathology. Selective IgA deficiency is the most common form of primary immunodeficiency in humans (Cunningham-Rundles, 2001), and is associated with chronic gastrointestinal disease in both humans (Petty et al., 1979) and dogs (Maeda et al., 2013), where individuals with inflammatory bowel disease had significantly decreased concentrations of fecal IgA compared to healthy controls (Maeda et al., 2013). The data from elephant E demonstrated that short-term changes in excretion coincided with a severe systemic illness. In that case, both fecal IgA and GCs increased significantly, peaking at concentrations around 30- and 4-fold higher than baseline, respectively. Unfortunately, during the illness of elephant E, saliva, serum and urine were not collected, precluding us from determining whether this response would be evident in all sample types. This increase could indicate a physiological response to an acute stressor (Jarillo-Luna et al., 2015) or an immune response to the pathology. Similarly, although from a shorter and less severe illness, data from female A further suggest that increases in IgA and GCs may be reflective of underlying health issues. Indeed, the difference in magnitude of the responses observed in these two females could be reflective of the type or severity of their underlying condition. However, additional research is necessary to investigate this relationship further, to determine whether health issues that do not include gastrointestinal signs would invoke a similar response, and whether the inappetence that occurred in these two cases will have impacted gut-transit time or fecal composition, and the effect that may subsequently have on fecal IgA concentrations. Results of this study highlight the importance of understanding differing response mechanisms when using IgA as a welfare indicator: chronic stressors may result in immune suppression and reductions in IgA, but acute illness also may be associated with increases in IgA concentration as part of an immune response to cope with underlying pathology. Thus, interpretation of IgA measures, like GCs, may not always be straightforward. Both IgA and GCs have been shown to increase in response to acute stressors of a non-immune nature (Tsujita and Morimoto, 1999; Jarillo-Luna et al., 2015), and this certainly warrants further investigation before increased IgA concentrations can be considered a positive welfare indicator. As with other potential indicators of well-being, it is important to understand normal variation in physiological biomarkers both within and between individuals, as well as in response to specific events. Biomarkers must be put into context, incorporating longitudinal measurements of multiple indicators, such as IgA alongside GCs, to delineate concentrations indicative of an acute immune response or stressor, compared to those associated with longer-term positive or negative welfare states.
The methodology described here provides a robust technique to investigate IgA in elephants, and these data provide a necessary baseline to interpret future data alongside other health and well-being measures, to determine whether incorporating IgA measurements will provide useful insight into elephant welfare.
LAIPT: Lysine Acetylation Site Identification with Polynomial Tree

Post-translational modification plays a key role in the field of biology. Experimental identification methods are time-consuming and expensive. Therefore, computational methods to deal with such issues overcome these shortcomings and limitations. In this article, we propose a lysine acetylation site identification with polynomial tree method (LAIPT), making use of the polynomial style to demonstrate amino-acid residue relationships in peptide segments. This polynomial style was enriched by the physical and chemical properties of amino-acid residues. Then, these reconstructed features were input into the employed classification model, named the flexible neural tree. Finally, some effect evaluation measurements were employed to test the model's performance.

Introduction

Post-translational modification (PTM) is one of the most significant processes in the field of biology. More than 650 types of post-translational modification have been reported across several decades of efforts. Among these types of post-translational modification, several modifications have the ability to reverse their processes. PTM provides fine-tuned control of protein function in various types of cells in the fields of disease research and drug design [1][2][3][4]. For example, the well-known tumor suppressor p53 is subject to many post-translational modifications, which have the ability to alter its localization, stability, and other related functions, thus ultimately modulating its response to various forms of genotoxic stress [5][6][7][8][9][10]. Therefore, p53 drives both the activation and repression of a large number of promoters, which ultimately define its tumor suppressor abilities. This tumor suppressor is a critical transcription factor in the field of post-translational modification [11]. With these reversible modifications, protein structures change and their functions are enriched to some degree. As one of the most typical and classical reversible types of modification, lysine acetylation was reported about half a century ago [1,2]. Acetylation occurs on the ε-amino group of lysine residues; it was noted that three enzymes take part in this process. Whereas lysine deacetylases (KDACs) remove the acetyl groups of proteins, lysine acetyl transferases (KATs) transfer the acetyl group across proteins [3][4][5][6]. Considering the key role of lysine acetylation in several diseases and novel drug design, a great number of experimental approaches have been proposed and introduced to identify the acetylation sites of lysine residues in protein sequences. These experimental approaches, including radioactive chemical methods, chromatin immunoprecipitation (ChIP), and mass spectrometry, play their roles to various degrees [7,8]. Unfortunately, these experimental methods can hardly meet the need for site identification, and they are time-consuming and expensive. Considering this issue, effective computational identification methods are required.

Comparison with Other Features

In order to evaluate the performance of the polynomial form features, several state-of-the-art methods were chosen for comparison, including binary encoding, amino acid composition (AA composition), grouping AA composition, physico-chemical property, k nearest neighbor features, and secondary tendency structure. The details of these comparisons are shown in Tables 1-3.
Comparison with Other Models

In order to more objectively evaluate the performance of the proposed feature description and the employed classification model, we compared it with several state-of-the-art methods, including DBD-Threader, iDNA-Prot, and other similar tools in the field of sequence classification and post-translational modification. Details of these comparisons are shown in Tables 4-6. In order to show the proposed model's stability and generalization, we utilized the ROC (receiver operating characteristic) curve to show the classification results. Meanwhile, some cross-validation methods (fourfold, sixfold, eightfold, and 10-fold) were also utilized. The detailed ROC curves for each species are shown in Figures 1-3.

Performance Using Different Bandwidths

In this work, the bandwidths of the sliding windows played a significant role in the feature size. On the one hand, an inappropriate bandwidth can waste computational resources and result in ineffective feature description. On the other hand, different species may have unique bandwidths in this classification model. Therefore, we tested bandwidths ranging from 21 to 31, with an interval of 2. Detailed results for each of these bandwidths in the selected species are shown in Table 7. In order to show the results more objectively, we compared them with other machine learning methods, including SVM, NN, and RF. From the above table, we can easily determine that the most appropriate bandwidths for Homo sapiens, Mus musculus, and Escherichia coli were 25, 25, and 23, respectively. Furthermore, the FNT model performed better than the three other machine learning methods in the majority of measurements among these bandwidths.

Performance of Polynomial Feature Description

In this section, we discuss the parameter selection of the polynomial feature description. The three proposed feature description methods involve five parameters: one is described by Equation (1), two are described by Equation (2), and two are described by Equation (3). The three proposed methods were compared, taking into account the coefficients a1 and a2, and the constants b1, b2, and c. We defined a1 and a2 in the range [−10, 10], and the three constants in the range [−100, 100], to test the performance of the employed classification method. We determined that the most appropriate parameters of the three proposed features were as follows: c = 57.6, a1 = 4.1, a2 = −2.7, b1 = 27.1, and b2 = 67.1. The three proposed features performed differently. The abovementioned classification models were also used for comparison. In order to reduce the usage of unnecessary computational resources, the most appropriate bandwidths determined previously were used. Details of the results are shown in Table 8.

Materials and Methods

Because of the ubiquity and universality of lysine acetylation at the protein level, we can find several acetylated proteins in various databases, including NCBI (National Center for Biotechnology Information), Uniprot, and other related proteomics databases. In this study, we selected about 30,000 protein sequences, which contain more than 111,200 acetylation sites among them [49]. These proteins could be extracted from the Protein Lysine Modification Database (PLMD) version 3.0 [50]. PLMD is one of the most well-known and commonly used post-translational modification site databases, and it contains more than 20 types of lysine modification in more than 170 species at the protein level.
Generally, this database can be treated as the largest available acetylation database; thus, it was employed as the benchmark dataset in this work. Unfortunately, overestimation may be one of the most significant limitations when using machine learning. In order to overcome this shortcoming, CD-HIT (Cluster Database at High Identity with Tolerance) was utilized to remove some homologous sequences [51][52][53][54]. In this work, we utilized a threshold of 40% similarity with this tool. Following this process, we obtained 59,532 proven acetylated modification sites from 20,527 protein sequences. These protein data were used to construct the training, testing, and independent datasets. During this classification process, we defined the proven acetylated sites as positive samples and the non-proven modifications as negative samples. Detailed information on the employed datasets is shown in Table 9, and details regarding the construction of the datasets are shown in Figure 4. In this work, we employed the general dataset as the training and testing datasets. In order to evaluate the generalization and stability, we employed three species incorporating lysine acetylation sites as the independent datasets.

After constructing the available datasets, peptide segments were extracted from the whole protein sequences. In order to reduce the unnecessary usage of storage space and computational resources, peptides with a central lysine residue were extracted in this work. We made use of sliding windows to extract peptide segments with a size of 2n + 1 [55], where n is the length of the upstream or downstream fragment, and 1 is the position of the central lysine residue in the segment. In this work, the length of the upstream fragment was equal to that of the downstream fragment, and n ranged from 10 to 15. Thus, the whole length of the sliding window was between 21 and 31.
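The window extraction just described can be sketched as follows; the choice of "X" as a padding character for lysines near the sequence ends is an assumption for illustration, not a detail given in the paper.

```python
def extract_windows(sequence, n=12, pad="X"):
    """Return (position, peptide) pairs of length 2n+1 centered on each lysine (K).

    Lysines near the ends of the protein are padded so that every window
    has the same length, mirroring the 2n+1 sliding-window scheme above.
    """
    padded = pad * n + sequence + pad * n
    windows = []
    for i, residue in enumerate(sequence):
        if residue == "K":
            # Window start in the padded string; the central K sits at index i + n.
            windows.append((i + 1, padded[i:i + 2 * n + 1]))
    return windows

# Example on a toy sequence; with n = 12 each peptide has 25 residues.
for pos, pep in extract_windows("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ", n=12):
    print(pos, pep, len(pep))
```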
In the next section, we discuss the performance of the various selected lengths of sliding window.

Encoding of Protein Fragments

Several different types of features for quantifying biological sequences have been presented across many years of protein research, such as amino-acid composition, position-specific scoring matrix, physico-chemical properties, and other related features [56][57][58]. These features can demonstrate sequence information in various aspects, and they play various roles in protein sequence analysis. However, few features can demonstrate the relationships of amino-acid residues. In this paper, each peptide was treated as a sample. According to biological concepts, neighboring amino-acid residues present both coordinated and individual functions. On this basis, we tried utilizing some of these functions to describe the relationships in this work. We propose a polynomial method to describe the relationships between the central lysine residue and the neighboring amino-acid residues. Several forms of polynomial styles exist, such as the constant form, linear function form, quadratic function form, cubic function form, and so on. For example, we show the curves of these four forms in Figure 5. In Figure 5, L1, L2, L3, L4, and L5 follow Equations (4)-(8), respectively.
From Figure 5, we can easily determine that both L2 and L4 are even functions, while the other curves are odd functions. Considering that the upstream and the downstream fragments played the same role in the selected peptide segments, the even functions were selected for this work; therefore, we utilized three types of functions. The first one was the constant function, whereby all amino-acid residues in the peptide segments have the same influence, as described in Equation (1). The second function followed Equation (2), and the third function followed Equation (3), where the parameters a1, a2, b1, b2, and c1 were optimized in this work. It was noted that both Equations (1) and (2) could hardly be described as linear functions. Thus, the center of the last two functions was designated as the origin point, i.e., the classified modification site in the peptide segment. Regions to the left and right of this origin point were designated as the upstream and the downstream segments, respectively. The influence of each neighboring amino-acid residue is defined below. According to Equation (1), the relationship between a neighboring amino-acid residue and the central lysine is shown in Equation (9), where influ1 contains 2n + 1 elements in each sample, and c1 is the relationship between each amino-acid residue in the selected peptide segment. In this function, every amino-acid residue has the same influence; thus, the amino-acid composition can be regarded as a special form of this style. According to Equation (2), the relationship between the neighboring and central residues is shown in Equation (10), where influ2 also contains 2n + 1 elements, and each value of influ2 follows the discrete values of Equation (2) over the position range [−n, n]. According to Equation (3), the relationship between two amino-acid residues is shown in Equation (11):

influ3 = [a1·n² + b1, ..., a1 + b1, b1, a1 + b1, ..., a1·n² + b1],    (11)

where influ3 also contains 2n + 1 elements, and each value of influ3 follows the discrete values of Equation (3) over the position range [−n, n]. After demonstrating the fundamental relationship of amino-acid residues within the classified peptide, the next step was to enrich the related properties of amino-acid residues. In this step, physical, chemical, evolutionary, structural, and other related information was incorporated using the three styles proposed above.

Physico-Chemical Properties

Physico-chemical properties are widely and successfully utilized in the identification of protein post-translational modifications, including ubiquitination, phosphorylation, and others [59,60]. These properties can help determine the fundamental characteristics of proteins in several aspects. One of the most well-known and widely utilized databases is AAIndex [61,62], which contains a great deal of physico-chemical and biochemical information for each amino-acid residue and some amino-acid compositions. The latest version of this database describes 544 properties of amino acid residues. Among these properties, following previous efforts and research [62], we selected several of them, which are listed in Table 10. Considering the abovementioned elements, we minimized the presence of useless information; therefore, the area under the receiver operating characteristic (ROC) curve (AUC) was used to evaluate the measurements in this work.
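To make the three influence styles concrete, the sketch below builds the weight vectors influ1, influ2 and influ3 for a window of length 2n + 1 and uses them to weight one physico-chemical property per residue. The constant/absolute-value/quadratic forms are inferred from the discrete values quoted for influ2 and influ3 in the text, the default parameters reuse the optimized values reported earlier, and the hydrophobicity numbers are illustrative rather than the AAIndex entries of Table 10; the multiplication of property values by the weights is our reading of the encoding, not code from the paper.

```python
def influence_vectors(n, c1=57.6, a2=-2.7, b2=67.1, a1=4.1, b1=27.1):
    """Weight vectors over positions -n..n (defaults: the paper's optimized values).

    influ1: constant style; influ2: absolute-value style; influ3: quadratic style.
    """
    positions = range(-n, n + 1)
    influ1 = [c1 for _ in positions]
    influ2 = [a2 * abs(k) + b2 for k in positions]
    influ3 = [a1 * k * k + b1 for k in positions]
    return influ1, influ2, influ3

def encode(peptide, weights, prop):
    """Weighted property values for one peptide (one feature block per style)."""
    return [weights[i] * prop.get(aa, 0.0) for i, aa in enumerate(peptide)]

# Illustrative hydrophobicity-like values for a few residues only.
hydro = {"A": 1.8, "K": -3.9, "L": 3.8, "S": -0.8, "X": 0.0}
i1, i2, i3 = influence_vectors(n=2)
print(encode("ALKSA", i3, hydro))
```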
Among the AAIndex entries selected in Table 10 are, for example, the AA composition of ejecta of single-spanning proteins and QIAN880101 (weights for alpha-helix at the window position of −6).

Prediction Algorithm

The computational identification of modification sites focuses on classification models in the field of machine learning. In this work, we employed machine learning models, including the flexible neural tree. We employed three machine learning methods for the three elements in the classification. The first element involved the bandwidth of the sliding windows in the classified peptide segments, the second involved the parameters of the polynomial feature description, and the third involved the selection of different combinations. Therefore, the classification model was designed to deal with these three elements; the detailed outline of this algorithm is demonstrated in Figure 6. The flexible neural tree (FNT) was proposed by Chen [63,64], and it can be treated as an alternative tree-structured neural network. Therefore, this model can be utilized to deal with the issues of classification and prediction in the field of machine learning. The typical structure of an FNT is shown in Figure 7. From this figure, we can easily determine that the model contains three types of layers: the input layer, the hidden layer, and the output layer. The network function of this model is shown in Equations (12) and (13), where wj is the weight of the j-th input element, and yj is the j-th element of the input sample. Both mi and ni are parameters in this network.

Performance Measurements

Some well-known methods exist in the field of machine learning for evaluating performance. In this work, some typical measurements, including sensitivity, specificity, accuracy, F1 scores, and Matthew's correlation coefficients (MCCs) [65,66], of the identified modification sites were used. Furthermore, the AUC [67] was also employed to test performance on imbalanced classification problems, whereby the negative sample size is much bigger than the positive sample size. In this classification problem, samples can be defined as two types: positive samples and negative samples. Positive samples refer to peptide segments where the central lysine is acetylated, while negative samples refer to peptide segments where the central lysine is not.
From Figure 7, we can see that the model contains three types of layers: the input layer, the hidden layer, and the output layer. The network function of this model is shown in Equations (12) and (13), where w_j is the weight of the j-th input element, and y_j is the j-th element of the input sample. Both m_i and n_i are parameters of this network.

Performance Measurements
Several well-known methods exist in the field of machine learning for evaluating classification performance. In this work, some typical measurements, including sensitivity, specificity, accuracy, F1 score, and the Matthews correlation coefficient (MCC) [65,66], were used for the identified modification sites. Furthermore, the AUC [67] was also employed to assess performance on the imbalanced classification problem, in which the negative sample size is much larger than the positive sample size. In this classification problem, samples are of two types: positive samples and negative samples. Positive samples refer to peptide segments in which the central lysine is acetylated, while negative samples refer to peptide segments in which it is not.

According to these definitions, there are four possible outcomes. A positive sample that is classified as positive is a true positive (TP), and a positive sample that is classified as negative is a false negative (FN). Likewise, a negative sample classified as negative is a true negative (TN), and a negative sample classified as positive is a false positive (FP). From the numbers of TP, TN, FP, and FN, we can directly obtain the sensitivity, specificity, accuracy, F1 score, and MCC (Equations (14)-(18)), for example

Specificity = TN / (TN + FP),    (16)

where P denotes the number of positive samples and N the number of negative samples in Equations (14)-(18). Nevertheless, Equations (14)-(18) lack intuitiveness and can hardly be described as easy to understand for the majority of researchers in the field of biology. The interpretation of the MCC in particular is not at all intuitive in this form, although this measurement plays a key role in evaluating the stability of a classification model. Therefore, we made use of the concept proposed by Chou at the beginning of this century. In this concept, the total number of positive samples is denoted N+ and the total number of negative samples N−. The number of misclassified positive samples is then denoted N+−, and the number of misclassified negative samples N−+. With these definitions, TP, TN, FP, and FN can be expressed as in Equations (19)-(22), and the resulting expressions for the performance metrics in Equations (23)-(27) are far more intuitive and easier to understand for biological researchers.
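The measurements above can be computed directly from the confusion counts, or equivalently from Chou's N+, N−, N+−, and N−+ notation. The short sketch below uses the standard textbook formulas, since the printed forms of Equations (14)-(27) are not reproduced here; it is meant only as a reference implementation of the quantities discussed.

```python
import math

def metrics(tp, tn, fp, fn):
    """Standard classification measurements from the confusion counts."""
    sens = tp / (tp + fn)                       # sensitivity / recall
    spec = tn / (tn + fp)                       # specificity
    acc  = (tp + tn) / (tp + tn + fp + fn)      # accuracy
    prec = tp / (tp + fp)                       # precision
    f1   = 2 * prec * sens / (prec + sens)      # F1 score
    mcc  = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return sens, spec, acc, f1, mcc

def metrics_chou(n_pos, n_neg, n_pos_mis, n_neg_mis):
    """Same measurements written in Chou's notation: N+, N-, N+- (misclassified
    positives) and N-+ (misclassified negatives)."""
    tp, fn = n_pos - n_pos_mis, n_pos_mis
    tn, fp = n_neg - n_neg_mis, n_neg_mis
    return metrics(tp, tn, fp, fn)

# Perfect classification (N+- = N-+ = 0) gives sensitivity = specificity = MCC = 1.
print(metrics_chou(100, 900, 0, 0))
```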
For instance, when all samples are correctly classified, whereby all positive samples are classified as positive and all negative samples as negative, we get N−+ = 0 and N+− = 0, and the sensitivity and specificity are both equal to 1. Meanwhile, the accuracy is equal to 1 and the MCC is also equal to 1 in such a situation. On the contrary, if all positive samples are classified as negative and all negative samples as positive, then N+− = N+ and N−+ = N−, the sensitivity and specificity are both equal to 0, the accuracy is equal to 0, and the MCC is equal to −1. In a random classification, N−+ = 0.5N− and N+− = 0.5N+; thus, the accuracy is equal to 0.5 and the MCC is equal to 0. This definition method has several advantages [68][69][70][71]; however, these five measurements alone can hardly characterize performance adequately in a scenario of imbalanced classification. Therefore, we also made use of the ROC and precision-recall curves. The ROC curve shows the relationship between the true positive rate (TPR) and the false positive rate (FPR) of the classification, while the precision-recall curve shows the relationship between precision and recall.

Conclusions
In this article, we proposed a lysine acetylation site identification with polynomial tree method (LAIPT), which makes use of the polynomial style to describe the amino-acid residue relationships in peptide segments. The polynomial style was enriched with the physico-chemical properties of amino-acid residues. These reconstructed features were then input into the employed classification model, the flexible neural tree. Finally, several evaluation measurements were employed to test the model's performance. We demonstrated that the modification sites of the three employed species constituted unique feature descriptions. In the future, we hope to determine more useful forms of feature description and to utilize effective classification models to deal with them. We hope that the algorithm described herein is able to deal with other types of protein post-translational modification sites in various species.

Conflicts of Interest: The authors declare no conflicts of interest.
5,717.6
2018-12-29T00:00:00.000
[ "Computer Science", "Biology" ]
A test of the lateral semicircular canal correlation to head posture, diet and other biological traits in “ungulate” mammals For over a century, researchers have assumed that the plane of the lateral semicircular canal of the inner ear lies parallel to the horizon when the head is at rest, and used this assumption to reconstruct head posture in extinct species. Although this hypothesis has been repeatedly questioned, it has never been tested on a large sample size and at a broad taxonomic scale in mammals. This study presents a comprehensive test of this hypothesis in over one hundred “ungulate” species. Using CT scanning and manual segmentation, the orientation of the skull was reconstructed as if the lateral semicircular canal of the bony labyrinth was aligned horizontally. This reconstructed cranial orientation was statistically compared to the actual head posture of the corresponding species using a dataset of 10,000 photographs and phylogenetic regression analysis. A statistically significant correlation between the reconstructed cranial orientation and head posture is found, although the plane of the lateral semicircular canal departs significantly from horizontal. We thus caution against the use of the lateral semicircular canal as a proxy to infer precisely the horizontal plane on dry skulls and in extinct species. Diet (browsing or grazing) and head-butting behaviour are significantly correlated to the orientation of the lateral semicircular canal, but not to the actual head posture. Head posture and the orientation of the lateral semicircular canal are both strongly correlated with phylogenetic history. The need for a reliable and reproducible way of orienting dry skulls for cranial measurements has led to a considerable amount of literature suggesting that the plane of the lateral semicircular canal (LSC) of the bony labyrinth (the osseous capsule of the inner ear) is horizontal when the head is held in its "habitual" (i.e. not actively attained) or "alert" positions [1][2][3][4][5][6][7][8][9][10][11][12] . This is backed by the hypothesis that a horizontal orientation of the LSC would mechanically maximize the recording of rotational and linear head movements made in the horizontal plane by placing the sensory hair cells of the semicircular canal and its associated ampulla perpendicular to the horizontal plane [12][13][14][15][16] . The subsequent use of the orientation of the plane of the LSC as a proxy to infer head posture in fossil vertebrates has grown more popular among paleontologists, as it is being applied to dozens of extinct taxa such as archosaurs, including dinosaurs, and synapsids, including mammals 12,[17][18][19][20][21][22][23][24][25][26][27][28][29] . This has raised discussion on some crucial paleobiological questions, such as the evolution of bipedalism in ancient hominin 14,18 and paleodiets. As browsers are expected to hold their head higher than grazers 30 , head posture has been invoked in reconstructing ancient diet in fossil herbivorous species 20,24 . Semi-aquatic species, on the other hand, would hold their head tilted upward 27 (but see Neenan and Scheyer 31 ). 
In addition, head posture is directly involved in discussions about the origin of endothermy, as the blood pressure needed to perfuse the head, and particularly the brain, directly depends on head posture and thermophysiology (species with low metabolism have a lower blood pressure than species with a high metabolism, and therefore cannot perfuse their brain if their head is held far above their heart) 32 . Head posture may thus be crucial for inferring the evolution of endothermy in birds, mammals, and their respective ancestors.

Materials and methods
Sampling. As the inclusion of a statistically significant number of taxa was essential to this study, we chose to focus primarily on ungulate-grade mammals (i.e. Paenungulata, Artiodactyla, Perissodactyla, and Tubulidentata). Ungulates are more abundant than carnivores or primates in zoos, easily identifiable, and well represented in institutional dry skull collections. They display a wide array of body sizes, a greater variety of documented head postures 47 , a wider range of expected inner ear orientations (as hypothesized from the inclination of the snout compared to that of the brain-case 48 ), and more varied degrees of adaptation to head-butting 34 than any other mammalian group. Moreover, they are the ideal target group to address whether diet (browsing v. grazing) plays a significant role in the orientation of the LSC, as previously suggested in the literature 20,24 . Finally, they usually display an elongated snout, which makes it easier to compare the orientation of the head in live animals to that of the corresponding dry skulls.

Head posture in live animals. Head posture was documented by taking pictures of zoo animals in lateral view using a camera equipped with a spirit level (Fig. 1) to ensure that pictures were taken as close to the horizontal plane as possible. The animals were photographed in 2018 and 2019 at the National Zoological Garden, Pretoria (South Africa), Johannesburg Zoo (South Africa), Montecasino Bird Garden, Fourways (South Africa), Lory Park Animal and Owl Sanctuary, Midrand (South Africa), Ménagerie du Jardin des Plantes, Paris (France), Parc Zoologique de Paris (France), Prague Zoo (Czech Republic), Chester Zoo (United Kingdom), Zoologischer Garten Berlin (Germany), Tierpark Berlin (Germany), and Zooparc of Beauval (France). The saiga antelope pictures were kindly provided by K.H. Vogel. The dataset represents about 10,000 pictures documenting the head posture of 129 species and is available here: https://osf.io/4vpnj/?view_only=3dc987012fcd44a6a64ad7d8949ec01f (https://doi.org/10.17605/OSF.IO/4VPNJ). The pictures were taken from outside the enclosures to avoid interaction with the animals. It was essential for this study that the animals remain calm and act naturally, so their environment was not disturbed, and the animals were not put on a leash or isolated. As such, individual identification was not possible. Representatives of both sexes are mixed in the dataset as sexes could not always be determined. The typical photography setup is illustrated in Fig. 1. To ensure that the photographed head postures were comparable between individuals and species, all pictures were taken by one of the authors only (J.B., except for the saigas, which were taken by Alexander Sliwa from the Kölner Zoo). The pictures used for this study were selected to reflect as closely as possible what will hereafter be referred to as the "neutral" head posture.
The neutral posture of an "ungulate" is here defined as the angle between the main axis of the head and the horizontal when an animal's head remains still, its attention is not attracted by a moving or immobile target, and it is not foraging, drinking, or performing any other identifiable activity involving head movements (e.g. sniffing). The animal can be standing or lying down. "Neutral" head posture differs from head posture "at rest" as it encompasses ruminating animals and individuals slowly walking with their head steady (not pitching up and down while moving). Alert postures 49 were included only if the animal's attention was not directed toward an identifiable direction, and were avoided as much as possible. That is why this study focuses on zoo animals, which are accustomed to human presence. For consistency and to enable comparisons, pictures of semiaquatic "ungulates" (e.g. hippos) were taken when the animal's head was not immersed so that their head posture was not influenced by buoyancy. The orientation of the head compared to the horizontal plane was measured by J.B. using ImageJ as the angle between the horizontal border of the picture (the horizontality of which was ensured by the use of a spirit level on the camera, Fig. 1) and the main axis of the head (traced as the axis running from just above the upper lip to the middle of the occiput on the back of the head) in strict lateral view (Fig. 2a). An average neutral head posture was then calculated for each species (Table 1). The intraspecific standard deviation (measurement error) for neutral head posture is ± 1.6°. The bony labyrinth is one of the first organs to completely ossify in mammals, as its adult size and shape are reached at mid-gestation 50,51 . However, the orientation of the LSC seems to show age-related variations in some tetrapod species, including humans, which may impact their head posture 26,52,53 . As such, juveniles were excluded from the dataset. CT scans of dry skulls (see Supplementary Table S1 for details) were used. The bony labyrinths of each skull were segmented manually and reconstructed in 3D using the software AVIZO 9 (FEI VSG, Hillsboro OR, USA) at the virtual imaging labs of the Evolutionary Studies Institute and the Natural History Museum of Basel. The skull was reconstructed using either the Isosurface or threshold functions in the same software. The angle between the plane of the LSC and the main axis of the skull was then measured in lateral view in 2D (Fig. 2b). The plane of the LSC was determined visually in lateral view, following most previous authors 19,20,22,[24][25][26] . The main axis of the skull was traced as the axis running from just above the premaxilla (approximately at the level of the centre of the nasal opening) to the middle of the occiput (Fig. 2b) in order to maximize the homology with the measurements taken on living animals. This angle represents the anterior tilting of the head if the LSC is considered horizontal. This angle is hereafter referred to as "the reconstructed cranial orientation" or "reconstructed head posture". Measurements were taken bilaterally when both bony labyrinths were available and then averaged for each species (Table 1). For consistency, all measurements were taken by the same author (J.B.). None of the samples expressed strong lateral tilting of the LSC or an undulating morphology that could impede taking this measurement or affect its accuracy.
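The angle measurement described above (the main axis of the head traced from just above the upper lip to the middle of the occiput, relative to the levelled horizontal) was performed manually in ImageJ; the sketch below shows the equivalent computation from two landmark pixel coordinates. The function name and the example coordinates are illustrative only and are not part of the original workflow.

```python
import math

def head_axis_angle(lip_xy, occiput_xy):
    """Angle (degrees) between the head's main axis and the horizontal,
    measured on a lateral photograph taken with a levelled camera.
    lip_xy     : (x, y) pixel coordinates of the point just above the upper lip
    occiput_xy : (x, y) pixel coordinates of the middle of the occiput
    Image y-axes usually point downward, so a positive result corresponds to a
    muzzle pointing below the horizontal (anterior tilt)."""
    dx = lip_xy[0] - occiput_xy[0]
    dy = lip_xy[1] - occiput_xy[1]          # downward-positive in image coordinates
    return math.degrees(math.atan2(dy, dx))

# Example: muzzle landmark lower and to the right of the occiput -> ~34 degree tilt.
print(round(head_axis_angle((850, 620), (300, 250)), 1))
```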
The intraspecific standard deviation (measurement error) for reconstructed head posture is ± 2.1°. The complete dataset of reconstructed head postures is available in the Supplementary Table S1. This dataset was complemented by measurements made on the published pictures from Girard and Schellhorn 5,30 (see Supplementary Table S1). As for the picture dataset, only the individuals showing reasonable signs of maturity (e.g. cranial bone fusion, erupted molars) were considered. Data processing. The dataset was analyzed using phylogenetic comparative methods to control for the non-independence of observations [54][55][56] . We used the time-calibrated phylogenetic tree of mammals of Bininda-Emonds et al. 57 because it encompasses all the species in our dataset and fossils can be easily added to it in future analyses. The tree was pruned to match the species in our dataset using function 'drop.tip' in R package ape 58 . All subsequent analyses were performed in R v3.6.3 (R Core Team, 2020). The phylogenetic signal of individual variables was estimated using Pagel's lambda 59 for continuous features (reconstructed cranial orientation, neutral head posture, body mass) using function 'phylosig' in package phytools 60 . Lambda was chosen over the other commonly used estimator K 61 because of the latter's poor performance for trees with small sample sizes and polytomies 62,63 , both of which can be found in our dataset. For binary traits (head-butting; see below), phylogenetic signal was estimated with the D-statistic 64 using function 'phylo.d' in package caper 65 . To test whether the plane of the LSC can be used as a reliable proxy to reconstruct the neutral head posture, we regressed the neutral head posture of living animals on the reconstructed cranial orientation using data in Table 1, and phylogenetic generalized least squares (PGLS) regressions 66,67 . PGLS were compiled using the 'gls' function in package nlme 68 , with correlation structures for each evolutionary model specified in ape 58 . A model selection procedure based on the corrected Akaike information criterion (AICc) was applied to the regressions using the package AICcmodavg 69 . Five evolutionary models were considered for this selection procedure (see 70 ): Brownian Motion, Pagel's Lambda, Ornstein-Uhlenbeck, Early Burst, and White Noise -i.e. non-phylogenetic, ordinary least squares (OLS) regression. All regressions were performed using raw and log-transformed data (natural logarithm). Both variables in the models are in the same unit and order of magnitude, and the models www.nature.com/scientificreports/ built with raw data showed a higher significance and met parametric assumptions better than models built with log-transformed data. For this reason, we used the former to assess the relationship between the two variables. Because of the high degree of body mass allometry in neuroanatomical features [71][72][73] , body mass measurements for all species in the sample were taken from the literature (Supplementary Table S1) and included as a co-predictor to be tested against models built with only the reconstructed and neutral head postures as predictors in the AICc-based model selection procedures. The coefficient of determination and p-value for generalized least squares regressions cannot be compiled straightforwardly due to the autocorrelated structure of the residuals 67 . 
Following Paradis 55 , we compiled a pseudo-R-squared and p-value based on McFadden's formula 74 , based on a likelihood ratio test between our model and a null model. Normality and homoscedasticity of the residuals were assessed using a Shapiro-Wilk test and a Q-Q plot, and graphically using residuals v. fit plots, respectively 75 . Finally, phylogenetic one-way Analyses of Variance (phylANOVA) 76 with False Discovery Rate posthoc corrections 77 were used to test for a difference between groups in three separate factors, for both reconstructed cranial orientation and neutral head posture ( Table 1). The first factor is diet, for which species were categorized as "browser", "grazer", "mixed" (for a mixed diet between browsing and grazing), or "other" (for omnivorous and myrmecophagous species). The second predictor is whether a species practices head-to-head combat. The head-butting category includes wrestling and ramming species (hereafter referred to simply as head-butting species) but excludes flank-butting species (e.g. giraffes). The last predictor is the habitat, which was scored between open (savannah or steppes), closed (forest or jungle), mixed (mix of open and closed habitats), rocky (for species living on steep, rocky slopes), or semi-aquatic. The scoring of all three predictors was done using the literature (see the list in Table 1). PhylANOVAs were performed using function 'phylANOVA' in phytools 60 . Ethics declarations. As the animals were not approached or armed, no ethical clearance was necessary for this study. Results All the data in the dataset for which a phylogenetic signal could be measured (neutral head posture, reconstructed cranial orientation, body mass, and head-butting) carry a strong phylogenetic signal (lambda > 0.8 for the first three variables; D = − 0.2841056 for head-butting) (See Supplementary Table S1). Species with body mass under 100 kg have an average neutral head posture of 30° and reconstructed cranial orientation of 39°, whereas species larger than 100 kg have a neutral head posture averaging 37° and average cranial orientation of 40°. This suggests an effect of body mass on head posture as was hypothesized by Köhler 47 , but not on the orientation of the LSC (Fig. 3). This is consistent with statistical analyses, which identify a very weak effect of body mass on neutral head posture (R 2 = 0.040; p-value = 0.014), and none on reconstructed cranial orientation (R 2 = 0.023; p-value = 0.054) using OLS. However, once corrected for phylogeny using PGLS, the effect of body mass on head posture (R 2 = 0.030, p-value = 0.06025) and cranial orientation (R 2 = 0.012, p-value = 0.9903) is no longer significant. Phylogenetic regressions (Fig. 4a) identify a statistically significant (p-value = 4.519e−07) but relatively low correlation (R 2 = 0.261) between neutral head posture and the reconstructed cranial orientation. This supports that the orientation of the LSC in life is correlated to the neutral head posture in "ungulates". The equation of the linear model is: The 95% confidence interval for the slope (0.242-0.526) is significantly different from 1, which means that this model cannot be approximated to an isometric relationship (which would be expected if the LSC was held horizontally). The model including body mass, neutral, and reconstructed head postures, and the interaction term of the three as co-predictors was selected by AICc as fitting our data best (Fig. 4b). 
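The PGLS regressions reported here were fitted in R with gls() and correlation structures from ape, as described above. For readers unfamiliar with what such a fit computes, the sketch below shows the underlying generalized least squares estimator with a phylogenetic covariance matrix; it is a didactic stand-in, not the authors' R code, and the covariance matrix and data values are hypothetical.

```python
import numpy as np

def pgls_fit(y, X, C):
    """Generalized least squares with a phylogenetic covariance matrix C
    (e.g., shared branch lengths under Brownian motion). Returns the
    coefficient estimates; equivalent in spirit to gls() with a Brownian
    correlation structure, not a reimplementation of the authors' workflow."""
    Ci = np.linalg.inv(C)
    XtCi = X.T @ Ci
    return np.linalg.solve(XtCi @ X, XtCi @ y)   # (X' C^-1 X)^-1 X' C^-1 y

# Toy example: 4 species, regress neutral head posture on reconstructed orientation.
y = np.array([30.0, 36.0, 45.0, 20.0])                        # neutral posture (deg)
X = np.column_stack([np.ones(4), [39.0, 40.0, 50.0, 25.0]])   # intercept + reconstructed
C = np.array([[1.0, 0.6, 0.2, 0.2],                           # hypothetical phylogenetic
              [0.6, 1.0, 0.2, 0.2],                           # covariance from shared
              [0.2, 0.2, 1.0, 0.7],                           # branch lengths (unit
              [0.2, 0.2, 0.7, 1.0]])                          # total tree depth)
print(pgls_fit(y, X, C))                                      # [intercept, slope]
```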
This model shows results very similar to those of the simple regression model, being significant with a slightly stronger correlation (R 2 = 0.325; p-value = 3.235e−07). The equation of the resulting model is written: The slope confidence interval is slightly lower than that of the previous model (for reconstructed cranial orientation: 0.148-0.508), which removes it even further from an isometric relationship (Fig. 4b). The very low www.nature.com/scientificreports/ coefficients for body mass and reconstructed orientation × body mass are not significantly different from zero (see Supplementary Table S1), and are reported here to ensure full transparency of our results. Surprisingly, for both simple and multiple regression models, the evolutionary model selected by AICc was the Early Burst (EB) model 61 , representing a rapid adaptive radiation followed by stasis. EB models are known to be rarely selected as the best evolutionary model in such selection procedures 78 . On average, browsers tend to hold their heads less tilted anteriorly (26°) than mixed feeders (32°), and grazers (36°) in neutral posture (Fig. 5). This seems to reflect on the reconstructed cranial orientation as browsers have a higher reconstructed head posture (33°), than mixed feeders (40°) and grazers (44°) (Fig. 5). Phylogenetic ANOVAs indicate a significant difference between browsers and grazers for the reconstructed orientation of the skull (F = 7.723; p-value = 0.046), but not for the neutral head posture (F = 2.663; p-value = 0.516). Mixed feeders are statistically indiscernible from both browsers and grazers in any case. Variations of reconstructed cranial orientation do not show any trend with habitat preference (Fig. 6), but a slight trend toward more downwardly tilted head postures seems to occur in species living in a more open habitat (Fig. 6); however, this trend is not significant (F = 1.343; p-value = 0.792). Semi-aquatic species seem to have a more posteriorly tilted LSC resulting in a higher reconstructed cranial orientation (23°) than fully terrestrial species, but this does not reflect on the neutral head posture (Fig. 6). Unfortunately, the sample size for this category was too low to effectively test if this difference was significant or simply the result of the scarcity of semi-aquatic species in the dataset. www.nature.com/scientificreports/ Head-butting is found to have a highly significant statistical effect on reconstructed head posture (F = 39.467; p-value = 0.002), with a difference of 13° between head-butting and non-head-butting species on average (Fig. 7); however, with only 1° difference on average (Fig. 7), the same is not true for the neutral head posture (F = 3.126; p-value = 0.591). Discussion LSC orientation is correlated with head posture, but is not horizontal. The assumption that the plane of the LSC can be used as a reliable indicator of the horizontal plane on dry skulls and thus can serve as a proxy to reconstruct the "habitual" or "alert" head posture in extinct species has been a long-held 17 , yet insufficiently tested hypothesis in mammals. Attempts to test this hypothesis have highlighted that the plane of the LSC is often tilted upward compared to the horizontal in most mammalian species 5,6,11,30,45 . A famously www.nature.com/scientificreports/ baffling example is that of humans, in which aligning the plane of the LSC to the horizontal plane results in a "habitual" posture of the head inclined 30° down anteriorly [1][2][3][4][5][6]8,11,12,18,46 . 
In the current study, the steenbuck (Raphicerus campestris) is the only species in which the average neutral and reconstructed head postures are the same (Table 1), which means that, on average, the LSC is parallel to the horizontal plane when the steenbuck's head posture is neutral. Among the species for which the neutral head posture and reconstructed cranial orientation could be compared, only half of them show a difference between the averages of the two that is below 10°. As such, even though the plane of the LSC should be horizontal on theoretical grounds 13 , this is not the rule in "ungulates". The two phylogenetic regressions provided here (Fig. 4) are the first large sample size attempts to address the existence and nature of a correlation between the orientation of the plane of the LSC and the neutral head posture in mammals across a large taxonomic sampling. We find that whether corrected or not for body mass, the correlation between the reconstructed cranial orientation and the neutral head posture is significant (p-value < 0.0001); however, if the plane of the LSC was held horizontally in neutral head posture, the regression line should not differ significantly from an isometric line. Instead, both regression lines have slopes that significantly differ from 1 (Fig. 4), which means that they cannot be approximated by isometric lines. For this reason, though there is a significant correlation between the orientation of the LSC and that of the head in "ungulates", the plane of the LSC should not be considered horizontal when reconstructing ancient head posture. According to the phylogenetic regressions, the equation that describes the relationship between the reconstructed cranial orientation and the neutral head posture is given in Eq. (1), and that between cranial orientation, head posture, and body mass is given in Eq. (2). As estimating body mass in extinct species is always contentious [79][80][81][82] , the first of these equations may seem more practical to estimate the actual head posture of a given extinct ungulate species. In both cases, the variance of the residuals is high, which might indicate a low predictive power of these models (R 2 equals 0.26 and 0.33, respectively). The misalignment between the plane of the LSC and the horizontal is consistent with the results obtained by Marugán-Lobón et al. 43 on birds using Duijm's dataset 8 , though their approach to the study of head posture was different from the one presented here, which limits comparisons. The reason why the LSC would not be aligned with the horizontal in the neutral posture is still unclear. It may be explained by the very function of the canals and ampullae, which are meant to record head movements and play crucial roles in the vestibulo-ocular and vestibulo-collic reflexes to compensate for the movements and accelerations of the head compared to the eyes and the rest of the body 6,13,14,43 . As such, recording movements and monitoring reflexes during locomotion and head movements along the predominant axis of yaw would be the main drivers of LSC adaptations 83 , which would result in a relaxed selection on the functions performed when the head remains still, such as aligning the plane of the LSC to the horizontal plane during neutral head posture. In support of this hypothesis, a recent study by Dunbar et al. 
84 on the orientation of the LSC during locomotion in horses found a 66° inclination of the head below the horizontal during slow walk, and a higher head posture of 56° and 55° during trot and www.nature.com/scientificreports/ canter. They argued that fast locomotion brought the plane of the LSC to about 5° around the horizontal in their specimens 84 . According to Zubair et al. 85 , domestic cats keep their LSC about 10° to the horizontal during locomotion, although Hullar 12 reports a tilting up to 60° of the LSC in cats during "normal activities". Primates appear to keep the plane of their LSC within a 20° range about the horizontal during locomotion 84 , which is consistent with its orientation at rest 12 . Published data about the orientation of the LSC during locomotion are scarce and some are contradictory. Additionally, they are difficult to acquire as animals rarely maintain a static head posture during locomotion as they often pitch their heads repeatedly 84,85 . In the future, such data could nevertheless enable testing whether the orientation of the LSC would be a better predictor of the head posture during locomotion rather than at rest. Another hypothesis is that an overall misalignment of all three semicircular canals would enable all semicircular canals to record a component of horizontal and vertical accelerations 43 . The effect of phylogeny. Both the neutral and reconstructed head postures carry an important phylogenetic signal (Lambda equals 0.97 and 0.84, respectively). For paleontologists, this strong phylogenetic signal implies that the best way to predict the head posture of an extinct "ungulate" is to look at the neutral head posture of its modern relatives. In comparison, once the data are corrected for phylogeny, diet is found to have only a weak correlation with the reconstructed cranial orientation (F = 7.723; p-value = 0.046), and no significant effect on the neutral head posture (F = 2.663; p-value = 0.516). This reflects well in the dataset. The Tylopoda is the group in which the head is the most consistently tilted upward, with an average neutral head posture of 10°. This reflects on their reconstructed cranial orientation which averages 17°. Tylopods nevertheless include grazers (genus Vicugna), mixed feeders (Camelus bactrianus), and browsers (Camelus dromedarius) that all keep their heads relatively high (Figs. 4, 8a). On the other end of the spectrum, pigs display remarkable consistency at holding their head low (average neutral head posture for Suoidea = 46°), with for example the grazing Phacochoerus and the browsing Catagonus both keeping their head 46° below the horizontal plane on average (Table 1, Fig. 4). The species that holds their head the lowest below the horizontal belong to the Equidae (average neutral head posture = 60°; average reconstructed cranial orientation = 49°) and Alcelaphinae (average neutral head posture = 57°; average reconstructed cranial orientation = 61°) (Figs. 4, 8b). The species with the most tilted head posture is the Grevy's zebra (Equus grevyi), with a 66° tilt on average (Table 1), comparable to the extreme 67° reconstructed cranial orientation of the sauropod dinosaur Nigersaurus 20 . Tilting of the head was hypothesized to be correlated to body size in "ungulates" 47 , with small, forest-dwelling species holding their head higher than large species adapted to savannah. 
A similar trend is found here, with species above 100 kg having their head tilted 37° anteriorly on average whereas species below 100 kg hold their heads 30° below the horizontal on average (Fig. 3). Statistical tests on our dataset find a significant correlation between body mass and head posture (p-value = 0.014), but this correlation is no longer significant once the data are corrected for the effect of phylogeny (p-value = 0.060). This strongly suggests that the trend observed here and by Köhler 47 actually reflects www.nature.com/scientificreports/ the fact that large savannah herbivores belong to just a few clades (e.g. equids, alcelaphins, and hippotragins) whereas the small ones mostly belong to the Antilopinae. A more significant effect of body mass might nevertheless be found while including very small-bodied species (e.g. rodents, shrews) because their head posture would be more constrained by its proximity to the substrate. Similarly, species with a sprawling posture would also have to keep their head higher, as already observed in many reptiles 12,[41][42][43] . The relationship between LSC orientation and phylogeny has been empirically anticipated as the way the LSC enters the vestibule (either directly above the posterior ampulla or at different levels within the ampulla) is distinctive between different clades of ruminant 50,51,[86][87][88] . The selection of an Early Burst model as the best fit for the whole dataset suggests that the evolution of head posture might represent a classic example of adaptive radiation 61,78 . This may be the effect of the abundance of bovids in our dataset, which originated in the early Neogene and rapidly adapted to a wide variety of ecological niches, diet, body mass, social behavior, and habitat 48,89 . However, the presence of a strong phylogenetic signal for both head posture variables cannot be directly interpreted as evidence for such a radiation in terms of evolutionary process, which would require further analyses of diversification rates 90 . The increase of phenotypic divergence resulting in different niches for a given character in a clade does not always correspond to an adaptive radiation, even when it closely matches the underlying phylogeny. This is due to ecological interactions between distantly related species, which can result in a similar timing of evolutionary shifts for distinct clades, the detail of which is often very difficult to decipher without exhaustively sampling each of these clades 91 . Indeed, even if an early radiation in head posture diversity would be consistent with the high discrepancy between taxonomic groups for both variables (Fig. 4), the small sample size for all groups except Ruminantia prevents a straightforward discussion of specific evolutionary constraints in that context. Furthermore, other factors such as a high proportion of sympatric species in the sample may also artificially increase the fit of an EB evolutionary model 78 . A larger and broader sampling among mammals is thus likely to blur such a signal and could result in a more homogeneous distribution of residuals that would not necessarily match so closely the observed pattern of relative phylogenetic proximity. Diet. 
As grazers have to keep their head low while foraging on grass, whereas browsers have to catch leaves higher in bushes and trees, and because herbivores spend most of their time acquiring low-energetic food 49,92,93 , it is expected on an evolutionary scale that the skull, neck musculature, and vestibular apparatus of herbivores would adapt to these different feeding strategies and that it would reflect in their head posture, even at rest 20,30,45 . As such, a gradually more anteriorly tilted neutral head posture is expected as moving from browsing species to mixed feeders, and finally grazers 20,30 . Schellhorn 30 was the first to compare the reconstructed cranial orientation of Rhinocerotidae to their actual head posture (using an open-access database of photographs) and found results consistent with such a gradient. Measuring the reconstructed cranial orientation from Schellhorn's published figures and adding our own observations of neutral head posture, we do find a more tilted reconstructed cranial orientation in the grazing Cerathotherium (average reconstructed cranial orientation = 38°), than in the mixed feeder Rhinoceros (average reconstructed cranial orientation = 34°), and the browsing Dicerorhinus and Diceros (average reconstructed cranial orientation = 31°) (Fig. 9); however, this gradient does not reflect on the neutral head posture that shows no particular trend among rhinocerotids (Fig. 9). Despite the large difference between the average head posture of browsers and grazers, the very low head posture of mixed feeder rhinocerotids casts some doubts on the validity of the correlation between head posture and diet (Fig. 9). The Cervidae constitutes a more striking example (Fig. 10). In cervids, the head is consistently kept within about 10° around the average neutral head posture (20°) regardless of diet, and browser and grazers are not distinguishable based on neutral head posture (average = 30° for browser; average = 28° for grazers) or reconstructed cranial orientation (average = 41° for browsers; average = 45° for grazers) ( Table 1). The situation in rhinocerotids illustrates that for the whole dataset. Diet is found to have no statistical effect on neutral head posture (p-value = 0.516), and its relationship to the reconstructed cranial orientation is barely significant (p-value = 0.046). Average values for neutral and reconstructed head postures show a visible increase in tilting with a more grass-rich diet, but there is a strong overlap between dietary groups for both variables (Fig. 5). This suggests that even though diet could potentially be reconstructed in extinct species using the orientation of the LSC, caution should be taken as i) browsers and grazers could be statistically discriminated, but mixed feeders could not; ii) the correlation between reconstructed cranial orientation and diet does not seem to reflect on the neutral head posture. The reason why remains unknown; and iii) the high p-value suggests that adding more data (particularly CT data) in the future may affect this correlation. Semi-aquatic adaptation. No significant result indicative of a correlation between habitat and head posture or reconstructed cranial orientation was found; however, semi-aquatic species show a noticeably high reconstructed posture of the skull on average (23°) compared to other species (Fig. 6). 
The low number of semiaquatic species in the dataset (one rhinocerotid, three tapirids, and two hippopotamids, Table 1) likely prevents this trend from being identified as significant in our sample. A high head posture is not observed in semi-aquatic species (Fig. 6), even though it would be expected of species that have to keep breathing above water level most of the time while immersed 27,30 . Recently, a semi-aquatic habit for the Triassic archosaur Proterosuchus and the therapsid Lystrosaurus has been hypothesized, as these two species would have their head tilted upward anteriorly when the plane of the LSC is horizontal 27,94 (Fig. 11c). In Proterosuchus this upward tilting would be about 17° 27 , whereas in Lystrosaurus it would be between 19° and 23° 94 . However, an upward tilting of the neutral head posture is also habitually observed in many fully terrestrial species, such as Camelus dromedarius, and was occasionally spotted in Capra ibex (Fig. 11a; see Supplementary Table S1). Among modern archosaurs, an upward tilting of the head is observed in some sea birds and the common starling Sturnus vulgaris 8 . An upward tilting of the beak when the dry skull is held with the plane of the LSC horizontal seems to be observed in the razorbill (Alca torda) and the heron (Ardea cinerea) (as inferred from the figures and measurements in Duijm 8 , see Supplementary Table S1); however, the reconstructed cranial orientations in the razorbill and heron vary between 0° and −4° only (Fig. 11b). Among fossil species, the likely terrestrial sauropod Ngwevu intloko (Fig. 11c) would also have had its reconstructed cranial orientation tilted 17° above the horizontal (identified as Massospondylus carinatus in Sereno et al. 20 ; see Chapelle et al. 95 ). The peculiar orientation of the LSC in Lystrosaurus, Proterosuchus, and Massospondylus is remarkable, as such a downward tilting of the LSC greater than 15° has never been found in any modern species to date, particularly not in the semi-aquatic species studied here, which all have a posteriorly tilted LSC like all other "ungulates" (average reconstructed cranial orientation of semi-aquatic species = 23°) (Fig. 6; Table 1). The heron and razorbill mentioned above would have an anterior tilting of less than 4° (Supplementary Table S1), and Duijm's 8 dataset includes mostly semi-aquatic species, which limits comparisons. Overall, a correlation between an upward tilting of the LSC and a semi-aquatic lifestyle is not supported by current data. Notably, the most aquatic species of the dataset, Hippopotamus amphibius, stands out on the scatter-plots as an outlier (Fig. 4). This is due to the large difference between its strongly anteriorly tilted neutral head posture and its almost horizontal reconstructed cranial orientation (difference = 39°). Hippopotamus amphibius normally spends very little time on land 49 , and it can be hypothesized that its bony labyrinth morphology would be more adapted to life in water than on land 96 . This hypothesis is supported by the orientation of the head while swimming in H. amphibius, which is more consistent with its reconstructed cranial orientation (Fig. 12), and by the fact that the difference between the neutral and reconstructed head postures in the more terrestrial Hexaprotodon liberiensis (23°) falls more within the range of variation of "ungulates" (Fig. 4). Further field observations of underwater hippopotamus head posture will be necessary to address this hypothesis.
The influence of head-butting. Taxa that engage in head-to-head combat have almost the same average neutral orientation of their head as non-head-butting taxa (34° and 33°, respectively) (Fig. 7). In sharp contrast, they show a significantly more anteriorly tilted reconstructed cranial orientation (average = 46°) compared to non-head-butting taxa (average = 33°). Unlike what is observed between browsers and grazers, the values here are markedly different (Fig. 7) and the difference is highly statistically significant (p-value = 0.002). It is unlikely that the reconstructed cranial orientation reflects the posture during head-butting, as animals keep their head extremely low during this activity 49,92,93 , much lower than their corresponding reconstructed cranial orientation (Fig. 13b).

Figure 13. Comparison of the cranial (transparent), endocranial (pink), and LSC (green) orientations in a non-head-butting species (Tapirus indicus, a) and a head-butting species (Connochaetes taurinus, b).

A phenomenon of re-orientation of the braincase and basicranium in head-butting "ungulates" that would not affect head posture overall seems more probable. Such cranial flexure would result in a misalignment of the main axis of the braincase with that of the snout (Fig. 13), a condition termed cyptocephaly and commonly encountered in head-butting "ungulates" 33,48,92,[97][98][99] . This implies that natural selection on the alignment of the plane of the LSC to the horizontal was muted by another, likely more important adaptation to head-butting. This may be the necessity to re-orientate the braincase and basicranium in order to align the fighting surface of the skull, occipital condyles, and vertebral column to help dissipate the energy of the impact to the body and away from the brain 24,100,101 . Another hypothesis would be that the re-organization of the braincase serves to accommodate the development of the large cranial apparatuses found in most head-butting species (e.g. horns and antlers) 29 . In our dataset, 56% of species without horns do not head-butt, and 83% of species with horns do head-butt (Supplementary Table S1), which would partly support this hypothesis. In contrast, intraspecific variations in the presence or absence of horns appear to have no impact on neutral head posture on average (Fig. 14; Supplementary Table S1). These two hypotheses will need further observations and biomechanical modeling to be addressed adequately.

Concluding remarks
Neutral head posture is here found to be significantly correlated to the orientation of the plane of the LSC in "ungulate" mammals, but this relationship is loose, and it appears that diet and head-butting have an effect on LSC orientation although not on neutral head posture as would be expected. This suggests an overall relaxed constraint on the alignment of the plane of the LSC to the horizontal at rest. Head posture during locomotion and/or adaptation to head-butting might play a more significant role in the orientation of the LSC than its horizontality at rest, two possibilities that will have to be addressed further. In this contribution, some noteworthy trends between the orientation of the LSC, body mass, diet, adaptation to a semi-aquatic environment, and head-butting are pointed out, although many of these ecological components are difficult to disentangle.
"Ungulates" living in closed habitats are often smaller than in open habitats, more solitary, browsers and tend to fight for mates by stabbing each other with their horns and teeth, whereas species from more open habitats usually graze in large herds and perform head-butting to ascertain dominance and attract mates 34,36,40,47 . Finally, although this study finds that there is some interesting ecological and behavioral signal in the orientation of the LSC of ungulates that could be exploited by paleontologists, it is crucial to highlight that the phylogenetic signal was highly significant for all the variables examined here and as such, what the orientation www.nature.com/scientificreports/ of the LSC reflects the best in "ungulates" is their phylogeny more than anything else. Further understanding of the evolutionary processes associated with such a strong phylogenetic disparity will require investigating each subclade in the sample individually and a more exhaustive sample for each of them. Data availability The datasets analysed during the current study are available in the Supplementary
9,165.8
2020-11-11T00:00:00.000
[ "Biology", "Environmental Science" ]
Decomposition of Symmetry into Ordinal Quasi-Symmetry and Marginal Equimoment for Multi-way Tables For the analysis of square contingency tables with ordered categories, Agresti (1983) introduced the linear diagonals-parameter symmetry (LDPS) model. Tomizawa (1991) considered an extended LDPS (ELDPS) model, which has one more parameter than the LDPS model. These models are special cases of Caussinus (1965) quasi-symmetry (QS) model. Caussinus showed that the symmetry (S) model is equivalent to the QS model and the marginal homogeneity (MH) model holding simultaneously. For square tables with ordered categories, Agresti (2002, p.430) gave a decomposition for the S model into the ordinal quasi-symmetry and MH models. This paper proposes some decompositions which are different from Caussinus’ and Agresti’s decompositions. It gives (i) two kinds of decomposition theorems of the S model for two-way tables, (ii) extended models corresponding to the LDPS and ELDPS, and the generalized model further for multi-way tables, and (iii) three kinds of decomposition theorems of the S model into their models and marginal equimoment models for multi-way tables. The proposed decompositions may be useful if it is reasonable to assume the underlying multivariate normal distribution. Zusammenfassung: Zur Analyse quadratischer Kontingenztafeln mit geordneten Kategorien führte Agresti (1983) das lineare Diagonal-Parameter Symmetrie (LDPS) Modell ein. Tomizawa (1991) betrachtete ein erweitertes LDPS (ELDPS) Modell, das um einen Parameter mehr hat als das LDPS Modell. Diese Modelle sind Spezialfälle des Quasi-Symmetrie (QS) Modells von Caussinus (1965). Caussinus zeigte, dass das Symmetrie (S) Modell äquivalent dem QS Modell ist und dass das marginale Homogenitäts(MH) Modell dann auch hält. Für quadratische Tafeln mit geordneten Kategorien gab Agresti (2002, p.430) eine Zerlegung des S Modells in das ordinale Quasi-Symmetrie und das MH Modell an. Wir schlagen Zerlegungen vor, die sich von jenen in Caussinus und Agresti unterscheiden. Wir liefern (i) zwei Arten Zerlegungssätze des S Modells für zwei-weg Tafeln, (ii) erweiterte Modelle entsprechend dem LDPS und ELDPS, das generalisierte Modell für mehr-weg Tafeln, and (iii) drei Arten Zerlegungssätze des S Modells in deren Modelle und marginal Equimoment Modelle für mehr-weg Tafeln. Die vorgeschlagenen Zerlegungen könnten nützlich sein, falls die Annahme einer zugrunde liegenden multivariaten Normalverteilung begründet ist. Introduction Suppose that an R × R square contingency table has the same categories in the row classification as in the column classification.Let X 1 and X 2 denote the row and column variables, respectively, and let p ij denote the probability that an observation will fall in the ith row and jth column of the table (i, j = 1, . . 
., R).Thus, Pr(X 1 = i, X 2 = j) = p ij .The symmetry (S) model is defined as where ψ ij = ψ ji (Bowker, 1948;Bishop, Fienberg, and Holland, 1975, p.282).This indicates that the probability that an observation will fall in the (i, j) cell, i = j, is equal to the probability that the observation falls in the symmetric (j, i) cell.Caussinus (1965) considered the quasi-symmetry (QS) model, defined by where ψ ij = ψ ji .A special case of this model with {α i = β i } is the S model.Denote the odds ratio for rows i and j (> i) and columns s and t (> s) by θ (i<j;s<t) .Thus θ (i<j;s<t) = (p is p jt )/(p js p it ).Using the odds ratios, the QS model is further expressed as Therefore, the QS model has characterization in terms of symmetry of odds ratio.For the QS model, also see, e.g., Bishop et al. (1975, p.286), Goodman (1979a), Darroch and McCloud (1986), and Agresti (2002, p.425). The marginal homogeneity (MH) model is defined by where (Stuart, 1955, Bishop et al., 1975, p.293).This indicates that the row marginal distribution is identical with the column marginal distribution. For square tables with ordered categories, Agresti (1984, p.203) proposed the linear diagonals-parameter symmetry (LDPS) model defined by where ψ ij = ψ ji .A special case of this model obtained by putting δ = 1 is the S model.Note that the LDPS model is a special case of the diagonals-parameter symmetry model of Goodman (1979b).The LDPS model may be also expressed as where φ ij = φ ji .Moreover, it may be expressed as This indicates that the probability that an observation will fall in the (i, j) cell, i < j, is δ j−i times higher than the probability that the observation falls in the (j, i) cell.Moreover, Agresti (2002, p.429) considered the ordinal quasi-symmetry (OQS) model defined by where ψ ij = ψ ji and u 1 ≤ • • • ≤ u R denote the ordered scores which assigned for both the rows and columns.Note that the OQS model with integer scores {u i = i} is identical to the LDPS model.Tomizawa (1991) considered a model defined by where Agresti (1983) described the relationship between the LDPS model and the joint bivariate normal distribution as follows.When σ 2 1 = σ 2 2 , the f (u, v)/f (v, u) has the form ξ v−u for some constant ξ, and hence the LDPS model may be appropriate for a square ordinal table if it is reasonable to assume an underlying bivariate normal distribution with equal marginal variances.Tomizawa (1991) described that the ELDPS model rather than the LDPS model would be appropriate if it is reasonable to assume an underlying bivariate normal distribution which does not require the equality of marginal variances.Caussinus (1965) gave the theorem that the S model holds if and only if both the QS and MH models hold for square contingency tables.Bishop et al. (1975, p.287) and Bhapkar and Darroch (1990) gave the decompositions for the S model for three-way tables and for multi-way tables, respectively.Agresti (2002, p.429) showed that the S model holds if and only if both the OQS and MH models hold.Note that the LDPS (OQS) and ELDPS models are special cases of the QS model.Since the OQS model has restrictions stronger than the QS model, we are interested in decomposing the S model into a model with weaker restrictions instead of the MH model. 
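Because the LDPS model states that p_ij is δ^(j−i) times p_ji for i < j, the defining property can be illustrated with a short numerical sketch that recovers δ from the upper/lower-triangle log-ratios of a table built to satisfy the model exactly. This is only an informal check of the structure, not the maximum likelihood fit used for the data analyses discussed later.

```python
import numpy as np

def ldps_delta_sketch(p):
    """Rough illustration of the LDPS structure p_ij = delta**(j-i) * p_ji (i < j):
    estimate log(delta) by least squares on the empirical log-ratios.
    This is only an informal check, not the maximum likelihood fit described
    in the text (which would use Newton-Raphson on the log-likelihood)."""
    R = p.shape[0]
    x, y = [], []
    for i in range(R):
        for j in range(i + 1, R):
            if p[i, j] > 0 and p[j, i] > 0:
                x.append(j - i)
                y.append(np.log(p[i, j] / p[j, i]))
    slope = np.dot(x, y) / np.dot(x, x)   # no intercept: log-ratio = (j-i)*log(delta)
    return np.exp(slope)

# Toy 4x4 table that follows LDPS exactly with delta = 1.5.
delta = 1.5
base = np.array([[4., 3., 2., 1.],
                 [3., 4., 3., 2.],
                 [2., 3., 4., 3.],
                 [1., 2., 3., 4.]])       # symmetric part, psi_ij = psi_ji
p = np.array([[base[i, j] * delta ** max(j - i, 0) for j in range(4)] for i in range(4)])
print(ldps_delta_sketch(p / p.sum()))     # ~1.5
```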
In this paper we propose the other decompositions for the S model and give some extended models for the multi-way tables.Section 2 proposes two kinds of decomposition theorems of the S model for two-way tables.Sections 3 and 4 propose the extended models corresponding to the LDPS and ELDPS models, and the generalized model further for multi-way tables, and give some decomposition theorems of the S model.Ordinal Quasi-Symmetry and Marginal Equimoment Define the monotonic function as where the function is specified.Consider the marginal mean equality (ME) model defined by where µ t = E(g(X t )).This indicates that the mean of g(X 1 ) is equal to the mean of g(X 2 ).We shall consider the decompositions for the S model as follows: Then the LDPS and ME models are expressed as and where where 1), ( 2) and (3), we see and Note that K(•, •) is the Kullback-Leibler information.From (4) we obtain where Since π is fixed, we see and then {p * ij } uniquely minimizes K(p, π) (see Darroch and Ratcfiff (1972); Darroch and Speed (1983); Bhapkar and Darroch (1990). Let where and then {p * * ij } uniquely minimizes K(p, π).Therefore, we see Namely the S model holds.The proof is completed. Next, consider the marginal variance equality (VE) model defined by where σ 2 t = var(g(X t )).This indicates that the variance of g(X 1 ) is equal to the variance of g(X 2 ).We shall consider the other decomposition for the S model as follows. Theorem 2.2 The S model holds if and only if all the ELDPS, ME and VE models hold. The proof is omitted because it is obtained in a similar way to the proof of Theorem 2.1.Theorems 2.1 and 2.2 may be useful for seeing the reason for the poor fit when the S model fits the data poorly. Extension to Three-way Tables We shall extend the LDPS and ELDPS models to three-way tables and consider a generalized model.Furthermore we shall give the some decomposition theorems of the S model for three-way tables. Models For an R × R × R contingency table, let X 1 , X 2 , and X 3 denote the first, second, and third variable, respectively, and let p ijk denote the probability that an observation will fall in the (i, j, k) cell of the table for 1 ≤ i, j, k ≤ R. The symmetry model is defined by where (Bishop et al., 1975, p.301).We shall denote this model by S-3.First, consider a model defined by where Without loss of generality we may set, e.g., α 3 = 1.This model may be also expressed as where (l, m, n) is any permutation of (i, j, k).It is easily seen that this model is an extension of the LDPS model to three-way tables.We shall denote this model by LDPS-3.For example, when X 3 is constant, p ijk /p jik = (α 2 /α 1 ) j−i , namely, the more the difference between X 1 and X 2 is large, the more the LDPS-3 model shifts from symmetry greatly exponentially.Consider now three variables U , V and W having a joint normal distribution with means for some constants ξ 1 , ξ 2 , and ξ 3 .Hence if it is reasonable to assume this underlying three-variate normal distribution, the LDPS-3 model may be appropriate for an ordinal three-way table (see Section 7). 
Secondly, consider a model defined by where Without loss of generality we may set, e.g., α 3 = β 3 = 1.It is easily seen that this model is an extension of the ELDPS model to three-way tables because for two-way tables this model indicates that the p ij /p ji has the form δ j−i γ j 2 −i 2 for some constants δ and γ.We shall denote this model by ELDPS-3.If it is reasonable to assume an underlying three-variate normal distribution which does not require the equality of marginal variances, then the ELDPS-3 model rather than the LDPS-3 model may be appropriate for an ordinal three-way table (see Section 7). Finally, consider a model defined by where Without loss of generality we may set, e.g., α 3 = β 3 = γ 23 = 1.We shall denote this model by GLDPS-3.A special case of this model obtained by putting γ 12 = γ 13 = γ 23 = 1 is the ELDPS-3 model; namely, this is an extension of the ELDPS-3 model.If it is reasonable to assume an underlying more general three-variate normal distribution which does not require the equality of marginal variances and the equality of correlations, then the GLDPS-3 model rather than the ELDPS-3 model may be appropriate for an ordinal three-way table (see Section 7). Decompositions for the Symmetry Model Using the monotonic function as where this function is specified, first, consider the marginal mean equality (ME-3) model defined by where µ t = E(g(X t )). Secondly, consider the marginal variance equality (VE-3) model defined by where σ 2 t = var(g(X t )).Finally, consider the correlation equality (CE-3) model defined by where ρ st is the correlation between g(X s ) and g(X t ).We obtain the following theorems.The proofs of these theorems are omitted because these are obtained in a similar way to the proof of Theorem 2.1. Extension to Multi-Way Tables We extend the models and decompositions in Section 3 to multi-way tables.For an R T contingency table, let p i 1 ...i T denote the probability that an observation falls in the (i 1 , . . ., i T ) cell of the table (i t = 1, . . ., R; t = 1, . . ., T ). We shall denote this model by S-T .In particular, when T = 3, the S-T model is defined as Secondly, we now consider a model defined by where ψ i 1 ...i T = ψ j 1 ...j T with (j 1 , . . ., j T ) ∈ D(i 1 , . . ., i T ).Note that we may set, e.g., α T = 1.We shall denote this model by LDPS-T . Thirdly, we consider a model defined by where ψ i 1 ...i T = ψ j 1 ...j T with (j 1 , . . ., j T ) ∈ D(i 1 , . . ., i T ).Note that we may set, e.g., α T = β T = 1.We shall denote this model by ELDPS-T .Lastly, we consider a model defined by where ψ i 1 ...i T = ψ j 1 ...j T , with (j 1 , . . ., j T ) ∈ D(i 1 , . . ., i T ).Note that we may set, e.g., α T = β T = γ T −1,T = 1.We shall denote this model by GLDPS-T .When T = 2, this model is identical to the ELDPS model.Thus, the GLDPS-T model is defined when T ≥ 3. Note that Bishop et al. (1975, p.303) defined the QS model for three-way tables, and Bhapkar and Darroch (1990) defined the hth-order (1 ≤ h < T ) QS model for multi-way R T tables (also see Agresti, 2002, p.440, for the first order QS model).We note that the LDPS-T and ELDPS-T models are special cases of the first order QS model and the GLDPS-T model is a special case of the second order QS model. Denote the ME, VE and CE models for R T tables by ME-T , VE-T and CE-T , respectively.Then we obtain the following decomposition theorems of the S-T model for R T tables. Theorem 4.1 The S-T model holds if and only if both the LDPS-T and ME-T models hold. 
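For the same reason, the displays defining the remaining three-way and T-way models are missing. The block below is a hedged reconstruction; in particular, the cross-product terms of the GLDPS-3 model are inferred from the stated normalization $\alpha_3=\beta_3=\gamma_{23}=1$ and from the trivariate-normal analogy, and should be read as an assumption rather than the paper's verbatim form:

\begin{align*}
\text{ELDPS-3:}\quad & p_{ijk} = \alpha_1^{\,i}\alpha_2^{\,j}\alpha_3^{\,k}\,\beta_1^{\,i^2}\beta_2^{\,j^2}\beta_3^{\,k^2}\,\psi_{ijk}, \qquad \alpha_3=\beta_3=1;\\
\text{GLDPS-3:}\quad & p_{ijk} = \alpha_1^{\,i}\alpha_2^{\,j}\alpha_3^{\,k}\,\beta_1^{\,i^2}\beta_2^{\,j^2}\beta_3^{\,k^2}\,\gamma_{12}^{\,ij}\gamma_{13}^{\,ik}\gamma_{23}^{\,jk}\,\psi_{ijk}, \qquad \alpha_3=\beta_3=\gamma_{23}=1;\\
\text{ME-3:}\quad & \mu_1=\mu_2=\mu_3; \qquad \text{VE-3:}\ \ \sigma_1^2=\sigma_2^2=\sigma_3^2; \qquad \text{CE-3:}\ \ \rho_{12}=\rho_{13}=\rho_{23};\\
\text{LDPS-}T\text{:}\quad & p_{i_1\cdots i_T} = \Big(\prod_{t=1}^{T}\alpha_t^{\,i_t}\Big)\,\psi_{i_1\cdots i_T};\qquad
\text{ELDPS-}T\text{:}\ \ p_{i_1\cdots i_T} = \Big(\prod_{t}\alpha_t^{\,i_t}\beta_t^{\,i_t^2}\Big)\,\psi_{i_1\cdots i_T};\\
\text{GLDPS-}T\text{:}\quad & p_{i_1\cdots i_T} = \Big(\prod_{t}\alpha_t^{\,i_t}\beta_t^{\,i_t^2}\Big)\Big(\prod_{s<t}\gamma_{st}^{\,i_s i_t}\Big)\,\psi_{i_1\cdots i_T},
\end{align*}

in each case with $\psi_{i_1\cdots i_T} = \psi_{j_1\cdots j_T}$ for every $(j_1,\dots,j_T)\in D(i_1,\dots,i_T)$, the set of permutations of $(i_1,\dots,i_T)$.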
Theorem 4.2 The S-T model holds if and only if all the ELDPS-T, ME-T and VE-T models hold. Theorem 4.3 The S-T model holds if and only if all the GLDPS-T, ME-T, VE-T and CE-T models hold. The proofs of these theorems are omitted because they are obtained in similar ways to the proof of Theorem 2.1. Table 1: Numbers of degrees of freedom (df) for models applied to the R T table (T ≥ 2), where the GLDPS-T model is defined when T ≥ 3. Assume that a multinomial distribution applies to the R T table.The maximum likelihood estimates of expected frequencies under each model could be obtained using the Newton-Raphson method to the log-likelihood equations or using the iterative procedures, for example, the general iterative procedure for log-linear models of Darroch and Ratcfiff (1972).Each model can be tested for goodness-of-fit by, e.g., the likelihood ratio chi-square statistic (denoted by G 2 ) with the corresponding degrees of freedom (df).Note that e.g., for square tables, G 2 is where n ij is the observed frequency in the (i, j)th cell, and m ij is the maximum likelihood estimate of expected frequency m ij under the given model.The numbers of df for models are given in Table 1.Note that the number of df for the S-T model is equal to the sum of those for the decomposed models. Example 1 Table 2 taken directly from Agresti (1984, p.206) is the father's and son's occupational mobility data in Britain.These data have been analyzed by some statisticians including Bishop et al. (1975, p.100), Goodman (1981Goodman ( , 1984)), Agresti (1984, pp.205-206), and Tomizawa (1990a, 1990b, 1990c, 1991).Table 3 gives the values of the likelihood ratio statistic G 2 for models applied to these data.The S model fits the data in Table 2 very poorly since the value of G 2 is 37.5 (p < 0.001) with 10 df.The LDPS model does not fit these data so well yielding G 2 = 17.1 (p = 0.047) with 9 df.However the ELDPS model fits these data well yielding G 2 = 11.1 Table 2: Occupational status for British father-son pairs; from Agresti (1984, p.206).The parenthesized values are the maximum likelihood estimates of expected frequencies under the ELDPS model.(p = 0.194) with 8 df.Using Theorems 2.1 and 2.2, we shall consider the reason why the S model fits these data poorly. The VE model with g(k) = k, k = 1, . . ., 5, fits the data in Table 2 very well, but the ME model with g(k) = k fits these data poorly (see Table 3).Therefore it is seen from Theorem 2.2 that for these data, the poor fit of the S model is caused by the influence of the poor fit of the ME model rather than the ELDPS and VE models because the ELDPS and VE models fit these data well. Example 2 The data in Table 4 give results of the treatment group only in randomized clinical trials conducted by a pharmaceutical company in anemic patients with cancer receiving chemotherapy.The response is the patient's hemoglobin (Hb) concentration at baseline (before treatment) and following 4 and 8 weeks of treatment.Table 4 shows the 3 × 3 × 3 array of counts of Hb response that is classified as ≥ 10g/dl, 8 − 10g/dl, and < 8g/dl. 
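To make the goodness-of-fit computation concrete, the following is a minimal Python sketch of the G^2 test for the S model on a square table. It exploits the fact that the S model has closed-form maximum likelihood estimates, m_ij = (n_ij + n_ji)/2 off the diagonal; models such as LDPS or QS would instead require Newton-Raphson or an iterative scaling procedure, as noted in the text. The 5 x 5 table used here is purely hypothetical and is not the British mobility data of Table 2; the function names are illustrative choices.

import numpy as np
from scipy.stats import chi2

def fit_symmetry(n):
    """Closed-form MLEs of expected frequencies under the S model for a square table."""
    n = np.asarray(n, dtype=float)
    return (n + n.T) / 2.0          # m_ij = (n_ij + n_ji)/2; the diagonal is unchanged

def g2(n, m):
    """Likelihood-ratio chi-square statistic G^2 = 2 * sum_ij n_ij * log(n_ij / m_ij)."""
    n, m = np.asarray(n, float), np.asarray(m, float)
    mask = n > 0                     # cells with n_ij = 0 contribute nothing
    return 2.0 * np.sum(n[mask] * np.log(n[mask] / m[mask]))

# Hypothetical 5 x 5 illustration (NOT the data of Table 2)
n = np.array([[50, 45,  8,  18,   8],
              [28, 174, 84, 154,  55],
              [11, 78, 110, 223,  96],
              [14, 150, 185, 714, 447],
              [ 3, 42,  72, 320, 411]])
m = fit_symmetry(n)
R = n.shape[0]
df = R * (R - 1) // 2                # df of the S model on an R x R table
G2 = g2(n, m)
print(f"G2 = {G2:.1f}, df = {df}, p = {chi2.sf(G2, df):.4f}")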
The S-3 model fits these data in Table 4 very poorly, yielding G 2 = 76.2 (p < 0.001) with 17 df (Table 5).By using the decompositions for the S-3 model, we shall consider the reason why the S-3 model fits these data poorly.Each of the GLDPS-3 and VE-3 models with g(k) = k fits the data in Table 4 very well, but the LDPS-3, ELDPS-3, ME-3 and CE-3 models fit these data poorly (see Table 5).From Theorem 3.2, the poor fit of the S-3 model is caused by the influence of the poor fits of both the ELDPS-3 and ME-3 models with g(k) = k (rather than the VE-3 model).Also, from Theorem 3.3, the poor fit of the S-3 model is caused by the influence of the poor fits of both the ME-3 and CE-3 models with g(k) = k (rather than the GLDPS-3 and VE-3 models).Ohio, 1940(from Bishop et al., 1975, p.305). Example 3 The data in Table 6, taken directly from Bishop et al. (1975, p.305), give the 3 × 3 × 3 array of counts of stationary two-step transitions in the panel survey of potential voters in Erie County, Ohio, 1940, which summarize the voting intentions of the 1940 presidential elections.Although the voter's supportive political party was classified into Republican, Democrat, and Undecided, we regard the voters with 'Undecided' as the middle class which could not decide Republican or Democrat, and give an order like Republican, Undecided, and Democrat. The S-3 model fits these data poorly, yielding G 2 = 229.8with 17 df (Table 7).By using the decompositions for the S-3 model, we shall consider the reason why the S-3 model fits these data poorly. The ME-3 model does not fit the data in Table 6 very well since the value of G 2 is 6.58 (p < 0.05) with 2 df, but it fits much better than any other models (Table 7).In terms of the various decompositions theorems, we can see that the poor fit of the S-3 model may be caused by the influence of the more poor fits of the other models rather than the ME-3 model.decompositions for the S-T model would be useful for seeing the reason for the poor fit when the S-T model fits the data poorly.Moreover, the decomposition for the S-T model into more (three or four) models rather than into two models would be useful for seeing in more details the reason for the poor fit when the S-T model fits the data poorly. Because the S model can be decomposed in at least two ways, one may be interested in which decomposition should one apply.For square tables, from Theorems 2.1 and 2.2, the S model is decomposed into (1) the LDPS and ME models and (2) the ELDPS, ME, and VE models.However, the LDPS model is not equivalent to the ELDPS and VE models holding simultaneously.Therefore both decompositions should be applied for analyzing the data. 
It may seem to readers that in Examples the decomposed model (e.g., the LDPS and ME models) are tested after the S model is rejected, and the test of the S model can therefore be seen as a preliminary test.However the decomposed models should be applied even if the S model is accepted.Assuming that the LDPS model holds true, the hypothesis that the S model holds, i.e., δ = 1 in the LDPS model, can be tested by the difference between the G 2 values for the S and LDPS models.Even if the S model fits the data well, the structure of complete symmetry may not exist for the data.For the ordinal data, then we are also interested in seeing the structure of asymmetry, e.g., the structure of the LDPS model.The estimate of parameter δ in the LDPS model would be useful for making inferences such as that X 1 is stochastically less than X 2 or vice versa according as the estimated δ is greater than 1 (or less than 1).So, for the ordinal data, the LDPS model would be useful even when the S model fits the data well.The ME and VE models would be useful for seeing the structure of the marginal distributions. It also may seem that the decision procedure consists of a sequence of likelihood ratio tests, and these might be a simultaneous testing problem.However, when we want to see which model of the decomposed models has the more poor fit (e.g., by p-values), we would not need the adjustment of the individual significance levels.If we want to judge whether or not the S model holds by judging whether or not each of decomposed models holds at the given significance level, we had better adjust the individual significance level. Theorem 3. 1 The S-3 model holds if and only if both the LDPS-3 and ME-3 models hold.Theorem 3.2 The S-3 model holds if and only if all the ELDPS-3, ME-3 and VE-3 models hold.Theorem 3.3 The S-3 model holds if and only if all the GLDPS-3, ME-3, VE-3 and CE-3 models hold. Table 3 : Likelihood ratio chi-square values G 2 for models applied to the data in Table2. Table 4 : Hemoglobin concentration at baseline, 4 weeks and 8 weeks in carcinomatous anemia patients from a randomized clinical trial.The parenthesized values are the maximum likelihood estimates of expected frequencies under the GLDPS-3 model. Table 5 : Likelihood ratio chi-square values G 2 for models applied to the data in Table4. Table 6 : Stationary two-step transitions in a panel study of potential voters in Erie County, Table 7 : Likelihood ratio chi-square values G 2 for models applied to the data in Table6.
5,299
2016-04-03T00:00:00.000
[ "Mathematics" ]
Live imaging and biophysical modeling support a button-based mechanism of somatic homolog pairing in Drosophila The spatial configuration of the eukaryotic genome is organized and dynamic, providing the structural basis for regulated gene expression in living cells. In Drosophila melanogaster, 3D genome organization is characterized by the phenomenon of somatic homolog pairing, where homologous chromosomes are intimately paired from end to end. While this organizational principle has been recognized for over 100 years, the process by which homologs identify one another and pair has remained mysterious. Recently, a model was proposed wherein specifically-interacting “buttons” encoded along the lengths of homologous chromosomes drive somatic homolog pairing. Here, we turn this hypothesis into a precise biophysical model to demonstrate that a button-based mechanism can lead to chromosome-wide pairing. We test our model and constrain its free parameters using live-imaging measurements of chromosomal loci tagged with the MS2 and PP7 nascent RNA labeling systems. Our analysis shows strong agreement between model predictions and experiments in the separation dynamics of tagged homologous loci as they transition from unpaired to paired states, and in the percentage of nuclei that become paired as embryonic development proceeds. In sum, our data strongly support a button-based mechanism of somatic homolog pairing in Drosophila and provide a theoretical framework for revealing the molecular identity and regulation of buttons. Introduction Eukaryotic genomes are highly organized within the three-dimensional volume of the nucleus. Advances over the past several decades have revealed a hierarchy of organizational principles, from the large scale of chromosome territories to the smaller-scale patterned folding of chromosomal segments called Topologically Associated Domains (TADs) and the association of active and inactive chromatin into separate compartments (Szabo, Bantignies, and Cavalli 2019) . Disruption of these organizational structures can have dramatic consequences for gene expression and genome stability (Lupiáñez et al. 2015;Kragesteen et al. 2018;Despang et al. 2019;Rosin et al. 2019) , emphasizing the importance of fully understanding underlying mechanisms of three-dimensional genome organization. While many principles of genome organization are common among eukaryotes, differences have been noted among different organisms and cell types. For example, in Drosophila , an additional layer of nuclear organization exists in somatic cells wherein homologous chromosomes are closely juxtaposed from end to end, a phenomenon known as somatic homolog pairing (Joyce, Erceg, and Wu 2016;Stevens 1908) . While similar interchromosomal interactions occur transiently in somatic cells of other species and during early meiotic phases of most sexually-reproducing eukaryotes, the widespread and stable pairing of homologous chromosomes observed in somatic cells of Drosophila appears to be unique to Dipteran flies (King et al. 2019;Joyce, Erceg, and Wu 2016;McKee 2004) . Notably, the close juxtaposition of paired homologs can have a dramatic impact on gene expression through a process known as transvection, where regulatory elements on one chromosome influence chromatin and gene expression on a paired chromosome (Fukaya and Levine 2017;Duncan 2002) . 
However, although somatic homolog pairing was first described over 100 years ago (Stevens 1908) , the molecular mechanisms by which homologous chromosomes identify one another and pair have yet to be described. Figure 1A) (Fung et al. 1998;Hiraoka et al. 1993) . This model is further supported by recent studies that converge on a "button" model for pairing, which postulates that pairing is initiated at discrete sites along the length of each chromosome ( Figure 1B) (Viets et al. 2019;Rowley et al. 2019) . However, the molecular nature of these hypothesized buttons is as yet unclear, nor is it clear whether this proposed model could lead to de novo pairing in the absence of some unknown active process that can identify and pair homologous loci. In light of these advances, our understanding of the initiation and maintenance of pairing would be greatly facilitated by the establishment of a biophysical model that defines parameters for the activities of pairing buttons, informed by observations of pairing dynamics in living cells. In this paper, we turn the "button" mechanism for somatic homolog pairing into a precise biophysical model that can predict the behaviors of chromosomes over time. Our simulations show that chromosome-wide pairing can be established through random encounters between specifically-interacting buttons that are dispersed across homologous chromosomes at various possible densities using a range of binding energies that are reasonable for protein-protein interactions. Importantly, we find that active processes are not necessary to explain pairing via our model, as all of the interactions necessary for stable pairing are initiated by reversible random encounters that are propagated chromosome-wide. We test our model and constrain its free parameters by assessing its ability to predict pairing dynamics using live-imaging. Our model can successfully predict that, once paired, homologous loci remain together in a highly stable state. Furthermore, the model can also predict the dynamics of pairing through the early development of the embryo as measured by the percentage of nuclei that become paired as development proceeds, and by the dynamic interaction of individual loci as they transition from unpaired to paired states. In sum, our analysis provides the necessary quantitative data to strongly support a button model as the underlying mechanism of somatic homolog pairing and establishes the conceptual infrastructure to uncover the molecular identity, functional underpinnings, and regulation of buttons. 1) FORMALIZING A BUTTON-BASED POLYMER MODEL OF HOMOLOGOUS PAIRING Several prior studies have suggested that somatic homolog pairing in Drosophila may operate via a button mechanism between homologous loci (Viets et al. 2019;Rowley et al. 2019;Gemkow, Verveer, and Arndt-Jovin 1998;AlHaj Abed et al. 2019;Erceg et al. 2019;Fung et al. 1998) . In this model, discrete regions capable of pairing specifically with their corresponding homologous segments are interspersed throughout the chromosome. To quantitatively assess the feasibility of a button mechanism, we implemented a biophysical model of homologous pairing ( Fig. 2A). Briefly, we modeled homologous chromosome arms as polymers whose dynamics are driven by short-range, attractive, specific interactions between homologous loci, the so-called buttons, to account for pairing. These buttons are present at a density along the chromosome and bind specifically to each other with an energy . 
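The essential ingredient of the button model described above is specificity: a button can form an attractive contact only with the button at the identical genomic position on the homologous chromosome. The short Python sketch below illustrates that rule only; the density value, the random placement of buttons, and all names are illustrative assumptions and are not taken from the study.

import numpy as np
rng = np.random.default_rng(0)

N_MONOMERS = 3200        # 10-kb monomers per chromosome arm, as in the model
BUTTON_DENSITY = 0.65    # one illustrative density; the study scans a range of densities

# Buttons occupy the same genomic positions on both homologs.
is_button = rng.random(N_MONOMERS) < BUTTON_DENSITY

def can_pair(locus_a, locus_b, homologous):
    """Specificity rule of the button model: an attractive contact of strength E_p is
    allowed only between buttons at the same locus on homologous chromosomes."""
    return homologous and (locus_a == locus_b) and bool(is_button[locus_a])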
As E p described in detail in Materials and Methods, we included in our model short-range, non-specific interactions between (peri)centromeric regions to account for the the large-scale HP1-mediated clustering of centromeres that may also impact the global large-scale organization inside nuclei (Strom et al. 2017;Rosin, Nguyen, and Joyce 2018) and thus may interplay with pairing ( Fig. 2 -Supp. Fig. 2D) . As initial conditions for our simulations, we generated chromosome configurations with all centromeres at one pole of the nucleus (a 'Rabl' configuration) (see Supplementary Movies 1 and 2), typical of early embryonic fly nuclei (Csink and Henikoff 1998) . Furthermore, to account for the potential steric hindrance of non-homologous chromosomes that could impede pairing, we simulated two pairs of homologous polymers. Pairing between homologous chromosomes is assumed to be driven by specific, short-range attractive interactions of strength between certain homologous E p regions, named buttons. Each 10kb monomer in the simulation corresponds to one locus. (B) Kymograph of the time-evolution of the distances between homologous regions predicted by the model in one simulated stochastic trajectory for a button density of =65% , an interaction strength of =-1.6k B T, and E p an initial distance . See Figure We systematically investigated the role of the button density along the genome , of the strength of the pairing interaction , and of the initial distance between homologous E p chromosomes in dictating pairing dynamics (Fig. 2D). For a given density, there is a critical d i value of below which no pairing event can be captured independently of the initial conditions E p Fig. 2A) since pairing imposes a huge entropic cost for the polymers and thus requires a sufficient amount of energy to be stabilized. Beyond this critical point, higher strengths of interactions and higher button densities lead to faster and stronger pairing (Fig. 2D left, center). The initial distance is also a crucial parameter. When homologous d i chromosomes are initially far apart, pairing is dramatically slowed down and impaired (Fig. 2D, right) due to the presence of the other simulated chromosomes between them ( Fig. 2 -Sup. Fig. 2B). Altogether, these systematic analyses of model parameters support the view that the homologous button model is compatible with pairing. The key necessary mechanism is the specificity of preferential interactions between homologous regions. Indeed, our model suggests that non-specific interactions between buttons ( Fig. 2 -Sup. Fig. 2C,E) are not compatible with chromosome-wide pairing. These results are consistent with previous works where we showed that the weak, non-specific interactions between epigenomic domains that drive TAD and compartment formation in Drosophila (Ghosh and Jost 2018;Jost et al. 2014) cannot establish and maintain stable pairing (Pal et al. 2019) . 2) LIVE IMAGING REVEALS HOMOLOGOUS PAIRING DYNAMICS The button model presented in Figure 2 makes precise predictions about pairing dynamics at single loci along the chromosome. To inform the parameters of the model and test its predictions, it is necessary to measure pairing dynamics in real time at individual loci of a living embryo. To accomplish this, we employed the MS2/MCP (Bertrand et al. 1998) and PP7/PCP (Chao et al. 2008) systems for labeling nascent transcripts. 
Here, each locus contains either MS2 or PP7 loops which can be visualized with different colors in living embryos (Fukaya, Lim, and Levine 2016;Lim et al. 2018;Chen et al. 2018;Garcia et al. 2013) . Specifically, we designed transgenes encoding MS2 or PP7 loops under the control of UAS (Brand and Perrimon 1993) , and integrated them at equivalent positions on homologous chromosomes (Fig. 3A). Activation of transcription via a source of GAL4 creates nascent transcripts encoding either MS2 or PP7 stem loops, each of which can be directly visualized by maternally providing We focused on embryos that had completed the maternal-to-zygotic transition and began to undergo gastrulation at approximately 2.5 to 5 hours after embryo fertilization, a time in development when pairing begins to increase significantly (Fung et al. 1998 For each case, we imaged 30-60 minute time windows from multiple embryos, and used custom MATLAB scripts to determine the relative 3D distances between chromosomal loci over time. In embryos with both PP7 and MS2 transgenes integrated at polytene position 38F (Fig. 3B, top), the majority of nuclei could be qualitatively classified into one of two categories. In "unpaired" nuclei, homologous loci were typically separated by >1 µm with large and rapid changes in inter-homolog distances (e.g. Fig. 3C, blue), with a mean distance of 2.2 µm and standard deviation (SD) of 1.2 µm averaged over 30 separate nuclei. The measured mean distance between homologous loci was comparable within error, though systematically smaller, relative to the mean distance measured between loci in the negative control, where transgenes are integrated at nonhomologous positions ( Fig. 3C, red, mean distance = 4.0 µm, SD = 1.3 µm, n = 21 nuclei). In contrast, in "paired" nuclei, homologous loci remained consistently close to one another over time, with smaller dynamic changes in interhomolog distance (Fig. 3D, blue, mean distance = 0.4 µm, SD = 0.3 µm, n = 25). Interestingly, while the diffraction-limited signals produced from homologous loci can occasionally overlap in paired nuclei, their average separation was systematically larger than that of the embryos carrying interlaced MS2 and PP7 loops that served as a positive control for co-localization ( Fig. 3D, red, mean distance = 0.2 µm, SD = 0.1 µm, n=44). This control measurement also constitutes a baseline for the experimental error of our quantification of interhomolog distances (Chen et al. 2018) . Our measurements thus confirm previous observations of transgene pairing in the early embryo showing that signals from paired loci maintain close association but do not completely coincide with one another over time (Lim et al. 2018) . Notably, of 38 nuclei qualitatively scored as having paired homologs, we never observed a transition back to the unpaired state over a combined imaging time of more than 8 hours. Analysis of embryos with PP7 and MS2 transgenes integrated in homologous chromosomes at polytene position 53F showed comparable dynamics of interhomolog distances for nuclei in unpaired and paired states ( Fig. 3 -Sup. Fig. 1A, B). Thus, somatic homolog pairing represents a highly stable state characterized by small dynamic changes in the distance between homologous loci. Our assessment thus far has been based on a qualitative definition of pairing. However, our live-imaging data affords us the ability to devise a stringent quantitative definition of homologous pairing. 
To make this possible, we measured inter-transgene distances for both homologous loci as well as for the unpaired and paired controls through gastrulation. We also included measurements from older embryos (~11-12 hours after embryo fertilization) using the driver R38A04-GAL4 (Jenett et al. 2012) to express the transgenes in epidermal cells, where pairing is expected to be widespread. In Figure 3E, we plotted the resulting mean distance vs. standard deviation of the inter-transgene distance for each nucleus analyzed over the length of time each nucleus was tracked (~10-50 minutes). From these data, we established a quantitative and dynamic definition of somatic homolog pairing based on a mean distance <1.0 µm and a corresponding standard deviation <0.4 µm (Fig. 3E, shaded region). By this definition, we consider paired 100% of nuclei that we had qualitatively scored as such, but exclude all nuclei scored as unpaired. As expected, this definition also scores 100% (15/15) of the tracked nuclei from older embryos as paired. Further, data for paired nuclei from early vs. late embryos were in close agreement, demonstrating that pairing observed in early embryos is representative of later stages. We next analyzed the progression of pairing over the first 6 hours of development in single embryos carrying MS2 and PP7 transgenes in homologous chromosomes at positions 38F and 53F. To accomplish this, we collected data for short (~10 minute) intervals every 30 minutes from 2.5 to 6 hours of development, and analyzed interhomolog distances as outlined above. We then plotted the mean distance as a function of standard deviation for each nucleus analyzed at each time point to create a dynamic assessment of somatic homolog pairing over developmental time. As expected, we see an overall decrease in mean interhomolog distance and its standard deviation as development progresses (Fig. 4A, Fig. 3 -Sup. Fig. 1C). To directly compare our analysis to prior studies, we then binned nuclei into paired and unpaired states based on their mean and standard deviation values as defined in Figure 3E, and plotted the percentage of paired nuclei at each developmental time point (Fig. 4B). Consistent with previous literature (Fung et al. 1998) , we observe a steady increase in the proportion of paired nuclei; however, by our dynamical definition of pairing, the percentage of nuclei that are paired is systematically lower at most timepoints when compared to results using DNA-FISH ( Fig. 4 -Sup. Fig. 1). This slight disagreement likely reflects differences between the classic, static definition of pairing based on overlapping DNA-FISH signals in the one frame accessible by fixed-tissue measurements as opposed to our definition that demands loci to be paired over several consecutive frames. In sum, we have demonstrated that our system captures the progression of somatic homolog pairing over developmental time, making it possible to contrast theoretical predictions and experimental measurements. PAIRING PROBABILITY Our button model predicts that the fraction of paired loci as a function of time depends on three parameters: the initial separation between homologous chromosomes d i , the density of buttons along the chromosome , and the button-button interaction energy (Fig. 2D) . As an E p initial test of our model, and to constrain the values of its parameters, we sought to compare model predictions to direct measurements of the fraction of paired loci over developmental time. 
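The original analysis was performed with custom MATLAB scripts; the following Python sketch simply restates the quantitative pairing criterion given above (per-nucleus mean inter-transgene distance < 1.0 µm and standard deviation < 0.4 µm) so the classification rule is explicit. Function names and the data layout are assumptions for illustration.

import numpy as np

def classify_nucleus(distances_um, mean_thresh=1.0, sd_thresh=0.4):
    """Score one nucleus as paired/unpaired from its inter-homolog distance trace.

    distances_um : 1D array of 3D inter-transgene distances (in microns) for one
                   tracked nucleus over its ~10-50 min observation window.
    Returns (is_paired, mean_distance, sd_distance) using the criterion in the text.
    """
    d = np.asarray(distances_um, dtype=float)
    mean_d, sd_d = d.mean(), d.std(ddof=1)
    return (mean_d < mean_thresh) and (sd_d < sd_thresh), mean_d, sd_d

def fraction_paired(traces):
    """Fraction of nuclei scored as paired at a given developmental time point."""
    return float(np.mean([classify_nucleus(t)[0] for t in traces]))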
Due to the still unknown molecular identity of the buttons, it was impossible to perform a direct measurement of the button density and the button-button interaction energy. However, the initial separation between chromosomes d i can be directly estimated using chromosome painting (Ried et al. 1998;Beliveau et al. 2012) . To make this possible, we used Oligopaint probes (Beliveau et al. 2012) we inferred, for each button density, the strength of interaction that best fits the data (Fig. 4C). Interestingly, the goodness of fit is mainly independent of the button density ( Fig. 4 -Sup. Fig. 3C): denser buttons require less strength of interaction to reach the same best fit (black line in Each data point represents a single nucleus over a ten-minute time window, with different colors indicating time after fertilization. Data are separated into three plots for ease of visualization. (B) Nuclei from each timepoint were scored as "paired" if they fell within the shaded box in (A). Data were taken from three different embryos each for transgenes at 38F (red) and 53F (blue). For each button density , we fitted the experimental pairing dynamics (Fig. 4 -Sup. Fig. 2A). Grey shading provides the envelope of the best predictions obtained for each (dark grey) and its standard deviation (light grey). (C) Phase diagram representing, as a function of , the value of (black line) that leads to the best fit between predicted and experimental E p developmental pairing dynamics. The predicted pairing strength is weaker than observed in the parameter space above the line, and stronger than observed below the line. As illustrated in Figure 4B, we observed that the infe rred developmental dynamics quantitatively recapitulates the experimental observations for both investigated loci for any choice of parameters given by the curve shown in Figure 4C. Note, however, that our simulations do not predict the large increase of pairing observed for 38F between 5.5 and 6 hours (Fig. 4B). This disagreement may be a consequence of the proximity of 38F to the highly paired Histone Locus Body (Hiraoka et al. 1993;Fung et al. 1998) . In sum, the button model can recapitulate the observed average pairing dynamics for a wide range of possible button densities coupled with interaction energies that are consistent with protein-DNA interactions. 4) PARAMETER-FREE PREDICTION OF INDIVIDUAL PAIRING DYNAMICS The fit of our button model to the fraction of paired loci during development in living embryos (Fig. 4B) revealed a dependency between the interaction strength and the button E p density (Fig. 4C). As a critical test of the model's predictive power, we sought to go beyond averaged pairing dynamics and use the model to compute the pairing dynamics of individual loci. As can be seen qualitatively in the kymographs predicted by the model (Fig. 2B and Fig. 2 -Sup. Fig. 1), pairing spreads rapidly within tens of minutes from the buttons that constitute the initial points of contact along the chromosome. As a result, the button-model predicts that homologous loci will undergo a rapid transition to the paired state as the zipping mechanism of pairing progression moves across the chromosome. inter-homolog distances decreasing rapidly from 1-2 µm to below 0.65 µm at an accelerating rate over the course of 10-20 minutes. The independence of this first period of the pairing dynamics with respect to and suggests that these dynamics are mainly diffusion-limited. 
E p In contrast to the initial pairing dynamics, varying model parameter values had a clear effect on the distance dynamics that followed the pairing event. Specifically, simulations with a weak (Fig. 5C, red) led to a slow increase in inter-homolog distances as time progressed, E p consistent with unstable pairing events. Conversely, simulations with a strong (Fig. 5D, E p green) were associated with tight pairing of homologous loci following the pairing event, with inter-homolog distances stably maintained around 130 nm, close to the spatial resolution of the model. Notably, the values of and determined to best fit the averaged temporal evolution of E p the fraction of paired loci over development ( Fig. 4 and 5B) all led to similar predictions for the median interhomolog distance dynamics associated with pairing events. As shown in Figures 5E and F, these traces converged to a stable long-term median inter-homolog distance of around 0.5 µm, that is nearly identical to the experimentally-determined distance of around 0.44 µm between homologous loci in stably paired nuclei (compare the colored and black lines in Fig. 5 E,F). Our results thus suggest that the slow dynamics of the pairing probability observed during development (Fig. 4) and the dynamics of inter-homolog distance after a pairing event (Fig. 5) are strongly correlated. We then compared our simulated traces to experimental observations of pairing events in living nuclei. Among the many movies that we monitored, we captured 14 pairing events matching the criteria of initial large inter-homolog distances that drop below 0.65 µm for at least same approach as with the simulated data described above, and calculated the median dynamics of inter-homolog distances around the pairing event ( Fig. 5C-F, black lines). As shown in the figures, we observed that the pre-pairing dynamics are fully compatible with model predictions, with a rapid decrease in inter-homolog distances over the course of 10-20 minutes. Furthermore, Figure 5F shows that the experimental post-pairing dynamics in interhomolog distance are closely recapitulated by the predictions made using parameters that best fit the pairing probability over developmental time presented in Figure 5B. Two caveats may be considered in interpreting our analysis. First, our method of tracking homologous loci in living embryos relies on visualizing nascent RNAs generated from transgenes rather than direct observation of DNA or DNA-binding proteins. While nascent RNAs provide a robust and convenient signal for the position of the underlying DNA (Lim et al. 2018;Chen et al. 2018) , the method limits us to examining the behavior of transcriptionally active loci, which could behave differently from silent chromatin. In addition, it is possible that our analysis could overestimate interhomolog distances in paired nuclei if, for example, nascent RNA molecules from separate chromosomes are prevented from intermixing (Fay and Anderson 2018) . Secondly, our simulations do not account for complex behaviors of the genome that take place during development and that may also influence pairing dynamics and stability, including cell cycle progression and mitosis (Foe 1989) , establishment of chromatin types and associated nuclear compartments (Sexton et al. 2012;Yuan and O'Farrell 2016;Hug et al. 2017;Ogiyama et al. 2018) , and additional nuclear organelles such as the Histone Locus Body (White et al. 2011;Liu et al. 2006) . 
Further testing and refinement of our understanding of somatic homolog pairing will require new approaches to incorporate the potential influences of these genomic behaviors in a developmental context. A previous analysis of pairing and transvection in living embryos focused on the blastoderm phase, coinciding with the earliest developmental timepoints in our analysis, and found that interhomolog interactions were generally unstable at that time (Lim et al. 2018) . Thus, the embryo appears to transition from an early state that antagonizes stable pairing prior to cellularization to one that supports stable pairing at later time points of development. Prior studies have postulated changes in cell cycle dynamics (Fung et al. 1998;Gemkow, Verveer, and Arndt-Jovin 1998) In the future, we anticipate that our model will be instrumental in identifying and characterizing candidate button loci, and in determining how these parameters are modulated in the various mutant backgrounds that have been found to affect pairing Joyce et al. 2012;Hartl, Smith, and Bosco 2008;Gemkow, Verveer, and Arndt-Jovin 1998) . Thus, our study significantly advances our understanding of the century-old mystery of somatic homolog pairing and provides a theory-guided path for uncovering its molecular underpinnings. DNA constructs and fly lines Flies expressing a nuclear MCP-NLS-mCherry under the control of the nanos promoter were previously described (Bothma et al. 2018) . To create flies expressing PCP-NoNLS-GFP, the plasmid pCASPER4-pNOS-eGFP-PCP-ɑTub3'UTR was constructed by replacing the MCP coding region of pCASPER4-pNOS-NoNLS-eGFP-MCP-ɑTub3'UTR (Garcia et al. 2013) with the coding region of PCP (Larson et al. 2011) . Transgenic lines were established via standard P-element transgenesis (Spradling and Rubin 1982) . To create flies expressing MS2 or PP7 loops under the control of UAS, we started from plasmids piB-hbP2-P2P-lacZ-MS2-24x-αTub3'UTR (Garcia et al. 2013) and piB-hbP2-P2P-lacZ-PP7-24x-αTub3'UTR, the latter of which was created by replacing the MS2 sequence of the former with the PP7 stem loop sequence (Larson et al. 2011) . The hunchback P2P promoter was removed from these plasmids and replaced by 10 copies of the UAS upstream activator sequences and the Drosophila Synthetic Core Promoter (DSCP) (Pfeiffer et al. 2010) . Recombinase Mediated Cassette Exchange (RMCE) (Bateman, Lee, and Wu 2006) was then used to place each construct at two landing sites in polytene positions 38F and 53F (Bateman and Wu 2008a;Bateman, Johnson, and Locke 2012) . Flies carrying the GAL4 driver nullo-GAL4, which drives expression in all somatic cells during the cellular blastoderm stage of cell cycle 14, were a gift from Jason Palladino and Barbara Mellone. Flies carrying the GAL4 driver R38A04-GAL4 , which drives expression in epidermal cells in germband-extended embryos (Jenett et al. 2012) , were acquired from the Bloomington Drosophila Stock Center. Finally, the interlaced MS2 and PP7 loops under the control of the hunchback P2 enhancer and promoter (P2P-MS2/PP7-lacZ) were based on a previously described sequence (Wu, Chen, and Singer 2014) . The resulting embryos are loaded with MCP-mCherry-NLS and PCP-GFP proteins due to maternal expression via the nanos promoter, and zygotic expression of nullo-GAL4 drives transcription of MS2 and PP7 loops in all somatic cells starting approximately 30 minutes into cell cycle 14 (cellular blastoderm.) 
For pairing analysis, both MS2 and PP7 transgenes were in the same genomic location, either position 38F or 53F, whereas for a negative control, MS2 loops were located at 38F and PP7 loops were located at 53F. To visualize pairing at later time in development, the mothers indicated above were instead crossed to males of genotype 10XUAS-DSCP-PP7; R38A04-GAL4 , where both MS2 and PP7 loops were located at position 38F. Finally, to visualize MS2 and PP7 loops derived from the same genomic location, mothers of genotype MCP-mCherry-NLS, PCP-GFP were crossed to P2P-MS2/PP7-lacZ located at position 38F. Embryo preparation and image acquisition Embryos were collected at 25 ० C on apple juice plates and prepared for imaging as previously described (Garcia et al. 2013) . Mounted embryos were imaged using a Leica SP8 confocal microscope, with fluorescence from mCherry and eGFP collected sequentially to minimize channel crosstalk. For each movie, the imaging window was 54.3 x 54.3 µm at a resolution of 768 x 768 pixels, with slices in each z-series separated by 0.4 µm. Z-stacks were collected through either 10 or 12 µm in the z plane (26 or 31 images per stack), resulting in a time resolution of approximately 27 or 31 seconds per stack using a scanning speed of 800 Hz and a bidirectional scan head with no averaging. For the pairing data described in Figure 3, the imaging window was centered on a dorsal view of the embryonic head region covering mitotic domains 18 and 20 (Foe 1989) , which show minimal movements during gastrulation and germ band extension relative to other regions of the embryo. We compared pairing levels in these cells at 6 hours of development to that of cells in a posterior abdominal segment at the same time point and found them to be near identical (75.0% paired, n=16 for anterior cells vs 73.7%, n=19 in posterior cells according to the definition of pairing established in Figure 3E), supporting that cells from different regions of the embryo are roughly equivalent for pairing dynamics at this stage. For positive control embryos with interlaced PP7 and MS2 loops driven by the hunchback promoter, embryos were imaged during cell cycle 13 and early cell cycle 14, and the imaging window was positioned laterally as previously described (Garcia et al. 2013) . To assess pairing in late stage embryos using the R38A04-GAL4 driver, embryos were aged to approximately 11-12 hours and the imaging window was positioned laterally over an abdominal segment. For the developmental time course movies described in Figure 3, imaging centered on mitotic domains 18 and 20 when these cells were in interphase. During timepoints when these domains were undergoing mitosis, an adjacent mitotic domain in interphase was imaged. Image analysis All images were first run through the ImageJ plug-in Trainable Weka Segmentation (Arganda-Carreras et al. 2017) and filtered with custom classifiers to generate two separate channels of 3D segmented images that isolated fluorescent spots. These segmented spots were then fitted to a Gaussian with a nonlinear least squares regression to find the 2D center. Image z-stacks were then searched for any spots tracked for 3 or more contiguous z-slices and the rest were discarded. Additional manual curation was employed to confirm the accuracy of segmented images and to add in any additional spots that were missed. 
An initial estimate of the center of each spot was set based on the z-slice in which the spot had the greatest maximum intensity within a predefined radius from its 2D center. These initial estimates were then used to seed a 3D Gaussian fit for each spot, the center of which was used for all distance calculation. This granted us not only sub-pixel resolution in x-y but also sub z-slice resolution, allowing for more precision in the z coordinate that would otherwise be limited by the 0.4 µm spacing between consecutive stacked images created by confocal imaging. Raw image z-stacks for each time frame were also maximum projected in the channel containing nuclearly localized MCP-mCherry to create 2D maps of all the nuclei in frame. These nuclear projections were then segmented and tracked in Matlab, followed by manual curation to ensure that each nucleus was consistently followed. One tracked particle lineage from each channel was then assigned a distinct nucleus based on its proximity to that nucleus in the 2D map and the particles in each channel were considered homologous chromosomes of one another. Since absolute coordinates of assigned particles were not possible to obtain due to cellular rotation and motion, all distance calculations were done with the relative coordinates of each locus from its homolog as any cellular rotation or motion was assumed to be conserved between loci in the same cell. For the data presented in Figure 3, we qualitatively scored each nucleus based on the measured distances between red and green signals over the time the signals were observed: "paired" nuclei showed small distances and little variation over time, and "unpaired" nuclei showed larger distances and greater variation over time. Nuclei that showed a transition from large distances and variation at earlier timepoints to smaller distances and variation at later timepoints were scored as "pairing" traces, and were not included in Figure 3 (see Figure 5). In assessing the stability of the paired state, we included both "paired" (n=25) and "pairing" (n=13) nuclei in the total number of nuclei (n=38) assessed. In this analysis, we conservatively only included the observation time of "paired" nuclei (> 8 hours of observation with no transition back to the unpaired state), although "pairing" nuclei also remained in the paired state throughout the remaining observation time once they became paired. To align the traces presented in Figure 5 based on a timepoint when the loci become paired, we manually aligned all traces that had been qualitatively assessed as "pairing" traces according to several different values of threshold distance and consecutive frames below that threshold. We then optimized this exploration for values that provided qualitatively good alignment of traces but that excluded as few traces as possible in order to maximize the data available for analysis. The same criteria were applied to identify and align pairing traces from simulations. All image analysis was done using custom scripts in Matlab 2019b unless otherwise stated. Chromosome painting Embryos of genotype w 1118 were aged to 2-3 hours after embryo deposition, fixed, and subjected to DNA Fluorescence In Situ Hybridization (DNA-FISH) using 400 pmol of Oligopaint probes (Beliveau et al. 2012) targeting 2L and 2R (200 pmol of each probe) as previously described (Bateman and Wu 2008b) . Oligopaint probes targeting chromosome arms 2L and 2R are described in (Rosin, Nguyen, and Joyce 2018) . 
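As a companion to the trace-alignment procedure described above, the following Python sketch aligns "pairing" traces at the onset frame, defined as the first frame at which the inter-homolog distance stays below a threshold for a run of consecutive frames. The 0.65 µm threshold comes from the text; the number of consecutive frames and the window length are illustrative assumptions, since the exact values used in the manual optimization are not restated here.

import numpy as np

def pairing_onset(dist_um, thresh=0.65, n_consec=5):
    """Index of the first frame at which the distance stays below `thresh` (um) for
    `n_consec` consecutive frames; None if no such event occurs in the trace."""
    below = np.asarray(dist_um) < thresh
    for i in range(len(below) - n_consec + 1):
        if below[i:i + n_consec].all():
            return i
    return None

def align_and_median(traces, window=40, **kwargs):
    """Align traces on their pairing onset and return the frame-wise median over
    +/- `window` frames around the event (NaN-padded at trace edges)."""
    aligned = []
    for d in traces:
        d = np.asarray(d, dtype=float)
        i = pairing_onset(d, **kwargs)
        if i is None:
            continue
        padded = np.full(2 * window, np.nan)
        lo, hi = max(0, i - window), min(len(d), i + window)
        padded[window - (i - lo): window + (hi - i)] = d[lo:hi]
        aligned.append(padded)
    return np.nanmedian(np.vstack(aligned), axis=0) if aligned else None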
Hybridized embryos were mounted in Vectashield mounting media with DAPI (Vector Laboratories), and three-dimensional images were collected using a Leica SP8 confocal microscope. To establish initial inter-homolog distances, an image from an embryo in early interphase 14 (as judged by nuclear elongation (Fung et al. 1998) ) and with high signal-to-noise was analyzed using the TANGO image analysis plugin for ImageJ (Ollion et al. 2013(Ollion et al. , 2015Belevich et al. 2016) . After segmentation and assignment of each painted territory to a parent nucleus, distances between territories were measured from centroid to centroid in 3D. Since homologous chromosomes are labeled with the same color, when territories produce a continuous region of fluorescence, a distance of zero was assigned. A total of 48 nuclei were analyzed for each of 2L and 2R. The homologous button model We modeled two pairs of homologous chromosome arms as semi-flexible self-avoiding polymers. Each chromosome consists of N =3,200 beads, with each bead containing 10 kbp and being of size b nm. The 4 polymers moved on a face-centered-cubic lattice of size L x x L y x L z under periodic boundary conditions to account for confinement by other chromosomes. Previously, we showed that TAD and compartment formation may be quantitatively explained by epigenetic-driven interactions between loci sharing the same local chromatin state (Jost et al. 2014;Ghosh and Jost 2018) . However, such weak interactions cannot lead to global homologous pairing (Pal et al. 2019). Here, to simplify our model, we neglect these types of interactions (whose effects are mainly at the TAD-scale) to focus on the effect of homolog -specific interactions. We do consider, however, HP1-mediated interactions between (peri)centromeric regions that are thought to impact the global large-scale organization inside nuclei (Fig. 2 -Supp. Fig. 2D) (Strom et al. 2017) . Homologous pairing was modeled as contact interactions between some homologous monomers, the so-called buttons ( Fig. 2A). For each pair of homologous chromosomes, positions along the genome were randomly selected as buttons with a probability . Each 10kbp bead i of chromosome chr is therefore characterized by a state p chr,i with p chr,i =1 if it is a button (=0 otherwise) and p chr,i = p chr',i =1 if chr and chr' are homologous. In addition, the first 1,000 monomers of each chromosome were modeled as self-attracting centromeric and pericentromeric regions, the rest as neutral euchromatic regions. The energy of a given configuration was then given by , (S1) H = ∑ The dynamics of the chains followed a simple kinetic Monte-Carlo scheme with local moves using a Metropolis criterion applied to H . The values of ) and L z ( =4 µm) were fixed using the coarse-graining and time-mapping strategies developed in (Ghosh and Jost 2018) for a 10-nm fiber model and a volumic density=0.009 bp/nm 3 typical of Drosophila nuclei. For every set of remaining parameters (the button density and the strength of pairing interaction E p ), 100 independent trajectories were simulated starting from compact, knot-free, Rabl-like initial configurations (Csink and Henikoff 1998) To constrain model parameters, we compared the measured pairing dynamics (Fig. 4B) to the model prediction. 
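The display of Equation S1 is elided above. Based on the two ingredients the text specifies (specific contacts between homologous buttons at strength E_p, and contacts between (peri)centromeric monomers at strength E_c), a hedged LaTeX reconstruction of the configuration energy is:

H \;=\; E_p \sum_{\substack{chr,\,chr'\ \text{homologous}\\ i}} p_{chr,i}\, p_{chr',i}\,\Delta\!\left(r_{chr,i},\, r_{chr',i}\right)
\;+\; E_c \sum_{\substack{(chr,i),\,(chr',j)\\ \text{(peri)centromeric}}} \Delta\!\left(r_{chr,i},\, r_{chr',j}\right),

where $\Delta(r, r') = 1$ if the two monomers occupy nearest-neighbor lattice sites and $0$ otherwise. Under the kinetic Monte Carlo scheme mentioned above, a proposed local move is then accepted with the Metropolis probability $\min\{1,\, \exp(-\Delta H / k_B T)\}$. The exact indexing of the published Eq. S1 may differ; this block is a reconstruction, not a verbatim copy.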
Specifically, for each parameter set, we computed a chi2-score between the predicted dynamics and experimental timepoints , ) as a μm ≤ 1 function of the initial distance between homologous chromosomes at 1h, 4h and 10h for =65% and d i E p =-1.6kT in presence ( , full lines) or absence ( , dashed lines) of interactions between − .1kT E c = 0 kT E c = 0 (peri)centromeric regions. Even for large initial distances ( ), we observe a weak but significant .5μm d i ≥ 2 amount of pairing in the euchromatic regions . These predictions are consistent with DNA FISH experiments at different loci suggesting an average higher pairing probability in heterochromatic loci during embryogenesis (see Fig.1 and Sup. Fig 2 in ) . (E) The non-specific button model. To verify whether a non-specific button model can lead to global pairing, we relaxed the homologous model including specific attraction only between homologous loci and generated models where buttons may interact with any other buttons in the nucleus (left). In this model, the energy of a given configuration was described by We varied from 0.1 to 1 and from -0.025 to -4kT, one realization of which is shown here (right), E p without observing any global pairing of homologous chromosomes. Other parameters were as in the homologous button model (see Materials and Methods of the main text). Contacts preferentially form between buttons belonging to the same chromosome, or more weakly, between buttons of different chromosomes but not necessarily between homologous loci. Inter-homolog distances were determined by segmenting painted regions in 3D using ImageJ and measuring center-to-center distances (inset; see Materials and Methods for details). Chromosome arm 2R, carrying transgene location 53F, was sampled in the same field of cells using a different fluorescent tag (not shown). (B) Distribution of inter-homologous arm distances measured from 48 nuclei in the image in (A) (red curve). Measurements from chromosome arms 2L and 2R were completely overlapping and therefore were combined. The curve is nicely fitted by a Gaussian distribution (black curve, mean=1.9 , SD=0.9). defined as in Equation S1 and the number of common binding sites between buttons (chr,i) and m chr,i;chr ,j ′ (chr',j) , and the strength of interaction between binding sites bound to the same architectural E a < 0 proteins. For example, the case n site =1, n archi =1 ( =1, ) corresponds to the non-specific m chr,i;chr ,j ′ E a = E p button model described in Fig. 4 -Sup.Fig. 2E. To simplify and avoid potential issues arising from periodic boundary conditions, we focused on favorable situations for pairing where homologs are initially aligned and close to each other (~640nm between their respective centers of mass) (see inset in A) and where the monomers evolve in a close box (rigid wall conditions). Using this model, we first investigated how specific a button should be in order to lead to pairing. We fixed n site =1 and varied n archi and for a button E a density of 60%. In these cases, n archi represents the number of different button types. In (B), we plotted, for each n archi , the time evolution of the average pairing probability between homologous sites (paired if distance ) for the -value that leads to maximal pairing. 
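A minimal Python sketch of this parameter-fitting step is given below: for one button density, it scores each candidate interaction strength by a chi-square-type discrepancy between the simulated and measured fractions of paired nuclei at matched time points, and keeps the best one (cf. the phase diagram of Fig. 4C). The data structures, the error term sigma, and the function names are assumptions for illustration; they are not taken from the authors' pipeline.

import numpy as np

def chi2_score(predicted, observed, sigma):
    """Chi-square-type score between model-predicted and measured paired fractions."""
    predicted, observed, sigma = map(np.asarray, (predicted, observed, sigma))
    return float(np.sum(((predicted - observed) / sigma) ** 2))

def best_energy(sim_results, observed, sigma):
    """For one button density, pick the interaction strength E_p whose simulated
    pairing dynamics best matches the experimental time course.

    sim_results : dict mapping E_p -> array of predicted paired fractions evaluated
                  at the experimental developmental time points (precomputed).
    """
    scores = {ep: chi2_score(pred, observed, sigma) for ep, pred in sim_results.items()}
    best = min(scores, key=scores.get)
    return best, scores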
As a point of comparison, we also plotted μm ≤ 1 E a the corresponding curves in the absence of buttons (black line) and for the homologous button model investigated in the main text (red line) for the -value (-1.5kT) consistent with experiments at the E p corresponding button density. We observed that as n archi increases, i.e. as the buttons become more specific, the pairing efficiency increases. The full specific model (red) becomes well approximated by our combinatorial button model for n archi >200. This suggests that pairing needs a significant degree of specificity via a large number of button types but each button type may be present in a small amount. However, it is unlikely that there exist enough different architectural proteins to reach such single-site specificity. A possibility to increase specificity from a small number of proteins is to allow more than one binding site per button ( n site >1). The number of different buttons is then ( n site )!/[( n archi -n site )!( n site )!]. In (C), we fixed n archi =50, an upper maximal number of architectural proteins, and varied n site . For each n site , we plotted the pairing probability for the -value that leads to maximal pairing. We observed that there E a exists an optimal number of sites (here~5) where the pairing is close to the one obtained with the homologous button model. This corresponds to a value that leads to a large diversity of buttons while maintaining a low number of spurious interactions between non-homologous buttons which is of the order of n site / n archi~0 .1. Supplementary Movies Supplementary Movie 1 . Polymer simulations of homologous pairing. Example of a 4 hours numerical simulation of the homologous button model ( =60% , =-1.6 k B T) with frames taken every 30 seconds. The E p movie focuses on one pair of homologs (red and blue polymers). Orange and cyan parts of these chains represent their (peri)centromeric regions. Surrounding transparent light-grey chains represent the periodic boundary images of the simulated chains. The black bar measures 1 micron. Homologous loci in closed contact (distance <200nm) are colored in green. Supplementary Movie 2 . Polymer simulations of homologous pairing. Example of a 4 hours numerical simulation of the homologous button model ( =60% , =-1.6 k B T) with frames taken every 30 seconds (different E p from the Supplementary Movie 1). The two pairs of homologs are highlighted (red/blue for one pair; purple/dark blue). Orange/cyan and pink/light blue parts of these chains represent their (peri)centromeric regions. Surrounding transparent light-grey chains represent the periodic boundary images of the simulated chains. The black bar measures 1 micron. Homologous loci in closed contact (distance <200nm) are colored in green. Supplementary Movie 3 . Representative confocal movie of a live Drosophila embryo (cell cycle 14 to gastrulation) in which MS2 and PP7 loops are integrated at equivalent positions on homologous chromosomes. Examples of nuclei are highlighted whose loci display characteristic dynamics, including loci that do not pair ("Unpaired"), loci that are already paired ("Paired"), and loci that are observed transitioning from the unpaired to the paired state ("Pairing"). Image stacks were taken roughly every 30 seconds and max-projected for 2D viewing. Supplementary Movie 4 . Representative confocal movie of a live Drosophila embryo (roughly 4.5 hours old) in which MS2 and PP7 loops are integrated at equivalent positions on homologous chromosomes. 
Image stacks were taken roughly every 30 seconds and max-projected for 2D viewing. Supplementary Movie 5 . Representative confocal movie of a live Drosophila embryo (roughly 5.5 hours old) in which MS2 and PP7 loops are integrated at different positions on homologous chromosomes (MS2 at position 38F and PP7 at position 53F) where we expect no pairing between transgenes. Image stacks were taken roughly every 30 seconds and max-projected for 2D viewing. Supplementary Movie 6 . Representative confocal movie of a live Drosophila embryo (cell cycle 14) in which MS2 and PP7 loops were interlaced in a single transgene on one chromosome at polytene position 38F to act as a positive control for pairing. Both GFP and mCherry are co-localized to the same locus in all transcriptional loci. Image stacks were taken roughly every 30 seconds and max-projected for 2D viewing.
9,965.2
2020-08-31T00:00:00.000
[ "Biology" ]
Development of Student Creativity in Performing Problem-Solving Experiments in Physics Through Innovative Technologies This article discusses the role and importance of innovative technologies in developing students' creativity through problem-solving experiments in physics. It is shown that innovative technologies help students independently find sources of theoretical knowledge, read and analyze data on their own, draw conclusions, and thereby develop their creativity. The features of innovative technologies and engineering methods in the formulation of problem-solving experiments in physics are analyzed, and the content of applying each interactive method in carrying out such experiments is explained. INTRODUCTION Innovative technologies teach students to search for sources of theoretical knowledge, to perform independent tasks, and to draw analytical conclusions from teaching materials. In this process, the teacher's role is to create and manage the pedagogical conditions for such work. When the teacher correctly chooses the necessary laboratory equipment for problem-solving experiments in physics, takes into account the students' level of knowledge, correctly defines the purpose and objectives of the lesson, correctly designs the lesson plan, and makes appropriate use of modern pedagogical, information and communication technology, these are important factors in the development of student creativity. Laboratory classes are one of the forms of lessons aimed at forming and developing the thinking and worldview of teachers and students through cooperation in carrying out the experiment. The absence of these indicators in laboratory classes leads to boredom, despair and depression among students. According to the results of pedagogical experiments, it was observed that most students try merely to understand the problem-solving experiment in physics instead of actually performing it. In our opinion, such notions are wrong. Innovative technologies are implemented in the educational process through the teacher's knowledge and the student's effective assimilation of innovations in learning. Our study showed that the specific features of using innovative technologies and interactive methods in performing problem-solving experiments in physics are: students are not indifferent to problem-solving experiments and focus on independent thinking and creative research; the continuity of the student's interest in performing the problem experiment is ensured; and teacher-student cooperation in problem-solving is developed. In designing a problem-based experiment in physics, it is appropriate for the teacher to structure it taking into account the nature of the physics content, the fluency of the student's thinking, and the need for flexibility. In support of the above opinion, we focus below on the content of some innovative technologies and interactive methods used in performing a problem experiment in physics. The "Networks" ("Cluster") method. This method serves to expand the scope of students' logical thinking, to form the skills of independent use of educational literature, and to increase student motivation. The "3x4" method. This method is aimed at teaching students to think freely and independently, to create solid new ideas, to analyze the problem, to draw conclusions, to describe their results, and to work creatively in small groups. The "Interview" method.
This method is aimed at developing the student's ability to ask questions, to listen, to answer correctly, and to formulate questions correctly. Below we consider the use of the Boomerang technology in performing a problem-solving experiment in physics. Phase I. Students are divided into small groups of two. Phase II. The teacher provides each group member with a separate written handout for independently learning, thinking through, and performing the new problem experiment under study and for memorizing certain physical quantities. Each written handout contains the task for the problem experiment. Phase III. Each member of the group learns and remembers the task of performing the problem experiment in physics individually, and then the group members discuss the content of the problem experiment in the group on the basis of mutual questions and answers. Depending on the size and content of the problem experiment, this takes 5-10 minutes. Phase IV. The teacher asks the students to take one of the pre-prepared, numbered sheets (the number of sheets should be equal to the number of students in the group; the numbers indicate the group numbers). The teacher suggests forming groups by number. Phase V. Each member of the newly formed group assumes the role of both teacher and student. Each member of the group is required to teach the group the content of the problem experiment he or she performed in the previous group, and in turn the group members master the content of the problem experiment. In doing so, each member of the group should share the content of the problem experiment he or she has performed with the others. This takes 8-10 minutes. As a result, the new groups formed by number are able to master, in general terms, all the materials for performing the problem experiment. Phase VI. The group members tell each other the content of performing the problem experiment independently. To check how the completed problem experiment has been mastered, the teacher explains to the group members that they will ask each other questions based on the content of the problem experiment they have performed, and that in this way there is internal control within the group. This helps the group members to identify and reinforce how well the content of the problem experiment they performed independently has been assimilated by the others. Phase VII. The teacher asks all students to return to their previous groups. All students return to their original groups. Phase VIII. The teacher says that, given that all students are now fully acquainted with the content of the problem-solving materials, questions related to the problem-solving may be put to any student in the class, and the answers will be evaluated. If the answer is complete, a grade of "5" is given; if it has to be supplemented, a grade of "4"; if the student only expresses an opinion while remaining seated, a grade of "3"; if there is no answer, a grade of "2". A student is assigned to record the evaluation of the group members' responses. Phase IX. The teacher checks the students' mastery in the groups on the basis of their answers to the test questions in the handouts prepared for the problem-solving experiment. A student who gives the answer himself or herself receives an additional grade. Phase XII. The final stage is the assessment of the students in the groups on the performance of the problem experiment; the opinions of the students in the group are taken into account in the calculation of the final grades.
Often the correct choice of interactive methods is not sufficiently understood when performing a problem-solving experiment in physics. The optimality of a given method is assessed not by whether its name has become commonplace, but by its relevance to the content of the problem experiment to be performed and to the student's ability. Properly selected methods make it possible to solve the problem in a positive way within the allotted time. To plan a problem experiment in physics, the teacher studies the instructions for performing it and defines the goals and objectives of the problem. In performing a problem experiment, the student knows in advance which basic concepts, phenomena, processes, laws and theories are involved, what he or she can solve, and the expected results. The teacher allocates sufficient time for each problem experiment to plan its content and tasks, using the interdisciplinary connections provided for in the program; selects effective educational technologies and methods for performing problem-solving experiments; selects a mutually appropriate combination of forms of organizing the student to perform the problem experiment as a whole class, in small groups, and individually; and makes the necessary adjustments to the forms and methods of performing the problem experiment, using the option selected during the course. Student creativity is manifested in the design activity involved in performing a problem experiment in physics. It is important to motivate every student who has performed a challenging experiment in physics well, because in some cases students face difficulties in performing a problem experiment, and the skill of organizing games and learning discussions during a problem experiment is not yet sufficiently formed. In the performance of a problem experiment, the teaching, educating and developing functions of the lesson are carried out. Using interactive methods during a challenging experiment helps the lesson proceed in close cooperation with the students. Directing the problem experiment to the student through interaction, using interactive methods, is the basis for performing it: the teacher does not give ready-made knowledge, but encourages students to search independently. One of the main requirements for problem-solving is that the methods of problem-solving be closely linked; on this basis, to increase the effectiveness of problem-solving, all didactic tasks are solved in the classroom, and homework is a logical continuation of the theoretical knowledge acquired in class. When performing a problem experiment using interactive methods in accordance with these requirements, the teacher should know the following: the essence of interactive methods; the role and importance of the methods; the principles of application of interdisciplinary interactive methods; business games; non-traditional methods; forms and ways of organizing and supporting the creative activity of the student; and tools and opportunities to improve the student's ability to perform a problem experiment independently. The use of interactive methods in performing a problem experiment allows the student to acquire practical skills and competencies such as interaction, exchange of ideas, understanding of a specific task, and feeling the need to perform it.
2,219
2021-05-31T00:00:00.000
[ "Physics", "Education", "Engineering" ]
Hyperopt-sklearn: Automatic Hyperparameter Configuration for Scikit-learn Abstract—Hyperopt-sklearn is a new software project that provides automatic algorithm configuration of the Scikit-learn machine learning library. Following Auto-Weka, we take the view that the choice of classifier and even the choice of preprocessing module can be taken together to represent a single large hyper-parameter optimization problem. We use Hyperopt to define a search space that encompasses many standard components (e.g. SVM, RF, KNN, PCA, TFIDF) and common patterns of composing them together. We demonstrate, using search algorithms in Hyperopt and standard benchmarking data sets (MNIST, 20-Newsgroups, Convex Shapes), that searching this space is practical and effective. In particular, we improve on best-known scores for the model space for both MNIST and Convex Shapes. Introduction The size of data sets and the speed of computers have increased to the point where it is often easier to fit complex functions to data using statistical estimation techniques than it is to design them by hand. The fitting of such functions (training machine learning algorithms) remains a relatively arcane art, typically mastered in the course of a graduate degree and years of experience. Recently however, techniques for automatic algorithm configuration based on Regression Trees [Hut11], Gaussian Processes [Moc78], [Sno12], and density-estimation techniques [Ber11] have emerged as viable alternatives to hand-tuning by domain specialists. Hyperparameter optimization of machine learning systems was first applied to neural networks, where the number of parameters can be overwhelming. For example, [Ber11] tuned Deep Belief Networks (DBNs) with up to 32 hyperparameters, and [Ber13a] showed that similar methods could search a 238-dimensional configuration space describing multi-layer convolutional networks (convnets) for image classification. Relative to DBNs and convnets, algorithms such as Support Vector Machines (SVMs) and Random Forests (RFs) have a small enough number of hyperparameters that manual tuning and grid or random search provides satisfactory results. Taking a step back though, there is often no particular reason to use either an SVM or an RF when they are both computationally viable. A model-agnostic practitioner may simply prefer to go with the one that provides greater accuracy. In this light, the choice of classifier can be seen as a hyperparameter alongside the C-value in the SVM and the max-tree-depth of the RF. Indeed the choice and configuration of preprocessing components may likewise be seen as part of the model selection / hyperparameter optimization problem. The Auto-Weka project [Tho13] was the first to show that an entire library of machine learning approaches (Weka [Hal09]) can be searched within the scope of a single run of hyperparameter tuning. However, Weka is a GPL-licensed Java library, and was not written with scalability in mind, so we feel there is a need for alternatives to Auto-Weka. Scikit-learn [Ped11] is another library of machine learning algorithms. It is written in Python (with many modules in C for greater speed), and is BSD-licensed. Scikit-learn is widely used in the scientific Python community and supports many machine learning application areas.
With this paper we introduce Hyperopt-Sklearn: a project that brings the benefits of automatic algorithm configuration to users of Python and scikit-learn.Hyperopt-Sklearn uses Hyperopt [Ber13b] to describe a search space over possible configurations of Scikit-Learn components, including preprocessing and classification modules.Section 2 describes our configuration space of 6 classifiers and 5 preprocessing modules that encompasses a strong set of classification systems for dense and sparse feature classification (of images and text).Section 3 presents experimental evidence that search over this space is viable, meaningful, and effective.Section 4 presents a discussion of the results, and directions for future work. Background: Hyperopt for Optimization The Hyperopt library [Ber13b] offers optimization algorithms for search spaces that arise in algorithm configuration.These spaces are characterized by a variety of types of variables (continuous, ordinal, categorical), different sensitivity profiles (e.g.uniform vs. log scaling), and conditional structure (when there is a choice between two classifiers, the parameters of one classifier are irrelevant when the other classifier is chosen).To use Hyperopt, a user must define/choose three things: 1) a search domain, 2) an objective function, 3) an optimization algorithm. The search domain is specified via random variables, whose distributions should be chosen so that the most promising combinations have high prior probability.The search domain can include Python operators and functions that combine random variables into more convenient data structures for the objective function.The objective function maps a joint sampling of these random variables to a scalar-valued score that the optimization algorithm will try to minimize.Having chosen a search domain, an objective function, and an optimization algorithm, Hyperopt's fmin function carries out the optimization, and stores results of the search to a database (e.g.either a simple Python list or a MongoDB instance).The fmin call carries out the simple analysis of finding the best-performing configuration, and returns that to the caller.The fmin call can use multiple workers when using the MongoDB backend, to implement parallel model selection on a compute cluster. Scikit-Learn Model Selection as a Search Problem Model selection is the process of estimating which machine learning model performs best from among a possibly infinite set of possibilities.As an optimization problem, the search domain is the set of valid assignments to the configuration parameters (hyperparameters) of the machine learning model, and the objective function is typically cross-validation, the negative degree of success on held-out examples.Practitioners usually address this optimization by hand, by grid search, or by random search.In this paper we discuss solving it with the Hyperopt optimization library.The basic approach is to set up a search space with random variable hyperparameters, use scikit-learn to implement the objective function that performs model training and model validation, and use Hyperopt to optimize the hyperparamters. 
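The three ingredients listed above (search domain, objective function, optimization algorithm) map directly onto a few lines of code. The following minimal sketch is our own toy illustration, not an example from the paper; the quadratic objective and the variable names are hypothetical:

from hyperopt import fmin, tpe, hp, Trials

# 1) search domain: one continuous and one categorical random variable
space = {
    "x": hp.uniform("x", -5.0, 5.0),
    "kind": hp.choice("kind", ["a", "b"]),
}

# 2) objective: maps a joint sample of the domain to a scalar to be minimized
def objective(params):
    penalty = 0.0 if params["kind"] == "a" else 1.0
    return (params["x"] - 2.0) ** 2 + penalty

# 3) optimization algorithm: TPE, with results stored in a Trials object
trials = Trials()
best = fmin(fn=objective, space=space, algo=tpe.suggest,
            max_evals=100, trials=trials)
print(best)  # best parameter setting found (choice parameters are reported by index)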
Scikit-learn includes many algorithms for classification (classifiers), as well as many algorithms for preprocessing data into the vectors expected by classification algorithms. Classifiers include, for example, K-Neighbors, SVM, and RF algorithms. Preprocessing algorithms include things like component-wise Z-scaling (Normalizer) and Principal Component Analysis (PCA). A full classification algorithm typically includes a series of preprocessing steps followed by a classifier. For this reason, scikit-learn provides a pipeline data structure to represent and use a sequence of preprocessing steps and a classifier as if they were just one component (typically with an API similar to the classifier). Although hyperopt-sklearn does not formally use scikit-learn's pipeline object, it provides related functionality. Hyperopt-sklearn provides a parameterization of a search space over pipelines, that is, of sequences of preprocessing steps and classifiers. Although the total number of hyperparameters is large, the number of active hyperparameters describing any one model is much smaller: a model consisting of PCA and a RandomForest, for example, would have only 12 active hyperparameters (1 for the choice of preprocessing, 2 internal to PCA, 1 for the choice of classifier and 8 internal to the RF). The Hyperopt description language allows us to differentiate between non-conditional hyperparameters (which must always be assigned) and conditional hyperparameters (which may remain unassigned when they would be unused). We make use of this mechanism extensively so that Hyperopt's search algorithms do not waste time learning by trial and error that e.g. RF hyperparameters have no effect on SVM performance. Even internally within classifiers, there are instances of conditional parameters: KNN has conditional parameters depending on the distance metric, and LinearSVC has 3 binary parameters (loss, penalty, and dual) that admit only 4 valid joint assignments. We also included a blacklist of (preprocessing, classifier) pairs that did not work together, e.g. PCA and MinMaxScaler were incompatible with MultinomialNB, TF-IDF could only be used for text data, and the tree-based classifiers were not compatible with the sparse features produced by the TF-IDF preprocessor. Allowing for a 10-way discretization of real-valued hyperparameters, and taking these conditional hyperparameters into account, a grid search of our search space would still require an infeasible number of evaluations (on the order of 10^12). Finally, the search space becomes an optimization problem when we also define a scalar-valued search objective. Hyperopt-sklearn uses scikit-learn's score method on validation data to define the search criterion. For classifiers, this is the so-called "Zero-One Loss": the number of correct label predictions among data that has been withheld from the data set used for training (and also from the data used for testing after the model selection search process).
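To make the conditional structure described above concrete, here is a small hand-written sketch of a two-classifier choice expressed in Hyperopt's description language. It is our own illustration, not the actual search space shipped with hyperopt-sklearn, and the parameter ranges are arbitrary:

from hyperopt import hp

# Parameters nested under a branch of hp.choice are conditional: they are
# only sampled when that branch (classifier) is selected.
classifier_space = hp.choice("classifier", [
    {
        "type": "svm",
        "C": hp.lognormal("svm_C", 0, 1),                     # active only for the SVM branch
        "kernel": hp.choice("svm_kernel", ["rbf", "poly"]),
    },
    {
        "type": "random_forest",
        "max_depth": hp.quniform("rf_max_depth", 2, 20, 1),   # active only for the RF branch
        "n_estimators": hp.quniform("rf_n_estimators", 10, 500, 10),
    },
])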
Example Usage Following Scikit-learn's convention, hyperopt-sklearn provides an Estimator class with a fit method and a predict method. The fit method of this class performs hyperparameter optimization, and after it has completed, the predict method applies the best model to test data. Each evaluation during optimization performs training on a large fraction of the training set, estimates test set accuracy on a validation set, and returns that validation set score to the optimizer. At the end of search, the best configuration is retrained on the whole data set to produce the classifier that handles subsequent predict calls. One of the important goals of hyperopt-sklearn is that it is easy to learn and to use. To facilitate this, the syntax for fitting a classifier to data and making predictions is very similar to scikit-learn. Here is the simplest example of using this software. The HyperoptEstimator object contains the information of what space to search as well as how to search it. It can be configured to use a variety of hyperparameter search algorithms and also supports using a combination of algorithms. Any algorithm that supports the same interface as the algorithms in hyperopt can be used here. This is also where you, the user, can specify the maximum number of function evaluations you would like to be run as well as a timeout (in seconds) for each run.

from hpsklearn import HyperoptEstimator
from hyperopt import tpe
estim = HyperoptEstimator(algo=tpe.suggest, max_evals=150, trial_timeout=60)

Each search algorithm can bring its own bias to the search space, and it may not be clear that one particular strategy is the best in all cases. Sometimes it can be helpful to use a mixture of search algorithms. Searching effectively over the entire space of classifiers available in scikit-learn can use a lot of time and computational resources. Sometimes you might have a particular subspace of models that you are more interested in. With hyperopt-sklearn it is possible to specify a more narrow search space to allow it to be explored in greater depth.

from hpsklearn import HyperoptEstimator, svc
# limit the search to SVC models only
estim = HyperoptEstimator(classifier=svc('my_svc'))

Combinations of different spaces can also be used. The support vector machine provided by scikit-learn has a number of different kernels that can be used (linear, rbf, poly, sigmoid). Changing the kernel can have a large effect on the performance of the model, and each kernel has its own unique hyperparameters. To account for this, hyperopt-sklearn treats each kernel choice as a unique model in the search space. If you already know which kernel works best for your data, or you are just interested in exploring models with a particular kernel, you may specify it directly rather than going through the svc.

from hpsklearn import HyperoptEstimator, svc_rbf
estim = HyperoptEstimator(classifier=svc_rbf('my_svc'))

It is also possible to specify which kernels you are interested in by passing a list to the svc. In a similar manner to classifiers, the space of preprocessing modules can be fine-tuned. Multiple successive stages of preprocessing can be specified by putting them in a list. An empty list means that no preprocessing will be done on the data.
from hpsklearn import HyperoptEstimator, pca, normalizer, standard_scaler
from hyperopt import hp
preproc = hp.choice('my_name', [
    [pca('my_name.pca')],
    [pca('my_name.pca'), normalizer('my_name.norm')],
    [standard_scaler('my_name.std_scaler')],
    []])
estim = HyperoptEstimator(preprocessing=preproc)

Some types of preprocessing will only work on specific types of data. For example, the TfidfVectorizer that scikit-learn provides is designed to work with text data and would not be appropriate for other types of data. To address this, hyperopt-sklearn comes with a few pre-defined spaces of classifiers and preprocessing tailored to specific data types. It is also possible to specify ranges of individual parameters. This is done using the standard hyperopt syntax; these will override the defaults defined within hyperopt-sklearn (a short sketch of this is given after the experiments summary below). All of the components available to the user can be found in the components.py file. A complete working example of using hyperopt-sklearn to find a model for the 20 newsgroups data set is shown below. Experiments We conducted experiments on three data sets to establish that hyperopt-sklearn can find accurate models on a range of data sets in a reasonable amount of time. Results were collected on three data sets: MNIST, 20-Newsgroups, and Convex Shapes. MNIST is a well-known data set of 70K 28x28 greyscale images of hand-drawn digits [Lec98]. 20-Newsgroups is a 20-way classification data set of 20K newsgroup messages ([Mit96], we did not remove the headers for our experiments). Convex Shapes is a binary classification task of distinguishing pictures of convex white-colored regions in small (32x32) black-and-white images [Lar07]. Figure 2 shows that there was no penalty for searching broadly. We performed optimization runs of up to 300 function evaluations searching the entire space, and compared the quality of solution with specialized searches of specific classifier types (including best known classifiers). Figure 3 shows that search could find different, good models. This figure was constructed by running hyperopt-sklearn with different initial conditions (number of evaluations, choice of optimization algorithm, and random number seed) and keeping track of what final model was chosen after each run. Although support vector machines were always among the best, the parameters of the best SVMs looked very different across data sets. For example, on the image data sets (MNIST and Convex) the SVMs chosen never had a sigmoid or linear kernel, while on 20 newsgroups the linear and sigmoid kernel were often best. TABLE 1: Hyperopt-sklearn scores relative to selections from literature on the three data sets used in our experiments. On MNIST, hyperopt-sklearn is one of the best-scoring methods that does not use image-specific domain knowledge (these scores and others may be found at http://yann.lecun.com/exdb/mnist/). On 20 Newsgroups, hyperopt-sklearn is competitive with similar approaches from the literature (scores taken from [Gua09]). In the 20 Newsgroups data set, the score reported for hyperopt-sklearn is the weighted-average F1 score provided by sklearn. The other approaches shown here use the macro-average F1 score. On Convex Shapes, hyperopt-sklearn outperforms previous automatic algorithm configuration approaches [Egg13] and manual tuning [Lar07]. The previously best known model in the scikit-learn search space is a radial-basis SVM on centered data that scores 98.6%, and hyperopt-sklearn matches that performance [MNIST].
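As a concrete illustration of overriding an individual parameter with standard hyperopt syntax, the following sketch narrows the C parameter of the RBF-SVM component. It assumes, as described above, that the component constructors accept keyword overrides; the particular range is our own choice, not a recommendation from the paper:

from hpsklearn import HyperoptEstimator, svc_rbf
from hyperopt import hp

# Restrict the RBF-SVM's C to a log-uniform range instead of the default
# (assumed keyword override mechanism; the range here is arbitrary).
my_svc = svc_rbf('my_svc', C=hp.loguniform('my_svc.C', -2, 5))

estim = HyperoptEstimator(classifier=my_svc)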
Discussion and Future Work The CFC model that performed quite well on the 20 newsgroups document classification data set is a Class-Feature-Centroid classifier.Centroid approaches are typically inferior to an SVM, due to the centroids found during training being far from the optimal location.The CFC method reported here uses a centroid built from the inter-class term index and the inner-class term index.It uses a novel combination of these indices along with a denormalized cosine measure to calculate the similarity score between the centroid and a text vector [Gua09].This style of model is not currently implemented in hyperopt-sklearn, and our experiments suggest that existing hyperopt-sklearn components cannot be assembled to match its level of performance.Perhaps when it is implemented, Hyperopt may find a set of parameters that provides even greater classification accuracy. On the Convex Shapes data set, our Hyperopt-sklearn experiments revealed a more accurate model than was previously believed to exist in any search space, let alone a search space of such standard components.This result underscores the difficulty and importance of hyperparameter search. Hyperopt-sklearn provides many opportunities for future work: more classifiers and preprocessing modules could be included in the search space, and there are more ways to combine even the existing components.Other types of data require different preprocessing, and other prediction problems exist beyond classification.In expanding the search space, care must be taken to ensure that the benefits of new models outweigh the greater difficulty of searching a larger space.There are some parameters that scikit-learn exposes that are more implementation details than actual hyperparameters that affect the fit (such as algorithm and leaf_size in the KNN model).Care should be taken to identify these parameters in each model and they may need to be treated differently during exploration. It is possible for a user to add their own classifier to the search space as long as it fits the scikit-learn interface.This currently requires some understanding of how hyperopt-sklearn's code is structured and it would be nice to improve the support for this so minimal effort is required by the user.We also plan to allow the user to specify alternate scoring methods besides just accuracy and F-measure, as there can be cases where these are not best suited to the particular problem. We have shown here that Hyperopt's random search, annealing search, and TPE algorithms make Hyperopt-sklearn viable, but the slow convergence in e.g. Figure 4 and 5 suggests that other optimization algorithms might be more call-efficient.The development of Bayesian optimization algorithms is an active research area, and we look forward to looking at how other search algorithms interact with hyperopt-sklearn's search spaces.Hyperparameter optimization opens up a new art of matching the parameterization of search spaces to the strengths of search algorithms. Computational wall time spent on search is of great practical importance, and hyperopt-sklearn currently spends a significant amount of time evaluating points that are un-promising.Techniques for recognizing bad performers early could speed up search enormously [Swe14], [Dom14].Relatedly, hyperopt-sklearn currently lacks support for K-fold cross-validation.In that setting, it will be crucial to follow SMAC in the use of racing algorithms to skip un-necessary folds. 
Conclusions We have introduced Hyperopt-sklearn, a Python package for automatic algorithm configuration of standard machine learning algorithms provided by Scikit-Learn. Hyperopt-sklearn provides a unified view of 6 possible preprocessing modules and 6 possible classifiers, yet with the help of Hyperopt's optimization functions it is able to both rival and surpass human experts in algorithm configuration. We hope that it provides practitioners with a useful tool for the development of machine learning systems, and automatic machine learning researchers with benchmarks for future work in algorithm configuration.

from hpsklearn import HyperoptEstimator
# Load data ({train,test}_{data,label})
# Create the estimator object
estim = HyperoptEstimator()
# Search the space of classifiers and preprocessing steps and their
# respective hyperparameters to fit a model to the data
estim.fit(train_data, train_label)
# Apply the best model found during the search to held-out data
prediction = estim.predict(test_data)

Fig. 1: Hyperopt-sklearn's full search space ("Any Classifier") consists of a (preprocessing, classifier) pair. There are 6 possible preprocessing modules and 6 possible classifiers. Choosing a model within this configuration space means choosing paths in an ancestral sampling process. The highlighted green edges and nodes represent a (PCA, K-Nearest Neighbor) model. The number of active hyperparameters in a model is the sum of parenthetical numbers in the selected boxes. For the PCA+KNN combination, 7 hyperparameters are activated.
Fig. 2: For each data set, searching the full configuration space ("Any Classifier") delivered performance approximately on par with a search that was restricted to the best classifier type. (Best viewed in color.)
Fig. 3: Looking at the best models from all optimization runs performed on the full search space (using different initial conditions, and different optimization algorithms) we see that different data sets are handled best by different classifiers. SVC was the only classifier ever chosen as the best model for Convex Shapes, and was often found to be best on MNIST and 20 Newsgroups; however, the best SVC parameters were very different across data sets.
Fig. 4: Using Hyperopt's Anneal search algorithm, increasing the number of function evaluations from 150 to 2400 led to a modest improvement in accuracy on 20 Newsgroups and MNIST, and a more dramatic improvement on Convex Shapes. We capped evaluations to 5 minutes each so 300 evaluations took between 12 and 24 hours of wall time.
Fig. 5: Right: TPE makes gradual progress on 20 Newsgroups over 300 iterations and gives no indication of convergence.
So far in all of these examples, every hyperparameter available to the model is being searched over. It is also possible for you to specify the values of specific hyperparameters, and those parameters will remain constant during the search. This could be useful, for example, if you knew you wanted to use whitened PCA data and a degree-3 polynomial kernel SVM. Table 1 lists the test set scores of the best models found by cross-validation, as well as some points of reference from previous work.
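A hypothetical sketch of pinning specific hyperparameters to constant values while the rest of the space is searched, in the spirit of the whitened-PCA plus degree-3 polynomial SVM example mentioned above; the component names and keyword arguments here are assumptions about the hpsklearn API rather than verified calls:

from hpsklearn import HyperoptEstimator, pca, svc_poly

# Assumed keyword overrides: whiten and degree are held constant during the
# search, while the remaining hyperparameters are still optimized.
estim = HyperoptEstimator(
    preprocessing=[pca('my_pca', whiten=True)],   # fixed: whitened PCA
    classifier=svc_poly('my_svc', degree=3),      # fixed: degree-3 polynomial kernel
)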
4,622.8
2014-01-01T00:00:00.000
[ "Computer Science" ]
N-jettiness beam functions at N3LO We present the first complete calculation for the quark and gluon $N$-jettiness ($\Tau_N$) beam functions at next-to-next-to-next-to-leading order (N$^3$LO) in perturbative QCD. Our calculation is based on an expansion of the differential Higgs boson and Drell-Yan production cross sections about their collinear limit. This method allows us to employ cutting edge techniques for the computation of cross sections to extract the universal building blocks in question. The class of functions appearing in the matching coefficients for all channels includes iterated integrals with non-rational kernels, thus going beyond that of harmonic polylogarithms. Our results are a key step in extending the $\Tau_N$ subtraction methods to N$^3$LO, and to resum $\Tau_N$ distributions at N$^3$LL$^\prime$ accuracy both for quark as well as for gluon initiated processes. Introduction Experimental measurements at the LHC have provided remarkably precise measurements for a multitude of observables, most notably weak gauge boson production, an important benchmark for the Standard Model which has been measured at percent level accuracy [1][2][3][4]. Strong constraints on physics beyond the Standard Model are also provided by precision measurements of Higgs boson production and diboson processes [5][6][7][8][9]. To make full use of these results, it is crucial to confront them with equally-precise theory predictions, which in particular requires including higher-order corrections in QCD. So far, only inclusive Drell-Yan and Higgs production have been calculated at next-to-next-to-next-to-leading order (N$^3$LO) in QCD [10][11][12][13][14][15][16][17], while significant progress is being made to reach the same precision for differential distributions [18,19]. A key challenge for such calculations is the cancellation of infrared divergences between real and virtual corrections, and hence a necessary prerequisite is a profound understanding of the infrared singular structure at three loops. N-jettiness ($\mathcal{T}_N$) is an infrared-sensitive N-jet resolution observable and thus provides a way to study the singular structure of QCD [20,21]. Its simplest manifestation $\mathcal{T}_0$, also referred to as beam thrust, is defined as $\mathcal{T}_0 = \sum_i \min\Big\{\frac{2\, q_a \cdot k_i}{Q_a},\, \frac{2\, q_b \cdot k_i}{Q_b}\Big\}$, (1.1) where the sum runs over all momenta $k_i$ in the hadronic final state, $q_{a,b}$ are the momenta of the incoming partons projected onto the Born kinematics, and the measures $Q_{a,b}$ distinguish different definitions of $\mathcal{T}_0$ [22,23]. A key feature of $\mathcal{T}_N$ is that its singular structure as $\mathcal{T}_N \to 0$ is fully captured by a factorization theorem, as shown in refs. [20,21] using soft-collinear effective theory (SCET) [24][25][26][27][28]. In the simplest case, namely the production of a color-singlet final state h, the appropriate factorization theorem reads $\frac{\mathrm{d}\sigma}{\mathrm{d}Q^2\,\mathrm{d}Y\,\mathrm{d}\mathcal{T}_0} = \sigma_0 \sum_{a,b} H_{ab}(Q^2,\mu) \int \mathrm{d}t_a\,\mathrm{d}t_b\, B_a(t_a, x_a, \mu)\, B_b(t_b, x_b, \mu)\, S_c\Big(\mathcal{T}_0 - \frac{t_a}{Q_a} - \frac{t_b}{Q_b}, \mu\Big)$. (1.2) Here, $Q^2$ and Y are the invariant mass and rapidity of h, respectively, and we normalize by the Born partonic cross section $\sigma_0$. In eq. (1.2), the full process dependence is given in terms of the hard function $H_{ab}$, which encodes virtual corrections to the underlying hard process ab → h. The beam functions $B_{a,b}$ encode radiation collinear to the incoming partons. The soft function $S_c$ encodes soft radiation and only depends on the color channel c ∈ {gg, qq}, but is independent of quark flavors. Both beam and soft functions are universal and process independent.
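Returning to the definition of T_0 above, the sum of minima is easy to evaluate numerically. The following toy sketch is entirely our own (the momenta are arbitrary) and uses the simple measure choice Q_a = Q_b = Q:

import numpy as np

# Toy illustration: evaluate T_0 = sum_i min(2 q_a.k_i / Q_a, 2 q_b.k_i / Q_b)
# for a handful of final-state momenta, with light-like reference momenta
# along the beam directions and Q_a = Q_b = Q.

def mink_dot(p, q):
    # Minkowski product with signature (+,-,-,-); p = (E, px, py, pz)
    return p[0] * q[0] - np.dot(p[1:], q[1:])

Q = 100.0                                        # hard scale (GeV), arbitrary here
qa = 0.5 * Q * np.array([1.0, 0.0, 0.0, 1.0])    # Born-projected beam-a momentum
qb = 0.5 * Q * np.array([1.0, 0.0, 0.0, -1.0])   # Born-projected beam-b momentum

# a few massless hadronic final-state momenta k_i (GeV), chosen arbitrarily
ks = [np.array([5.0, 1.0, 0.5, np.sqrt(5.0**2 - 1.0**2 - 0.5**2)]),
      np.array([8.0, -2.0, 1.0, -np.sqrt(8.0**2 - 2.0**2 - 1.0**2)]),
      np.array([3.0, 0.5, -0.5, np.sqrt(3.0**2 - 0.5**2 - 0.5**2)])]

tau0 = sum(min(2 * mink_dot(qa, k) / Q, 2 * mink_dot(qb, k) / Q) for k in ks)
print(f"T_0 = {tau0:.3f} GeV")  # small T_0 <=> radiation collinear to a beam or soft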
Since they are defined as gauge-invariant matrix elements in SCET, calculating them at higher orders also provides a well-defined means of separately studying the collinear and soft limits of QCD themselves. The beam functions B a,b not only appear in the factorization theorem for all T N , but also arise in the factorization theorem for the generalized threshold inclusive color-singlet production in hadronic collisions [29], and are thus of particular interest on their own. Since eq. (1.2) fully captures the singular limit of QCD, it can be employed as a subtraction scheme for higher-order calculations [30,31], in analogy to the q T subtraction method based on a similar factorization for the transverse-momentum distribution [32]. For both methods, extensions to N 3 LO have been recently proposed [18,33]. The O(T 0 /Q) corrections to eq. (1.2) have also been studied in the context of T N subtractions [34][35][36][37][38][39]. 1 These calculations are also interesting on their own as they provide insights into the infrared structure of QCD beyond leading power. T 0 subtractions are also the basis of combining NNLO calculations with a parton shower in GENEVA [41,42]. Our computation is based on a method of expanding cross sections around the kinematic limit in which all final state radiation becomes collinear to one of the scattering hadrons [65]. This method allows one to efficiently connect technology for the computation of scattering cross sections to universal building blocks of perturbative QFT. In particular, we perform a collinear expansion of the Drell-Yan and gluon fusion Higgs boson production cross section at N 3 LO. Subsequently, we employ the framework of reverse unitarity [66][67][68][69][70], integration-by-part (IBP) identities [71,72] and the method of differential equations [73][74][75][76][77] to obtain the collinear limit of the cross sections differential in the rapidity and transverse momentum of the colorless final states. Using the connection of this limit to the desired beam functions we extract the desired perturbative matching kernels as discussed in ref. [65]. This paper is structured as follows. In section 2, we discuss our setup for calculating the beam functions based on the collinear expansion of ref. [65]. In section 3, we briefly present our results, before concluding in section 4. Our results are also provided in the form of ancillary files with this submission. Beam functions from the collinear limit of cross sections Since the T N beam function is independent of N , we calculate it from the simplest case T 0 by considering the production of a colorless hard probe h and an additional hadronic state X in a proton-proton collision, where the incoming protons are aligned along the directions n µ = (1, 0, 0, 1) ,n µ = (1, 0, 0, −1) (2.2) and carry the momenta P 1 and P 2 with the center of mass energy S = (P 1 + P 2 ) 2 . The hard probe h carries the momentum p h , and the total momentum of the hadronic final state is denoted as k. We parameterize these momenta in terms of where Q 2 and Y are the invariant mass and rapidity of the hard probe h, respectively. Eq. (2.1) receives contributions from the partonic process where i and j are the flavors of the incoming partons which carry the momenta p 1 and p 2 , and X n is a hadronic final state consisting of n partons with the momenta {−p 3 , . . . , −p n+2 }, and n = 0 at tree level. The cross section for the partonic process in eq. (2.4), differential in the variables defined in eq. 
(2.3), is then defined as Here, we normalize by the partonic Born cross section σ 0 , dΦ h+n is the phase space measure of the h + X n state, and |M ij→h+Xn | 2 is the squared matrix element for the process in eq. (2.4), summed over the colors and helicities of all particles, with N ij accounting for the color and helicity average of the incoming particles. Explicit expressions for N ij and dΦ h+n can be found in ref. [65]. The partonic cross section in eq. (2.5) is closely related to the beam function we are interested in. For perturbative values of T N , one can match the beam functions onto the PDFs as [20,43] (2.6) Here, I ij is a perturbative matching kernel, and t = T 0 Q a , see eq. (1.2). As shown by us in ref. [65], I ij is precisely given by the strict n-collinear limit of eq. (2.5), where all loop and real momenta are treated as being collinear to n-direction, and we refer to ref. [65] for details on how to calculate this limit: Here, we have regulated both UV and IR divergences by working in d = 4 − 2 dimensions. The renormalized matching kernel is then given by [43,44,65] Here,Ẑ αs implements the standard UV renormalization by renormalizing the bare coupling constant α b s in the MS scheme, and the convolution with the PDF counterterm Γ jk cancels infrared divergences. Explicit expressions for these ingredients are collected in appendix A.3. The remaining poles in are canceled by the convolution with the beam function counter term Z B , which in the formulation of the beam function within SCET arises as an additional UV counter term in effective theory. Results In this section we report on our results for the matching kernels through N 3 LO. Our computation is based on the collinear expansion of the cross sections for the production of a Higgs boson via gluon fusion and for the production of off-shell photon (Drell-Yan) in hadron collisions. We compute the Higgs boson production cross section in the heavy top quark effective theory where the degrees of freedom of the top quark were integrated out and the Higgs boson couples directly to gluons [78][79][80][81][82][83][84][85]. We begin by computing all required matrix elements with at least one final state parton to obtain N 3 LO cross sections. All partonic cross sections corresponding to matrix elements with exactly one parton in the final state were obtained in full kinematics for the purpose of refs. [17,[86][87][88] and are in part based on refs. [89][90][91]. In order to obtain the strict collinear limit we simply expand the existing results and select the required components. To compute partonic cross sections with more than one final state parton we generate the necessary Feynman diagrams using QGRAF [92] and perform spinor and color algebra in a private code. Subsequently, we perform the strict collinear expansion of this matrix elements as outlined in ref. [65]. We make use of the framework of reverse unitarity [66][67][68][69][70] in order to integrate over loop and phase space momenta. We apply integration-by-part (IBP) identities [71,72] in order to re-express our expanded cross section in terms of collinear master integrals depending on the variables introduced in eq. (2.3). We then compute the required master integrals using the method of differential equations [73][74][75][76][77]. In order to fix all boundary conditions for the differential equations we expand the collinear master integrals further around the soft limit and integrate over phase space. 
The result of this procedure is then easily matched to the soft integrals that were obtained for the purpose of refs. [10,15,[93][94][95]. This yields all required ingredients for the bare partonic cross section expanded in the strict collinear limit of eq. (2.5). This part of the computation is the same as for the results of ref. [96]. Next, we perform the Fourier transform over t and make use of eq. (2.7) to obtain the matching kernel through N 3 LO in QCD perturbation theory. We will elaborate on the details of the computation of the matching kernels in ref. [97]. Finally, we subtract poles in as given in eq. (2.8) to obtain the renormalized matching kernel I ij (t, z, µ) through N 3 LO in QCD perturbation theory. This is carried out in Fourier (y) space, where the convolution in t becomes a simple product, and the Fourier-transformed counter term Z B can be easily predicted from the known renormalization group equation (RGE) of the beam function. We collect the required formulas in appendix A.2. It straightforward to Fourier transform back to t space after the UV renormalization, and we will provide results in both spaces. We express the perturbative matching kernels in terms of harmonic polylogarithms [98] and Goncharov polylogarithms [99] as well as a set of iterated integrals. We define the iterated integrals recursively via J a 1 (z), a 2 (z), . . . , a n (z) = z 0 dx a 1 (x) J a 2 (x), . . . , a n (x) , (3.1) with the prescription to regularize logarithmic singularities as We refer to the arguments of the iterated integrals as letters. The explicit end point of the iterated integration used for our iterated integrals is always the variablez = 1 − z. In order to express our matching kernels we require the following set of letters (or alphabet): It is possible to rationalise the square root in A by introducing the variable transformation z → (y + 1) 2 /y as noted in ref. [49] and to rewrite the iterated integrals in terms of Goncharov polylogarithms using well known techniques, see for example refs. [100][101][102][103]. Studying the letters of our alphabet and the singularities appearing in our matching kernels we see that they contain logarithmic singularities at the boundaries of the physical interval z ∈ [0, 1]. In order to provide a representation of our perturbative matching kernels that is suitable for numeric evaluation we perform a generalised power series expansion around two different points z = 0 and z = 1 up to 50 terms in the expansion. Both power series are formally convergent within the entire unit interval but converge of course faster if the respective expansion parameter is smaller. We provide both power series for all matching kernels as well as the analytic solution in ancillary files together with the arXiv submission of this article. We have calculated the matching kernel in Fourier (y) space, where its renormalization becomes simpler. As it is more commonly used in momentum (t) space, we provide results in both spaces. The corresponding kernels are expanded in powers of α s /π, The coefficientsĨ ij can be further expanded as where the logarithm L y and the distribution L m are defined as where the [· · · ] + denotes the standard plus distribution. Note that there is no one-to-one correspondence between theĨ ( ,m) ij (z) and I ( ,m) ij (z), as the Fourier transform induces a nontrivial mixing. For explicit relations for the Fourier transform, see e.g. ref. [104]. The logarithmic terms in eq. 
(3.5) encode the scale dependence of the beam function, and thus their structure is fully determined by its renormalization group equation (see appendix A.1) in terms of its anomalous dimensions and lower-order ingredients. The genuinely new three-loop results calculated by us are the nonlogarithmic boundary terms I ij (z). We performed several checks on our results. Firstly, we verified that all poles in cancel after applying UV renormalization and IR subtraction as given in eq. (2.8), where the beam function counterterm was predicted from its RGE as shown in appendix A.2. To check that our results obey the beam function RGE, we verified all logarithmic terms in eq. (3.5) against those predicted in ref. [33] by solving the beam function RGE. We also checked that our results for I ij (z) agree with the eikonal limit lim z→1 I (3) ij (z) that was predicted in ref. [33] using a consistency relation with the threshold soft function [29], and that our results agree with the generalized large-n c approximation n c ∼ n f 1 obtained in ref. [49]. Furthermore, we checked that the first four terms in the soft expansion of the Higgs boson production cross section reproduce correctly the collinear limit of the threshold expansion of the partonic cross section obtained for the purpose of refs. [19,88]. The inclusive cross section at N 3 LO for Drell-Yan and Higgs boson production was obtained in refs. [10,11,14,17,94]. We confirm that we can reproduce the first term of the threshold expansion of all partonic initial states contributing to the collinear limit of the partonic cross sections using the collinear partonic coefficient functions obtained here after integration over phase space. To illustrate our results, figure 1 shows the beam function boundary terms I ij (z) relevant for the quark beam function (left) and gluon beam function (right) as a function of z. For the purpose of this plot, we replace the occurring distributions as δ(1 − z) → 0 and L n (1 − z) → ln n (1 − z)/(1 − z). Since the different channels give rise to very different shapes and magnitudes, they are rescaled as indicated for illustration purposes only. To study the impact of our calculation on the beam function itself, we consider the cumulative beam function where we distinguish both quantities only by their arguments. As indicated, this always involves the sum over all flavors j contribution the desired beam function of flavor i. We use the MMHT2014nnlo68cl PDF set from ref. [105] with α s (m Z ) = 0.118, and evaluate eq. (3.7) through an implementation of our results in SCETlib [106]. In figure 2, we compare the u-quark beam function (left) and gluon beam function (right) at LO (gray, dot-dashed), NLO (green, dotted), NNLO (blue, dashed) and N 3 LO (red, solid) as a function of z. We fix t cut = (10 GeV) 2 and µ = 100 GeV and rescale the beam functions by z. Note that the LO result corresponds to the PDF itself, and thus illustrates the different shape of the beam function compared to the PDF. While we observe a notable effect of the N 3 LO corrections, the beam functions show good convergence overall. To judge the impact of the new three-loop boundary term I (3) ij on resummed predictions, it is more useful to show the beam function B i (t cut , z, µ) at its canonical scale µ = √ t cut , where all distributions L m in eq. (3.5) vanish and only the boundary term I (3) ij contributes. 
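The structure of the cumulative beam function referred to in eq. (3.7), a sum over partons j of the convolution of a matching kernel with a PDF, can be illustrated with a toy numerical sketch. The kernel and "PDF" below are placeholders of our own choosing, not the N3LO results or the MMHT2014 set:

import numpy as np

# Toy sketch of the convolution structure (schematically)
#   B_i(tcut, x, mu) = sum_j int_x^1 dz/z  I_ij(tcut, z, mu) f_j(x/z, mu),
# evaluated here for a single placeholder channel with a simple trapezoidal rule.

def convolve(kernel, pdf, x, n=2000):
    z = np.linspace(x, 1.0, n)
    integrand = kernel(z) * pdf(x / z) / z
    return np.trapz(integrand, z)

toy_kernel = lambda z: 1.0 - z                 # placeholder matching kernel I_ij
toy_pdf = lambda y: y**-0.5 * (1.0 - y)**3     # placeholder parton distribution f_j

print(convolve(toy_kernel, toy_pdf, x=0.01))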
In figure 3, we show the cumulative beam functions at the canonical scale with µ = √ t cut = 30 GeV, showing the relative difference of the u-quark beam function (left) and the gluon beam function (right) at NLO (green, dotted), NNLO (blue, dashed) and N 3 LO (red, solid) to the corresponding PDF itself. We observe that the shape of the beam functions differ significantly from the shape of the PDF for large z, but tend to converge towards the PDF for small z 10 −1 . As before, we see good convergence at N 3 LO, but still a notable effect of the N 3 LO corrections itself. Finally, in figure 4 we show the K-factor of the N 3 LO beam function, which we define as the ratio of the N 3 LO beam to the NNLO beam function. As before, we choose the canonical scales µ = √ t cut = 30 GeV as relevant for a resummed calculation, We show the K factor for u quarks (red, solid), d quarks (blue, dashed) and gluons (green, dotted). In all cases, we see corrections of ∼ 1 − 2% with a sizable dependence on z. For completeness, we also show the high-energy limit z → 0 of the kernels I ij (z) in appendix B. This limit is for example interesting because the small-T 1 region is known to grow at small z in deep inelastic scattering [107,108]. Conclusions We have calculated the perturbative matching kernel relating N -jettiness beam functions with lightcone PDFs in all partonic channels for the first time at N 3 LO in QCD. Our calculation is based on a method recently developed by us to expand hadronic collinear cross sections [65], demonstrating its usefulness for the calculation of universal ingredients arising in the collinear limit of QCD. We provide our results in the form of ancillary file with this submission, where we include the renormalized N -jettiness beam function in both momentum (t) and Fourier (y) space. For the t space result, we also provide its expansions around z = 0 and z = 1 through 50 orders in the expansion. In contrast to the TMD beam functions, which are based on the same collinear limit and at N 3 LO can be entirely expressed in terms of harmonic polylogarithms up to weight 5 [96,109], the T N beam functions have a much richer structure of the appearing functions and are expressed in terms of Goncharov polylogarithm, as well as iterated integrals with letters that involve square roots. It will be interesting to better understand the source of this difference. Our results have various phenomenological applications. Firstly, we provide a key ingredient to extend the N -jettiness subtraction method [30,31] to N 3 LO, which can be used to obtain exact fully-differential cross sections at this order. They are also crucial to extend the resummation of T N to N 3 LL and N 4 LL accuracy, and for matching N 3 LO calculations to parton showers based on T 0 resummation [41,42]. Acknowledgments We thank Johannes Michel, Iain Stewart and Frank Tackmann for useful discussions. A Ingredients for the calculation of the beam function In this appendix, we provide more details on the regularization and renormalization of the beam function kernels. Details of the calculation of all required integrals will be presented in ref. [97]. A.1 Renormalization group equations In t space, the beam function B i (t, z, µ) obeys the RGE [20,43] where the anomalous dimension γ i B has the all-order form Here, Γ i cusp (α s ) and γ i B (α s ) are the cusp and beam noncusp anomalous dimensions, which both depend on the color representation i = q or i = g only, but are independent of the quark flavor. 
The RGE for the matching kernel follows from eqs. (2.6) and (A.1) and the DGLAP equation It is given by [43] µ d dµ (A.4) A.2 Structure of the beam function counterterm We define the Fourier transformation of a function f as f (y, · · · ) = dt e −ity f (t, · · · ) , f (t, · · · ) = dy 2π e ityf (y, · · · ) . (A.5) The Fourier transform of the bare kernel I ij (t, z, ) can be conveniently evaluated using Here, L y is the canonical logarithm in Fourier space, and γ E is the Euler-Mascheroni constant. In Fourier space, the renormalization of the bare matching kernel in eq. (2.8) becomes multiplicative in y, and the countertermZ i B follows from the RG eq. (A.2) in y space, Solving eq. (A.8), we can predict the all-order pole structure ofZ i B as (see also ref. [120]) where β(α s , ) = −2 α s + β(α s ) is the QCD beta function in d = 4 − 2 dimensions. Expanding eq. (A.9) systematically in α, we obtain the result through three loops as Here, the γ n are the coefficients of the corresponding anomalous dimensions at O[(α s /4π) n ]. Explicit expressions for all anomalous dimensions in the convention of eq. (A.10) are collected in ref. [33]. The required three-loop results for Γ cusp and β were calculated in refs. [121][122][123] and refs. [124,125], respectively. The beam noncusp anomalous dimension were originally determined in refs. [43,44], see also refs. [63,64]. A.3 α s renormalization and IR counterterms The bare strong coupling constant is renormalised as The mass factorisation counter term can be expressed in terms of the splitting functions P ij [122,123] as Here, we suppress the argument z of the splitting functions on the right hand side and keep the summation over repeated flavor indices implicit. The convolution in eq. (A.12) is defined as (A.13) B High-energy limit of the beam function kernels Here, we present the high-energy limit z → 0 of the beam function I C 3 A ln 5 (z) + ln 4 (z) C 2 A C F ln 5 (z) + ln 4 (z) Here, the color factors C A and C F are only used for compactness of the result and should be replaced with their expressions in terms of n c . Note that the expressions for the high energy limit z → 0 up to O(z 50 ), as well as that for the threshold limit z → 1 up to O((1 − z) 50 ), can be found in electronic form in the ancillary files.
5,612.6
2020-06-04T00:00:00.000
[ "Physics" ]
All Regular-Solid Varieties of Idempotent Semirings Abstract The lattice of all regular-solid varieties of semirings splits into two complete sublattices: the sublattice of all idempotent regular-solid varieties of semirings and the sublattice of all normal regular-solid varieties of semirings. In this paper, we discuss the idempotent part. Introduction Varieties of semirings are varieties of algebras of type (2,2), where both binary operations are associative and satisfy the two usual distributive laws. Single semirings as well as classes of semirings form important structures in automata and formal language theory [5]. To get more insight into the complete lattice of all varieties of semirings, all solid and all pre-solid varieties of semirings were determined [1,2]. Now, we are interested in the complete lattice of all regular-solid varieties of semirings, and we approach it by characterizing all regular-solid varieties of idempotent semirings. To achieve our aim, we recall some basic concepts. Let F and G be the two binary operation symbols and let W_(2,2)(X_2) be the set of all binary terms of type (2,2) built up by variables from the alphabet X_2 = {x, y}. Hypersubstitutions of type (2,2) are mappings σ : {F, G} → W_(2,2)(X_2). The set of all hypersubstitutions of type (2,2) will be denoted by Hyp. A hypersubstitution σ ∈ Hyp can be extended to a mapping σ̂ on the set W_(2,2)(X) of all terms of type (2,2), where X is an arbitrary countably infinite alphabet of variables, by the following steps: σ̂[x] := x for every variable x ∈ X, and σ̂[f(t_1, t_2)] := σ(f)^(F_(2,2)(X))(σ̂[t_1], σ̂[t_2]) for f ∈ {F, G}, where σ(f)^(F_(2,2)(X)) is the term operation induced by the term σ(f) on the free algebra F_(2,2)(X) := (W_(2,2)(X); (F̄, Ḡ)), with f̄ denoting the fundamental operation of this algebra corresponding to the operation symbol f. It is easy to prove that the algebra (Hyp; ∘_h, σ_id) is a monoid, with ∘_h as binary operation (where σ_1 ∘_h σ_2 := σ̂_1 ∘ σ_2 and ∘ is the usual composition of mappings) and with σ_id, defined by σ_id(f) := f(x, y) for all f ∈ {F, G}, as identity element. Hypersubstitutions can be applied to algebras as follows: given an algebra A = (A; (F^A, G^A)) of type (2,2) and a hypersubstitution σ ∈ Hyp, one defines the algebra σ(A) := (A; (σ(F)^A, σ(G)^A)). This algebra of type (2,2) is called the derived algebra of A by σ. The hypersubstitution σ ∈ Hyp such that σ(F) = t and σ(G) = s will be denoted by σ_{t,s}. For all variables u and v, the terms F(u, v) and G(u, v) will be denoted by u + v and uv, respectively. A hypersubstitution σ ∈ Hyp is called a regular hypersubstitution if σ maps both F and G to binary terms containing both variables x and y. It is easy to verify that the set Reg of all regular hypersubstitutions of type (2,2) forms a submonoid of the monoid Hyp. An identity s ≈ t in a variety V of semirings is called a regular hyperidentity if for every σ ∈ Reg, the equation σ̂[s] ≈ σ̂[t] belongs to the set IdV of all identities satisfied in V. A variety V of semirings is called regular-solid if all identities in V are satisfied as regular hyperidentities. For more information about hypersubstitutions and varieties of algebras see [3,7]. In the next section, we will provide some necessary conditions for a variety of semirings to be regular-solid. This leads to a description of the lattice of all regular-solid varieties of semirings. The last section will be devoted to the determination of the lattice of all regular-solid varieties of idempotent semirings.
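Since the extension of a hypersubstitution is defined recursively on terms, it is easy to experiment with computationally. The following small sketch is our own illustration (not from the paper): terms of type (2,2) are represented as nested tuples, and the regular hypersubstitution σ_{xy,x+y} is applied to both sides of a distributive identity:

# Terms are nested tuples ('F', t1, t2) or ('G', t1, t2); variables are strings.
# Writing F(u,v) as u+v and G(u,v) as uv, sigma_{t,s} sends F to t and G to s.

def substitute(term, assignment):
    """Replace the variables x, y of a binary term by the given terms."""
    if isinstance(term, str):
        return assignment[term]
    op, left, right = term
    return (op, substitute(left, assignment), substitute(right, assignment))

def extend(sigma, term):
    """The extension: apply sigma(f) to the already-transformed subterms."""
    if isinstance(term, str):
        return term
    op, left, right = term
    return substitute(sigma[op], {'x': extend(sigma, left), 'y': extend(sigma, right)})

# sigma_{xy, x+y}: F(x,y) |-> G(x,y) = xy and G(x,y) |-> F(x,y) = x+y
sigma = {'F': ('G', 'x', 'y'), 'G': ('F', 'x', 'y')}

# Apply it to both sides of the distributive identity x(y+z) = xy + xz
lhs = ('G', 'x', ('F', 'y', 'z'))                    # x(y+z)
rhs = ('F', ('G', 'x', 'y'), ('G', 'x', 'z'))        # xy + xz
print(extend(sigma, lhs))   # ('F', 'x', ('G', 'y', 'z'))  i.e. x + yz
print(extend(sigma, rhs))   # ('G', ('F', 'x', 'y'), ('F', 'x', 'z'))  i.e. (x+y)(x+z)
# In a regular-solid variety both transformed sides must again form an identity.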
An equation s ≈ t is called normal if either both terms s and t are equal to the same variable or none of them is a variable, that is, if s = t or the complexity (number of occurrences of operation symbols) of both terms s and t is greater or equal to 1. A variety in which all identities are normal is called a normal variety. Now, we can derive some necessary conditions for varieties of semirings to be regular-solid. Proposition 1. Let V be a regular-solid variety of semirings. The following properties are: Some Properties (1) V is medial, distributive and satisfies the identities: (2) V is either idempotent or normal. Proof. (1) It is clear that the usual distributive laws are satisfied in V . The application of the regualar hypersubstitutions σ xy,x+y to them gives the other distributive laws since V is a regular-solid variety of semirings. Moreover, applying the regular hypersubstitutions σ xy,xy and σ yx,yx to the distributive law of V , we get the identities xyz ≈ xyxz and zyx ≈ zxyx, respectively, in V. It is folklore that the identities xyz ≈ xyxz ≈ xzyz imply the medial law xyzu ≈ xzyu and the identities xyz ≈ x 2 yz ≈ xy 2 z ≈ xyz 2 . The application of the regular hypersubstitution σ xy,x+y to these identities gives the remaining identities. (2) Suppose that t ≈ x is an identity in V which is not normal. This provides x k ≈ x ∈ IdV for some k ≥ 2 (by using the regular hypersubstitution σ xy,xy and identifying all variables with x). From the identity x 2 yz ≈ xyz ∈ IdV , we get x 4 ≈ x 3 ∈ IdV and together with x k ≈ x ∈ IdV , we obtain the idempotent law x 2 ≈ x ∈ IdV . Therefore, V is idempotent by using the regular hypersubstitution σ xy,x+y . Proposition 1 (2), leads to a description of the complete lattice Reg(Sr) of all regular-solid varieties of semirings. Denoting by L(2, 2) the lattice of all varieties of type (2, 2), we have: Proof. The lattice L N (2, 2) of all normal varieties of type (2, 2) and the lattice L Idem (2, 2) of all idempotent varieties of type (2,2) are complete sublattices of L(2, 2) (see [4,7]). Therefore, since Reg N (Sr) = Reg(Sr) ∩ L N (2, 2) (the intersection of two complete sublattices) and since Reg Idem (Sr) = Reg(Sr) ∩ L Idem (2, 2) (the intersection of two complete sublattices), it arises that both lattices Reg Idem (Sr) and Reg N (Sr) are complete sublattices. By Proposition 1 (2) the lattices Reg Idem (Sr) and Reg N (Sr) are disjoint and their union is Reg(Sr). All Regular-Solid Varieties of Idempotent Semirings In this section, the lattice of all regular-solid varieties of idempotent semirings will be determined. An equation s ≈ t is outermost if the terms s and t start with the same variable (we write lef tmost(s) = lef tmost(t)) and end also with the same variable (we write rightmost(s) = rightmost(t)). A variety V is called outermost if all equations in IdV are outermost. A variety V of semirings is commutative if x + y ≈ y + x ∈ IdV and xy ≈ yx ∈ IdV . The following result gives a description of idempotent regular-solid varieties of semirings. Proposition 3. Each idempotent regular-solid variety of semirings is either outermost or commutative. Proof. Let V be an idempotent regular-solid variety of semirings. Assume that V is not outermost. We will show that V is commutative. Since V is not outermost, without loss of generality, we can assume that there exists an equation s ≈ t in IdV such that lef tmost(s) = x = y = lef tmost(t). 
Applying the regular hypersubstitution σ xy,xy to the identity s ≈ t ∈ IdV , we get the following identity s 1 ≈ t 1 in V (where lef tmost(s 1 ) = x = y = lef tmost(t 1 )). Let us consider the function h : X → W (2,2) (X), w → x if w = x y otherwise. It is well known that this function can be uniquely extended to an endomorphism h on F (∈,∈) (X ). Then, h(s 1 ) ≈ h(t 1 ) ∈ IdV and h(s 1 )yx ≈ h(t 1 )yx ∈ IdV , so xyx ≈ yx ∈ IdV because of the idempotent law. Applying the regular hypersubstitution σ yx,yx to the latter identity, the following equations xy ≈ xyx ≈ yx hold in V as identities. The application of σ xy,x+y to xy ≈ yx shows that V is commutative. Now, we determine the commutative part of Reg Idem (Sr). Proposition 1 (1) shows that every regular-solid variety of idempotent semirings is a subvariety of the variety V M ID of all medial idempotent and distributive semirings. But the subvariety lattice of V M ID is fully described by Pastjin in [6] as follows: Let us consider the two-element algebras (using the same notations as in [6]): The algebra J generates the variety DL of all distributive lattices and L generates the variety SL of bi-semilattices. Then we have Lemma 4 [6]. The subvariety lattice of the variety V M ID of all medial idempotent and distributive semirings is a Boolean lattice with 10 atoms and 10 dual atoms, i.e., with 2 10 elements. The atoms are exactly the varieties Proof. Let V be a regular-solid variety of commutative and idempotent semirings. By Proposition 1 (1), the variety V is a commutative subvariety of V M ID . So V is either trivial or a join of some commutative atoms listed in Lemma 4. This means that either V is trivial or V ∈ {SL, DL, SL ∨ DL}. But the varieties DL and SL ∨ DL are not regular-solid. Indeed, the application of σ x+xy,x+xy to the commutative identity xy ≈ yx gives the identity x+xy ≈ y +yx which cannot be satisfied in DL because of the absorption laws. IdSL is the set of all regular identities of type (2,2). It is clear that applying regular hypersubstitution to any regular identity, one gets a regular identity. So SL is regular-solid. We are now interested in the outermost part of Reg Idem (Sr). Some definitions and facts will be referred. Definition. A variety V of semirings is s-outermost if for any identity s ≈ t ∈ IdV , the equations s ≈ t as well asσ x+y,yx [s] ≈σ x+y,yx [t] are outermost. This definition coincides with that one given in [1] and it is clear that every outermost regular-solid variety of semirings is s-outermost since the hypersubstitution σ x+y,yx is regular. A variety V of semirings is said to be a solid variety if for all s ≈ t ∈ IdV and for all σ ∈ Hyp, we getσ[s] ≈σ[t] ∈ IdV . It is well known that the variety RA (2,2) generated by all projection algebras of type (2, 2) is a variety of semirings and it is defined by RA (2,2) [1]. It is already proved: Lemma 6 [1]. The lattice of all solid varieties of semirings is the four-element chain represented by T ⊂ RA (2,2) Now, we can prove: Lemma 8. Let V be an outermost regular-solid variety of idempotent semirings. If V is different from RA (2,2) then V is regular i.e all equations in IdV are regular. Proof. We will prove that if V is not regular then V = RA (2,2) . Since V is outermost regular-solid variety of semirings, V is s-outermost and we have RA (2,2) ⊆ V (Lemma 7). It left to prove that V ⊆ RA (2,2) i.e Id(RA (2,2) ) ⊆ IdV . 
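The step in the proof of Theorem 5, namely that applying σ_{x+xy,x+xy} to xy ≈ yx produces x + xy ≈ y + yx, which fails in the two-element distributive lattice because of the absorption laws, can be checked mechanically. The brute-force sketch below encodes the two-element lattice with + as join and multiplication as meet; the encoding is illustrative and not taken from the paper.

```python
from itertools import product

# Two-element distributive lattice J: "+" is join (max), "." is meet (min).
join = lambda a, b: a | b
meet = lambda a, b: a & b

def lhs(x, y):   # sigma_{x+xy,x+xy} applied to xy gives x + x.y
    return join(x, meet(x, y))

def rhs(x, y):   # applied to yx it gives y + y.x
    return join(y, meet(y, x))

counterexamples = [(x, y) for x, y in product((0, 1), repeat=2) if lhs(x, y) != rhs(x, y)]
print(counterexamples)   # [(0, 1), (1, 0)] -> x + xy ≈ y + yx fails in DL
```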
Since V is not regular, there exists an identity s ≈ t in IdV such that, without loss of generality, a variable x i occurs in s but not in t. Applying σ xy,xy to s ≈ t and identifying all variables different from x i with x, we get xx i x ≈ x ∈ IdV because V is outermost and idempotent. Therefore, xyz ≈ xz ∈ IdV . The application of σ xy,x+y to this identity gives x + y + z ≈ x + z ∈ IdV . Moreover, using the previous identity, the distributivity and the idempotency, the basis identities of RA (2,2) are also identities in V . This finishes the proof of Id(RA (2,2) ) ⊆ IdV . Now, we have all tools to prove our main result: Theorem 9. The lattice of all regular-solid varieties of idempotent semirings is the lattice Proof. Let V be a regular-solid variety of idempotent semirings. Then V is either commutative or outermost (Proposition 3). If V is commutative then V ∈ {T , SL} (Theorem 5). Otherwise, V is outermost. Then V = RA (2,2) or V is regular (Lemma 8). Therefore, V = RA (2,2) or SL ⊆ V since Id(SL) is the set of all regular identities of type (2, 2). Moreover, V is s-outermost and thus RA (2,2) ⊆ V (Lemma 7). Altogether, we have V = RA (2,2) or RA (2,2) ∨ SL ⊆ V .
3,057.6
2017-06-01T00:00:00.000
[ "Mathematics" ]
A fractional order nonlinear model of the love story of Layla and Majnun In this study, a fractional order mathematical model of the romantic relations of Layla and Majnun is numerically simulated by the Levenberg–Marquardt backpropagation neural networks. The fractional order derivatives provide more realistic solutions than the integer order derivatives of the mathematical model based on the romantic relationship of Layla and Majnun. The mathematical formulation of this model has four categories that are based on the system of nonlinear equations. The exactness of the stochastic scheme is observed for solving the romantic mathematical system using the comparison of the attained and Adams results. The data for testing, authorization, and training is provided as 15%, 75% and 10%, along with twelve hidden neurons. Furthermore, the reducible value of the absolute error improves the accuracy of the designed stochastic solver. To prove the reliability of the scheme, the numerical measures are presented using correlations, error histograms, state transitions, and regression. Majnun was unaware that Layla had suffered the same injury. They sacrificed themselves at the same time, and they left a true love story that will never die. Almost all religions preach love in their holy books, such as the Quran, Torah, Gita, or Bible. Famous personalities have also left stories of love, whether Zulekha, Bulleh Shah, Khusro, Ghalib, or Iqbal. Romantic relations can be described in complex variable form. This model contains two dynamics, which provide the time variation of the feelings of two personalities in a romantic relationship. For instance, the two individuals can hold different thoughts and feelings for each other, and may not like the same things. Several complex variable models have been defined in different areas, such as high-energy accelerators 8 , plasma physics 9 , rotor dynamics 10 , optical systems 11 and a few other science areas [12][13][14][15] . The current research aims to provide the simulations of the fractional order mathematical system using the love relations of Layla and Majnun. The stochastic procedures are applied based on the Levenberg-Marquardt backpropagation neural networks to solve the Layla and Majnun system. The time-fractional form of the derivative has several applications for describing different conditions, since it captures the memory of a dynamical system; examples include the mathematical coronavirus model of the epidemic in India with control and transmission dynamics 16 , huanglongbing spread in a citrus tree 17 , a diffusion system under external force 18 , HIV infection with CD4+ T-cells using the antiviral drug therapy impacts 19 , the Typhoid fever system 20 , the coupled Ramani equations 21 , generalized integral equations 22 , an impulsive hybrid nonlinear system 23 , controllability of the Hilfer fractional derivative with non-dense domain 24 , integro-differential delay inclusions 25 , differential equations with infinite delay via measures of noncompactness 26 , and neutral differential inclusions of Clarke subdifferential type 27 . Some novel features of this work are presented as: • The stochastic performances based on the Levenberg-Marquardt backpropagation neural networks have never been executed before for the fractional order Layla and Majnun system. • The design of the stochastic scheme is presented successfully to solve the mathematical model using the love relations of Layla and Majnun.
• Three cases of the fractional order Caputo derivative have been provided for the nonlinear Layla and Majnun model. • The fractional order derivative values are taken between 0 and 1 for solving the model. • The perfection of the scheme is accomplished based on the performances through the comparison of results. • The small absolute error values validate the precision, accuracy, and correctness of the stochastic procedure. • The error histograms, regression, correlation, and state transitions authorize the consistency of the designed scheme for the Layla and Majnun model. The remaining parts of this work are organized as: The model of the story of Layla and Majnun is shown in "Fractional Layla and Majnun system". A summary of stochastic procedures is reported in "An overview: Stochastic operators". The stochastic procedure is derived in "Designed methodology". Simulations are presented in "Numerical performances". The conclusions are provided in "Numerical performances". Fractional Layla and Majnun system The romantic relationships between the Layla and Majnun have been presented in this section. The simplest form of the nonlinear system with two complex variables is given as 28-31 : where β a > 0 , β c < 0 , β d < 0 and β b < 0 . The variables L(x) and M(x) present the feelings of Layla and Majnun. The constant parameters β b and β a , indicate the environmental effects on their feelings. The fixed β a > 0 value indicates that everyone had sympathy for Majnun. Subsequently, the environmental effects were hopeful for Majnun. Whereas β b < 0 shows the unkindness behavior on Layla that has the society and her family. The terms M 2 and L 2 indicate the extreme love and any indicator of kindness from the other inspired them broadly. The motive to fix the values βc < 0 and βd < 0 represent that they have true love, reacting totally to the feelings of the other, but blank of self-hood and seduction. After providing the database of the model given in Eq. (1), authors expanded the system in the complex plane by selecting M = M r + iM i and L = L r + iL i , given as: where, M i , M r are the Majnun's feelings and L i , L r are the Layla's feelings based on the imaginary and real parts. This study shows the fractional order mathematical model using the love relations of Layla and Majnun. The fractional order form of this romantic mathematical model is given as: www.nature.com/scientificreports/ where α is the Caputo fractional order derivative, i 1 , i 2 , i 3 and i 4 are the initial conditions in the above system (1) and (2). The fractional order derivative has been taken in the interval between [0, 1]. The fractional derivatives have been applied to present the specific performances. The fractional types of models present the minute specifics through the superfast/super slow transition. The system dynamics features using the fractional calculus are considered difficult to interpret by taking the integer orders. The dynamics of the system are accomplished through the fractional form of the derivatives that provide better performances instead of integer derivatives. The fractional form of the derivatives is applied to substantiate the performance of the model based on various real applications [32][33][34][35][36][37] . These derivatives have extensive applications in mathematics, control systems, physical and engineering fields. 
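As a rough illustration of how such a Caputo-type system can be integrated numerically, the sketch below uses an explicit fractional Euler (product-rectangle) discretization of the equivalent Volterra integral form. The right-hand side shown is only a placeholder following the paper's sign pattern for β_a, β_b, β_c, β_d; the actual nonlinear terms must be taken from eqs. (2)-(3), and the initial conditions are assumed values.

```python
import numpy as np
from math import gamma

def caputo_euler(f, y0, alpha, t_end, h):
    """Explicit fractional Euler scheme for D^alpha y = f(y) (Caputo), using
    y(t) = y0 + 1/Gamma(alpha) * ∫_0^t (t-s)^(alpha-1) f(y(s)) ds."""
    n_steps = int(round(t_end / h))
    y0 = np.asarray(y0, dtype=float)
    y = np.zeros((n_steps + 1, y0.size))
    y[0] = y0
    c = h**alpha / gamma(alpha + 1.0)
    fs = [f(y0)]                                  # stored right-hand sides f(y_j)
    for n in range(1, n_steps + 1):
        j = np.arange(n)
        w = (n - j)**alpha - (n - j - 1)**alpha   # quadrature weights
        y[n] = y0 + c * (w[:, None] * np.array(fs)).sum(axis=0)
        fs.append(f(y[n]))
    return y

beta_a, beta_b, beta_c, beta_d = 1.0, -1.0, -1.0, -1.0

def rhs(y):
    # Placeholder coupling only; replace with the exact M^2 and L^2 terms of eqs. (2)-(3).
    Mr, Mi, Lr, Li = y
    return np.array([
        beta_a * Mr + beta_c * Lr * (1.0 - Lr**2 - Li**2),
        beta_a * Mi + beta_c * Li * (1.0 - Lr**2 - Li**2),
        beta_b * Lr + beta_d * Mr * (1.0 - Mr**2 - Mi**2),
        beta_b * Li + beta_d * Mi * (1.0 - Mr**2 - Mi**2),
    ])

sol = caputo_euler(rhs, y0=[0.2, 0.1, 0.15, 0.05], alpha=0.5, t_end=5.0, h=0.01)
print(sol[-1])   # feelings (Mr, Mi, Lr, Li) at t = 5
```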
Fractional calculus has been studied widely during the last two to three decades on the basis of several well-known operators, e.g., Riemann-Liouville 38 , Grünwald-Letnikov 39 , Weyl-Riesz 40 , Erdélyi-Kober 41 and Caputo 42 . All of these operators have their individual significance and value. However, the well-known Caputo derivative operator can be applied to homogeneous as well as non-homogeneous conditions, and its implementation is simpler than that of the other derivatives. Therefore, in this study the Caputo derivative is used for the numerical treatment of the model. An overview: stochastic operators The designed stochastic procedure is applied to solve the above system (Eq. 3). Stochastic schemes using global/local search operators have been presented for stiff, complicated, and nonlinear differential systems 43 . A few well-known applications of these solvers are coronavirus differential models 44 , food chain models 45,46 , transmission of heat in a radiative, convective, and moving rod using the thermal conductivity 47 , a longitudinal porous fin of trapezoidal shape 48 , wireless channels 49 , HIV systems 50 , delayed differential models 51,52 , thermal explosion models 53 and a third order nonlinear singular system 54 . Designed methodology In this section, the proposed stochastic process is described in two steps for the fractional order nonlinear differential system of the romantic relationship between Layla and Majnun. First, the necessary steps of the stochastic procedure are described. Second, the execution performances for solving the model are explained. The optimization through the multi-layer process is demonstrated in Fig. 1. In the first part of Fig. 1, the mathematical form of the fractional Layla and Majnun model is presented; the proposed scheme based on the stochastic computing solver is given in the second part of Fig. 1; the optimization procedure is illustrated in the third part of Fig. 1; and some result graphs are presented in the last part of Fig. 1. The procedure using the Matlab command 'nftool' is applied with the data divided as 15%, 75% and 10% for testing, authorization, and training. The implementation procedures based on the numerical results are provided using default parameter values to generate the dataset. Twelve hidden neurons are used together with these data splits for testing, authorization, and training. The supervised neural network process is implemented with attention to complexity, overfitting, premature convergence, and underfitting. These network parameters have been adjusted through exhaustive simulation investigations, knowledge, experience, and care, since small variations can degrade the results. Figure 2 shows the stochastic process through a generic representation of a single neuron. These procedures have been programmed in Matlab (the 'nftool' command) to achieve the appropriate performances based on the learning approaches, hidden neurons, verification, and testing statistics. The execution of the stochastic performances and the parameter settings for the fractional Layla and Majnun model are given in Table 1. The training of the network is executed through the stochastic procedure for the Layla and Majnun model, while the backpropagation process is used to compute the Jacobian of the mean square error with respect to the weights and biases. The weight update of the Levenberg-Marquardt backpropagation is given by Δw = −(J^T J + µI)^(−1) J^T ε, where J is the Jacobian, µ is the damping (Mu) parameter, ε is the error, and I is the unit matrix.
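A minimal sketch of the Levenberg-Marquardt update just described is given below; the toy linear-fit data, array shapes, and fixed damping value are assumptions made for illustration and do not reproduce the paper's network or settings.

```python
import numpy as np

def lm_step(jacobian, error, weights, mu):
    """One Levenberg-Marquardt update:  w_new = w - (J^T J + mu*I)^(-1) J^T e."""
    JtJ = jacobian.T @ jacobian
    g = jacobian.T @ error
    delta = np.linalg.solve(JtJ + mu * np.eye(JtJ.shape[0]), g)
    return weights - delta

# Toy illustration: fit y = w0 + w1*x to noisy data with one LM step per iteration.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 20)
y = 1.5 + 2.0 * x + 0.01 * rng.standard_normal(x.size)

w = np.zeros(2)
mu = 1e-3
for _ in range(20):
    pred = w[0] + w[1] * x
    e = pred - y
    J = np.column_stack([np.ones_like(x), x])   # d(pred)/dw for the linear model
    w = lm_step(J, e, w, mu)
print(w)   # approximately [1.5, 2.0]
```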
The parameter set is shown in Table 1 using the minor alteration, which shows the premature convergence (poor performances of the results). Hence, these settings should be combined with general attention, after producing various understanding and investigations. Numerical performances In this section, three different cases are provided using the fractional order derivatives to represent the obtained solutions for the differential model using the romantic relationships between Layla and Majnun, given as: Case 1: Suppose the fractional Layla and Majnun model with α = 0.5, β b = β c = β d = −1 , β c = −1 and β a = 1 is given as: www.nature.com/scientificreports/ Case 2: Consider the nonlinear differential system using the romantic relationships between Layla and Majnun with α = 0.7, α = 0.5, β b = β c = β d = −1 , β c = −1 and β a = 1 is given as: www.nature.com/scientificreports/ Case 3: Consider the nonlinear differential system using the romantic relationships between Layla and Majnun with α = 0.9, α = 0.5, β b = β c = β d = −1 , β c = −1 and β a = 1 is given as: The simulations through the stochastic procedures are presented for the fractional order differential system of the romantic relationships between Layla and Majnun using 12 neurons with the data selection as 15%, 75% and 10% for testing, authorization, and training. The neuron structure based on the romantic system is provided in Fig. 3. www.nature.com/scientificreports/ The graphic representations through the stochastic scheme for the fractional order nonlinear differential system based romantic relationships between Layla and Majnun are illustrated in Figs. 4, 5, 6, 7, 8, 9, 10, 11 and 12. For the performances and state transitions, the capable numerical representations for each variation are given in Figs. 4 and 5. Figure 4 depicts the convergence curves using the mean square error based on transitions, training, authentication, and best curve. Figure 4a presents that by increasing the Epochs, the training, authentication, and www.nature.com/scientificreports/ testing curves leads to the position of steady state with the computing performance up to 10 -09 . Likewise, Fig. 4b, c also achieved the level of convergence as 10 -09 and 10 -08 . The best performances of the differential system using the romantic relationships between Layla and Majnun have been calculated at epochs 133, 70 and 38, which are found in the ranges of 1.0383 × 10 09, 3.5323 × 10 09 and 8.2352 × 10 08, respectively. The values of the error gradient shows the direction as well as magnitude, which is performed during the proposed neural network training and applied to update the weights of the network in the right amount and direction. In the process of neural network fitting, backpropagation calculates the loss function gradient using the network weights based on the single input/output and perform competently. Mu shows the process of training, and it shows the momentum parameter constant or momentum that includes the expressions of updated weights to avoid the issue of local minima and convergence. Figure 5 is drawn based on the gradient operators are calculate as 9.9903 × 10 -08 , 7.6016 × 10 -07 and 2.686 × 10 -06 . These representations authenticate the accuracy of the stochastic scheme for www.nature.com/scientificreports/ solving the nonlinear fractional order model. Mu represents the algorithm's control parameter that is applied in neural network training. Mu range is taken between 0 and 1. 
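For readers without access to Matlab's nftool, the following rough Python stand-in mimics the reported setup (a single hidden layer of twelve neurons and a 75%/15%/10% division of the data into training, validation, and testing); the placeholder reference signal and the lbfgs solver are assumptions, since scikit-learn does not provide a Levenberg-Marquardt trainer.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Placeholder stand-in for the reference solution of the four system classes.
t = np.linspace(0.0, 5.0, 500).reshape(-1, 1)
y_ref = np.column_stack([np.exp(-t.ravel()) * np.cos(k * t.ravel()) for k in range(1, 5)])

# Roughly mirror the paper's split: 75% training, 15% validation, 10% testing.
t_train, t_rest, y_train, y_rest = train_test_split(t, y_ref, train_size=0.75, random_state=1)
t_val, t_test, y_val, y_test = train_test_split(t_rest, y_rest, test_size=0.40, random_state=1)

# Twelve hidden neurons, as in the paper; lbfgs used as a stand-in optimizer.
net = MLPRegressor(hidden_layer_sizes=(12,), solver="lbfgs", max_iter=5000, random_state=1)
net.fit(t_train, y_train)
print("validation MSE:", np.mean((net.predict(t_val) - y_val) ** 2))
print("test MSE:", np.mean((net.predict(t_test) - y_test) ** 2))
```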
The negligible Mu values presents the consistent network's convergence for solving the model. For each variation, the fitting curve is provided to solve the love story mathematical model in Figs. 6, 7 and 8. The error plots with the authentication, and testing/training for the stochastic procedure are given to solve the fractional order nonlinear differential model using the romantic relationships between Layla and Majnun. The error plots are represented in Fig. 9a-c and regression is plotted in Figs. 10, 11 and 12 for the fractional order nonlinear differential model using the romantic relationships between Layla and Majnun. These error values are found around 3.92 × 10 -05 , 1.68 × 10 -05 and −9.1 × 10 -10 for case 1, 2 and 3. One can perceive that the values of the correlation are calculated 1 for the nonlinear differential model using www.nature.com/scientificreports/ the romantic relationships between Layla and Majnun. The training, testing and substantiation plots designate the exactness of the scheme. The illustrations of the error histogram are performed to authenticate the errors using the target and predicted performances to train the designed procedure based on the artificial neural network. These error presents that the difference between predicted and targeted performances. The mean square convergence values for training, epochs, validation, backpropagation, and complexity soundings are derived in Table 2 for the fractional order differential model using the romantic relationships between Layla and Majnun. The results (obtained and reference) comparison and the values of the absolute error are provided in Figs. 13 and 14. The outcomes of each category of the nonlinear differential model using the romantic relationships between Layla and Majnun are illustrated in Fig. 13a-d. One can prove the accuracy of the stochastic procedures through the overlapping of the solutions for the romantic relationships between Layla and Majnun. The absolute error are derived in Fig. 14 for the classes M r (x) , M i (x) , L r (x) and L i (x) . It is authenticated in Fig. 14a, the graphs of absolute error for M r (x) found as 10 -04 to 10 -05 , 10 -04 to 10 -06 and 10 -03 to 10 -05 for case 1 to 3. Figure 14b indicates the AE for M i (x) lie as 10 -04 to 10 -07 , 10 -04 to 10 -06 and 10 -03 to 10 -06 . Figure 14c signifies the absolute error for L r (x) , which is calculated 10 -04 to 10 -05 , 10 -03 to 10 -06 and 10 -03 to 10 -05 for case 1, 2 and 3. Figure 14d implies that the AE for L i (x) is calculated 10 -05 to 10 -06 , 10 -05 to 10 -07 and 10 -04 to 10 -07 for case 1, 2 and 3. These AE values represents the correctness of the scheme for the differential Layla and Majnun model. Concluding remarks The current research is related to present the solutions of the romantic relations of the Layla and Majnun. The stochastic computing paradigms for solving the Layla and Majnun model is first time presented in this study. Therefore, this nonlinear model has been numerically simulated by using the Levenberg-Marquardt backpropagation neural networks. The fractional kind of derivatives makes this mathematical system more realistic with the use of such dynamics. The romantic relationship between the Layla and Majnun indicates a nonlinear, noninteger order mathematical system. The Layla and Majnun fractional order model have been categorized into four dynamics. The correctness and exactness of the stochastic approach for the system is presented using the comparison of reference and obtained solutions. 
Twelve hidden neurons have been used throughout the study, with the data divided as 15%, 75% and 10% for testing, authorization, and training. The gradient and step-size (Mu) values are found to be suitable for the differential model of the romantic relationship between Layla and Majnun. The small absolute error demonstrates the precision of the proposed procedure. The validity, consistency, competence, and correctness of the proposed stochastic procedure are confirmed using different statistical measures.
4,009.6
2023-04-03T00:00:00.000
[ "Mathematics" ]
Demonstrating a Filter-Free Wavelength Sensor with Double-Well Structure and Its Application This study proposed a filter-free wavelength sensor with a double-well structure for detecting fluorescence without an optical filter. The impurity concentration was optimized and simulated to form a double-well-structured sensor, of which the result was consistent with the fabricated sensor. Furthermore, we proposed a novel wavelength detection method using the current ratio based on the silicon absorption coefficient. The results showed that the proposed method successfully detected single wavelengths in the 460–800 nm range. Additionally, we confirmed that quantification was possible using the current ratio of the sensor for a relatively wide band wavelength, such as fluorescence. Finally, the fluorescence that was emitted from the reagents ALEXA488, 594, and 680 was successfully identified and quantified. The proposed sensor can detect wavelengths without optical filters, which can be used in various applications in the biofield, such as POCT as a miniaturized wavelength detection sensor. Introduction Optical detection techniques can predict, diagnose, and analyze diseases as a compact system by detecting the optical properties of a substance, such as absorption [1], fluorescence [2], and luminescence [3], and hence, they are widely used as measurement and analysis equipment in various fields, such as in medicine [4], the environment [5], in chemicals [6], in food [7], in biology [8,9], and in the military [10]. In general, silicon-based photodiodes detect light in various applications, but high-performance photodetectors are required to obtain accurate and fast information. Therefore, by integrating silicon and specific materials, photodetectors which have a broad detection wavelength range from ultraviolet to infrared rays, a high quantum efficiency, and fast response characteristics have been reported [11,12]. Among these, the fluorescence detection method is the most helpful one because it contains a large amount of information, and it is easy to handle. Detection methods such as fluorescence intensity (FI) [13], fluorescence resonance energy transfer (FRET) [14], fluorescence polarization (FP) [15], and time-resolved fluorescence (TRF) [16] are applied using fluorescence with high selectivity and sensitivity to the detection target. The fluorescence wavelength is selectively detected using a monochromator or an optical filter. These components were applied to a spectrofluorometer, which detects a specific wavelength by dispersing the light that is emitted from a sample using a diffraction grating or prism, and a fluorescence microscope, which can detect fluorescence with high sensitivity using an optical filter. Because simultaneous detection of multiple wavelengths is possible in a relatively wide band, it is advantageous for quantitative and qualitative analyses [17,18]. However, because the wavelength is detected by dispersing light at one point, it becomes challenging to image the sample. In addition, it is possible to selectively detect a relatively large area of fluorescence that is emitted from a sample [19,20]. However, since the detected fluorescence wavelength depends on the optical filter, various wavelengths cannot be detected simultaneously. 
Although these fluorescence detection devices have high sensitivity and selectivity, they are expensive and bulky owing to the integration of various optical filters and components, which makes applying them to point-of-care testing (POCT) a challenge as this requires it to be portable [21,22]. Therefore, various studies have been reported to realize the miniaturization, low cost, and high performance of the fluorescence detection systems. On-chip fluorescence detection devices have been reported to integrate the interference or absorption filters into CMOS image sensors with high selectivity and sensitivity. Owing to these advantages, Ohta et al. developed an in vivo fluorescence detection device and successfully detected in vivo images of rats [23]. Additionally, a hybrid filter, where the absorption and interference filters are integrated devices, is reported, as shown in Figure 1a. Because it is possible to detect multiple wavelengths simultaneously, their applications in biofields such as resonance energy transfer (FRET) and multiplex fluorescence imaging analysis are expected [24]. However, because the optical filter is integrated into the CMOS image sensor, the detection of different wavelength changes in the fluorescent reagent is challenging.
Figure 1b shows a method of multi-wavelength analysis using a single pixel and a CMOS buried quad p-n junction photodiode structure [25,26]. Because the light wavelength has a different absorption depth depending on the silicon absorption coefficient, it is possible to separate the wavelength by measuring the current that is generated at each p-n junction. Furthermore, because the structure of such a buried multi-p-n junction can detect a wavelength in a single pixel, it has a higher fill factor than a CMOS image sensor with an integrated RGB filter does. Therefore, it provides a high-resolution fluorescence image in the biofield. However, the wavelength and band numbers were fixed according to the buried p-n junction numbers and their depth. Previously, we reported a filter-free fluorescence sensor with a photogate structure to detect the light intensity of multiple wavelengths without using optical components, even when the excitation and fluorescence wavelengths are changed, as shown in Figure 1c [27,28]. The sensor with a single-well structure on an n-type silicon substrate adjusts the potential depth W multiple times by controlling the photogate (PG) voltage, and it detects only electrons that move toward the surface side from the adjacent electrodes I PG . However, because the electrons cannot be detected toward the substrate at depth W, the light reception sensitivity may decrease. Moreover, an error occurred in the measured value according to the change in the full width at half maximum (FWHM) of the incident light. Because the wavelength information of the excitation light and fluorescence was required to obtain the fluorescence intensity, it was impossible to detect the intensity of the unknown fluorescence. In addition, it was necessary to separate the independent wells to configure the peripheral circuit. This study proposes an improved filter-free wavelength sensor with a double-well structure that can detect unknown wavelengths and integrate the peripheral circuits. The silicon-based impurity concentration conditions were simulated and evaluated to fabricate a filter-free wavelength sensor with a double-well structure. We report the experimental results of the wavelength dependence, light intensity dependence, and FWHM dependence of the fabricated sensor.
Lastly, we report the measurement results for three fluorescent reagents with different wavelengths as an application experiment using the proposed sensor. Figure 1c shows a previously reported filter-free fluorescence sensor on n-type silicon substrate. A p-well layer was formed on the silicon substrate and an n + diffusion layer was arranged to be adjacent to the photogate as an electrode. A photogate was placed in the sensing area, and the applied positive voltage bent the potential distribution on the surface. The p-well was set at the ground level and a positive bias was applied to the n-sub to form a potential distribution. The photoelectrons that were generated on the surface side of potential depth W were collected on the surface and detected as an electric current from the readout electrode. Although an n-type sensor detects the wavelength intensity by measuring the current on the surface side, the characteristics of the peripheral circuits changed owing to the voltage that was applied to the n-substrate, making it a challenge for us to array the sensor. Additionally, the quantum efficiency may decrease considering that the photocurrent beyond the saddle point depth W cannot be detected. Figure 2a shows a schematic of the proposed filter-free wavelength sensor that was obtained by changing the substrate from an n-type to a p-type silicon substrate in the double-well structure. The sensor proposed a three-layer structure where a deep n-well and p-well were formed on a p-sub silicon substrate to measure the electrons generated by light passing through W. A photogate structure was adopted as in the conventional structure, and an n + diffusion layer was deposited on the deep n-well to detect the electrons generated at a position that was deeper than W. Therefore, the peripheral circuit characteristics do not change by providing the n-well in a region that is different from the deep n-well, and the sensor and peripheral circuit can be integrated. Because measuring the light that is passing through the saddle point W with a photocurrent is also possible, a quantum efficiency that is higher than that of an n-type silicon substrate sensor can be expected. Furthermore, because the potential depth position can be freely moved while maintaining the potential distribution steeply by the photogate voltage and body biasing, a high sensitivity can also be expected by optimizing the wavelength that is to be detected. Design and Principle The filter-free wavelength sensor measures the light intensity using the absorption coefficient α of silicon according to the wavelength of the light instead of removing the optical filter. The light irradiated on the silicon surface is absorbed inside the silicon to generate electron-hole pairs, and the output photocurrent can be measured. Equation (1) shows the photocurrent I PG that is generated based on the depth W, which is expressed as: The light intensity φ0 on the silicon surface was calculated by substituting the measured photocurrent.
The photocurrent in the well that passes through the depth W is expressed as: where h is Planck's constant, c is the speed of light in vacuum, and λ is the light wavelength, q is the elementary charge, Wpn is the junction depth between the p-well and the deep n-well, and S is the sensing area. By calculating the ratio of Equations (1) and (2), we obtain: Equation (3) indicates that the ratio of IPG-to-In-well does not depend on the light intensity, but on the potential depth W and silicon absorption coefficient α. In other words, because the current ratio does not depend on the light intensity, it is possible to detect the wavelength by calculating the ratio of the currents IPG and In-well. Device Simulation A three-dimensional device simulation was conducted using SPECTRA (Link Research, Japan) to evaluate the current characteristics and effectiveness of the sensor with the proposed structure. The light source conditions that were irradiated to the sensing area had a diameter of 100 μm, a light intensity of 1 mW/cm 2 , and a wavelength in the 450-750 nm range. Figure 3a shows the change in the IPG and In-well current and its current ratio according to the wavelength. As the wavelength increased, the penetration depth of the light irradiated onto the silicon substrate increased. Therefore, the surface-side current IPG decreased and the substrate current In-well increased. The simulation results indicate that the current ratio in the 450-750 nm wavelength range changed from 0.09 to 3.96. Consequently, we were able to identify the wavelengths in the visible light region by calculating the respective current ratios.
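Because eqs. (1)-(3) are not reproduced in the text above, the following sketch assumes a simple Beer-Lambert split of the photogenerated charge between the region above the saddle point W and the region between W and the deep n-well junction W_pn; the absorption-coefficient values and depths are rough illustrative numbers, not the paper's simulation data, and the incident intensity cancels in the ratio just as stated for Equation (3).

```python
import numpy as np

# Rough, illustrative absorption coefficients of silicon (1/µm); real values
# should come from tabulated optical data for silicon.
alpha_si = {450: 2.5, 550: 0.7, 650: 0.25, 750: 0.10}

def well_currents(alpha, W=0.8, W_pn=7.0):
    """Assumed Beer-Lambert split of the photogenerated charge:
    carriers created above the saddle point W -> surface (photogate) current,
    carriers created between W and the deep n-well junction W_pn -> well current.
    Any ratio of the two is independent of the incident intensity.
    """
    i_surface = 1.0 - np.exp(-alpha * W)
    i_well = np.exp(-alpha * W) - np.exp(-alpha * W_pn)
    return i_surface, i_well

for wl, a in alpha_si.items():
    i_s, i_w = well_currents(a)
    print(f"{wl} nm: surface/well = {i_s / i_w:.2f},  well/surface = {i_w / i_s:.2f}")
```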
Process Simulation and Fabrication The semiconductor process conditions were determined using the process simulator TCAD (Taurus TSUPREM-4) to fabricate a filter-free wavelength sensor with a double-well structure. To form a double-well structure, the impurity concentration must be in the order of p-well > deep n-well > p-substrate. Because the p-sub silicon that was used was 2.24 × 10 14 cm −3 , the impurity concentrations of the deep n-well and p-well were aimed at 10 15 and 10 16 cm −3 , respectively. Table 1 lists the ion implantation conditions of the fabricated sensor according to these requirements. The proposed double-well structure requires a deep n-well junction depth to prevent the p-well and p-sub junctions. The parameters affecting the impurity concentration in the process include the dose amount, the acceleration voltage, the implantation angle as the ion implantation conditions, and the time and temperature as the drive-in conditions. Among them, the parameters that affect the n-well junction depth are the acceleration voltage, time, and temperature. Because the deep n-well aimed to form a depth of 7 μm, the ion implantation was performed with an accelerating voltage of 150 keV and a dose of 1.0 × 10 12 cm −2 . The drive-in was performed at 1150 °C for 1530 min. Because the p-well also aimed to form a depth of 2.5 μm, the ion implantation was performed with an accelerating voltage of 80 keV and a dose of 2.0 × 10 12 cm −2 . The drive-in was performed at 1150 °C for 270 min. The analysis was performed using secondary ion mass spectrometry (SIMS) to determine the impurity concentration in the fabricated sensor. Figure 3b shows the simulation results of the TCAD and SIMS analyses. The values of phosphorus and boron which were obtained by SIMS analysis were approximately identical to those that were assumed by TCAD, and a double-diffusion well structure was fabricated to be identical to the simulation data. Figure 4 shows the fabrication process of the sensor. The proposed double-well structure sensor was fabricated using the 1-polysilicon, 2-metal process at the LSI facility in the Toyohashi University of Technology, Japan. The drawing rule used 5 μm, and the wafer used a 4-in p-silicon substrate (P100, 60 Ω/cm, 2.24 × 10 14 cm −3 ). Figure 5 shows the processed wafer and microscope images of the sensor (300 × 300 μm).
Single Wavelength To evaluate the fabricated sensor, we measured its wavelength resolution, wavelength detection range, light intensity dependence, and response characteristics. A voltage of 3 V was applied to each output electrode to detect the I PG and I n-well currents. Furthermore, a PG voltage of 3 V was applied to form a potential distribution of the sensor, and the p-well and p-type silicon substrates were set at the ground level. Each wavelength was irradiated using a laser-driven tunable light source (LDTLS; Tokyo Instruments, Japan). The FWHM of each wavelength range was 5-10 nm. The LED light source was passed through a 400 µm optical fiber (M28L01, Thorlabs, Newton, NJ, USA) with a 20× objective lens (SLMPlan, Olympus, Japan), and the sensing area was irradiated with a 20 µm light source. The current measurements and the control of the sensor were performed using a semiconductor parameter analyzer (B1500A, Keysight, Santa Rosa, CA, USA). The experiments were conducted in a dark room at room temperature. To measure the wavelength resolution of the sensor, the current ratios of I PG and I n-well were measured when 0.1 nm increments changed the incident wavelength from 550 nm and 650 nm, as shown in Figure 6a,b, respectively. As the wavelength shifted by 1 nm, the current ratios of 0.0083 and 0.0167 changed at 550 nm and 650 nm, respectively. Additionally, the coefficient of determination was 0.999. Because the change in the current ratio according to the noise of the sensor measurement system occurred at a decimal point of fewer than four digits, the wavelength resolution of the sensor was expected to be 0.1 nm or more. A programmable light source (OSVISX, OneLight Spectra, Vancouvar, BC, Canada) was used to evaluate the current ratio owing to the FWHM change of a single wavelength. The current ratio was measured by irradiating three light sources with central wavelengths of 450, 500, and 550 nm and a light intensity of 15 mW/cm 2 .
Figure 7 shows the result of measuring the FWHM of the light source five times every 5 nm from 10 nm to 30 nm. Based on the current ratio at a wavelength of 500 nm, which was used as a reference, when the FWHM increased from 10 nm to 30 nm, the current ratio changed by −0.004, and the error rate was 1.48%. In a previous study on a single-well structure, the error rate was approximately 5.11%, and it was reduced by approximately 3.63% [28]. This result confirmed that the proposed sensor has low dependence on the change in FWHM when it is compared to the single-well structure sensor. Figure 6c shows the dependence of the current I PG and I n-well ratios on the wavelength. The current ratio changed from 0.081 to 8.033, depending on the wavelength from 460 nm to 800 nm respectively. Furthermore, the ratio of I PG -to-I n-well changed depending on the absorption coefficient of the silicon, as described in Equation (3) [29]. In other words, a proportional relationship was confirmed with the light absorption depth (1/e), depending on the wavelength. The dependence of the current ratio on the light intensity was evaluated by changing the light intensities to approximately −20 and −40 dB using the neutral density (ND) filters (ndk01, Thorlabs, Newton, NJ, USA). Because the light absorption depth of silicon was constant, the current ratio did not change, even if the light intensity changed at the same wavelength. This indicates that the proposed double-well structure sensor enables the detection of a single wavelength under changing light intensity conditions. Figure 6d shows the response characteristics of the sensor depending on the light intensity. The I PG current occurring from the depth W to the surface side was calculated to be 0.05, 0.08, 0.04, and 0.07 A/W at the 490, 530, 590, and 690 nm wavelengths, respectively. This value has the same response characteristics as those of the previously reported single-well-structured sensor [28]. Because the proposed wavelength detection method simultaneously measured the surface-side current I PG and the substrate-side current I n-well , improved current response characteristics can be expected. The response characteristics by the measured current I PG and I n-well were 0.07, 0.13, 0.17, and 0.31 A/W, and the sensitivity was 1.39, 1.64, 2.5, and 4.18 times higher than that of the previous sensor, respectively. The increased sensitivity had the same value as the current ratio of the sensor according to the wavelength, as shown in Figure 6c. Therefore, it was confirmed that the response characteristics were improved by 0.081-8.033 times when they compared to that of the conventional sensor in the 460-800 nm wavelength range. Each datum was measured ten times at a single wavelength, and the average standard deviation of the current ratio was calculated as 0.00018.
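In practice, an unknown single wavelength can be read back from a measured current ratio through a monotone calibration curve. The sketch below uses linear interpolation over placeholder calibration points spanning the reported 0.081-8.033 range (the intermediate points are not the measured curve) and adds a rough resolution estimate from the reported slope and noise figures.

```python
import numpy as np

# Placeholder calibration curve (wavelength in nm -> current ratio), assumed
# monotonically increasing over 460-800 nm as reported for the fabricated sensor.
cal_wavelength = np.array([460, 520, 580, 640, 700, 760, 800], dtype=float)
cal_ratio = np.array([0.081, 0.35, 0.9, 1.9, 3.4, 6.0, 8.033])

def wavelength_from_ratio(ratio):
    """Invert the calibration curve by linear interpolation (ratio -> nm)."""
    return float(np.interp(ratio, cal_ratio, cal_wavelength))

print(wavelength_from_ratio(1.0))   # interpolated wavelength for a measured ratio of 1.0

# Rough resolution estimate near 550 nm: a slope of ~0.0083 ratio/nm and a
# ratio noise of ~1e-4 give a detectable wavelength change of roughly 0.01 nm.
print(1e-4 / 0.0083)
```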
Multiple Wavelength A general fluorescence detection method measures the intensity of the fluorescence passing through an optical filter by irradiating the detection target with excitation light. Because the proposed sensor does not use an optical filter, it was necessary to simultaneously detect the wavelengths of the excitation light and fluorescence. An LED light source with two wavelengths was irradiated using a sensor and a spectrometer for a comparative analysis to examine the applicability of the fluorescence detection. Figure 8a shows the spectral results of the 490 nm LED light source as the excitation light and the 530 nm or 590 nm LED light source as the fluorescent light as detected by the spectrometer. Because the spectra of the wavelengths of 490 nm and 530 nm are relatively close, as the intensity of the 530 nm wavelength increases, the light intensity of the 490 nm wavelength which was used as the excitation light also increases simultaneously. Because the spectra at wavelengths of 490 nm and 590 nm did not overlap, the spectra were distributed independently. In general, to quantify a spectrum with multiple peaks, the centroid wavelength λ c is calculated by a weighted mean of the spectrum, as shown in Equation (4) [30]: In the case of the localized surface plasmon resonance sensors and nanohole biosensors, the centroid wavelength was adopted and quantified to detect the spectrum of the passing light which was changed by the molecular adsorption with high sensitivity [31,32]. Therefore, the centroid wavelength of the measured spectrum was calculated, and a comparative analysis was performed using the current ratio of the proposed sensor. Figure 8b shows the data comparing the centroid wavelength and the current ratio of the sensor. According to the LED light sources, the centroid wavelength was changed from 497.7 nm to 535.2 nm, and the current ratio with PG 1V was simultaneously applied, and it had changed from 0.446 to 0.863. The proposed method had a high coefficient of determination value of 0.9997. This confirmed the possibility of measuring two different wavelengths by using the current ratio of the proposed sensor, as well as measuring the fluorescence with a relatively wide FWHM wavelength. In the case of a fluorescence experiment, the intensity of the emitted light (fluorescence) is weaker than the excitation light is.
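A minimal sketch of the centroid-wavelength computation of Equation (4), applied to a synthetic two-peak spectrum (excitation plus a weaker emission), is given below; the Gaussian peaks are placeholders rather than measured spectra.

```python
import numpy as np

def centroid_wavelength(wl, intensity):
    """Centroid wavelength of a spectrum: sum(wl * I) / sum(I), i.e. Equation (4)."""
    intensity = np.asarray(intensity, dtype=float)
    return np.sum(wl * intensity) / np.sum(intensity)

# Synthetic two-peak spectrum: 490 nm excitation plus a weaker 530 nm emission.
wl = np.linspace(450.0, 600.0, 1501)
gauss = lambda mu, sigma, amp: amp * np.exp(-0.5 * ((wl - mu) / sigma) ** 2)
spectrum = gauss(490.0, 10.0, 1.0) + gauss(530.0, 12.0, 0.4)

print(f"centroid = {centroid_wavelength(wl, spectrum):.1f} nm")
# As the emission peak grows relative to the excitation peak, the centroid shifts
# toward 530 nm, which is the trend the sensor's current ratio is compared against.
```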
We evaluated the wavelength separation ability of the sensor using two light sources (λ: 490, 530 nm). The wavelength separation ability was calculated by detecting the change in the current ratio according to the wavelength. The intensity of the exciting light (490 nm) was fixed at 7.714 µW, and only the fluorescence light source (530 nm) was reduced from 7.709 µW to 0.0008 µW. The average standard deviation of the current ratio was calculated to be 0.00015 from the results of each measurement, which was repeated ten times. Therefore, we considered that the practical measurement value of the current ratio is measurable up to four decimal places. Figure 9 shows the change in the current ratio according to the light intensity. It is considered that the separation ability of 490 nm and 530 nm can be measured up to 1977.95:1 if up to four decimal places are assumed as significant digits. This result shows a higher sensitivity than the 1300:1 wavelength separation ability measured for the previously presented sensor with an n-type silicon substrate [28]. Measurement Configuration This study detected various fluorescent reagents without optical filters using a single-pixel sensor. The fluorescent reagents Alexa Fluor 488, 594, and 680 (AF488, AF594, and AF680, Thermo Fisher, Waltham, MA, USA) were used to evaluate the fluorescence wavelength detection ability of the sensor. The reagents are maximally excited at wavelengths of 490, 590, and 679 nm and emit fluorescence at wavelengths of 525, 612, and 702 nm, respectively. Figure 10 shows a schematic of the experimental system which was built to perform a quantitative evaluation.
The peak wavelengths of the LED light sources (M490F3, M590F3, M680F3, Thorlabs, Newton, NJ, USA) that were used as excitation light sources were 490, 590, and 680 nm (FWHM: 26, 16, and 22 nm), and the light intensities were 21.28, 15.05, and 13.60 µW/cm², respectively. The samples to be analyzed were 2.5 mL of deionized water (DIW) and seven concentrations of each fluorescent reagent (10, 5, 2, 1, 0.5, 0.2, and 0.1 µM) in a standard quartz cell (T-5-UV-10, TOSOH, Japan). The excitation light and the fluorescence passing through the quartz cell were guided onto the sensor and the spectrometer (OCEAN HDX, Ocean Photonics, Tokyo, Japan) using a two-branch optical fiber (BIF600-UV/VIS, Ocean Photonics, Japan), and the current and spectral characteristics of the sensor were measured simultaneously. The fluorescent reagent measurements were conducted in a dark room at room temperature. Measurement Results A typical fluorescence sensor detects the light intensity of the fluorescence that passes through optical filters to quantify and identify the target. However, the proposed sensor quantifies the fluorescence by detecting the current ratio according to the wavelength change when the excitation light and the fluorescence are simultaneously irradiated.
Therefore, before we evaluated the concentration dependence of the fluorescent reagents, we examined the following three assumptions: (i) When the optimal excitation wavelength is irradiated onto the fluorescent reagent, the higher the concentration of the fluorescent reagent, the more of the excitation light is absorbed and the stronger the emitted fluorescence; it was therefore assumed that the current ratio of the sensor increases as the concentration increases. (ii) When a fluorescent reagent is irradiated with a wavelength shorter than the optimal excitation wavelength, the absorption of the excitation light increases as the concentration of the fluorescent reagent increases; however, the current ratio decreases as the concentration increases, because the absorption on the long-wavelength side of the irradiated excitation spectrum is high and the fluorescence intensity is weak. (iii) When a fluorescent reagent is irradiated with excitation light of a wavelength longer than the fluorescence wavelength, most of the excitation light passes through the fluorescent reagent; therefore, no fluorescence is emitted and the excitation light is transmitted. Even if the concentration of the reagent is high, the current ratio is not expected to change, considering that most of the excitation light passes through the fluorescent reagent. Figure 11 shows the current ratio of the sensor obtained by irradiating each LED light source (λ = 490, 590, and 680 nm) onto each fluorescent reagent (AF488, AF594, and AF680), and the simultaneously measured spectra are shown in Figure 12. The result of irradiating each LED light source onto the DIW was used as a reference (red dotted line), and the fluorescent reagent was identified and quantified using the current ratio for each concentration. The detection limit of the sensor is considered to be 0.1 µM or better, because the measured current ratio varies linearly down to a concentration of 0.1 µM for all of the reagents. As shown in Figure 11a, the current ratio increased from 0.31 to 0.36 as the reagent concentration increased from 0.1 to 10 µM. Because AF488 was irradiated with the 490 nm light source, the optimal excitation wavelength, most of the excitation light was absorbed by the reagent, and long-wavelength fluorescence was emitted. Because the spectral distribution of the light passing through the reagent shifts to a longer wavelength, the current ratio increases. Additionally, the spectrum shifted to a longer wavelength as the reagent concentration changed from 0.1 to 10 µM, as shown in Figure 12a. Therefore, it is possible to identify and quantify a fluorescent reagent by calculating the current ratio at the optimal excitation wavelength. Figure 11e,i shows results similar to those in Figure 11a for the optimal excitation wavelength, which relates to assumption (i). The current ratio increased from 1.39 to 1.42 for AF594 and from 3.22 to 3.45 for AF680. Moreover, the spectral characteristics in Figure 12e,i confirm that the distribution shifted toward the long-wavelength side relative to the excitation wavelength. Figure 11b shows that the current ratio was consistent with the current ratio of the DIW regardless of the concentration change. Because a wavelength longer than the optimal excitation wavelength was irradiated, most of the excitation light passed through the reagent.
The slight change in the current ratio at high concentrations (2-10 µM) was caused by the absorption of the short-wavelength side of the excitation light. Furthermore, this absorption on the short-wavelength side was confirmed in the result shown in Figure 12b. As shown in the results of Figure 11c,d, the change in the current ratio was insignificant, since most of the excitation light passed through the reagent, as in Figure 11b, and its relevance to assumption (iii) can be confirmed. Figure 11d shows the current ratio for the 470 nm LED irradiation, a wavelength shorter than the optimal excitation wavelength of AF594. In the spectrum of the irradiated 470 nm LED excitation light, a relatively large amount of the long-wavelength side was absorbed, and hence a decrease in the current ratio was confirmed as the concentration of the reagent increased. Furthermore, the absorption on the long-wavelength side was confirmed in the spectral results in Figure 12d. As shown in the results of Figure 11g,h, the long-wavelength component was absorbed under the short excitation wavelength, and the relationship with assumption (ii) can be confirmed, as in Figure 11d. Figure 13 shows the relationship between the centroid wavelength (Figure 12) and the current ratio (Figure 11) depending on the fluorescent reagent concentration. As shown in Section 4.1.2, the centroid wavelength of the spectral curve is proportional to the current ratio of the sensor for a wavelength with a relatively wide FWHM. Similarly, because the spectral distribution shifts according to the change in the concentration of the fluorescent reagent, the current ratio measured by the sensor was proportional to the centroid wavelength. The measurement results can be divided into three primary patterns according to the LEDs with three different excitation wavelengths. Therefore, we showed that the current ratio of the proposed sensor can be used to detect a fluorescent reagent and its concentration. Conclusions In this study, we proposed a filter-free wavelength sensor with a double-diffusion well structure and evaluated a new wavelength detection method. The proposed structure was simulated using SPECTRA, which showed that wavelength identification is possible with the proposed structure. To fabricate the sensor, TCAD was used to optimize the impurity concentration and design the fabrication process. The impurity concentration of the fabricated sensor, verified by SIMS analysis, was consistent with the simulation results.
The proposed double-well structure sensor was fabricated using the 1-polysilicon, 2-metal process at the LSI facility of Toyohashi University of Technology, Japan. The current ratio according to the wavelength was obtained by measuring the currents I_PG and I_n-well of the fabricated sensor. The results confirmed that this ratio depends on the absorption depth of silicon at each wavelength and does not depend on the light intensity. The low dependence of the current ratio on the FWHM of the incident light confirmed the possibility of detecting fluorescence with a broad spectral width using the proposed structure. A spectrum with two peak wavelengths was quantified by its centroid wavelength and compared with the current ratio of the sensor, which showed high linearity. Therefore, it is possible to quantify a wavelength with a relatively wide FWHM using the proposed sensor. As an application experiment, a quantitative evaluation was performed using three types of fluorescent reagents. The fluorescent reagents were irradiated with three types of LED excitation light to evaluate the reagent concentration dependence and the spectral properties simultaneously. Furthermore, the current ratio of the sensor produced by the excitation light and the fluorescence emitted from the reagent was detected and compared with the spectral characteristics. The ratio changed from 0.31 to 0.36 for AF488, from 1.39 to 1.42 for AF594, and from 3.22 to 3.45 for AF680, depending on the concentration of the reagent. This indicates that the concentration of a reagent can be detected from the current ratio via its fluorescence, suggesting that various fluorescence signals can be detected. The proposed sensor can be applied in biological fields such as point-of-care testing (POCT) as a miniaturized wavelength detection sensor that does not use optical components. In the future, we expect to develop a miniaturized optical detection system capable of imaging wavelength information by arraying the proposed single-pixel sensor.
10,311
2022-11-01T00:00:00.000
[ "Physics", "Engineering", "Materials Science" ]
Location of Solutions of Fredholm–Nemytskii Integral Equations from a Whittaker-Type Operator We analyse the global convergence of a Whittaker-type iterative method and obtain restricted global convergence domains, so that we can locate and separate solutions of Fredholm–Nemytskii nonlinear integral equations by means of balls. For this, we use two techniques, one based on the well-known fixed point theorem and the other on a system of recurrence relations. In both techniques, we use the Whittaker-type operator involved and auxiliary functions. Introduction We consider Fredholm–Nemytskii integral equations of the form (1) [2,12,15,17], where H : R → R and φ(x) is the unknown function to be determined. Generally, we cannot solve integral equations of type (1) exactly, so we use a numerical method to approximate a solution of the associated operator equation (2). (This research was partially supported by Ministerio de Ciencia, Innovación y Universidades under grant PGC2018-095896-B-C21.) It is clear that a solution φ* of the operator equation (2) is a solution of the integral equation (1). To approximate such a solution we can apply the well-known method of successive approximations [16], which is known as Picard's method [6,18] when it is applied to approximate a solution of the equation G(φ) = 0 and is defined by φ_{n+1} = φ_n − G(φ_n), n ≥ 0, with φ_0 given in C[a, b]. A generalization of Picard's method, when it is applied to a scalar equation g(t) = 0, is known as Whittaker's method [1,14], given by t_{n+1} = t_n − μ g(t_n), n ≥ 0, μ ∈ R, with t_0 given. To solve G(φ) = 0, we follow this generalization and consider a constant operator A : C[a, b] → C[a, b] with A(φ)(x) = k φ(x) and k ∈ R, and then define the following Whittaker-type iterative method: φ_{n+1} = W(φ_n) = φ_n − A G(φ_n), n ≥ 0, with φ_0 given in C[a, b]. (3) Thus, we are now interested in lim_n φ_n = φ*, where φ* is a solution of (2) and, therefore, a solution of the integral equation (1). It is well known that the convergence of the sequence {φ_n} can be established in different ways: semilocal convergence, local convergence and global convergence. Global convergence guarantees the convergence of an iterative method starting at any point of a previously located domain, once the existence of a solution of the operator equation has been proved in that domain. Notice that only conditions on the operator involved are required. One of the aims of this work is to obtain a result of global convergence for the iterative method (3). It is well known that the fixed point theorem is a procedure for obtaining results of global convergence. A fixed point φ* of the operator T given in (4) is a solution of the equation (1), and we can then use the fixed point theorem to approximate φ*. The fixed point theorem says [3]: if the operator T : C[a, b] → C[a, b] is a contraction, then T has a unique fixed point φ* in C[a, b] that can be approximated by the method of successive approximations, φ_{n+1} = T(φ_n), n ≥ 0, with φ_0 given in C[a, b]. Once a suitable Lipschitz-type condition is satisfied to see that T is a contraction, the method of successive approximations can be applied to approximate a fixed point φ* of the operator T. However, this result can be applied only if the operator T has a unique fixed point φ* in the full space C[a, b] and, in addition, it is not necessary to separate it from other possible fixed points.
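To fix ideas, the minimal sketch below runs the scalar Whittaker iteration t_{n+1} = t_n − μ g(t_n) on a simple test equation; the test function g(t) = t² − 2 and the value of μ are chosen purely for illustration and are not taken from this paper. In the operator setting (3), the constant operator A(φ) = kφ plays the role of the scalar relaxation parameter μ.

```python
def whittaker(g, t0, mu, tol=1e-12, max_iter=10000):
    """Scalar Whittaker iteration t_{n+1} = t_n - mu * g(t_n) for solving g(t) = 0."""
    t = t0
    for _ in range(max_iter):
        step = mu * g(t)
        t -= step
        if abs(step) < tol:
            break
    return t

# Illustrative example: solve g(t) = t^2 - 2 = 0 starting near t = 1.5.
root = whittaker(lambda t: t * t - 2.0, t0=1.5, mu=0.3)
print(root)  # converges to sqrt(2) = 1.41421356..., since |1 - 2*mu*t| < 1 near the root
```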
If we want to consider other possible situations in which the operator T has more fixed points, we must apply a restricted fixed point theorem, as for example: if D is a convex and compact set of C[a, b] and the operator T : D → D is a contraction, then T has a unique fixed point φ* in D that can be approximated by the method of successive approximations, φ_{n+1} = T(φ_n), n ≥ 0, with φ_0 given in D. In this case, the first problem is obviously to locate a domain D that contains a fixed point of the operator T. For this, we need some information about the possible fixed points of the operator T, which may not be available in all situations, as we can see in Sect. 2. As a zero of the operator (2) is a fixed point of the operator W given in (3) (called the Whittaker-type operator), we can study the location of fixed points of the operator (4) and the separation between them from the operator W. So, the main aim of this study focuses on the qualitative properties of the location of the fixed points of the operator (4) and the separation between them. In Sect. 3, we do this study from the application of the restricted fixed point theorem given above to the operator W and using a technique based on auxiliary points [10-12]. Moreover, we approximate a fixed point φ* of the operator (4) by the iterative method (3) starting at any function of the considered domain. Therefore, we obtain domains of global convergence for (3), as for the method of successive approximations when the fixed point theorem is applied. In Sect. 4, we also develop a technique to obtain domains of global convergence that is not based on the fixed point theorem, but on a system of recurrence relations. This technique also uses auxiliary points. Finally, in Sect. 5, we can find the conclusions of the work. Throughout the paper, B(φ, R) denotes the closed ball of centre φ and radius R in C[a, b]. Motivation We start by considering a first simple example involving the linear integral equation (5) and its corresponding integral operator. Now, we make a change in the linearity of the integral equation (5), choose λ = 1/5 and consider the following integral equation (6), whose solutions φ_1(x) = x^3 + (0.0122...)e^{−x} and φ_2(x) = x^3 + (15.4354...)e^{−x} are easily calculable. To study the integral equation (6), we take the operator T_2. As the operator T_2 is not a contraction in the full space C[a, b], to find a convex and compact domain where we can apply the restricted fixed point theorem mentioned above, we first locate the possible fixed points. For this, if φ* is a possible fixed point of the operator T_2, we have from (6) a bound on ‖φ*‖. Therefore, the integral equation (6) has a unique solution φ* in B(0, 2) that can be approximated by the method of successive approximations, φ_{n+1} = T_2(φ_n), n ≥ 0, starting at any φ_0 in B(0, 2). After that, we make a small modification in the above integral equation by substituting λ = 1/2 for λ = 1/5, so that we obtain the integral equation (7) and the corresponding integral operator T_3, whose fixed points are φ_1(x) = x^3 + (0.0318...)e^{−x} and φ_2(x) = x^3 + (5.9442...)e^{−x}. Obviously, as the previous operator T_2, the operator T_3 is not a contraction in the full space C[0, 1]. Moreover, it is easy to check that it is not possible to locate a fixed point in advance. So, we look for a convex and compact set of the form B(0, r) in C[0, 1] on which the operator T_3 is a contraction, but this is not possible. Therefore, there are no domains of the form B(0, r) where we can apply the restricted fixed point theorem given above.
As we have just seen, from the restricted fixed point theorem and the previous location of fixed points, we can sometimes find domains of the form B(0, r) that contain a fixed point and separate it from other possible fixed points. However, as we have seen for the last integral equation, it is not always possible to apply this technique. Moreover, it is clear that such domains do not locate a fixed point accurately or separate it well from another possible fixed point. In this work, we consider auxiliary points to be able to locate a fixed point in a domain of the form B(φ̃, R), where φ̃ is an auxiliary point given in C[a, b], which allows us to obtain a better location and also to separate the fixed point from other possible ones with greater accuracy. Whittaker-Type Operator From the restricted fixed point theorem given in the introduction, we see that the method of successive approximations can be applied once the operator T is a contraction on a suitable domain D. To prove that T is a contraction, we can consider that the Nemytskii operator H is Lipschitz continuous in some domain. In particular, we consider domains of the form D = B(φ̃, R) and, as we indicated in Sect. 2, an auxiliary function φ̃ ∈ C[a, b] to locate a fixed point in B(φ̃, R), which allows us to obtain a better location and separation of the fixed point. From these two ideas, we establish the following convergence result for the iterative method (3), which is also a result on the existence and uniqueness of a solution. Theorem 1. Suppose that the Nemytskii operator H is Lipschitz continuous in C[a, b], namely H satisfies the condition (8). Let φ̃ ∈ C[a, b] and suppose that the radius R satisfies the conditions (9) and (10). Then, the operator W : B(φ̃, R) → B(φ̃, R) has a unique fixed point φ* and the iterative method (3) starting at any φ_0 ∈ B(φ̃, R) converges to φ*. Finally, by applying the restricted fixed point theorem given in the introduction to the operator W in B(φ̃, R), the proof is complete. Next, we illustrate Theorem 1 with the integral equations (6) and (7), seeing that in both cases we can improve the results obtained in Sect. 2. From Theorem 1, we can guarantee the existence of a unique solution of the integral equation (6) in a ball of the form B(x^3, R). Note that the location and separation of a solution of (6) are fixed if k ∈ (0, 1), since the solution is unique in B(x^3, R) with R < 0.1801... But, for k ∈ [1, 1.2254...), they are not. We show in Table 1 (radii of the balls B(x^3, R) where the existence of a unique solution of (6) is guaranteed from Theorem 1) some values of k ∈ [1, 1.2254...) and the corresponding radii of the balls B(x^3, R) with R ∈ [R_1, R_2), where the existence of a unique fixed point, and therefore of a solution of the integral equation (6), is guaranteed. Observe that the best location of the fixed point, which is in the ball B(x^3, R_1), and the best separation from other possible ones, which is given by the ball B(x^3, R_2), improve the closer k is to 1. If we choose the most favourable situation, which is k = 1, we obtain that the existence domain of the solution is B(x^3, 0.1801...) and the uniqueness domain of the solution is B(x^3, 2.7747...), so that we considerably improve the domains obtained by means of the restricted fixed point theorem, since these were B(0, 2) in both cases, as can be seen in Figs. 1 and 2. Example 3. If we consider the integral equation (7), we cannot locate a solution in advance. Nor can we find a convex and compact domain of the form B(0, r) in C[0, 1] in which T_3 is a contraction in such a way that we could apply the restricted fixed point theorem given in the introduction. However, we can locate a solution of (7) by Theorem 1. For this, we choose the auxiliary function φ̃(x) = x^3.
In addition, as M = (e − 1)/e = 0.6321..., the conditions (9) and (10) can be satisfied. Note that the location and separation of a solution of (7) are fixed if k ∈ (0, 1), since the solution is unique in B(x^3, R) with R < 0.5819... But, for k ∈ [1, 1.2254...), they are not. We show in Table 2 some values of k ∈ [1, 1.2254...) and the corresponding radii of the balls B(x^3, R) with R ∈ [R_1, R_2), where the existence of a unique fixed point, and then of a solution of the integral equation (7), is guaranteed. Observe that the best location of the fixed point, which is in the ball B(x^3, R_1), and the best separation from other possible ones, which is given by the ball B(x^3, R_2), improve the closer k is to 1. On the other hand, we know that the operator T is also a contraction if T is differentiable and ‖T′(u)‖ < 1 for all u ∈ C[a, b]. Taking this fact into account, we can consider that the derivative of the Nemytskii operator H satisfies the condition (11), where ω_1 : [0, +∞) → R is a nondecreasing continuous function such that ω_1(0) ≥ 0 ([4,7,13]), or, once φ̃ ∈ C[a, b] is fixed, the condition (12), where ω_2 : [0, +∞) → R is a nondecreasing continuous function such that ω_2(0) = 0 ([4,8,9]). Under both conditions for H, we can prove that the operator T is a contraction, as we can see in the following result. Theorem 4. (a) Under the condition (11), suppose that R > 0 satisfies the corresponding condition. Then, the operator W : B(φ̃, R) → B(φ̃, R) has a unique fixed point φ* and the iterative method (3) starting at any φ_0 ∈ B(φ̃, R) converges linearly to φ*. Proof. We prove item (a). By applying the restricted fixed point theorem to the operator W, the proof of item (a) is complete. Item (b) follows in an analogous way to item (a), taking into account now the condition (12) instead of (11). Next, we present a new result on the uniqueness of the fixed point under the conditions (11) or (12), where a technique of functional analysis is used. Theorem 5. (a) Under the hypothesis (a) of Theorem 4, suppose that the corresponding real equation in z, where g_1(z) is defined from ω_1 analogously to g_2, has at least one positive real solution r such that r > R; (b) under the hypothesis (b) of Theorem 4, suppose that the corresponding real equation, where g_2(z) = ∫_0^z ω_2(u) du, has at least one positive real solution r such that r > R. Then, the fixed point of W is unique in B(φ̃, r). Proof. Suppose that φ* is a fixed point of W in B(φ̃, R) and that there exists another fixed point ϕ* ∈ B(φ̃, r), with r > R. To conclude, we prove equivalently that the operator P^{-1} exists. For item (a), taking into account (16), it follows, by the Banach lemma on invertible operators, that the operator P^{-1} exists; for item (b), using (18) instead, the Banach lemma on invertible operators again gives that the operator P^{-1} exists. After that, we illustrate Theorems 4 and 5. Example 6. Consider the integral equation (6) and φ̃(x) = x^3. Then ‖φ̃‖ = 1 and, since H′(φ) = 2φ, we have ω_1(z) = 2z for all z ∈ R_+. From item (a) of Theorem 4, we can guarantee the existence of a unique solution of (6) in B(x^3, R) with R satisfying (14). In Table 3, we see some values of k and the corresponding radii of the balls B(x^3, R) where the existence of a unique fixed point, and therefore of a solution of (6), is guaranteed from Theorem 4 (a). Note that the values of R_1 and R_2 are the same for all k ∈ [0, 1]. On the other hand, we illustrate Theorem 5 (a) with Table 4, where the values of R are the values of R_2 obtained from Theorem 4 (a) (see Table 3), and the values of r are the real solutions of the corresponding real equation, as reported in Table 4.
(Table 4 lists the radii of the balls B(x^3, r) from which the domains of uniqueness of solution of equation (6) are obtained.) We end by noting that exactly the same results are obtained from item (b) of Theorem 4 and item (b) of Theorem 5. We can also give a result on the uniqueness of solution from the operator G instead of the operator W, so that the new uniqueness result is independent of k. (a) Under the hypothesis (a) of Theorem 4, suppose that the corresponding real equation in z, where g_1(z) is defined from ω_1, has at least one positive real solution ρ such that ρ > R; (b) under the hypothesis (b) of Theorem 4, suppose that the corresponding real equation, where g_2(z) = ∫_0^z ω_2(u) du, has at least one positive real solution ρ such that ρ > R. Then, the fixed point of W is unique in B(φ̃, ρ). Proof. As a fixed point of W is a solution φ* of the operator equation (2), we suppose that φ* is a solution of G in B(φ̃, R). To conclude, we prove equivalently that the operator Q^{-1} exists. For item (a), taking into account (19), we obtain, by the Banach lemma on invertible operators, that the operator Q^{-1} exists. For item (b), we proceed as for item (a) and, taking into account (19), the Banach lemma on invertible operators again gives that the operator Q^{-1} exists. Note that the uniqueness of the fixed point φ* in B(φ̃, R) follows from (17) with ρ = R, provided that the corresponding condition holds. Global Convergence from Auxiliary Functions In this section, we present an alternative technique to the previous one, which was based on the restricted fixed point theorem, and which also allows us to obtain domains of global convergence, B(φ̃, R) with φ̃ ∈ C[a, b], for the iterative method (3), using auxiliary functions, so that we can locate solutions of (1). This technique was first developed for Newton's method in [10]. For this, we consider the conditions (C1) and (C2) [5]. After that, we present a property that we use later. Lemma 10. Suppose condition (C2); then the corresponding bound holds. Proof. It follows from Taylor's series. From Lemma 10, we analyze the first iteration of the iterative method (3), which leads us to the convergence of the method. If φ_0 ∈ B(φ̃, R), then, provided that the condition (C1) holds, the corresponding bounds hold for the first step. Now, if we suppose that they hold up to some index n ≥ 2, then, provided that condition (23) holds, it follows in the same way that they hold for the next index, so that (24) and (25) are true for all positive integers n by mathematical induction. Next, we can establish the following convergence result (Theorem 11). Proof. From (24) and γ < 1, we have ‖φ_{n+1} − φ_n‖ < ‖φ_n − φ_{n−1}‖ for all n ∈ N, so that the sequence {‖φ_{n+1} − φ_n‖} is strictly decreasing and, therefore, the sequence {φ_n} is convergent. If φ* = lim_{n→∞} φ_n, then G(φ*) = 0 by the continuity of G and A G(φ_n) → 0 when n → ∞, since k ∈ (0, 2). Example 12. Now, we apply Theorem 11 to the integral equation (6). For this, we consider again φ̃(x) = x^3, so that α = 2(1 + R), since x ∈ [0, 1]. Furthermore, (23) and γ < 1 are satisfied at the same time for different values of k ∈ (0, 2) in B(φ̃, R). In Table 5 we see some values of k and the corresponding radii of the balls B(φ̃, R) where the existence of a solution of the integral equation (6) is guaranteed from Theorem 11. Besides, it seems clear that the conditions of Theorem 11 are not satisfied if k ∈ (1.72, 2). In addition, the higher the value of k, the smaller the value of R, so that the best location of a solution is obtained for the largest possible value of k. Finally, by Theorem 11, we extend, with respect to the study presented in Sect. 3, the application of the iterative method (3) to obtain domains of existence of solutions for the integral equation (6). Conclusions The fixed point theorem is a well-known tool for obtaining results of global convergence.
However, from the use of this theorem and the previous location of fixed points, it is not always possible to find domains of global convergence containing fixed points that can be separated from others (for this, see Sect. 2). In addition, such domains may not locate a fixed point accurately or separate it from another possible fixed point. In this work, we locate fixed points in balls of the form B(φ̃, R), where φ̃ is an auxiliary function, which also allows us to separate them from others with great accuracy. For this, we use two techniques: one based on a restricted fixed point theorem and the other on a system of recurrence relations. In both techniques, we use auxiliary functions and illustrate the study with a Whittaker-type iterative method and Fredholm–Nemytskii nonlinear integral equations. Funding Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature.
5,238.8
2022-01-30T00:00:00.000
[ "Mathematics" ]
An Optimal Subgradient Algorithm with Subspace Search for Costly Convex Optimization Problems This paper presents an acceleration of the optimal subgradient algorithm OSGA (Neumaier in Math Program 158(1–2):1–21, 2016) for solving structured convex optimization problems, where the objective function involves costly affine and cheap nonlinear terms. We combine OSGA with a multidimensional subspace search technique, which leads to a low-dimensional auxiliary problem that can be solved efficiently. Numerical results concerning some applications are reported. A software package implementing the new method is available. Introduction Over the past few decades, solving convex optimization with smooth or nonsmooth objectives has received much attention due to many applications in the fields of applied sciences and engineering, cf. [15,50]. For smooth problems, first-and second-order information is typically available and many first-and second-order methods exist, see [37,44]. However, for nonsmooth problems, usually only first-order information is available. Solving nonsmooth problems is commonly harder than solving smooth problems; however, there are many nonsmooth problems with nice structure such that this structure can be used to design efficient methodologies for them. Because of the low memory requirement, first-order methods are especially important for problems with a large number of variables. Subgradient methods constitute a class of first-order methods that have been developed since 1960 to solve convex nonsmooth optimization problems, see, e.g., [46,49]. In general, they only need function values and subgradients, have low memory requirement, and can be used for solving convex optimization problems with several millions of variables. However, too many iterations are needed to attain a very accurate solution. The low convergence speed of subgradient methods corresponds to their complexity (the number of iterations required to attain an ε-solution for a given ε > 0). In 1983, Nemirovski and Yudin [36] proved that the worst-case complexity bound to achieve an ε-solution of problems with a Lipschitz continuous convex nonsmooth objective by first-order methods is O(ε −2 ), while it is O(ε −1/2 ) for smooth problems with Lipschitz continuous gradients. Algorithms attaining the optimal worst-case complexity bound for a class of problems are called optimal. Historically, optimal first-order methods for smooth convex optimization date back to Nesterov [38] in 1983. He later in [40,41] proposed two gradient-type methods for minimizing a sum of two functions (composite problems) with the optimal complexity, where, for the first method, the smooth part of the objective needs to have Lipschitz continuous gradients and, for the second one, the smooth part of the objective needs to have Hölder continuous gradients. Since 1983 many researchers have studied optimal first-order methods; see, e.g., Auslander and Teboulle [7], Beck and Teboulle [11], Devolder et al. [22], Gonzaga et al. [26,27], Lan [30], Lan et al. [31], Nesterov [37,39,40], and Tseng [52]. Moreover, Nemirovski and Yudin in [36] showed that the subgradient, subgradient projection, and mirror descent methods attain the complexity O(ε −2 ) for Lipschitz continuous nonsmooth objectives, so that they are optimal for this class of problems. 
Recently, Neumaier in [42] proposed a subgradient algorithm called OSGA, which attains both the optimal complexity O(ε^{-1/2}) for smooth problems with Lipschitz continuous gradients and the optimal complexity O(ε^{-2}) for Lipschitz continuous nonsmooth problems. It is notable that OSGA does not need to know global information about the objective function, such as Lipschitz constants, and behaves well for problems arising in applications, see Ahookhosh and Neumaier [1,4-6]. A multidimensional subspace search scheme is a generalization of line search techniques, which are one-dimensional search schemes for finding a step-size along a specific direction. Hence, in a multidimensional subspace search, one searches for a vector of step-sizes giving the best combination of several search directions for optimizing an objective function. Generally, subspace search techniques form a class of descent methods; they can be used independently or employed as an accelerator inside iterative schemes to attain faster convergence. The pioneering work on subspace optimization was proposed in 1969 for smooth problems by Miele and Cantrell [33] and Cragg and Levy [21], who defined a memory gradient technique based on a subspace of the form S = span{−g_k, d_{k−1}}, where g_k denotes the gradient of the function at x_k and d_{k−1} is the last available direction. Since then, many subspace search schemes have been proposed by selecting various search directions, see, e.g., [18,20,51] and references therein. Depending on the search directions selected for constructing a subspace, two classes of subspace methods are distinguished, namely, gradient-type techniques [21,23,34] and Newton-type schemes [28,32,53,54]. Content In this paper we propose an accelerated version of OSGA (called OSGA-S) for solving convex optimization problems involving costly linear operators and cheap nonlinear terms. Our new method is a two-stage method that solves unconstrained nonsmooth convex optimization problems of the form (1), i.e., the minimization over x ∈ V of f(x) = f_1(A_1 x) + · · · + f_p(A_p x), where for i = 1, . . . , p (p ≪ n), f_i : U_i → R is a (non)smooth, proper, and convex function, and A_i : V → U_i is a linear operator, for real finite-dimensional vector spaces V and U_i. Solving (1) with OSGA involves two key steps, namely, providing the first-order information and solving an auxiliary high-dimensional subproblem (Eq. (5) below). Since the problem (1) is unconstrained, the exact solution of the corresponding auxiliary problem is given in closed form, cf. [1,42]. In many applications involving overdetermined systems of equations and classification with support vector machines (see Sects. 4.1 and 4.2), the objective function has the form (1) with costly affine but cheap nonlinear terms. Hence the most costly parts in computing function values and subgradients are related to applying the forward and adjoint operators. We therefore try to improve, in each iteration, the current best point by an inner iteration solving a low-dimensional version of the original problem, using a subspace composed from the best point and the last few iterations. We emphasize that applying the subspace search involves no additional costly forward and adjoint operators. Therefore, the subspace search stage does not impose a significant cost on the outer scheme OSGA while improving its performance considerably. As proved in [42], this does not affect the worst-case complexity of the algorithm if done in the right place.
However, our numerical results show that it successfully reduces the number of iterations and the running time needed in practice. Similar to OSGA, OSGA-S needs no global information except the strong convexity parameter μ (set μ = 0 if it is not available), and it only requires first-order information; however, the main advantage of OSGA-S is its ability to handle problems with the complex structure of the form (1), involving compositions of several functions and linear operators. Such structured problems have received much attention due to the increasing interest in mixed regularization terms, e.g., [8]. However, if the f_i, i = 1, . . . , p, are nonsmooth, then smooth solvers, Nesterov-type optimal methods [37,38,40], and proximal splitting methods [11,19] are not able to handle the problem. In this case, subgradient methods [14] and Nesterov's universal gradient method with the level-of-smoothness parameter ν = 0 [41] can deal with the problem. Note that the mirror descent methods [12,36] can only handle the constrained version of (1). On the other hand, to the best of our knowledge there is only little work involving subspace search techniques for nonsmooth optimization problems [23,35], and these are based on smoothing the objective functions, so they cannot be used in our numerical comparison. For high-dimensional problems involving dense matrices, applying OSGA-S with a multidimensional subspace search results in a substantial gain in running time, despite the extra effort needed for the subspace optimization. Indeed, the inner level runs OSGA-S only on a low-dimensional unconstrained auxiliary problem in an adaptive multidimensional subspace, and the associated solution is used to accelerate the outer level of the OSGA-S iteration on the original problem. The multidimensional subspace uses some previously computed directions and results in a low-dimensional problem with typically at most 20 variables. Numerical experiments and a comparison with subgradient methods and the universal gradient method show that the subspace search can significantly accelerate OSGA, especially when the objective involves costly linear operators. The remainder of this paper is organized as follows. In the next section we briefly review the main idea of OSGA. Section 3 describes a combination of OSGA and a multidimensional subspace search. Numerical results are reported in Sect. 4, and some conclusions are given in Sect. 5. Notations Let V be a real finite-dimensional vector space endowed with the norm ‖·‖, and let V* denote its dual space, which is formed by all linear functionals on V, where the bilinear pairing ⟨g, x⟩ denotes the value of the functional g ∈ V* at x ∈ V. If V = R^n, then, for 1 ≤ p ≤ ∞, ‖x‖_p denotes the usual p-norm. A vector g ∈ V* is a subgradient of a convex function f at x if f(y) ≥ f(x) + ⟨g, y − x⟩ for all y; the set of all subgradients is called the subdifferential of f at x, denoted by ∂f(x). We denote by f_x and g_x the function value f(x) and a subgradient of f at x ∈ C, respectively. A Review of OSGA In this section we briefly review the main idea of the optimal subgradient algorithm (see Algorithm 1) proposed by Neumaier in [42] for solving the convex constrained minimization problem (2), i.e., the minimization of f(x) subject to x ∈ C, where f : C → R is a proper and convex function defined on a nonempty, closed, and convex subset C of V. OSGA is a subgradient algorithm for problem (2) that uses first-order information, i.e., function values and subgradients, to construct a sequence of iterations {x_k} ⊆ C whose sequence of function values {f(x_k)} converges to the minimum f̂ = f(x̂) with the optimal complexity.
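Since OSGA is driven purely by this first-order information, it is worth recalling (in the sketch below, a standard fact about convex functions rather than anything specific to [42]) how a single function value and subgradient yield a global linear underestimator of f, which is the kind of linear relaxation used in the next paragraphs.

```latex
% Subgradient inequality: for convex f and any g_x in \partial f(x),
%   f(z) >= f(x) + <g_x, z - x>  for all z,
% which can be read as a linear relaxation f(z) >= gamma + <h, z> with
%   h := g_x  and  gamma := f(x) - <g_x, x>.
\[
  f(z) \;\ge\; f(x) + \langle g_x,\, z - x \rangle
       \;=\; \underbrace{\bigl(f(x) - \langle g_x, x\rangle\bigr)}_{\gamma}
             + \langle \underbrace{g_x}_{h},\, z \rangle ,
  \qquad z \in C .
\]
```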
OSGA requires no information regarding global parameters such as Lipschitz constants of function values and gradients. In the unconstrained version relevant for the present work, we have C = V, and we work with a quadratic prox-function Q(z) := Q_0 + ½‖z − x_0‖², where x_0 ∈ V is a given starting point and Q_0 an appropriate positive constant. Let us denote by g_Q(x) the gradient of Q at x. At each iteration, OSGA maintains the bound (3), 0 ≤ f(x_b) − f(x̂) ≤ η Q(x̂), on the currently best function value f(x_b), with a monotonically decreasing error factor η that is guaranteed to converge to zero by an appropriate step-length selection strategy (see Procedure PUS). Note that x̂ is not known; thus the error bound is not fully constructive, but it is enough to guarantee the convergence of f(x_b) to f̂ with a predictable worst-case complexity. To maintain (3), OSGA considers linear relaxations of f, i.e., bounds of the form (4), f(z) ≥ γ + ⟨h, z⟩ for all z, where γ ∈ R and h ∈ V*, updated using the linear underestimators available from the subgradients evaluated (see Algorithm 1). For each such linear relaxation, OSGA solves a maximization problem of the form (5), whose optimal value is denoted by E(γ, h) and whose maximizer is denoted by u. From (4) and (6), we obtain (7); setting η := E(γ_b, h) in (7) implies that (3) is valid. If x_b is not optimal, then the right inequality in (7) is strict and, since Q(z) ≥ Q_0 > 0, we conclude that the maximum η is positive. In each step, OSGA uses a parameter updating scheme, Procedure PUS, for updating the given parameters α, h, γ, η, and u; see [42] for more details. In PUS, the candidate values h̄, γ̄, η̄, ū computed in the current step replace h, γ, η, u whenever η̄ < η, and Algorithm 1 (OSGA) updates the parameters α, h, γ, η, and u using PUS. In [42], it is shown that the number of iterations to achieve an ε-optimum is of the optimal order O(ε^{-1/2}) for a smooth f with Lipschitz continuous gradients and of the order O(ε^{-2}) for a Lipschitz continuous nonsmooth f. The algorithm has low memory requirements, so that, if the subproblem (5) can be solved efficiently, OSGA is appropriate for solving large-scale problems. Numerical results reported by Ahookhosh in [1,3] for unconstrained problems, and by Ahookhosh and Neumaier in [4-6] for simply constrained problems, show the good behavior of OSGA for solving practical problems. Note that there is flexibility in how the best point x_b is chosen in each iteration. In the next section we give a two-stage scheme (called OSGA-S) whose outer stage is OSGA and whose inner stage is a multidimensional subspace search used to produce a suitable point without imposing a significant computational cost on the outer stage OSGA for solving the problem (1). Structured Convex Optimization Problems In this paper we consider the convex optimization problem (1), which appears in many applications such as signal and image processing, machine learning, statistics, data fitting, and inverse problems; see, e.g., [15,50]. In many applications, the objective function of (1) involves expensive linear mappings (equivalently, matrix-vector products with dense matrices). To apply a first-order method for minimizing such problems, the first-order oracle (function values and subgradients) should be available, i.e., f(x) = Σ_{i=1}^p f_i(A_i x) and a subgradient g = Σ_{i=1}^p A_i^* g_i with g_i ∈ ∂f_i(A_i x). Hence, in each call of the first-order oracle, p forward operators A_i, i = 1, . . . , p, and p adjoint operators A_i^*, i = 1, . . . , p, must be applied, requiring O(n²) operations. This leads to computationally expensive function and subgradient evaluations, such that the total cost of using a first-order method is dominated by the cost of applying the forward and adjoint linear operators.
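To make this oracle cost concrete, the following sketch evaluates an objective of the form (1) together with one subgradient. The particular terms f_i (two ℓ1-type terms), the operators, and the dimensions are illustrative assumptions only; the point is the pattern of one forward product A_i x and one adjoint product A_i^T(·) per term and per oracle call, which dominates the O(n²) cost.

```python
import numpy as np

def oracle(x, ops, funcs, grads):
    """First-order oracle for f(x) = sum_i f_i(A_i x).

    ops   : list of dense matrices A_i (the costly linear operators)
    funcs : list of callables u -> f_i(u)
    grads : list of callables u -> a subgradient of f_i at u
    Each call applies every forward operator A_i and every adjoint A_i^T once.
    """
    fx, g = 0.0, np.zeros_like(x)
    for A, fi, gi in zip(ops, funcs, grads):
        u = A @ x              # forward operator A_i x  (costly)
        fx += fi(u)            # cheap nonlinear term
        g += A.T @ gi(u)       # adjoint operator applied to a subgradient of f_i (costly)
    return fx, g

# Illustrative instance with p = 2 nonsmooth terms and a dense operator.
rng = np.random.default_rng(0)
m, n, lam = 200, 50, 0.1
A1, y = rng.standard_normal((m, n)), rng.standard_normal(m)
ops = [A1, np.eye(n)]
funcs = [lambda u: np.sum(np.abs(u - y)), lambda u: lam * np.sum(np.abs(u))]
grads = [lambda u: np.sign(u - y), lambda u: lam * np.sign(u)]
fx, g = oracle(rng.standard_normal(n), ops, funcs, grads)
print(fx, np.linalg.norm(g))
```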
This motivates the quest for developing an acceleration of OSGA using a multidimensional subspace search for solving such problems. The primary idea of multidimensional subspace methods is to restrict the next iteration to a low-dimensional subspace by constructing a subproblem with a reduced dimension. Let us fix M ≪ n, where n is the number of variables, and let the sequence {x_k}_{k≥0} be generated by the outer iteration. In this case, a direction d belongs to the subspace S given in (8) if and only if there exist constants t_1, . . . , t_M such that d = Ut, as in (9), where U is a matrix whose columns are the directions considered and t = (t_1, t_2, . . . , t_M)^T is a vector of coefficients. Afterwards, the M-dimensional minimization problem (10), that is, the minimization over t ∈ R^M of Σ_{i=1}^p f_i(v_i + V_i t), is considered to determine the best possible vector of coefficients t, where v_i := A_i x and V_i := A_i U. The minimization problem (10) shows that the procedure of searching for the best possible direction of the form (9) in the subspace (8) generalizes the idea of exact line search, see, e.g., [44], but it provides an approximate minimization. One can also construct a subspace of the form (11); the corresponding subspace minimization is then defined by (12). Since M ≪ n, the minimization subproblems (10) and (12) are low-dimensional and can be solved efficiently by classical optimization methods. Hence subspace search techniques can be implemented extremely fast. This leads to suitable schemes for large-scale optimization, as the number of variables of practical problems keeps growing. Moreover, using a multidimensional subspace search as an inner step of iterative schemes needs little memory, which may be considerably cheaper than performing one step of the algorithm in the full dimension. Further, many common ideas in nonlinear optimization can be considered as multidimensional subspace search techniques, namely conjugate gradient, limited memory quasi-Newton, and memory gradient methods; see, e.g., [18,23,54]. Motivated by the above discussion, the multidimensional subspace search scheme, Algorithm 2 (MDSS), can be outlined as follows: collect the search directions, construct the subspace, and solve (10) or (12) inexactly to find t*. To implement Algorithm 2 successfully, some factors are crucial: (i) the number of directions M, which controls the computational cost of the scheme; (ii) choosing suitable directions to construct the subspaces; (iii) solving the minimization problem (10) or (12) efficiently. Indeed, for choosing the number of directions M, there is a trade-off between the total computational cost per iteration and the amount of possible decrease in function values. We here use MDSS as an accelerator of OSGA for solving problems involving costly linear operators. More precisely, we save some previously computed points, construct a subspace of the form (8), and apply MDSS to find a point x_b in Line 8 of OSGA. This typically gives us a better point x_b in Line 9 of OSGA. In the next subsection, we will show how the subspace S is constructed and how the subproblem (12) can be solved efficiently at a reasonable cost. Solving the Auxiliary Problem (12) by OSGA In this section we show how one can construct a suitable subspace of the form (11) and how to solve the auxiliary problem (12) with OSGA. Without loss of generality, we here assume U_i = R^m for i = 1, . . . , p; U_{:j} and (V_i)_{:j} denote the jth column of the matrices U and V_i, respectively.
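The following sketch shows the corresponding reduced oracle under the assumption (consistent with the definitions v_i := A_i x and V_i := A_i U above) that the subspace problem has the form of minimizing Σ_i f_i(v_i + V_i t) over t. Once the small matrices V_i are cached, no further products with the operators A_i are needed inside the subspace search, which is exactly why the inner stage is cheap. The ℓ1-type terms reuse the assumptions of the earlier oracle sketch.

```python
import numpy as np

def reduced_oracle(t, vs, Vs, funcs, grads):
    """Oracle of the reduced problem phi(t) = sum_i f_i(v_i + V_i t), t in R^M.
    Only the cached matrices V_i = A_i U are used; the costly operators A_i
    are never applied inside the subspace search."""
    val, g = 0.0, np.zeros_like(t)
    for v, V, fi, gi in zip(vs, Vs, funcs, grads):
        u = v + V @ t          # m x M product instead of the m x n product A_i x
        val += fi(u)
        g += V.T @ gi(u)
    return val, g

# Illustrative setup with p = 2 terms (same assumed terms as in the earlier sketch).
rng = np.random.default_rng(1)
m, n, M, lam = 200, 50, 3, 0.1
A1, y = rng.standard_normal((m, n)), rng.standard_normal(m)
ops = [A1, np.eye(n)]
funcs = [lambda u: np.sum(np.abs(u - y)), lambda u: lam * np.sum(np.abs(u))]
grads = [lambda u: np.sign(u - y), lambda u: lam * np.sign(u)]

x = rng.standard_normal(n)            # current point of the outer iteration
U = rng.standard_normal((n, M))       # columns span the search subspace
vs = [A @ x for A in ops]             # v_i = A_i x, computed once in the outer stage
Vs = [A @ U for A in ops]             # V_i = A_i U, computed once in the outer stage
print(reduced_oracle(np.zeros(M), vs, Vs, funcs, grads))   # phi(0) equals f(x)
```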
Let us consider a variant of OSGA using the multidimensional subspace search technique, Algorithm 3: OSGA-S (optimal subgradient algorithm with subspace search), whose input consists of the global parameters δ, α_max ∈ ]0, 1[ and 0 < κ ≤ κ̄ together with local parameters analogous to those of OSGA. In OSGA-S, if the number of iterations is less than M, we save the points x and x′ and the related vectors v_i; these points and the best iterate so far (x_b) are used to construct the subspace (13). If the number of iterations is larger than or equal to M, we use the subspace (13) and solve a subspace problem of the form (12) with t ∈ R^{2M+1}, i.e., the problem (14). This possibly leads us to a better point x_b than that provided in Line 8 of OSGA. Note that if the number of iterations is greater than or equal to M, OSGA-S is a two-stage algorithm, where the outer stage is OSGA and the inner stage is a subspace search in Line 15. In the next result, Theorem 1, we show the following: let the points generated by the former iterations of OSGA-S be used to construct the subspace (13); then each step of OSGA applied to (14) in MDSS needs 4pm(2M + 1) operations. Proof. In step k of OSGA-S, the columns of the matrices V_{ik} are vectors of the form A_i applied to previously visited points. This means that the construction of V_{ik}, i = 1, . . . , p, has no extra cost if these vectors have been saved in the outer scheme of OSGA-S. We now compute the first-order oracle at t from the quantities V_{ik} t and V_{ik}^* ∂f_i(V_{ik} t). Computing each of V_{ik} t and V_{ik}^* ∂f_i(V_{ik} t), i = 1, . . . , p, needs m(2M + 1) operations. Therefore, apart from the cost of the nonlinear terms, we need 2pm(2M + 1) operations in each call of the first-order oracle for the problem (14). Since OSGA requires two calls of the first-order oracle in each iteration, we need 4pm(2M + 1) operations in each iteration of MDSS. By (15), we have x_k, x′_k, x_b ∈ S. Let t* ∈ R^{2M+1} be the minimizer of the subspace problem (14) associated to the subspace (13). By (18), the current best point x_b belongs to this subspace, and so, taking U_k t* as the new candidate for x_b, we can write f(U_k t*) ≤ f(x_b), giving the result. Note that if U_k and V_{ik}, for i = 1, . . . , p, are collected in the outer stage of OSGA-S, then no extra effort for computing them is needed when applying the subspace search scheme MDSS (see Lines 5, 8, 10, and 17 of OSGA-S). Let us assume m ≈ n. Then Theorem 1 implies that in each step of OSGA for solving (14) one needs O(n) operations. Therefore, applying n steps of OSGA to (14) has the same complexity as one call of the oracle for the full-dimensional problem in the outer scheme. Since we suppose that n is a large number, the cost of applying n_0 (n_0 ≪ n) steps of OSGA to (14) can be ignored in comparison to the cost of a single call of the first-order oracle in the full dimension. Hence MDSS can be applied efficiently to accelerate OSGA without imposing too much computational cost for large-scale objectives involving expensive linear operators and cheap nonlinear terms. Theorem 1 also implies that OSGA-S is a special case of OSGA, obtained by specializing the choice of Line 8 in OSGA. Therefore, all theoretical features of OSGA remain valid; in particular, OSGA-S is optimal for smooth problems with Lipschitz continuous gradients, Lipschitz continuous nonsmooth problems, and strongly convex problems. We summarize this result in the next theorem, which was proved in [42]. We compare OSGA-S with OSGA, SGA-1 (a non-summable diminishing step-length subgradient algorithm, cf. [14]), SGA-2 (a non-summable diminishing step-size subgradient algorithm, cf. [14]), and NESUN (Nesterov's universal gradient method, cf. [41]).
In our implementation, SGA-1 and SGA-2 use non-summable diminishing step-lengths and step-sizes, respectively, with scaling parameter α_0 as specified below. Overdetermined Linear System of Equations Consider the overdetermined linear system of equations (19), y = Ax + ν, where x ∈ R^n is an unknown vector, A ∈ R^{m×n} with m > n, y ∈ R^m is an observation vector, and ν ∈ R^m is an unknown but small additive noise. The objective is to recover x from y by solving (19). Such problems appear in many applications, see, e.g., [9,10,16]. They are of particular interest for robust fitting of linear models to data. In practice, this problem is typically ill-posed, cf. [43]. Therefore, x is usually computed by a minimization problem of the form (1) with one of the objective functions of Table 1 (a list of minimization problems for solving overdetermined systems of equations, where λ denotes the regularization parameter; the objective functions are convex and contain a linear mapping A, which is typically a dense matrix, and y is defined by (19)). Here, we set m = 50000 and n = 5000. Since some of the problems given in Table 1 involve regularization terms (e.g., ‖·‖_∞) for which the NESUN subproblem cannot be solved efficiently, we do not consider NESUN in this comparison. We therefore use SGA-1, SGA-2, OSGA, and OSGA-S for solving this overdetermined system of equations. We set α_0 = 8 × 10^{-1} for SGA-1, use α_0 = 10^{-4} for SGA-2 when it is applied to the problems L22R, L22L22R, L22L1R, L1R, L1L22R, and L1L1R, and α_0 = 2 × 10^{-2} for SGA-2 when it is applied to the problems L2R, L2L22R, L2L1R, LIR, LIL22R, and LIL1R. Note that SGA-2 is very sensitive to the parameter α_0 for different problems, so we tuned α_0 to attain the best performance of SGA-2 for the considered set of problems. For all problems of Table 1, we set λ = 1. We first conduct an experiment on the parameter M to find an optimal range for this parameter. To this end, we consider the problems of Table 1, solve each problem by OSGA in 100 iterations, save the best function value f_s in each case, and run OSGA-S with M = 1, 2, . . . , 20 until it achieves f_s. The results of our experiment are summarized in Table 2 and Figs. 1 and 2. In Table 2, the best parameter M_best for each problem regarding the best number of iterations and the best running time, along with the results for M_best, M = 2, and M = 20, is reported. (Figures 1 and 2 report, for the problems of Table 1, N(M), the total number of iterations, and T(M), the running time, respectively.) From the results of Table 2 and Figs. 1 and 2, it can be seen that M_best varies over the considered problems; however, the interval [1, 5] seems to be statistically reasonable for the parameter M. In addition, it is clear that the performance of OSGA-S depends on the parameter M, but if we set M ∈ [1, 5], OSGA-S outperforms OSGA except for L1R, L1L22R, and L1L1R (see figures (g), (h), and (i) of Fig. 2). We now solve the problems reported in Table 1 by SGA-1, SGA-2, OSGA, and OSGA-S, where we first solve these problems by OSGA in 100 iterations, save the best function value f_s, and stop the other methods whenever they attain a function value less than or equal to f_s or the number of iterations reaches the maximum number of iterations, which is 500 here. We set M = 2 for OSGA-S. The results are summarized in Table 3 and Fig. 3. In Table 3, N and T denote the number of iterations and the running time, respectively. The results of Table 3 show that OSGA and OSGA-S outperform SGA-1 and SGA-2 significantly regarding both the number of iterations and the running time; moreover, OSGA-S needs fewer iterations and less running time than OSGA.
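As an illustration of the kind of oracle these experiments repeatedly call, the sketch below generates a small instance of (19) and evaluates an objective of the assumed LIR type, ‖Ax − y‖_∞ + λ‖x‖_2², together with one subgradient. The exact formulas of Table 1 are not reproduced in the text, so this objective, the reduced problem sizes, and the noise level are assumptions made only for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, lam = 500, 100, 1.0                        # the paper uses m = 50000, n = 5000
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
y = A @ x_true + 1e-2 * rng.standard_normal(m)   # overdetermined system (19): y = A x + nu

def lir_oracle(x):
    """Assumed LIR-type objective f(x) = ||A x - y||_inf + lam * ||x||_2^2 and one
    subgradient; note that the infinity norm contributes a sparse subgradient
    that touches only a single row of A per oracle call."""
    r = A @ x - y
    j = int(np.argmax(np.abs(r)))                # index attaining the maximum residual
    fx = np.abs(r[j]) + lam * np.dot(x, x)
    g = np.sign(r[j]) * A[j, :] + 2.0 * lam * x
    return fx, g

print(lir_oracle(np.zeros(n))[0])
```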
Table 1, where N (M) denotes the total number of iterations 3, we illustrate the relative error of function values versus iterations, i.e., where f 0 , f k , and f denote the function values at a starting point x 0 , the current point x k , and the minimizer x, respectively. The results of Fig. 3 show that in many cases OSGA-S get the same accuracy in fewer iterations and less running time; however, for Table 1, where T (M) denotes the running time some cases such as L1R, L22L1R, and L1L1R the difference between the number of iterations of OSGA-S and OSGA is not significant and worse running time are attained by OSGA-S. Moreover, the results of OSGA-S is much better than SGA-1, SGA-2, and OSGA for LIR and LIL1R that might be because of poor sparse subgradients of the infinity norm · ∞ for SGA-1, SGA-2, and OSGA, while the subspace minimization step of OSGA-S involves a combination of several former points resulting to better directions. . 3 The relative error of function values δ k against iterations for SGA-1, SGA-2, OSGA, and OSGA-S for solving overdetermined systems of equations using the minimization problems presented in Table 1 Support Vector Machines The learning with support vector machines (SVM) leads to several expensive convex optimization problems with large dense data set. Some of these problems have the form designed in this paper. Let us consider a binary classification, where a set of training data (x 1 , y 1 ), . . . , (x q , y q ) in which x i ∈ R n and y i ∈ {−1, 1} for i = 1, . . . , q is given. The aim is to find a classification rule using the training data, so that for a new point x one can assign a class y ∈ {−1, 1} to x by the derived classification rule. The classification rule for SVM is given by the sign of x, w + w 0 , where w and w 0 may be determined by solving a penalized problem where [z] + = max{z, 0}, and ψ can be · 1 (SVML1R), · 2 2 (SVML22R), and 1 2 · 2 2 + · 1 (SVML22L1R) (see, e.g., [13,48,55] and references therein). For x, w = w T x, let us define Then the problem (21) can be rewritten in the form where [1 − A w] + = max{1 − A w, 0} and 1 ∈ R q is the vector of all ones. Typically A is a dense matrix constructed by data points x i and y i for i = 1, . . . , q. It is clear that (22) is of the form (1), where an associated subgradient g is given by In order to show the benefit of our subspace technique for this kind of problems, we apply SVML1R, SVML22R, and SVML22L1R to the leukemia data given by Golub et al. in [24], available in [25]. This dataset comes from a study of gene expression in two types of acute leukemias (acute myeloid leukemia (AML) and acute lymphoblastic leukemia (ALL)) and it consists of 38 training data points and 34 test data points. We apply SVML1R, SVML22R, and SVML22L1R to the training data points (q = 38 and n = 7129) with six levels of regularization parameters. We first solve the problems by OSGA in 1000 iterations and save the best function value f s in each case. We then run SGA-1, SGA-2, NESUN, OSGA, and OSGA-S, where they are stopped after 5000 iterations or after achieving a function value at least as good as f s . The associated results are summarized in Table 4 and Figs. 4 and 5. The results of Table 4 show that OSGA and NESUN are comparable but better than SGA-1 and SGA-2, and OSGA-S outperforms all others significantly with respect to the number of iterations (N ) and the running time (T ) (the best average is given by OSGA-S). In Figs. 
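As a concrete reading of (22), the sketch below evaluates the hinge-loss objective and one subgradient for the ℓ1-regularised variant (SVML1R). It assumes the rows of the matrix à are formed as y_i [x_i, 1] so that the bias w_0 is carried as the last component of w; this construction is an illustrative assumption, and the other regularisers are handled analogously.

```python
import numpy as np


def svm_l1_objective_and_subgrad(w, A_tilde, lam=1.0):
    """Objective and one subgradient of the hinge-loss SVM problem

        f(w) = sum( max(1 - A_tilde @ w, 0) ) + lam * ||w||_1   (SVML1R-type),

    where each row of A_tilde is assumed to be y_i * [x_i, 1] so that the
    bias w0 is the last component of w.
    """
    margins = 1.0 - A_tilde @ w
    hinge = np.maximum(margins, 0.0)
    f = hinge.sum() + lam * np.abs(w).sum()
    # Subgradient: -A_tilde^T e with e_i = 1 where the hinge is active,
    # plus lam * sign(w) for the l1 term.
    active = (margins > 0).astype(float)
    g = -A_tilde.T @ active + lam * np.sign(w)
    return f, g
```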
4 and 5, we illustrate the function values versus iterations indicating that OSGA-S needs few iterations (typically less than 35 iterations) to get the accuracy that OSGA attains in 1000 iterations and SGA-1, and SGA-2 get in few thousands of iterations; however, this number of iterations is varied from about 100 to few thousands for NESUN for different problems. This shows a good potential of OSGA-S to be applied to machine learning problems. We now consider the accuracy (the ratio of the number of correctly predicted data labels to the total number of data multiplied by 100) of OSGA-S for solving (22) N , T , and f b denote the number of the function values, the running time, and the best function value achieved by the associated algorithm. In each problems, the best iteration, time (in second), and function value are displayed as bold The results are summarized in Table 5. From the results of this table, it is clear that in many cases the accuracy of OSGA-S is increased by considering a bigger number of iterations; however, it produces acceptable results after 50 iterations. In addition, it can be seen that the regularization parameter plays a crucial role in the accuracy of OSGA-S. Among state-of-the-art SVM solvers, we here compare the accuracy of OSGA-S with LIBSVM [17], FITCSVM (MATLAB internal function), PEGASOS [47], SVMperf [29] with their default parameters to solve (22). In our implementation, LIBSVM, FITCSVM, SVMperf, and PEGASOS attain the accuracies 65.27, 79.17, 79.17, and 69.44, respectively. A comparison among these accuracies with those reported in Table 5 shows that the accuracy of OSGA-S is comparable or even better than LIBSVM, FITCSVM, SVMperf, and PEGASOS for the considered data set. Since the number of training and testing data for the leukemia data set [24,25] is small, we consider a comparison among OSGA-S and LIBSVM, FITCSVM, SVMperf, and PEGASOS for w1a-w8a data sets [45]. After training procedure, we consider a concatenation of the training and testing data and apply the derived classification functions, where the obtained accuracy for each solver is reported in Table 6. In this table, OSGA-S-1, OSGA-S-2, and OSGA-S-3 stand for OSGA-S for the problems SVML1R, SVML22R, and SVML22L1R, where we tune the regularization parameter to get the best performance of OSGA-S (see the numbers in parentheses of the last three columns of Table 6) and stop OSGA-S after 50 iterations. The results of Table 6 show that the accuracy of OSGA-S is almost comparable with those of state-of-the-art solvers LIBSVM, FITCSVM, SVMperf, and PEGASOS. Table 5 The accuracy of OSGA-S for several levels of the regularization parameters after various number of iterations for solving SVML1R, SVML22R, and SVML22L1R Problem name Table 6 The accuracy of LIBSVM, FITCSVM, SVMperf, PEGASOS, OSGA-S-1, OSGA-S-2, OSGA-S-3 for solving the problem (22). The number of features for all data is 300, and TrD and TeD denote the number of training and testing data, respectively. Conclusions In this paper we give an iterative scheme for solving convex optimization problems involving costly linear operators with cheap nonlinear terms. More precisely, we combine OSGA with a multidimensional subspace search, which leads to solve a sequence of low-dimensional subproblems that can be solved efficiently by OSGA. Numerical results for overdetermined system of equations and support vector machines show the efficiency of the scheme proposed.
7,466.2
2018-10-10T00:00:00.000
[ "Computer Science" ]
An improved adjoint-based ocean wave reconstruction and prediction method Abstract We propose an improved adjoint-based method for the reconstruction and prediction of the nonlinear wave field from coarse-resolution measurement data. We adopt the data assimilation framework using an adjoint equation to search for the optimal initial wave field to match the wave field simulation result at later times with the given measurement data. Compared with the conventional approach where the optimised initial surface elevation and velocity potential are independent of each other, our method features an additional constraint to dynamically connect these two control variables based on the dispersion relation of waves. The performance of our new method and the conventional method is assessed with the nonlinear wave data generated from phase-resolved nonlinear wave simulations using the high-order spectral method. We consider a variety of wave steepness and noise levels for the nonlinear irregular waves. It is found that the conventional method tends to overestimate the surface elevation in the high-frequency region and underestimate the velocity potential. In comparison, our new method shows significantly improved performance in the reconstruction and prediction of instantaneous surface elevation, surface velocity potential and high-order wave statistics, including the skewness and kurtosis. Introduction In recent years, with the increasing capabilities in water wave measurement, substantial efforts have been made to assimilate the observation data of water waves into computational models for wave field reconstruction and prediction. In modern observational studies, the key wave properties and the spatial distribution of the wave surface elevations and velocities can be measured using remote sensing Flow E2-3 In the present study, we propose a new adjoint-based data assimilation method, named the connected-parameter method (CPM), to address the inconsistency between measurement resolution and simulation resolution. The key feature of our method is the consideration of the wave-physics-based connection between the control variables, i.e. the initial surface elevation and the initial surface velocity potential. A typical wave model imposes constraints on the time evolution of the wave state, not on the initial wave state. The unconstrained initial wave states have a significant impact on wave dynamics. Therefore, for wave reconstruction, it is necessary to develop a wave-physics-based connection to guide the algorithm to search for the optimal initial wave state. We show that the conventional method, named the free-parameter method (FPM) in this paper, has unsatisfactory performance when the measurement data have a lower resolution than the reconstructed wave field. The wave reconstruction and prediction performance of both methods are evaluated for wave data of various nonlinearity and noise levels, and the new CPM is shown to have much improved performance. Mathematical model and methodology In this section, we introduce the mathematical foundations of the wave reconstruction and prediction framework. As shown in figure 1, the key components include a wave model, the corresponding adjoint model and an optimiser. The wave model serves as a nonlinear function that maps the control variable, i.e. the initial wave condition, to a time series of wave simulation data. The adjoint model is used for calculating the gradients of a predefined cost function with respect to the control variables. 
The control variables are then updated iteratively through the optimisation process. Wave model For the wave simulation, we use the high-order spectral (HOS) method (Dommermuth & Yue, 1987;West et al., 1987), a phase-resolved wave model. Under the potential flow assumption, it can be shown that the wave system is uniquely determined by the quantities at the surface (Zakharov, 1968). The governing equations expanded to the third perturbation order are (Aragh & Nwogu, 2008;West et al., 1987;Yoon et al., 2015) where = (x, y, t) is the surface elevation, = (x, y, z = , t) is the velocity potential at the water surface, z = / z(x, y, z = , t) is the surface vertical velocity, ∇ = ( / x, / y) is the gradient operator in the horizontal directions, x and y denote the horizontal coordinates, z denotes the vertical coordinate and L[ ] = −F −1 [|k|F [ ]] is a linear operator with |k| being the magnitude of the wavenumber. Here, F and F −1 denote Fourier transform and inverse Fourier transform, respectively. The boundary condition is assumed to be periodic and the spatial derivatives are calculated efficiently with the fast Fourier transform. The fourth-order Runge-Kutta method is used for time advancement of the evolution equations. The HOS method has been used extensively in wave simulations. More details on its numerical scheme and validation can be found in Mei, Stiassnie, and Yue (2018). Adjoint model Based on the wave model, i.e. (2.1) and (2.2), the corresponding adjoint model is (Aragh & Nwogu, 2008) (x, y, t) L-BFGS-b Figure 1. Wave field reconstruction and prediction scheme consisting of the HOS method, the adjoint model and a gradient-based optimiser. where 1 = 1 (x, y, t) and 2 = 2 (x, y, t) are the adjoint variables. The state variables, and , which are predicted by the wave model, are stored and serve as the parameters in the adjoint model. The adjoint variables 1 and 2 are initialised as 0 at the final time instants. At each observation time instant, the difference between the predicted surface elevation obtained from the wave model and the measured data, i.e. ( − M ), is added to the adjoint variable 1 at the corresponding measured locations as 1 = 1 + ( − M ). The numerical scheme for integrating the adjoint model is the same as that for the wave model, except that the adjoint model is integrated backwards in time (i.e. with a negative time step) to obtain the adjoint variables at the initial time, which determine the gradients of the cost function with respect to the control variables, as explained in the next section. Cost function and gradients Searching for the optimal initial wave condition that minimises the difference between the reconstructed wave field and the measurement is a key step in wave reconstruction. In this study, we define a cost function to quantify this difference between the predicted surface elevation from the time evolution of the initial wave state and the measured surface elevation M , based on the L 2 -norm error (Aragh & Nwogu, 2008;Gronskis et al., 2013;Xu & Wei, 2016) (2.5) where N X and N Y denote the grid numbers of the measurement in the x and y coordinates, respectively, and N T denotes the number of time instants of the measurement used in the wave reconstruction process. For a given wave model, the predicted surface elevation is determined uniquely by the initial wave state used in the forward wave simulation. Therefore, J is a function of the initial conditions 0 and 0 , bounded by the wave model. 
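The following sketch indicates how the cost function (2.5) and its adjoint-based gradients can be wired to a quasi-Newton optimiser such as L-BFGS-B in the FPM setting. The names hos_forward and adjoint_backward are hypothetical placeholders for the forward HOS integration and the backward adjoint integration described above; the sketch only illustrates the data flow of figure 1 and is not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize


def assimilate_fpm(eta0_guess, phi0_guess, eta_meas, meas_idx,
                   hos_forward, adjoint_backward):
    """FPM-style assimilation: eta0 and phi0 are independent control variables.

    hos_forward(eta0, phi0) -> eta_pred of shape (N_T, N_y, N_x)  (hypothetical)
    adjoint_backward(eta_pred, eta_meas) -> (dJ/deta0, dJ/dphi0)  (hypothetical;
        in practice the stored forward states eta and phi are also required)
    meas_idx is a pair of index arrays selecting the coarse measurement grid.
    """
    shape = eta0_guess.shape
    n = eta0_guess.size

    def cost_and_grad(z):
        eta0, phi0 = z[:n].reshape(shape), z[n:].reshape(shape)
        eta_pred = hos_forward(eta0, phi0)
        diff = eta_pred[:, meas_idx[0], meas_idx[1]] - eta_meas   # measured points only
        J = 0.5 * np.sum(diff ** 2)                               # L2 cost, cf. (2.5)
        g_eta0, g_phi0 = adjoint_backward(eta_pred, eta_meas)     # adjoint gradients
        return J, np.concatenate([g_eta0.ravel(), g_phi0.ravel()])

    z0 = np.concatenate([eta0_guess.ravel(), phi0_guess.ravel()])
    res = minimize(cost_and_grad, z0, jac=True, method="L-BFGS-B",
                   options={"maxiter": 100, "ftol": 1e-10})
    return res.x[:n].reshape(shape), res.x[n:].reshape(shape)
```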
The difference between our new method, CPM, and the conventional method, FPM, is summarised in table 1. In the FPM, both 0 and 0 are treated as the independent control variables to minimise Method Control parameters Gradient expression FPM 0 (x, y) and 0 (x, y) the cost function in the optimisation problem. In the optimisation process, 0 and 0 are updated at each iteration step using the gradient information J/ 0 and J/ 0 , respectively. In the new method, CPM, we derive a physics-based constraint that connects 0 to 0 by utilising the dispersion relation. The cost function is then determined entirely by 0 . Therefore, in an iteration step in the optimisation process of CPM, 0 is updated and 0 is calculated based on 0 via the physics-based constraint. Here we present the expression of the gradients of the cost function with respect to the control variables. The detailed derivations are given in the supplementary material available at https://doi.org/ 10.1017/flo.2021.19. As shown in table 1, the gradients of the cost function regarding 0 and 0 in FPM The total derivative of the cost function with respect to 0 in CPM is which utilises the relation between 0 and 0 , where = (g|k|) 1/2 is the wave angular frequency and the sgn function is defined as follows: We stress that, in the CPM, the first-order approximation shown in (2.8) only applies to the initial wave field, and the nonlinear wave dynamics are captured using the nonlinear wave evolution model shown in (2.1) and (2.2). Wave field reconstruction and prediction With the gradient information calculated from the adjoint model as shown in table 1, the L-BFGS method (Byrd et al., 1995;Zhu et al., 1997) is then used to optimise the control parameters to reduce the cost function. Similar reconstruction frameworks, including the forward model, the adjoint model and a gradient-based optimiser, have been used in previous studies (see e.g. Aragh & Nwogu, 2008;Foures et al., 2014;Gronskis et al., 2013;Xu & Wei, 2016). The key steps to reconstruct and predict the wave field from measurement are sketched in figure 1 and summarised as follows: • Step 1. An initial guess of 0 and 0 is given for starting the wave simulation. • Step 2. The HOS method is used for the forward simulation of the wave field from the initial time t 0 to the final time t f . • Step 3. The cost function J is calculated using (2.5). If the difference of J in two consecutive optimisation iterations is smaller than a predefined threshold value, the process ends and the initial field is the optimal solution. Otherwise, go to Step 4. • Step 4. The adjoint model is integrated from the final time t f to the initial time t 0 to obtain the gradient information J/ 0 and J/ 0 . • Step 5. The gradient information and the cost function are fed into the L-BFGS method for the optimisation of the initial condition 0 and 0 to reduce the cost function J. • Step 6: Return to Step 2. A new optimisation iteration is started with the modified initial condition 0 and 0 . Generation of wave data In this study, we use the wave solution obtained from the wave simulation using the third-order HOS method as the true wave data, which is sufficient to capture the nonlinear four-wave interactions (Hasselmann, 1962), to test the performance of the wave reconstruction methods. 
The initial condition of the wave field is constructed from the directional Joint North Sea Wave Project (JONSWAP) spectrum (Hasselmann et al., 1973) where p is the Phillips parameter, is the wave frequency, p = 1.57 rad s −1 is the peak wave frequency, p = 24.98 m is the peak wavelength, T p = 4 s is the peak wave period, = 3.3 is the peak-enhancement parameter, = 0.07 for ≤ p , = 0.09 for > p , and D( ) = 2 cos 2 ( )/π with ∈ [−π/2, π/2] is the angular spreading function. The parameters chosen here are similar to those in Qi, Wu, Liu, Kim, and Yue (2018). The computational domain size is set to L x × L y = 16 p × 16 p , with 512 grid points in the x and y directions, respectively. In each case, the wave data are collected after a relaxation period of wave evolution. In the present study, this relaxation period is 100 s, which is sufficient for capturing the nonlinear wave dynamics, as suggested in Dommermuth (2000). The simulation time interval is 0.08 s, and the time duration used for wave reconstruction and prediction is 100 s. The simulated surface elevations are referred to as the true wave field T (x, y, t). In the simulation of wave data, we consider different wave nonlinearity and noise levels in the computational cases listed in table 2. We use two quantities to measure the wave field nonlinearity, including the effective wave steepness defined as (Qi, Wu, Liu, Kim, & Yue, 2018) is the root mean square of the initial surface elevation, and the local maximal wave steepness (ka) l = max ( 2 x + 2 y ) 1/2 . As listed in table 2, we consider a range of wave steepness in cases KA03-N00, KA06-N00, KA09-N00 and KA13-N00. To account for the effect of the measurement error in cases KA09-N03, KA09-N06 and KA09-N10, we add a random noise with a magnitude of 3 %, 6 % and 10 %, respectively, of the maximal value of wave surface elevation to the true wave field as the measurement M (x, y, t). As shown in figure 2 and table 3, M has a much lower spatial resolution than the true wave field T . After the synthetic measurement data M are obtained, we then perform the data assimilation process separately using the FPM and CPM (see § 2.4). The grid number used for the wave reconstruction/prediction is 512 × 512. The degrees of freedom of the independent control variables is then 5.2 × 10 5 in FPM and 2.6 × 10 5 in CPM. In the optimisation iteration, we set both 0 and 0 to zero as the initial guess. The wave fields obtained by solving the governing equations (2.1) and (2.2) from the initial guess with data assimilation using FPM and CPM are referred to as the reconstructed/predicted wave field FPM and CPM , respectively (table 3). We use the first 50 s of data for wave reconstruction and the remaining 50 s for wave prediction. Note that the wave measurement data M are assumed unknown in the prediction time duration, i.e. [50, 100] s, in the data assimilation process. The choice of the above parameters is consistent with the recent studies on wave field reconstruction and prediction (Qi, Wu, Liu, Kim, & Yue, 2018). Performance comparison of CPM and FPM In this section, we evaluate the performance of CPM and FPM by comparing their results with the ground truth. The results presented are from case KA09-N00, and those from other cases are presented in the next section to examine the effects of nonlinearity and measurement noise. We evaluate the convergence Figure 3 shows the convergence of the normalised cost function as the number of optimisation iterations increases. 
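For completeness, the directional JONSWAP spectrum used above to construct the initial wave field can be evaluated as in the short routine below. This is the standard form of the spectrum with the parameters quoted earlier in this section (ω_p = 1.57 rad s⁻¹, γ = 3.3, σ = 0.07 or 0.09, and D(θ) = 2 cos²(θ)/π); the Phillips parameter value in the sketch is illustrative rather than the one actually used for the cases in table 2.

```python
import numpy as np


def jonswap_directional(omega, theta, alpha_p=0.01, omega_p=1.57,
                        gamma=3.3, g=9.81):
    """Directional JONSWAP spectrum S(omega, theta) = S(omega) * D(theta).

    Standard form of the spectrum; omega must be positive, and alpha_p
    (the Phillips parameter) is an illustrative value.
    """
    omega = np.asarray(omega, dtype=float)
    sigma = np.where(omega <= omega_p, 0.07, 0.09)
    r = np.exp(-((omega - omega_p) ** 2) / (2.0 * sigma ** 2 * omega_p ** 2))
    s_omega = (alpha_p * g ** 2 / omega ** 5
               * np.exp(-1.25 * (omega_p / omega) ** 4) * gamma ** r)
    d_theta = np.where(np.abs(theta) <= np.pi / 2,
                       2.0 * np.cos(theta) ** 2 / np.pi, 0.0)  # angular spreading
    return np.outer(s_omega, d_theta)
```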
In FPM, the cost function saturates at approximately 20 % of the initial value, while in CPM, the cost function converges to below 0.1 % of the initial value. Wave evolution To evaluate the algorithm performance, we first present the reconstructed initial wave state. In figure 4, we plot the instantaneous wave field at t = 0 of the true wave field T in the KA09-N00 case as described in § 3.1, the wave surface elevation FPM and CPM reconstructed using the conventional method, FPM, and our new method, CPM, respectively, and their zoomed-in views. In figure 5, we plot the same figures for velocity potential . As shown, the reconstructed surface elevation using FPM preserves the main features of the true wave field. However, we also observe spurious surface fluctuations in the wave field reconstructed by the FPM (figure 4b). The zoomed-in view shows that the FPM overestimates the wave crests and troughs, comparing figures 4(d) and 4(e). As shown in figures 4(d) and 4( f ), the reconstructed wave field using the CPM agrees well with the true wave field, including the regions near the wave troughs and crests. The result for the surface velocity potential is shown in figure 5. The magnitude of the reconstructed velocity potential by the FPM (figure 5b,e) is significantly underestimated compared with the ground truth (figure 5a,d). On the other hand, the results produced by our CPM (figure 5c, f ) and the ground truth are indistinguishable. Next, we present the omnidirectional wavenumber spectra of reconstructed 0 of FPM, CPM and the ground truth in figure 6(a). The difference between the FPM result and the ground truth is small in the low-wavenumber range but significant in the high-wavenumber region of k > 1.5k p , while the spectrum calculated from the CPM agrees with the true value throughout the entire wavenumber range. The unsatisfactory performance of FPM is likely caused by the coarse measurement resolution, which is much smaller than the resolution of the ground-truth wave field. Specifically, for the high-wavenumber wave components that are not resolved spatially, their dynamics may still be partially captured by the time series of the wave measurement data. The FPM fails to reconstruct these wave components, while CPM is effective because of the additional constraint in (2.8). The advantage of the CPM is also seen from the spectrum of the surface velocity potential plotted in figure 6(b). In our CPM, the spectrum of the velocity potential S agrees with the ground truth. In contrast, the FPM notably underestimates the spectrum S even for the peak wave, and the corresponding magnitude of FPM is then around half of the true value T . This might be explained by the different order of gradient magnitudes in the optimisation process (see figure S1 of the supplementary material), where J/ 0 is nearly one order of magnitude smaller than J/ 0 . In the gradient-based optimisation, the control parameters' increments are proportional to the gradient magnitudes, and thus the cost function minimisation would place more emphasis on the parameters with large gradients, i.e. 0 , resulting in the underestimated 0 in the found suboptimal solution. However, in CPM, the control parameter 0 is connected to 0 , which would not have the same problem caused by the different order of gradient magnitudes. 
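The omnidirectional wavenumber spectra compared above can be estimated from a doubly periodic elevation field by taking the 2-D FFT and binning the spectral power over annuli of constant |k|, as in the sketch below; the normalisation convention is illustrative and may differ by constant factors from the one used in figure 6.

```python
import numpy as np


def omnidirectional_spectrum(eta, Lx, Ly, nbins=128):
    """Bin the 2-D elevation power spectrum over annuli of constant |k|."""
    ny, nx = eta.shape
    eta_hat = np.fft.fft2(eta) / (nx * ny)
    power = np.abs(eta_hat) ** 2
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=Lx / nx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=Ly / ny)
    kmag = np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2)
    k_edges = np.linspace(0.0, kmag.max(), nbins + 1)
    dk = k_edges[1] - k_edges[0]
    which = np.digitize(kmag.ravel(), k_edges) - 1
    s_k = np.bincount(which.clip(0, nbins - 1), weights=power.ravel(),
                      minlength=nbins) / dk
    k_centres = 0.5 * (k_edges[:-1] + k_edges[1:])
    return k_centres, s_k
```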
We also present the summed omnidirectional spectrum of the surface velocities (u, v, w)| z= at both the beginning and end of the reconstruction time duration as in figures 6(c) and 6(d), where the surface velocities are calculated from the surface elevation and surface velocity potential as (Aragh & Nwogu, 2008;West et al., 1987;Yoon et al., 2015) The spectrum in FPM deviates from the ground truth significantly and has a non-physical high energy concentration in the large-wavenumber region, whereas the result in CPM agrees well with the ground truth. Furthermore, the spectrum distribution in FPM, which changes rapidly from t = 0 to t = 50 s as evidenced by the comparison between figures 6(c) and 6(d), also indicates that FPM fails to find a physically optimal solution. We further compare the time history of the reconstructed and predicted wave fields with the true data. Considering that the reconstructed and predicted wave field has a higher spatial resolution than the measurement (see table 3), we have first examined locations where the synthetic surface elevation measurement data are available (not plotted due to space limit): the FPM generally produces a slightly worse result compared with the CPM. As a comparison, we plot in figure 7 the results obtained using both methods at a fixed location without the measurement data (x = 9.4 p , y = 8 p ). The reconstructed surface elevation obtained by FPM has notable deviations from the true wave state, and the surface velocity potential obtained by FPM shares a similar distribution with the ground truth but with a visible difference, while the results obtained by CPM agree well with the true values. In addition, we also compare the spatial distribution of the reconstructed wave field and the true wave field along the line y = 8 p at t = 24 s and t = 72 s in figure 8. As shown in figures 8(a) and 8(b), the reconstructed surface elevation using FPM contains non-physical high-wavenumber oscillations in the spatial domain. The velocity potential obtained by FPM has a noticeable difference compared to the true data T as shown in figures 8(c) and 8(d), which would result in a more significant difference in velocity as shown in figures 6(c) and 6(d) considering the fact that the velocity is related to the spatial derivative of the velocity potential. Similar to figure 7, CPM produces more accurate results compared to FPM. We also observed that, for the results computed at other locations (not plotted), the CPM always outperforms FPM. In summary, including a constraint between 0 and the independent control variable 0 in CPM provides an apparent improvement over FPM in recovering the true wave dynamics in both the reconstruction and prediction time duration. Wave statistics To evaluate the statistics of the reconstructed wave field, we calculate the probability density function (p.d.f.) of the wave field. In figure 9, we plot the p.d.f. of / 2 1/2 and / 2 1/2 at t = 0, t = 24 s for the reconstructed wave field, and t = 72 s for the predicted wave field, where the bracket · · · denotes the spatial mean. Under the linear wave assumptions, the p.d.f. of the surface elevation yields a Gaussian distribution. However, as observed in field measurements (see e.g. Ochi & Wang, 1985), if the wave slopes are not small, the p.d.f. deviates from the Gaussian due to the nonlinearity, which is consistent with the p.d.f. of T and T in our result. The reconstructed and predicted wave fields obtained by CPM successfully recover the p.d.f. 
of the true wave field, while the results by FPM have a non-negligible deviation, indicating the difference of FPM and FPM from the true values, consistent with the results in the preceding section. Skewness and kurtosis are important statistics to reflect the physical features of a nonlinear wave field. Specifically, the skewness measures the deviation of the wave profile from a sinusoidal shape, and the kurtosis indicates the probability of the occurrence of extreme waves (Xiao, Liu, Wu, & Yue, 2013). We compute the skewness C 3 and the kurtosis C 4 from the instantaneous surface elevation as (3.3a,b) We present the evolution of skewness and kurtosis of the reconstructed wave field and the true wave field in figure 10. For a standard Gaussian distribution, their values are C 3 = 0 and C 4 = 3, respectively. x/λ p Figure 9. Probability density function of the normalised reconstructed wave fields obtained using FPM and CPM, and the true wave field at (a,d) t = 0; (b,e) t = 24 s (in reconstruction time duration); and (c, f) t = 72 s (in prediction time duration). The standard Gaussian distribution is also plotted. The skewness of the computed wave field varies from 0.05 to 0.2 and the kurtosis varies from 2.8 to 3.3, mostly above 3, which differs from the statistics of a standard Gaussian distribution because of the wave nonlinearity. As shown in figure 10, our CPM can successfully recover the skewness and kurtosis of the wave field in both the reconstruction time duration and the prediction time duration. On the other hand, there exists a distinct difference between the FPM results and the ground truth, especially for the kurtosis (see e.g. t/T p = 11 in figure 10b). Effect of nonlinearity and measurement noise To evaluate the effect of wave field nonlinearity and measurement noise, we perform the data assimilation for the reconstruction and prediction for wave fields with different nonlinearity and noise levels, as shown in table 2. The wave data generation process is described in § 3.1. We present the omnidirectional wavenumber spectra of the reconstructed initial wave field and the true wave field in figures 11 and 12 for the cases with the largest wave steepness and noise level, i.e. cases KA013-N00 and KA009-N10, respectively. Compared with the results for the case KA09-N00 (see figure 6), the deviation of the wave surface elevation obtained by CPM from the true wave data increases in the high-wavenumber range due to the high nonlinearity and noise. Nevertheless, the performance of CPM in reconstructing/predicting the surface elevation is much better than that of FPM. To quantify the overall performance of CPM and FPM, we define the correlation coefficient between and T as Figure 14. Same legend as in figure 13. The results for cases KA09-N00, KA09-N03, where N X and N Y denote the grid numbers of the reconstructed and predicted wave field in the x and y coordinates, respectively. The value of ( , T ) is a measure of the accuracy of the data assimilation scheme, and a larger value corresponds to a better accuracy. When = T , ( , T ) = 1. For the surface velocity potential, the correlation coefficient is calculated using the same definition as in the above equation. The time-averaged correlation coefficient in the reconstruction time duration (t < 50 s) and the prediction time duration (t > 50 s) for wave fields with different wave steepness and noise level are presented in figures 13 and 14. 
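The statistics used in this and the following section can be computed directly from the gridded fields, as sketched below: the moment-based skewness C₃ and kurtosis C₄ of (3.3a,b), and the correlation coefficient ρ(η, η_T), which is read here as a standard normalised correlation over the grid so that ρ = 1 when the two fields coincide.

```python
import numpy as np


def skewness_kurtosis(eta):
    """Moment-based C3 and C4 of an instantaneous surface elevation field."""
    e = eta - eta.mean()
    var = np.mean(e ** 2)
    c3 = np.mean(e ** 3) / var ** 1.5     # C3 = 0 for a Gaussian field
    c4 = np.mean(e ** 4) / var ** 2       # C4 = 3 for a Gaussian field
    return c3, c4


def correlation(eta, eta_true):
    """Correlation coefficient rho(eta, eta_T); rho = 1 when the fields match."""
    a = eta - eta.mean()
    b = eta_true - eta_true.mean()
    return np.sum(a * b) / np.sqrt(np.sum(a ** 2) * np.sum(b ** 2))
```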
For the FPM result, the time-averaged correlation coefficients ( R , T ) and ( R , T ) in the reconstruction time duration decrease from 0.8 to 0.6 and from 0.9 to 0.7, respectively, with increasing wave steepness, and the correlation coefficients in the prediction time duration decrease from the values in the reconstruction time duration more rapidly with higher wave steepness. For the CPM result, the correlation coefficient of the reconstructed wave field with the true wave field is above 0.9 for all the cases. Figure 14 shows that the effect of noise level in the range of 0 % to 10 % has a negligible effect on the reconstruction performance. In these cases, the correlation coefficients of the reconstructed and predicted wave field obtained by CPM are notably higher than those obtained by FPM. Therefore, the advantages of CPM over FPM are unaffected by the measurement noise and higher wave nonlinearity. Simulations with different initial random wave phases are also performed for cases KA09-N00, KA09-N10 and KA13-N00. As shown in table 4, we found that the wave reconstruction and prediction performance for both FPM and CPM, quantified by correlation coefficients, varies only slightly with the initial random wave phases. Reasons for better performance of CPM The better performance in CPM over FPM is likely due to three factors. First, the non-physical highwavenumber wave components have been removed by using the wave-physics-based constraint for control variables in CPM. The wave model, i.e. (2.1) and (2.2), only restricts the evolution of and . However, 0 and 0 are independent variables that serve as the initial condition (Zakharov, 1968). In wave simulations, appropriate initial states are required to ensure that the wave dynamics are captured correctly. In FPM, the initial wave states are obtained via a gradient-based optimiser that does not guarantee the physical reasonableness of the found solutions for this complex system and results in a solution with non-physical high-wavenumber waves. The additional constraint in CPM, on the other hand, helps the optimisation algorithm to search for a solution with the reasonable initial wave state quantified by 0 . If the nonlinearity were to be included in the dispersion relation, the frequency of a given wave would be a function not only of the wavenumber and amplitude of itself but also of those of all other free-wave components (Wu, 2004). Therefore, it is infeasible to write a similar formula to serve as the constraint. Besides, because this linear assumption is imposed only at the initial time, and the forward wave model captures the nonlinear evolution afterwards, we do not expect a significant change in the reconstructed and predicted wave field even if a nonlinear dispersion relation can be incorporated into the constraint between 0 and 0 . Second, the additional constraint in CPM addresses the issue of the inhomogeneity in the magnitude of the gradient in the optimisation process of FPM. In the gradient-based optimisation used in FPM, the increment of the control parameters is proportional to the gradient magnitudes, and thus the optimiser tends to modify the parameters with large gradients, i.e. 0 . However, in CPM, the control parameter 0 is connected to 0 , which would not have the same problem caused by the different order of gradient magnitudes. An alternative way to examine this issue is to use the scaling strategy. 
Strictly speaking, the suitable value for scaling is unknown before wave reconstruction because the velocity potential is unknown and only sparse measurements of surface elevation are available. However, for the sake of the performance testing, we assume and are known and choose them as the scaling factors for and , respectively, to ensure that the normalised variables have the same order of magnitude. As shown in table 5, while the scaling strategy enhances the performance of FPM, its cost function is still 60 times larger than that in CPM, suggesting that the performance issue in FPM cannot be solved by scaling. Third, this seemingly counter-intuitive worse performance in FPM by using an extra control variable 0 is known as the degradation problem widely observed in the optimisation of complex systems with large degrees of freedom (He, Zhang, Ren, & Sun, 2016). Specifically, when the number of independent control variables increases, the performance of the optimiser decreases counter-intuitively such that the optimiser fails to find the global optimal solution. The degradation problem is based on the observation of an abnormal increase of the cost function with adding more free control variables. Rigorous theoretical explanation is still a research topic to address in the research community. An effective method that has been adopted by the deep neural network optimisation community to alleviate the degradation problem Figure 15. Omnidirectional wavenumber spectra of the reconstructed initial surface elevation using measurements of different temporal resolutions and the true initial surface elevation: (a) for case and (b) for a typical wave field of wave period T p = 10 s. Results are obtained using CPM. is to add connections in different neural layers to change the system structure. Our results show that a wave-physics-based constraint introduced in the CPM can also provide an effective solution to help the optimiser when searching for the globally optimal result. Effect of the discretisation of measurement on CPM performance We have shown that measurement with a coarse resolution of Δx M / p = 0.5 and Δt M /T p = 0.1 is sufficient to accurately capture the high-frequency information at the present configuration using CPM. Theoretically, by assuming that the temporal resolution of the measurement is adequately high and by using the linearised wave theory, it is possible to determine the predictable zone for irregular wave fields by the maximum and minimum wave group velocities and direction spreading angle of the wave field as well as the spatial and temporal extents of the measurement Wu, 2004). However, this theoretical work is inapplicable for discretised spatial and temporal resolutions, which are typically seen in wave measurement practice. For the case KA09-N00, CPM has a fairly good performance in the reconstructed surface elevation spectrum when the temporal resolution is changed to Δt M /T p = 0.2, as shown in figure 15(a). However, the performance declines significantly when the resolution is Δt M /T p = 0.3. We also test the reconstruction performance for another wave field with p = 156 m, T p = 10 s, (ka) e = 0.08 and (ka) l = 0.33 as shown in figure 15(b). The simulation time step Δt = 0.25 s and the domain length is 16 p ×16 p with the 512×512 grid resolution. Two simulations are conducted with this wave field with measurements of different temporal discretisation Δt M /T p = 0.125 and Δt M /T p = 0.25 and the same spatial resolution Δx M / p = 0.5. 
Similar to the result shown above, high-wavenumber wave components are observed with Δt M /T p = 0.25, while CPM obtains good reconstruction performance for Δt M /T p = 0.125. Note that for all the cases, reconstructed wave fields obtained in CPM have higher correlation coefficients with the true wave fields than the results obtained from FPM. Therefore, the discretisation of measurement is an important factor that affects the accuracy of CPM. In real applications, we need to choose appropriate combinations of the spatial and temporal resolutions of measurement to obtain satisfactory results. Conclusions In this study, we have investigated the assimilation of measurement data for the wave field reconstruction and prediction based on the HOS method and its adjoint model. In our method, we quantify the difference between the reconstructed/predicted wave field and measurement as a cost function. Compared with the conventional adjoint-based wave data assimilation method, FPM, we have introduced a physical constraint on the initial wave field in the new method, CPM, which is shown to effectively reduce the cost function in both the reconstruction and prediction. In the optimisation process, we use the L-BFGSb method to minimise the cost function, and convergence can be reached within several iterations steps. The new method can be applied to situations where the wave measurement data have a low resolution. To evaluate the performance of the new method, CPM, we have generated wave data with different wave steepness and noise levels. The gradient information calculated using the adjoint model is validated against that obtained from the finite-difference method (see the supplementary material). We have conducted numerical tests to examine the optimisation, reconstruction and prediction performance for the new CPM and the conventional FPM. In the test results, CPM shows an advantage over FPM. Using one half of the control variables in FPM, CPM proves to be more efficient in reducing the overall cost function than FPM. We have also calculated the omnidirectional wavenumber spectra of the optimal initial wave states. It is found that non-physical high-wavenumber components are generated in reconstructed surface elevation and the magnitude of the initial surface velocity potential is underestimated in the FPM results, while the CPM results are close to the ground truth, demonstrating an improved capability of wave field reconstruction and prediction. The time history and the spatial distribution of the wave states reconstructed and predicted by CPM show significantly smaller errors than FPM. In addition, the CPM can successfully predict key wave statistics, including the p.d.f., skewness and kurtosis of the wave field. We have also investigated the effect of measurement noise and wave nonlinearity, and observed a better performance of CPM over FPM in all cases. Finally, we remark that in this study the performance of data assimilation is evaluated using the synthetic wave data obtained from simulation, which might be limited by the assumptions employed by the wave model, e.g. potential flow and periodic boundary conditions. In applications, similar tests can be conducted using wave data with realistic noise obtained from field measurements. Other effects, such as bottom topology, ambient current, wind forcing and wave breaking, are not incorporated in the present wave model but could in principle also be included with modifications (Wu, 2004). 
When a significant physical process is not captured by the wave model, the resulting model error will degrade the accuracy of the reconstructed wave field. In future studies, it would be interesting to incorporate these effects into a modified data assimilation framework covering both the wave model and the adjoint model.
8,118.6
2022-01-24T00:00:00.000
[ "Environmental Science", "Engineering", "Physics" ]
School enrollment in Slovakia In this paper the author analyzes the development of unemployed graduates in Slovakia. The aim of this paper is to analyze the course of unemployment of graduates, the main causes of unemployment and ways to address the situation in the labor market. INTRODUCTION Dramatic rise in youth unemployment due to the global financial and economic crisis is one of the key challenges of the current labor market. Unemployment manifests itself as one of the most serious economic problems, which impacts on almost every aspect of the unemployed individual as well as his loved ones. The most vulnerable groups in the labor market tend to be young people who are seeking their first job after they have completed their education. The paper attempts to show the development of graduate's unemployment rate, from high school to university graduates. The author analyses those fields of study in which the graduate's unemployment is the most notable. The more promising courses and places where the graduate's unemployment rate is significantly influenced by regional disparities are presented in the paper together with suggested measures to improve matters. CURRENT SYSTEM OF EDUCATION AND LABOUR MARKET Slovakia has been suffering poor linkage of education to the labor market, which results in high unemployment of graduates. It is not just the lack of jobs in the labor market for this group, but often the opposite is true when there are enough jobs, but the lack of qualified graduates. Two thirds of graduates end humanities oriented disciplines (economics, law and social work) while the labor market lacks mainly the graduates of Technical Education (programmers, electricians), who are in short supply as a result of inadequate communication between the system of education, the labor market and employers. Slovakia needs the reform of the system of education so that the graduates get applicable in the labor market. Graduates are disadvantaged by the fact that they have no work experience and experience in their field of study. Some put a lot of energy into the studies but there is no demand on the labor market. For those unemployed not only economic but also psychological problems increase the risk of anti-social behavior and creating unhealthy dependence on parents. In my work I use data on unemployment, which are collected by the Central Office of Labour, Social Affairs and Family and data on graduates of schools of the Institute of Information and Prognoses of Education. By definition defined by the Law No.5 / 2004 on Employment Services a school graduate is a citizen under 25 years of age, who ended systematic vocational preparation in full-time study less than two years agoand has not earned his first regular pay. Graduate's unemployment rate The unemployment rate in Slovakia is assessed in two ways, according to the Statistical Office of the Slovak Republic based on labor force survey and by the Office of Labour, Social Affairs and Family of the available number of job seekers in the total number of applicants. Each of these sources is characterizes unemployment in Slovakia from a different perspective. One based on the official unemployment register, which is supported by national legislation, the other one is survey based on standard international methodology. Due to the diversity of methodologies, the data can not completely coincide. The following table analyzes the development of the overall rate for the period. Table. 
2 documents the number of unemployed graduates (secondary and tertiary). The lowest numbers of unemployed graduates were in 2007 and 2008. The declining trend was associated with young people moving abroad for work, but also with made structural changes, implementation of reforms and action of tools of active labor market policy of the state. 66 Volume 46 Sources of data: Basic statistical indicators of the labor market; monthly statistics ÚPSVR, own calculations; http: // www.upsvar.sk The number of unemployed graduates is increasing mostly in the field of social sciences and services. Conversely, graduates of technical directions are increasingly in demand for labor and the number registered as unemployed is decreasing. Higher unemployment of graduates of educational and social sciences is to be considered more serious Graduate's unemployment according to the length of their registration as unemployed The following table shows the number of graduates by length of tally. As can be seen, the unemployment of graduates tends to shift higher to long-term unemployment, which is double in comparison to that of 2008. In the records of the unemployed there are more high school graduates. University graduates are in the labor office records for a shorter period of time before they can find a job, it is also due to higher qualifications. This analysis of unemployment of secondary schools and universities graduates showed structural disparities that exist in the labor market. Based on the findings of the analysis following may be recommended:  close interrelationship of individual schools with employers and training carried out in those fields of study that are necessary for the labor market  encourage schools and students to study specializations requested by the labor market  better preparedness of young people in terms of language and communication, as well as management skills and their ability to support independent business activities  establish a system of cooperation between schools and employers so that graduates gain working skills during their studies and are more prepared for the needs of practice  the creation of system of employment policies should focus increasingly on employers who provide scope for reducing unemployment. It is also necessary to set up active policy instruments that will stimulate production of new job positions and which will also motivate the workforce to employ and eliminate those attempts which are decreasing effect and act as unsystematic. CONCLUSION In Slovakia, school graduates` unemployment has become a serious economic problem, which must be paid due attention by the State. Governments show efforts to create new job positions, they also generate different instruments of active labor market policies, but these tend to be unaddressed in many cases and rather abused by employers. As a result, graduates do not obtain job skills that employers require. Only coordination of all actors in the labor market may lead to systemic and comprehensive solution to the problem of unemployment of graduates. Very positive element in the employment of the young generation is the motivation of employers, which can not be based on inefficient contributions, but must be based on a system of reciprocal benefits. Therefore various areas of state policy should be framed so that the state enters the economy with measures that will not restrict the rights and interests of business, but rather to encourage them to create new job positions. 
Addressing this issue, however, requires realistic analyses, which have so far been insufficient or entirely absent in Slovakia.
1,570.8
2015-01-19T00:00:00.000
[ "Economics" ]
Commissioning measurements for photon beam data on three TrueBeam linear accelerators, and comparison with Trilogy and Clinac 2100 linear accelerators This study presents the beam data measurement results from the commissioning of three TrueBeam linear accelerators. An additional evaluation of the measured beam data within the TrueBeam linear accelerators contrasted with two other linear accelerators from the same manufacturer (i.e., Clinac and Trilogy) was performed to identify and evaluate any differences in the beam characteristics between the machines and to evaluate the possibility of beam matching for standard photon energies. We performed a comparison of commissioned photon beam data for two standard photon energies (6 MV and 15 MV) and one flattening filter‐free (“FFF”) photon energy (10 FFF) between three different TrueBeam linear accelerators. An analysis of the beam data was then performed to evaluate the reproducibility of the results and the possibility of “beam matching” between the TrueBeam linear accelerators. Additionally, the data from the TrueBeam linear accelerator was compared with comparable data obtained from one Clinac and one Trilogy linear accelerator models produced by the same manufacturer to evaluate the possibility of “beam matching” between the TrueBeam linear accelerators and the previous models. The energies evaluated between the linear accelerator models are the 6 MV for low energy and the 15 MV for high energy. PDD and output factor data showed less than 1% variation and profile data showed variations within 1% or 2 mm between the three TrueBeam linear accelerators. PDD and profile data between the TrueBeam, the Clinac, and Trilogy linear accelerators were almost identical (less than 1% variation). Small variations were observed in the shape of the profile for 15 MV at shallow depths (< 5 cm) probably due to the differences in the flattening filter design. A difference in the penumbra shape was observed between the TrueBeam and the other linear accelerators; the TrueBeam data resulted in a slightly greater penumbra width. The diagonal scans demonstrated significant differences in the profile shapes at a distance greater than 20 cm from the central axis, and this was more notable for the 15 MV energy. Output factor differences were found primarily at the ends of the field size spectrum, with observed differences of less than 2% as compared to the other linear accelerators. The TrueBeam's output factor varied less as a function of field size than the output factors for the previous models; this was especially true for the 6 MV. Photon beam data were found to be reproducible between different TrueBeam linear accelerators well within the accepted clinical tolerance of ±2%. The results indicate reproducibility in the TrueBeam machine head construction and a potential for beam matching between these types of linear accelerators. Photon beam data (6 MV and 15 MV) from the Trilogy and Clinac 2100 showed several similarities and some small variations when compared to the same data measured on the TrueBeam linear accelerator. The differences found could affect small field data and also very large field sizes in beam matching considerations between the TrueBeam and previous linear accelerator models from the same manufacturer, but should be within the accepted clinical tolerance for standard field sizes and standard treatments. PACS number: 87.56. bd I. INTRODUCTION The TrueBeam is a new linear accelerator model manufactured by Varian. 
In this newest platform from Varian, many key elements differ significantly from those found in previous models. One of the key features is the availability of two types of photon beams: standard flattened filtered beams and flattening filter-free (FFF) beams. The TrueBeam linear accelerator has a slightly different design for the head and related components from its predecessors. For example, the carrousel system has been modified to permit the use of several photon energies (flattened and FFF modes). This accelerator has an integrated bending magnet with an in-air target instead of the vacuum-sealed target found in the standard Varian models. The TrueBeam also contains a thicker primary collimator of slightly different design to permit sharper beam fall-off, and uses an antibackscatter filter which can reduce the dose dependency on field size. This linear accelerator utilizes the same flattening filters for all standard photon energies as its predecessors, except for the 15 MV energy. The TrueBeam's 15 MV filter uses two different materials, while the Clinac and Trilogy models use a solid tungsten flattening filter. The previous dual-energy linear accelerator models from Varian include the Clinac models and the Trilogy model. The linear accelerator heads for these models are designed to the same specifications, but there are some implementation differences between the Clinac machines and the Trilogy model. The main difference is the availability in the Trilogy of a high-dose rate (1000 MUs/min) 6 MV photon delivery mode, which has a separate small filter optimized for small field treatments. An important characteristic of the Clinac and Trilogy models has been the availability of "beam matching" when the linear accelerators were set within the manufacturer's specifications or set to a specific dataset within the manufacturer specification range. (1) Such "beam-matched" energies result in dosimetric characteristics between different linear accelerators that may properly be considered dosimetrically equivalent. The beam matching criteria are based on depth ionization curves, as well as profiles measured in a certain specified geometry. The vendor's product documentation describes the beam match concept and data analysis protocols in detail; they have also been the subject of previously published studies. (2)(3)(4) One of the clear advantages of beam-matching linear accelerators is the improved efficiency and flexibility in patient treatment for institutions with multiple linear accelerators. Beam-match results and beam data reproducibility for Varian linear accelerators have previously been analyzed and presented. (1)(2)(3)(4) An additional characteristic from the reproducibility in the construction of the standard linear accelerator models is the availability of a beam dataset known as the Golden Beam Data (GBD). This reference dosimetric dataset is provided by the manufacturer (Varian Medical Systems). The accuracy of the Clinac dataset has previously been compared and evaluated. (5) No reference dataset is currently available for the TrueBeam linear accelerator. With the introduction of the new TrueBeam linear accelerator model, the additional FFF photon beam delivery mode will need to be considered in addition to the effects of the changes in the linear accelerator design. Several works have considered the beam characteristics and the benefits of using flattening filter-free photon for radiation oncology treatments. 
(6)(7)(8)(9) Other works have explored the considerations of treatment planning for FFF modes. (10)(11) A recent work compared the data of regular photon beams with FFF beams for a Varian TrueBeam linear accelerator. (12) However, no study has yet evaluated the beam characteristics of several TrueBeam linear accelerators for standard and FFF beam, or compared the dosimetric characteristics of previous linear accelerator models from the same manufacturer with the new TrueBeam linear accelerator. This study evaluates the beam characteristics and the potential for beam-matching capabilities of the TrueBeam linear accelerators. A comparison of two standard photon energies, 6 MV and 15 MV, and one flattening filter-free (FFF) photon energy, 10 MV FFF (or 10 FFF) is performed for three different TrueBeam linear accelerators. The dosimetric and beam characteristics of two standard photon energies from the TrueBeam are then compared with the Clinac and Trilogy models from the same manufacturer and the possibility of "beam matching" between the TrueBeam and the standard Varian linear accelerator models is analyzed. The energies evaluated between the different linear accelerator models are the 6 MV for low energy and the 15 MV for high energy. II. MATERIALS AND METHODS This study has been bifurcated for convenience. The first section compares the data from three separate TrueBeam linear accelerators. The second section compares the beam data measurements obtained from the TrueBeam linear accelerator with the data measurements from Trilogy and Clinac linear accelerators. A. TrueBeam data comparison Beam data commissioning measurements of percent depth doses (PDDs), beam profiles, and output factors were performed on three Varian TrueBeam linear accelerators (Varian Medical Systems, Palo Alto, CA) located at three different locations. Measurements were performed for 6 MV and 15 MV standard photon energies and for 10 FFF photon energy. The linear accelerators were accepted following the manufacturer's recommended procedures, and each accelerator's mechanical parameters and beam data were confirmed to be within the manufacturer's specifications for normal operation. No attempt to match these machines was performed, as the data was acquired at different instances and the data comparison occurred after all relevant measurements had been obtained. Measurements for all three linear accelerators were performed using a CC13 0.125 cm 3 ion chamber and IBA-Wellhofer scanning phantom system (IBA Dosimetry, Barlett, TN). The chamber was offset to the effective point of measurement (0.6*r cav ) for all photon beam data measurements performed. An analysis of percent depth dose (PDD) data was performed to evaluate the energy match. The depth of dose maximum (d max ) and PDD at 10 cm (PDD 10 ) was evaluated for three field sizes: 4 × 4 cm 2 , 10 × 10 cm 2 , and 40 × 40 cm 2 . An energy parameter value for comparison purposes was obtained by using a TPR 20/10 ratio. The TPR values were determined from the measured PDD 20cm and PDD 10cm data using an empirical approximation relation: TPR 20,10 = 1.2661 PDD 20,10 -0.0595, where PDD 20,10 is the ratio of percent depth doses at 20 cm and 10 cm depths. (13) An analysis of the mean and sample standard deviation was performed to evaluate the data variation. The 95% confidence interval (CI) on the mean was computed following the Student's t-distribution. 
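As an illustration of the two calculations just described, the sketch below converts measured PDD₂₀/PDD₁₀ ratios to TPR₂₀,₁₀ with the empirical relation above and forms a Student-t 95% confidence interval on the mean of a small sample; the three PDD pairs in the example are placeholders, not measured values from the machines in this study.

```python
import numpy as np
from scipy import stats


def tpr_20_10(pdd20, pdd10):
    """Empirical conversion TPR20,10 = 1.2661 * (PDD20/PDD10) - 0.0595."""
    return 1.2661 * (pdd20 / pdd10) - 0.0595


def mean_ci(values, confidence=0.95):
    """Mean and Student-t confidence interval for a small sample."""
    x = np.asarray(values, dtype=float)
    n = x.size
    mean, sd = x.mean(), x.std(ddof=1)
    t_crit = stats.t.ppf(0.5 + confidence / 2.0, df=n - 1)
    half_width = t_crit * sd / np.sqrt(n)
    return mean, (mean - half_width, mean + half_width)


# Illustrative only: (PDD20, PDD10) pairs for three machines (not measured data).
tprs = [tpr_20_10(p20, p10) for p20, p10 in [(38.5, 66.5), (38.6, 66.6), (38.4, 66.4)]]
print(mean_ci(tprs))
```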
An additional analysis was performed by comparing the measured crossplane beam profiles for two field sizes (10 × 10 cm² and 40 × 40 cm²) at two different depths (approximately the depth of dose maximum, and a depth of 10 cm) at 100 cm SSD. Beam profile data analysis was performed by calculating the difference between the profile data for two of the linear accelerators (TrueBeam#2 and TrueBeam#3) and the remaining TrueBeam linear accelerator (TrueBeam#1). The profile differences were analyzed in terms of relative dose normalized to 100% at the central axis. A further evaluation of the distance-to-agreement (DTA) was performed using a gamma analysis of the profiles. (14) Profile data points were sampled at 1 mm spacing and the gamma analysis was performed using criteria of 2% dose difference (DD) and 1 mm DTA. Since previous data indicated variations in output factor data between energy-matched linear accelerators, (1) we evaluated additional output factor data. The total scatter factor data can reveal variations in the beam filter construction and other characteristics of the linear accelerator head construction. We obtained relative output factor measurements using an isocentric setup at a depth of 5 cm (95 cm SSD) for several field sizes ranging from 3 × 3 cm² to 40 × 40 cm². The resulting data were then averaged and compared to determine the variability between the different TrueBeam linear accelerators. B. Linear accelerator data comparison For the second part of our study, we measured PDDs, beam profiles, and output factors on two additional Varian linear accelerator models: a Trilogy and a Clinac 2100. Measurements were performed for the 6 MV and 15 MV photon energies. These linear accelerators were fully accepted and determined to be operating within the manufacturer's specifications prior to the start of the beam data acquisition. No attempt to match these machines was made, as the relevant data had been acquired at different times and at different facilities. Measurements for the two linear accelerators were performed using a CC13 0.125 cm³ ion chamber and an IBA-Wellhofer scanning phantom system. The chamber was offset to the effective point of measurement (0.6 · r_cav) for all beam data measurements performed. The resulting dosimetric dataset from each linear accelerator was compared to the average data derived from the three TrueBeam linear accelerators. An analysis of percent depth dose (PDD) data was performed to evaluate the energy match. Data from three field sizes were analyzed: 4 × 4 cm², 10 × 10 cm², and 40 × 40 cm². The relative output factor measurements were compared to the average data acquired from the TrueBeam linear accelerators to determine the variability between the different models. The measured crossplane beam profiles were compared with the TrueBeam profiles. The beam profile penumbra (distance between the 80% and 20% relative dose points) and the field size definition (width at 50% relative dose) were then evaluated. Additional beam profile data analysis was performed by calculating the profile difference and gamma analysis (DD = 2% and DTA = 1 mm or 2 mm) with respect to the TrueBeam profiles. The diagonal profile shape was compared to evaluate any additional effects from the differences in the collimator and head design.
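The profile comparisons just described reduce to three simple computations: a point-by-point gamma evaluation with a dose-difference/DTA criterion, the 80%-20% penumbra width, and the field width at the 50% level. The sketch below is a simplified, global-normalization version written for this rewrite; the array names and sampling are assumptions, and it is not the analysis code used in the study.

```python
# Minimal sketch of a 1D gamma analysis and profile-width metrics. Profiles are
# assumed resampled to a common 1 mm grid and normalized to 100% at the central
# axis; names and criteria values are illustrative.
import numpy as np

def gamma_1d(ref, test, positions_mm, dd_percent=2.0, dta_mm=1.0):
    """Gamma value at each reference point (global dose normalization)."""
    gammas = np.empty(len(ref))
    for i, (x_r, d_r) in enumerate(zip(positions_mm, ref)):
        dose_term = (test - d_r) / dd_percent
        dist_term = (positions_mm - x_r) / dta_mm
        gammas[i] = np.sqrt(dose_term**2 + dist_term**2).min()
    return gammas

def width_at_level(positions_mm, profile, level):
    """Width between the left and right crossings of `level` (linear interpolation)."""
    above = profile >= level
    left = np.argmax(above)                              # first point at/above level
    right = len(profile) - np.argmax(above[::-1]) - 1    # last point at/above level
    x_left = np.interp(level, [profile[left - 1], profile[left]],
                       [positions_mm[left - 1], positions_mm[left]])
    x_right = np.interp(level, [profile[right + 1], profile[right]],
                        [positions_mm[right + 1], positions_mm[right]])
    return x_right - x_left

def penumbra_mm(positions_mm, profile):
    """Average 80%-20% penumbra width of the two field edges."""
    return (width_at_level(positions_mm, profile, 20.0)
            - width_at_level(positions_mm, profile, 80.0)) / 2.0

# Usage sketch, assuming `ref` and `test` are measured, normalized profiles on grid `x`:
# x = np.arange(-100.0, 101.0, 1.0)
# g = gamma_1d(ref, test, x)                       # 2%/1 mm criterion
# print("pass rate:", 100.0 * np.mean(g <= 1.0), "%")
# print("field width at 50%:", width_at_level(x, ref, 50.0), "mm")
```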
A. TrueBeam data comparison Measurements of PDDs between the three TrueBeam linear accelerators showed variability of less than 1.0% for PDD10 and within 2 mm for d_max at the field sizes evaluated (Table 1). The statistical analysis presented in Table 1 is limited by the small sample size; in particular, the confidence intervals for the standard deviation values are necessarily broad. However, the reported 95% confidence interval of the mean can provide guidance on the expected reproducibility. The analysis of the TrueBeam TPR values showed minimal differences between the three linear accelerators for the energies evaluated. These parameters were expected to be very similar, since the PDD value at 10 cm for a 10 × 10 cm² field size is a key parameter in the beam quality specification during acceptance testing, with a tolerance of 1%. All measured profiles were also essentially identical for the three machines. The overlays of the profiles for the three machines were close to a single line (Figs. 1-3), indicating similar beam quality and tight tolerances in the construction of the flattening filters for the standard photon energies. An analysis of the profile differences of TrueBeam#2 and #3 with respect to TrueBeam#1 showed variations < 1.0% in areas of low gradient. The gamma analysis (DD = 2% and DTA = 1 mm) resulted in a 100% passing rate for the profiles evaluated. The relative output factor in water at a depth of 5 cm as a function of field size showed minimal variation (< 0.5%) among the three TrueBeam linear accelerators (Table 2). The minimal variability of these measurements shows reproducibility in the collimator head construction of the TrueBeam linear accelerators. Similar photon beam data reproducibility had previously been reported for the Clinac linear accelerator model: a study showed photon beam data measurements within 2% variability, with most beam parameters analyzed within 1% variability. (2) It should be noted that since no attempt at "beam matching" these TrueBeam linear accelerators was made, the datasets could probably be fine-tuned to closer agreement, if necessary, as has been done for previous linear accelerator models. (1) B. Linear accelerator data comparison Measurements of PDD showed relative variability of less than 1.0% for PDD10 and within 2 mm for d_max between the average TrueBeam data on the one hand, and the Clinac 2100 and Trilogy data on the other, at the field sizes evaluated (Table 3). The TPR data showed minimal differences between the linear accelerators. The TPR data for the 10 × 10 cm² field size were found to be well within the TrueBeam data confidence interval, indicating no significant energy difference between the linear accelerators for the energies evaluated. These parameters were expected to be very similar, since the PDD value and tolerance at 10 cm for a 10 × 10 cm² field size specified in the acceptance documents from the manufacturer are the same for each of the linear accelerators tested. The gamma analysis of the profiles (DD = 2% and DTA = 1 mm) resulted in a passing rate greater than 99.0% for all cases except at a depth of 2.8 cm for 15 MV, where the passing rate was greater than 98.0%. To further evaluate the profile differences, the profile data for one TrueBeam were graphically superimposed on the Clinac 2100 and Trilogy profiles (Figs. 4 and 5). 
Some variations were observed in the shape of the 15 MV profile within the field and were more noticeable at the depth close to dose maximum. This can be clearly observed in the dose difference plots and is probably caused by a difference in the flattening filter design for 15 MV. However, even with the change in the flattening filter, the square-field cross-profiles from the TrueBeam linear accelerators matched closely with those of the standard Varian linear accelerators; the gamma analysis with DD = 2% and DTA = 2 mm resulted in a 100.0% passing rate. A difference in the penumbra shape was observed between the TrueBeam and the other linear accelerators, with the TrueBeam data showing a slightly larger penumbra width (Fig. 6). The penumbra for the Clinac and the Trilogy was slightly sharper than that for the TrueBeams in most cases (Table 4). It should be noted that no attempt was made to match the jaw positioning calibration between the different machines. However, the scanned field width variations were within 2 mm for the profiles evaluated. Since the TrueBeam, Trilogy, and Clinac 2100 linear accelerators all use the same collimator jaw design and materials, the slightly wider penumbra in the TrueBeam profiles is probably caused by the different design and materials of the linear accelerator head affecting the beam scattering, and by the different design of the bending magnet affecting the electron spot size at the X-ray target (personal communication, Varian Medical Systems, July 20, 2012). Additional small-field data measurements and comparisons are needed to determine the possible effects of the penumbra difference on small-field treatments. Similarly, additional validations using beam-modeled data are necessary to determine the possible implications for treatment planning. The diagonal scans demonstrated a significant difference in the profile shape at distances greater than 20 cm from the central axis, which was most notable for the 15 MV photon energy (Fig. 7). The additional "peak" in the shape of the 15 MV diagonal profile for the TrueBeam was probably caused by the difference in the shape of the flattening filter. The other differences observed outside the field for both photon energies are probably caused by the thicker primary collimator in the TrueBeam as compared with both the Trilogy and the Clinac models. The TrueBeam primary collimator was designed to give a sharper field drop-off, which was clearly observed in the diagonal profiles of the TrueBeam as compared to the other standard linear accelerators. The analysis of the output factors from the TrueBeam average data showed some differences greater than 1% but less than 2% for the smaller and larger field sizes when compared to the Clinac and the Trilogy linear accelerators. This can clearly be seen in the graphical representation of the output factors as a function of field size (Fig. 8). It was noted that the TrueBeam output factor values varied less as a function of field size, especially for the 6 MV beam, than those of the Clinac or the Trilogy linear accelerators. The difference in the field size dependence of the output factors is probably related to the antibackscatter filter introduced in the TrueBeam to reduce the dose dependency on field size. Other differences in the head construction of the TrueBeam could also have contributed to the differences in the output factors when compared to the Clinac 2100 and Trilogy linear accelerators. 
In summary, the Clinac 2100 and Trilogy photon beam data show some differences from the data of the new TrueBeam linear accelerator, but also several similarities. With the exception of the diagonal profiles, the photon beam data variation was less than 2%. The differences encountered are mostly related to the changes in the head design of the new linear accelerator model. These differences could possibly affect small-field data and also very large field sizes in beam-matching considerations, but should be within the accepted clinical tolerance for standard field sizes and standard treatments. IV. CONCLUSIONS Photon beam data were found to be reproducible between different TrueBeam linear accelerators, indicating reproducibility in the filter and machine head construction. The consistency of the beam data implies that a single beam dataset could be established for a set of TrueBeam linear accelerators within a clinic, indicating the potential for beam matching between such machines in the clinical environment. The photon beam PDDs from the TrueBeam (6 and 15 MV) were very similar to those of the Trilogy and Clinac 2100. The profiles from the TrueBeam (6 and 15 MV) showed some small differences compared with the Trilogy and Clinac 2100. Some difference in the shape of the profile within the field was observed for 15 MV, and the TrueBeam profiles evaluated showed a slightly wider penumbra. Differences were also found in the shape of the diagonal profiles at distances greater than 20 cm from the central axis. Some differences (< 2%) were found in the output factors, mainly for the small and large field sizes, with the TrueBeam output factor data varying less as a function of field size. These results could affect small-field data and also very large field sizes in beam-matching considerations between the TrueBeam and previous linear accelerator models from the same manufacturer. Additional studies involving the equivalence of treatment planning modeling with beam data from each linear accelerator type are necessary to determine the range of clinical significance for beam-matching considerations.
5,062.4
2013-01-01T00:00:00.000
[ "Physics", "Engineering" ]
Galvanic corrosion and cathodic protection of re-grouted, post-tensioned (PTd) concrete systems. Grouted post-tensioned (PTd) concrete systems are widely used in long-span segmental bridges with a target service life of 100+ years. However, the use of inadequate grout materials and grouting practices has resulted in the formation of unwanted air voids in the duct, which in turn has led to premature corrosion (say, within about 20 years) of strands and failure of tendons. Also, the re-grouting/repairing of void regions has led to localized corrosion of strands at the interface between the dissimilar base-grout (usually carbonated) and repair-grout. This study aims (i) to quantify the galvanic corrosion at the void region in a PTd system re-grouted with a dissimilar grout and (ii) to develop a cathodic protection system to protect PTd anchorage regions. Specimens simulating the re-grouted strand-grout-air (SGA) interface were prepared with prestressing steel wires and cementitious grout. The macro-cell current (galvanic current) between the prestressing steels embedded in the carbonated base-grout and the repair-grout indicated that galvanic corrosion can occur at the SGA interface, reducing the long-term structural reliability of re-grouted PTd bridges. In addition, the feasibility of a galvanic anode cathodic protection system to protect PTd anchorage regions was assessed. For this, a proof-of-concept study was conducted to validate that a thin layer of grout around the strand will be sufficient for a galvanic anode (connected to the end of the strand outside the tendon anchorage) to protect the strand portions inside the duct/anchorage. Introduction Grouted post-tensioned (PTd) concrete systems are widely used in long-span segmental bridges with a target service life of 100+ years. However, premature strand corrosion within about 10 to 20 years has been observed in many such PTd concrete systems due to inadequate grouting materials and practices [1][2][3][4][5]. Figure 1 shows a photograph and schematic of the anchorage region from a PTd system with prestressing strands exposed due to inadequate grouting materials and practices. Figure 2 shows the schematic side elevation of such an inadequately grouted PTd anchorage region with the void and the strand-grout-air (SGA) interface. These voids expose the prestressing strands and the SGA interface to atmospheric humidity, CO2, and chlorides. In a typical internal PTd concrete system, the prestressing strands are embedded inside the duct, which is then embedded inside the concrete. Hence, the condition of the strands is not visible from the outside, and visual inspections cannot be used as a tool to assess the condition of strands in the anchorage region. If left unnoticed, the corrosion of strands might lead to a reduction in load-carrying capacity and/or catastrophic failures. The major challenge in repairing such inadequately grouted PTd systems is the difficulty in accessing the strands, which are embedded inside the concrete; the repair work must be done from outside the anchorage. The objective of this work is to understand the corrosion of strands in anchorage regions and then propose a feasible electrochemical repair method. The remainder of the paper is organized as follows: first, a review of possible techniques to repair PTd systems and their associated challenges is presented. 
Then, an experimental programme assessing the severity of galvanic corrosion due to the electrochemical incompatibility between the base-grout and repair-grout in a re-grouted PTd system is presented. Finally, the feasibility of a galvanic anode cathodic protection system in protecting the PTd anchorage region is discussed. Possible repair techniques and their challenges Following are some of the possible techniques to control corrosion of prestressing strands in PTd systems, which can be implemented from outside the anchorage region. Re-greasing is a method by which grease is used to protect the exposed anchorages by forming a barrier to water and other corrosive contaminants. However, the removal of old grease is difficult, and if not properly cleaned, the old grease will cover the existing rust and can allow the pits to grow deeper [6], if sufficient moisture and oxygen are available. In chemical impregnation, corrosion-inhibiting chemicals (hydrocarbon- and silicon-based) can be impregnated under pressure through the interstitial spaces between the wires of each strand. These chemicals form a thin film around the steel surface, improve the corrosion resistance of the existing grout, and inhibit the corrosive environment. However, it is difficult to fill all the voids [7]. Neutral-pH rust remover solutions can be pumped through the interstitial spaces in the strands. By the process of selective chelation, the solution reacts with the rust, which eventually becomes detached from the metal beneath. This method may not provide much protection for severely corroded steel and has to be combined with other techniques (say, chemical impregnation) [6]. In cable drying/de-humidification, an inert noncorrosive gas such as nitrogen is passed into the ducts under pressure. The inert gas displaces the oxygen around the tendons, dries the wet grouts (say, to RH < 40%), and inhibits the corrosive environment. However, it is difficult to maintain the required pressure at all times, and this method may not be feasible for highly impervious grouts [6,8]. Re-grouting the voids with cementitious grouts is one of the feasible methods to repair a PTd anchorage system. Much research has been done on the initial grouting of PT ducts, but only limited research has addressed the repair-grouting of PT ducts. The voids are generally filled/re-grouted with repair-grouts without addressing/repairing the carbonated SGA interface, as shown in Figure 2. In such situations, an electrochemical incompatibility can arise due to variations in the physical and chemical properties of the base-grout and repair-grout, resulting in the formation of a corrosion cell at the re-grouted SGA interface. A case study of the corrosion failure of a PTd tendon in a bridge within four years after re-grouting emphasizes the severity of galvanic corrosion [9]. Due to the concerns of possible galvanic corrosion after re-grouting, supplemental methods to control corrosion in re-grouted PTd systems are required. The idea of cathodic protection (CP) of prestressed concrete bridges has been adopted from the CP of prestressed concrete cylinder pipes used for water and sewer transmission services [10]. CP is commonly applied in prestressed concrete bridges primarily to protect mild steel reinforcement from corrosion. The application of CP to prestressing strands is still developing because of the following concern: cold-drawn prestressing steel can be susceptible to hydrogen embrittlement when it is cathodically over-protected. 
In other words, when the prestressing steel is polarized to potentials equal to or more negative than the hydrogen evolution potential, the metal can be susceptible to hydrogen embrittlement. Several authors have studied this mechanism using slow strain rate experiments and recommended that the threshold potential for CP should be −900 mV (SCE) to prevent hydrogen embrittlement [11][12][13]. However, CP using galvanic anodes is considered safe from the standpoint of hydrogen embrittlement due to the limited polarization and ohmic losses in concrete structures [13]. Galvanic corrosion in re-grouted post-tensioned concrete systems This study attempted to quantify the galvanic corrosion between the carbonated base-grout and the repair-grout in a re-grouted PTd system. For this, specimens simulating the re-grouted SGA interface were prepared with prestressing steel wires and site-batched grout (w/c = 0.45 and a plasticized expansive admixture dosage of 0.45 percent by weight of binder). The specimen preparation involved casting the bottom portion simulating the carbonated base-grout and then the top portion simulating the repair-grout, as explained next. Prestressing steel of 5.2 mm diameter and 10 mm length was extracted from the central king-wire of a 15.2 mm diameter prestressing strand. The steel specimens were drilled and tapped at one end to enable an electrical connection for the testing. The specimens were ultrasonically cleaned with distilled water for 15 minutes and then wiped with a cotton cloth soaked with ethanol. A stainless steel rod of 3 mm diameter was fastened to the prestressing steel at one end. The junction between the prestressing steel and the stainless steel rod was coated with epoxy to exclude the humidity that could initiate galvanic corrosion between the prestressing steel and the stainless steel. Figure 3 shows the schematic of the bottom portion of the specimen. The specimens were cast and demoulded after 24 hours and were kept in a 65% RH and 25 °C environment for seven days to enable the passivation of the prestressing steel. After that, the specimens were allowed to carbonate in a 3% CO2 environment (25 °C and 65% RH) in a carbonation chamber until depassivation of the embedded prestressing steel wire. The depassivation behaviour of the prestressing steel was assessed using electrochemical impedance spectroscopy (EIS) tests on a three-electrode corrosion cell with the prestressing steel as the working electrode (WE), a nickel-chromium mesh as the counter electrode, and a saturated calomel electrode as the reference electrode (Figure 3). The testing was carried out with an AC perturbation signal of ±10 mV amplitude applied over a frequency range of 10⁵ Hz to 0.01 Hz at the open circuit potential (OCP), with 10 points per decade. Figure 4 shows the Nyquist and Bode responses of one of the tested specimens. Incomplete and overlapping arcs of three semicircles are observed. As shown in Figure 4(a), an equivalent circuit with three resistor-constant phase element (R-Q) combinations in series was used to model the response from the specimen. The first semicircle corresponds to the solution (say, water + grout; Rs and Qs). The second semicircle corresponds to the double layer (Rdl and Qdl). The third semicircle corresponds to the oxide (passive) layer of the prestressing steel (Rox and Qox). A schematic of an idealized EIS response is shown in the inset of Figure 4(a) for clarity. 
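To make the equivalent-circuit interpretation of Figure 4 concrete, the sketch below evaluates the impedance of three parallel resistor-CPE (R-Q) combinations connected in series over the same frequency window used in the tests. The parameter values are placeholders chosen only for illustration; they are not fitted values from this study.

```python
# Illustrative sketch of a three-element R-Q (resistor || constant phase element)
# series circuit, as used above to model the solution, double layer, and oxide
# layer. Parameter values are placeholders, not fitted results from the study.
import numpy as np

def cpe_impedance(q, n, omega):
    """Constant phase element: Z = 1 / (Q * (j*omega)^n)."""
    return 1.0 / (q * (1j * omega) ** n)

def parallel_r_cpe(r, q, n, omega):
    """Resistor in parallel with a CPE."""
    z_cpe = cpe_impedance(q, n, omega)
    return (r * z_cpe) / (r + z_cpe)

def three_rq_series(params, freqs_hz):
    """Total impedance of three parallel R-Q elements connected in series."""
    omega = 2 * np.pi * np.asarray(freqs_hz)
    return sum(parallel_r_cpe(r, q, n, omega) for (r, q, n) in params)

# Hypothetical (solution, double layer, oxide layer) parameters: (R [ohm], Q, n).
params = [(200.0, 1e-4, 0.90), (5e3, 5e-5, 0.85), (8e4, 2e-5, 0.80)]
freqs = np.logspace(5, -2, 70)                  # 10^5 Hz down to 0.01 Hz, as in the tests
z = three_rq_series(params, freqs)
# Nyquist coordinates are (Re(Z), -Im(Z)); Bode quantities are |Z| and the phase angle:
print(abs(z[-1]), np.degrees(np.angle(z[-1])))  # |Z| and phase at the 0.01 Hz end
```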
Depassivation of the specimen was assessed based on the comparison of the oxide layer characteristic (Rox). The slope of the arc in the low-frequency region (10⁻¹ to 10⁻² Hz) of the Nyquist representation qualitatively indicates the condition of the passive layer: a larger slope represents a larger-diameter semicircle, indicating a larger Rox. The EIS testing was conducted on Day 30 and Day 45 of carbonation exposure. A reduction in the slope of the arc in the low-frequency region of the Nyquist representation was observed at Day 45. The reduction in slope indicates a decreased Rox, corresponding to a depassivated system. This is also evident from the Bode magnitude and phase responses, as shown in Figures 4(b) and (c). A gradual reduction in the magnitude and phase angle in the low-frequency region (0.01 Hz) indicates a change in the passive layer. The |Z| at 0.01 Hz decreased from 95 kΩ on Day 30 to 60 kΩ on Day 45, and the phase angle at 0.01 Hz decreased from 66 to 62 degrees, confirming that the specimen was depassivated. After that, the specimens were removed from the carbonation chamber and the repair-grout was cast over the base-grout. It must be noted that a gap of 1 mm was maintained between the circular end faces of the two prestressing steel wire pieces. After one day, the specimen was demoulded, and the two prestressing steel wire pieces were connected externally, as shown in Figure 5. The specimens were maintained at 95% RH and 25 °C until steady currents were observed. A pico-ammeter was used to measure the current between the two prestressing steel wire pieces within the specimen. An average galvanic current density of 1.5 µA/cm² was measured between the base-grout and the repair-grout. The direction of the galvanic current indicates that the prestressing steel in the base-grout was the anode and the prestressing steel in the repair-grout was the cathode. Then, the theoretical mass loss of the prestressing steel was calculated using Faraday's law of electrolysis and the measured galvanic current. The analysis estimated a 5% reduction in the mass of the prestressing wire within 20 years of service, emphasising the severity of such galvanic corrosion. Hence, this study recommends prohibiting the re-grouting of voids without treating the carbonated SGA interface. This study also proposes re-alkalization of the carbonated SGA interface with alkaline solutions before re-grouting. However, it could be difficult to achieve complete re-grouting of voids; hence, galvanic anodes can be used as a supplemental method to protect the strands from corrosion, as explained next. Cathodic protection of strands in the anchorage regions This study proposes the use of galvanic anode cathodic protection systems to protect the anchorage region (portions of strands lying inside and outside the bearing plate) of a PTd concrete system. Figure 6 shows the schematic of the proposed corrosion protection system with a galvanic anode attached to the end of the strand outside the bearing plate. It is important to understand that every metallic system protected by a galvanic anode needs the following two components: a path for the transfer of electrons and a path for the transfer of ions. The strands, wedges, and bearing plate are all in contact with each other, making them all electrically connected. Hence, it can be well understood that the galvanic anode can protect the metallic components outside the bearing plate due to the availability of an ionic path. 
However, a question arises about the protection of the strands inside the bearing plate by the galvanic anode, and this study attempted to answer this question with a proof-of-concept experiment. (Fig. 6: schematic of a cathodically protected PTd anchorage region.) A proof-of-concept experiment was conducted to validate the feasibility of a galvanic anode to protect the strands lying inside the bearing plate. Figure 7 shows the schematic of the experimental setup simulating the inside and outside portions of a PTd anchorage region. A prestressing wire coated with a thin layer of grout (~1 to 2 mm) was positioned between two containers, A and B. The containers were filled with simulated concrete pore solution mixed with 3.5% chlorides. A discrete zinc-based galvanic anode was immersed in container B and was electrically connected to the prestressing wire through a switch arrangement. Initially, the switch was kept in the OFF position. One end of the prestressing steel was connected to a potentiostat, and the open circuit potential (OCP) of the prestressing steel in both containers was measured. To achieve this, a saturated calomel reference electrode was kept inside container A, and the OCP was measured as -250 mV. Similarly, the saturated calomel reference electrode was kept inside container B, and the OCP was measured as -255 mV. Then, the experiment was continued with the reference electrode placed inside container A. At this point, the switch was turned ON, and a sudden shift of the measured potential towards more negative values was observed, as shown in Figure 8. It can be inferred that as soon as the switch was turned ON, the system became a coupled system (prestressing steel + galvanic anode), and the potentiostat started recording the mixed potential. From this, it is evident that a thin layer of grout over the strand is enough to conduct ions from the outside to the inside of the bearing plate. Hence, galvanic anode cathodic protection is an electrochemically feasible repair solution to protect inadequately grouted PTd anchorages. Conclusions The use of inadequate grouting materials and practices has resulted in the formation of voids and premature corrosion in PTd concrete systems. A review of the possible techniques to repair such PTd systems and their challenges was presented. This study quantified the galvanic corrosion of strands at the interface of the carbonated base-grout and repair-grout in a re-grouted PTd system. A galvanic current density of 1.5 µA/cm² was determined at 95% RH and 25 °C. The analysis revealed that such a galvanic current density could result in about a 5% reduction in the mass of the prestressing wire within about 20 years of exposure/service, emphasising the severity of such galvanic corrosion when considering the century-long protection needed. Hence, this study recommends re-alkalization of the carbonated strand-grout-air (SGA) interface followed by re-grouting of the voids. However, it could be difficult to achieve complete re-grouting of the voids; hence, galvanic anodes can be used as a supplemental method to protect the strands from further corrosion. A proof-of-concept experiment demonstrated that a thin layer of grout around the strand is sufficient for a galvanic anode (connected to the strand end outside the tendon anchorage) to protect the strand portions inside the duct/anchorage, and hence extend the service life of PTd concrete systems.
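As a numerical footnote to the Faraday's-law estimate quoted above, the sketch below converts the measured galvanic current density into a generic corrosion rate. Turning such a rate into the approximately 5% mass loss reported in the study additionally requires the anodic area and the mass of the affected wire, which are deliberately not assumed here.

```python
# Minimal sketch: Faraday's-law conversion of a galvanic current density into a
# corrosion rate. Only the 1.5 uA/cm^2 value follows the text; everything else is
# standard physical constants.
M_FE = 55.845      # g/mol, molar mass of iron
Z = 2              # electrons exchanged per Fe -> Fe2+
F = 96485.0        # C/mol, Faraday constant
RHO_FE = 7.85      # g/cm^3, steel density

def corrosion_rate(i_galv_a_cm2):
    """Return (mass loss in g/cm^2/year, penetration in um/year) for a current density."""
    seconds_per_year = 365.25 * 24 * 3600
    mass_rate = i_galv_a_cm2 * seconds_per_year * M_FE / (Z * F)   # g per cm^2 per year
    penetration_um = mass_rate / RHO_FE * 1e4                      # cm -> um per year
    return mass_rate, penetration_um

mass_rate, pen = corrosion_rate(1.5e-6)   # 1.5 uA/cm^2, as measured in the specimens
print(f"{mass_rate * 1e3:.1f} mg/cm^2/year, about {pen:.0f} um/year of section loss")
```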
3,766
2023-01-01T00:00:00.000
[ "Materials Science" ]
Radiographs Reveal Exceptional Forelimb Strength in the Sabertooth Cat, Smilodon fatalis Background The sabertooth cat, Smilodon fatalis, was an enigmatic predator without a true living analog. Their elongate canine teeth were more vulnerable to fracture than those of modern felids, making it imperative for them to immobilize prey with their forelimbs when making a kill. As a result, their need for heavily muscled forelimbs likely exceeded that of modern felids and thus should be reflected in their skeletons. Previous studies on forelimb bones of S. fatalis found them to be relatively robust but did not quantify their ability to withstand loading. Methodology/Principal Findings Using radiographs of the sabertooth cat, Smilodon fatalis, 28 extant felid species, and the larger, extinct American lion Panthera atrox, we measured cross-sectional properties of the humerus and femur to provide the first estimates of limb bone strength in bending and torsion. We found that the humeri of Smilodon were reinforced by cortical thickening to a greater degree than those observed in any living felid, or the much larger P. atrox. The femur of Smilodon also was thickened but not beyond the normal variation found in any other felid measured. Conclusions/Significance Based on the cross-sectional properties of its humerus, we interpret that Smilodon was a powerful predator that differed from extant felids in its greater ability to subdue prey using the forelimbs. This enhanced forelimb strength was part of an adaptive complex driven by the need to minimize the struggles of prey in order to protect the elongate canines from fracture and position the bite for a quick kill. Introduction Few extinct predators are as well-known as the saber tooth cats, which are touted for their prowess as ultimate mammalian predators [1,2]. Numerous studies of the skull, teeth, and neck of sabertooth cats have examined how they may have dispatched their prey, e.g. [1,[3][4][5][6][7][8][9][10]. A consensus has emerged that the sabertooth cat Smilodon fatalis probably differed from modern big cats in making relatively quick kills using directed slashing bites to the throat rather than a suffocating bite, as is typical of extant big cats such as lions. In association with this, Smilodon had robust forelimbs that were instrumental in restraining prey so that the killing bite or bites could be made with minimal risk of breaking the elongate canine teeth [11][12][13]. From external measurements of the forelimb bones, it appears that they were relatively thick for their length [2,12,13] and therefore probably more resistant to bending and compressive loads; however, more accurate estimates of strength require data on both external diameters and cortical bone thickness. Radiographs allow the measurement, in any plane of interest, of both endosteal and subperiosteal bone diameters and also cortical area and thickness. These measures can be used to estimate bone strength in axial compression (cortical area) as well as to calculate moments of area that reflect resistance to bending and torsion. Previous workers have used cross-sectional properties of mammalian limb bones in various species to identify differences in the pattern of forelimb versus hind limb use, e.g. [14,15], to estimate body mass in extant and extinct taxa, e.g. [16][17][18], to document significant declines in human bone strength over time despite relatively constant external bone dimensions [19], and even to document asymmetries in left vs. 
right arm strength in modern human athletes [20]. Despite the many uses of cross-sectional properties in the literature, there is substantial debate about the straightforwardness of these measurements. Studies [21][22][23] warn that cross sections of limb midshafts do not always indicate repeated loading patterns in all animals in the same way, and that the cross-sectional geometry of long bones does not correlate well with strain patterns. These authors recommend that in vivo data be used whenever possible to get accurate assessments of strain patterns and bone loading. With these caveats, there is still evidence that strain does play a role in bone remodeling; however, this role is more complex than originally thought [24][25][26]. When in vivo studies are not possible, as in fossil species, variations in bone structure can still be effective indicators of locomotor modes and limb use among closely related species [27]. Comparisons of bone cross-sectional properties can also be good estimators of mechanical ability, if the comparisons are kept to closely related groups that share similar body plans and locomotor ecologies, such as living and extinct felids [27]. Quadruped limbs are used for weight-bearing as well as other activities, such as climbing, digging, swimming and grappling with prey. In the case of large cats, the hind limb functions primarily in weight-bearing and propulsion, whereas the forelimb functions in weight-bearing, climbing, and prey killing [28,29]. Of course, the hind limbs contribute during climbing, but their role is still largely propulsive, whereas the forelimbs both grasp the trunk and pull the body upwards. Thus, it might be expected that the humeri of cats that are arboreal or take prey larger than themselves would exhibit greater cortical thickening than expected based on body mass alone. Surprisingly, this does not appear to be the case, as a recent study found that humeral cross-sectional properties were better predictors of body mass than prey size or locomotor habits in extant felids [30]. Given the proposed greater need for strong forelimbs, we hypothesized that the humerus of Smilodon would exhibit significantly greater resistance to bending and compression relative to other cats, whereas its femur would scale as expected for its body size. Here we provide the first quantitative analysis of the ability to resist bending stresses in the forelimbs of S. fatalis using radiographic images, and compare it to living cats; and because Smilodon was as large as, or larger than, the largest extant felids, we also included the extinct American lion (Panthera atrox) in our sample as a much larger species with forelimb morphology that is similar to its extant sister group, Panthera leo [31], and unlike Smilodon. Results When Smilodon fatalis was compared with all extant felids and the larger, extinct lion, Panthera atrox, it had humeri that were more resistant to non-axial bending (J/2) and more resistant to bending in both the mediolateral and craniocaudal planes relative to bone length (Table 1, Fig. 1a-c). Although P. atrox is similar to S. fatalis with regard to bending in the craniocaudal and mediolateral planes and average bending resistance (Ix, Iy and J/2 values, respectively), its humerus is much longer. 
The greater rigidity of Smilodon humeri largely reflects a greater external diameter relative to bone length, but is also due to thicker cortical bone in Smilodon, suggesting that their bones were loaded more heavily in bending and axial compression than would be expected for similar-sized extant cats. The relative thickening of Smilodon humeri is apparent in radiographs ( Fig. 2) and in comparisons of K-values (Table 2). Low K-values indicate a small marrow cavity diameter relative to external diameter. In most cats, K ml is less than K cc indicating the humerus is loaded more heavily in the mediolateral direction. However, Smilodon exhibits the lowest K cc and greatest relative thickening of humeral cortical bone in the craniocaudal plane, and also ranks among the lowest values for K ml as well ( Table 2). The femur of S. fatalis also shows cortical thickening as evidenced by low K-values (Table 2). In both extant cats and Smilodon, values of K cc and K ml are similar for the femur. Despite the cortical thickening, the femur of Smilodon is similar to other cats in estimates of compressive and bending strength (Table 1, Fig. 1d). Large values for humerus thickness in Smilodon were also demonstrated by CA measurements (Table 2). Both femora and humeri showed significantly higher CA when compared with all cats, or with pantherins only. However, the disparity between Smilodon and other groups was always greater for humeral measurements (0.995 all cats, 0.325 pantherins) than for femoral measurements (0.704 all cats, 0.212 pantherins). All of the calculated estimates of bone strength and rigidity (CA, Ix, Iy, J/2) were positively allometric with respect to bone length in both the humerus and femur ( Table 1). As also found by Doubé et al. [29], the humerus shows a stronger positive allometry than the femur, perhaps because larger cats utilize their forelimbs to kill relatively larger prey [28]. Discussion Smilodon humeri were distinct from those of non-sabertooth cats: they were thicker and more resistant to bending in both the mediolateral and craniocaudal planes. Although large felids tend to have a minor advantage over smaller felids, with slightly more resistance to bending in the proximal forelimbs [28,29], for its size, S. fatalis had exceptional resistance to bending in the humerus. Sorkin [32] found similar results for external measurements of the humeri of both S. fatalis and P. atrox, with both of them having relatively robust humeri, but with Smilodon showing increased thickening relative to length. Although the femur also exhibits cortical thickening, it falls within the range of variation seen in extant cats, and thus follows scaling expectations. The combination of thickened cortical bone and expanded external diameter in the humerus of S. fatalis suggests an unusual adaptation for both large bending and compressive loads on the forelimbs. Cortical thickening helps resist buckling due to axial compression, while higher moments of area distribute bone farther from the neutral axis, increasing resistance to bending [27,33,34]. This is consistent with the probable presence of relatively large and forceful forelimb flexor and extensor musculature in S. fatalis as evidenced by prominent muscle scars and expanded attachment areas positioned to improve mechanical advantage [2,12,35,36]. Like modern big cats, S. fatalis used its forelimbs to both apprehend and position prey for a killing bite. 
However, unlike modern big cats, Smilodon may have had to rely more heavily on its forelimbs to hold prey because of its elongate canines. Salesa et al. [37] arrived at a similar conclusion in their recent study of an early Old World ancestor of Smilodon, Promegantereon ogygia (age 9.7-8.7 million years ago). This early sabertooth also had robust forelimbs, intermediate in strength between those of less-robust conical-toothed cats and later sabertooth species, and the authors suggested that the greater forelimb strength co-evolved with elongated saber teeth as an adaptation to protect the sabers. Extant large cats, when killing large prey, use a prolonged suffocating bite to the throat or nose. This crushing bite adds a third point of contact and supports the forelimbs in immobilizing prey [38]. By contrast, sabertooth cats would have killed more quickly with slashing bites to the throat [1,39] that could not have assisted greatly, or at all, in holding the prey [8]. Additionally, because the elongate canines were relatively vulnerable to fracture [40], it would have been critical to minimize prey struggling and position the killing bite carefully to avoid contact with bone. This likely selected for enhanced forelimb strength in S. fatalis. Cross-sectional limb bone properties have been explored in only a few orders of mammals, including primates, rodents, ungulates, and carnivores, e.g. [14,15,17,19,20,29,33,[41][42][43][44]]. Among these, there are two interesting partial analogs to the pattern of much greater forelimb than hind limb strength seen in Smilodon. The first is in a distantly related group that also uses its forelimbs in a specialized way: fossorial caviomorph rodents. The humerus of the Highland tuco-tuco (Ctenomys opimus) differs from that of other caviomorph rodents in having thicker cortices and a higher resistance to nonaxial bending (high J/2), but its femur is similar to that of other species [44]. Like S. fatalis, the tuco-tuco has enlarged forelimb muscles and its forelimbs are loaded heavily, but for different reasons. Rather than grappling with prey, tuco-tucos use their forelimbs to excavate burrows, cutting dirt with powerful movements of their forefeet. Among caviomorphs, moderate or occasional diggers do not show such extreme adaptation. Thus, in both C. opimus and S. fatalis, greater differences in forelimb and hind limb use result in parallel differences in limb structure. A second example can be found in the bush dog: this small, rarely seen South American forest canid shows thickened cortical bone in the humerus relative to other dogs, and relative to its mass [30]. Bush dogs are excellent swimmers with partially webbed feet [45,46]; this habit might explain the increased cortical thickness in the humerus relative to the femur. It is unlikely that the enhanced forelimb strength of Smilodon represents an adaptation to digging or swimming, rather than prey-killing, given that its distal unguals are retractile and shaped like those of felids rather than diggers [47], and a specialization for swimming would be quite surprising among felids. Another alternative explanation for the enhanced forelimb strength of Smilodon might be as an adaptation to climbing, given that skeletal adaptations of the forelimbs for climbing and prey-killing are similar in felids [28]. However, the largest extant felids (lions, tigers) and the ursid U. arctos rarely climb as adults, probably because their mass makes climbing too difficult and dangerous [48][49][50]. 
Bones with thick cortices are heavier and are energetically more costly to build, maintain, and move. Their presence in S. fatalis strongly suggests a forelimb-dominated predation strategy that differed from that of modern felids, and hence corroborates conclusions based on craniodental and neck anatomy [1,6,9,39,51]. (Legend to Fig. 1: PAT = Panthera atrox, SFA = Smilodon fatalis; see Table S1 for extant species numbers. doi:10.1371/journal.pone.0011412.g001) The extreme specialization of the skull, teeth, neck and forelimbs of Smilodon probably made it an efficient predator of large ungulate prey, such as bison and camels [52], and, perhaps, juvenile proboscideans. Unfortunately, this specialization may also have led to Smilodon's extinction, as the cat may have been too specialized to switch to alternative, perhaps more agile prey, such as cervids, during the ice age megafaunal extinctions [53]. Materials and Methods Humeral and femoral cortical areas were calculated using radiographic procedures following previous studies [15,16], with radiographs taken in both craniocaudal and mediolateral planes (Fig. 2). JMS radiographed humeri of 26 of the 28 extant species at the Natural History Museum of the Smithsonian Institution (USNM) using a digital x-ray machine. The humeri of the remaining two extant species, all extinct species, and all femora were x-rayed by placing bones directly on a Dupont Quanta Rapid x-ray cassette containing 3M green-light-sensitive UVL film and using a portable x-ray machine. To equalize the effects of parallax for all specimens using the latter method, the x-ray machine was placed at a constant height above the film, and external measurements were also taken directly from the bone. A measured difference of less than 4% (< 3 mm) was found between the radiograph and the actual bone using this method for Panthera atrox, the largest species radiographed. Cortical thicknesses and, when possible, lengths were measured from digital radiographs using ImageJ [54] and from traditional radiographs to the nearest 0.1 mm using a light box and digital calipers. Table S1 includes a list of species measured and individual radiographic measurements and calculations. Measurements of internal and external diameters were taken for both humerus and femur at approximately the midshaft, taking humerus measurements immediately distal to the deltopectoral crest to minimize interference from this muscle insertion area. These measures were used to estimate aspects of long bone strength in axial compression (CA), bending about the mediolateral and craniocaudal planes (Ix, Iy, respectively), and average rigidity in non-axial loading (J/2) [15,16,18,42,43]. Values were calculated using the following formulas: Iy = π(AB³ − ab³)/64 and J = Ix + Iy, where A = external craniocaudal diameter, B = external mediolateral diameter, a = craniocaudal diameter of the medullary cavity, and b = mediolateral diameter of the medullary cavity [15,16,43]. One additional measure of relative cortical thickness (K), independent of bone length, was assessed in the craniocaudal (cc) and mediolateral (ml) directions as K = internal diameter / external diameter, where values closer to one signify relatively thinner cortical bone and values closer to zero signify relatively thicker cortical bone [55]. To assess differences between species, species averages were calculated for CA, Ix, Iy, J/2, K cc, K ml, and lengths. All measurements except K were log10 transformed and regressed against the respective log10 bone (humerus or femur) length. 
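A minimal sketch of the cross-sectional calculations defined above is given below, assuming a hollow elliptical midshaft section. The Iy and J expressions follow the text; Ix is written by interchanging the two diameters (symmetry), and CA uses the standard hollow-ellipse area. The diameters in the example are invented, not specimen data.

```python
# Sketch (not the study's code) of the cross-sectional properties defined above.
# A, B are external craniocaudal and mediolateral diameters; a, b are the
# corresponding medullary-cavity diameters, all in the same units.
import math

def cross_section_properties(A, B, a, b):
    CA = math.pi * (A * B - a * b) / 4.0           # cortical area (hollow-ellipse area)
    Iy = math.pi * (A * B**3 - a * b**3) / 64.0    # as given in the text
    Ix = math.pi * (B * A**3 - b * a**3) / 64.0    # assumed by symmetry (diameters swapped)
    J = Ix + Iy                                    # polar moment; J/2 = average rigidity
    K_cc = a / A                                   # relative cortical thickness, craniocaudal
    K_ml = b / B                                   # relative cortical thickness, mediolateral
    return {"CA": CA, "Ix": Ix, "Iy": Iy, "J/2": J / 2.0, "K_cc": K_cc, "K_ml": K_ml}

# Invented diameters in mm, for illustration only:
print(cross_section_properties(A=30.0, B=27.0, a=14.0, b=13.0))

# Species-level scaling as described in the text (log10-log10 regression), e.g.:
#   slope, intercept = numpy.polyfit(numpy.log10(lengths), numpy.log10(j_half), 1)
```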
Differences between Smilodon and all other felids, and Smilodon and the clade that includes only large felids (pantherins) were analyzed using non-parametric Mann-Whitney U-tests. Supporting Information Table S1 List of species/specimens measured; number and letter abbreviations for Figure 1; sex, specimen number, limb element, raw measurement data and calculations of CA, Ix, Iy, and J. Found at: doi:10.1371/journal.pone.0011412.s001 (0.23 MB DOC)
3,819.2
2010-07-02T00:00:00.000
[ "Biology" ]
Effects of Water Content and Temperature on Bulk Resistivity of Hybrid Cement/Carbon Nanofiber Composites Cement nanocomposites with carbon nanofibers (CNFs) are electrically conductive and sensitive to mechanical loads. These features make them useful for sensing applications. The conductive and load-sensing properties are well known to depend on the carbon nanofiber content; however, much less is known about how the conductivity of hybrid cement-CNF composites depends on other parameters (e.g., water to cement ratio (w/c), water saturation of pore spaces, and temperatures above ambient temperature). In this paper we fill in these knowledge gaps by: (1) determining a relationship between the cement-CNF bulk resistivity and the w/c ratio; (2) determining the effect of water present in the pores on bulk resistivity; (3) describing the resistivity changes upon temperature changes up to 180 °C. Our results show that an increase in the water to cement ratio results in increased bulk resistivity. The decrease in nanocomposite resistivity upon a stepwise temperature increase up to 180 °C was found to be related to the release of free water from cement pores, and the dry materials were relatively insensitive to temperature changes. The re-saturation of pores with water did not reverse the changes in electrical resistivity. The results also suggest that a change in the type of electrical connection can lead to bulk resistivity results that differ by two orders of magnitude for the same material. It is expected that the findings of this paper will contribute to the application of cement-CNF-based sensors at temperatures higher than ambient. Introduction Cement nanocomposites have recently gained much attention. Nanoparticles are added not only to engineer the mechanical properties [1][2][3][4] of the composites but also to render cement electrically conductive and responsive to mechanical load [5,6]. It has been shown that cement materials with well-dispersed electrically conductive fillers, such as metal fibers, graphite powder, carbon nanofibers and carbon nanotubes, show conductive properties above a percolation threshold [7][8][9][10][11][12][13][14]. The percolation threshold is the critical concentration of the dispersed material above which the dispersed particles form a continuous network [15,16] that conducts electric charge. Due to their conductive properties, as well as their sensitivity to stress (through the piezoelectric effect), the hybrid cement materials are considered excellent sensors in areas such as the structural health monitoring of reinforced concrete structures [8,17] and traffic monitoring [18][19][20]. In these sensing applications the hybrid cement materials function as signal transducers that translate changes in mechanical load or material failure into changes in electrical conductivity. The structural health monitoring of reinforced concrete structures relies on detecting and localizing failures in the concrete [21][22][23]. When a fracture starts to propagate in the material, the effective resistivity of the sensors increases as the conductive network is interrupted [21]. In traffic monitoring applications, the strain sensitivity of hybrid cement materials is utilized. The strain sensitivity of conductive cements relies on changes in electrical resistivity upon the application of mechanical load. The physical mechanism underpinning this phenomenon is associated with the connectivity between the conductive particles. 
When uniaxial compression is applied to a material with embedded electrically conductive fillers, the inter-particle distance in the filler decreases, and new conductive paths are created. The closer the conductive particles are and the more inter-particle connections that are created, the larger the electrical current that can be established, leading to a decrease in the resistivity of the material [24]. It has been well established that increasing CNF content leads to a decrease in the bulk resistivity of CNF-filled materials, with the largest drop within the percolation threshold concentration range [16,25]. More precisely, the bulk resistivity decreases most significantly in the concentration range at which nanofibers start forming a connected network. However, CNF concentration is not the only parameter that may affect the bulk resistivity values. Changes in other compositional parameters (e.g., water content or the concentration of other fillers and additives) may also affect the cement bulk resistivity at a given CNF concentration [25,26]. The results published so far on the effect of water on cement-CNF conductivity lead to contradictory conclusions. Some authors report an increase in conductivity upon drying of cement-CNF samples [26]. According to those authors, water forms an isolation layer between fibers that disappears after drying, leading to increased conductivity. Others report a resistivity decrease upon immersion in aqueous solutions [27]. So far, conductive cement materials have been intended for applications at atmospheric conditions and have thus been studied at ambient temperatures only. The hybrid materials can, however, also find applications as sensors under conditions where temperatures significantly exceed room temperature or even the boiling point of water (100 °C). Thus, it seems important to define how CNF/cement materials behave at elevated temperatures, and what parameters are important for designing conductive cement-based sensors for high-temperature applications. This is the main subject of this paper. Preparation of Cement-CNF Materials and Resistivity Measurements Pyrograf PR-19 XT-LHT nanofibers (NF) from Applied Sciences Inc. were used in this work. The NFs were heat-treated at a temperature of 1500 °C, which chemically carbonized the vapor-deposited carbon present on their surface. Such a heat treatment, according to the supplier, produces nanofibers providing the highest electrical conductivity in nanocomposites. PR-19 has an average diameter of about 150 nanometers and a length in the range of 50-200 microns. The surface area of the NFs is estimated to be 15-20 m²/g. Figure S1 in the Supporting Information shows TEM images of the fibers extracted from cement samples. Due to their hydrophobic nature, CNFs require the application of a dispersant to improve the homogeneity of CNFs in a cement slurry. Typically, polymers, surfactants, or a combination of the two is used to improve the dispersion of CNFs in the cement slurry. It has been previously shown that, sometimes, the combination of a polymer with a surfactant gives better CNF dispersion in cement materials than the application of a polymer or surfactant alone [28]. Thus, in this work, two different dispersant systems were used: (1) MasterGlenium SKY 899 (BASF) superplasticizer polymer (SP) and (2) a combination of SP with sodium dodecyl sulfate surface-active agent (SDS) in a 1:1 weight ratio. First, the CNF fibers were dispersed in the water/dispersant system. The CNF to dispersant weight ratio was 5:2. 
Next, the CNF dispersion was mixed with Portland G cement (Norcem) and additional water to yield the water to cement ratios given in Table 1. Samples with w/c ratios ranging between 0.49 and 0.66 were prepared. The CNF/cement weight ratio was constant and kept at 0.03 for all samples. The cement/CNF slurry was hand-mixed for 3 min and molded. Samples K1 and K2 were molded in 3D-printed cube-shaped forms with 3 cm long edges. Two metal plates separated by around 1 cm were placed in the middle and acted as electrodes/connectors, as shown in Figure 1(a,b), which presents the K1 sample in the mold (a) and removed from the mold (b). Samples C1-C3 were molded in syringes with an internal diameter of around 12 mm. After one day of hardening at room conditions, the samples were placed in sealed plastic bags to prevent water evaporation. After two weeks of further hardening, the metal connectors were glued to the cylinder ends using a conductive (silver-nanoparticle-filled) epoxy resin (EpoTek H21D), as shown in Figure 1(c,d). The resistance (R) between the connectors was measured using a Fluke multimeter. The bulk resistivity (ρ, also called volume resistivity) of these materials was calculated according to Equation (1): ρ = R·A/l, where R is the electrical resistance measured between the connectors, A is the surface area of a connector, and l is the distance between the connectors. Resistance was also measured for the samples at elevated temperatures. A Fluke 123 ScopeMeter was used, operating at a DC current of 0.5 mA or less (depending on the range) and an open voltage of no more than 4 V. The measurement was typically taken in a heating oven approximately 30 min after the oven was set to the test temperature, once a relatively stable resistivity value was obtained. The electrical resistance of cement materials without carbon nanofibers was in the range of tens of megaohms, whereas the cement-CNF composites of the same geometry had electrical resistances in the kilohm range. The three or more orders of magnitude lower resistivity obtained for the composite samples suggests that the carbon nanofibers percolated in the cement matrix, giving rise to the electrical conductivity of the composite materials. X-Ray Micro-Computed Tomography (µ-CT) X-ray micro-computed tomography (µ-CT) was performed on the cube samples using an industrial CT scanner (XT H 225 ST) in order to assess whether the CNFs were homogeneously distributed. 
It was operated at 210 kV and with a current of 155 µA. A tin filter was used. The raw CT data were reconstructed into cross-sectional slices. The resolution of the CT images was around 30 mm/1310 pixels ≈ 0.02 mm/pixel. The material segmentation was done using Avizo Fire software.

Scanning Electron Microscopy (SEM)

Scanning electron microscopy (SEM) imaging was used to visualize the distribution of the CNFs within the cement matrix at a smaller scale. A Hitachi S3400N SEM was used for this purpose. The acceleration voltage was set to 10 kV and the images were acquired with a secondary electron detector.

Powder X-Ray Diffraction (XRD)

In order to quantify the amount of crystalline non-hydrated cement components, powder X-ray diffraction measurements were performed. Corundum (α-alumina) was used as an internal standard. The samples for XRD measurements were prepared by grinding the cement material together with corundum by hand using a mortar and pestle. The measurements were performed at room temperature, with the diffraction angle 2θ between 10° and 75°, on a Bruker D8 Advance DaVinci diffractometer with Bragg-Brentano geometry, using CuKα radiation (λ = 1.54187 Å). The X-ray powder diffraction pattern was collected over the course of one hour.

Significance of Electrical Connection Type for Bulk Resistivity of Cement-CNF Transducers at Higher Temperatures

K1 and K2 specimens were scanned using X-ray tomography (CT). The tomography cross-sections are presented in Figure 2. The brightest stripes in the middle of the figure are connectors made of steel, which have a high X-ray absorption coefficient compared to the cement material. The very dark streaks, visible in the pictures as an extension of the very bright metal connectors, are scanning artefacts associated with beam hardening. The darker spots visible in sample K2, as shown in Figure 2b, within the grey-colored cement matrix, suggest that the CNFs are inhomogeneously distributed within the cement. The presence of millimeter-sized darker spots suggests that CNFs, whose X-ray absorption is lower compared to cement, are present in the form of large aggregates rather than well-distributed within the material. On the other hand, the 2D CT cross-sections through sample K1, as shown in Figure 2a, suggest that CNFs are well-distributed within the cement and much less aggregated at the scan resolution limit. Samples K1 and K2 contained identical weight fractions of CNFs; thus, it was expected that the CNFs in sample K1 formed better connected and thus perhaps better conducting networks. Indeed, the measured bulk resistivity at 23 °C for sample K1 was in the range of 70 kΩ·cm, while for sample K2 it was in the range of a few MΩ·cm, as shown in Figure 3.
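For reference, the sketch below shows how bulk resistivity values of this order follow from Equation (1). It is a minimal sketch only: the electrode area and spacing are assumptions loosely based on the cube geometry described above (3 cm edges, plates roughly 1 cm apart), and the resistance reading is a hypothetical value, not a measurement reported in this work.

```python
# Minimal sketch: bulk resistivity from measured resistance, rho = R * A / l (Equation (1)).
# Geometry values are assumptions based on the cube samples (3 cm edges, ~1 cm plate spacing).

def bulk_resistivity(R_ohm, area_cm2, spacing_cm):
    """Return volume resistivity in ohm*cm for plate electrodes of area A separated by l."""
    return R_ohm * area_cm2 / spacing_cm

A = 3.0 * 3.0       # cm^2, assumed effective plate area (3 cm x 3 cm)
l = 1.0             # cm, assumed plate separation

R_measured = 7.8e3  # ohm, hypothetical multimeter reading
rho = bulk_resistivity(R_measured, A, l)
print(f"rho = {rho / 1e3:.0f} kOhm*cm")  # ~70 kOhm*cm, the order reported for sample K1
```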
Figure 3 shows the bulk resistivity changes for samples K1 and K2 with temperature increases up to 180 °C. A gradual decrease in bulk resistivity was observed for both samples. The decrease continued until the temperature reached 60 and 160 °C for samples K2 and K1, respectively, after which the resistivity started to increase. The bulk resistivity of sample K1 decreased from 70 kΩ·cm at 23 °C to 55 kΩ·cm at 160 °C. The sudden increase in resistivity for both samples coincided with audible crack initiation. The sudden increase in bulk resistivity was an indication of a loss of electrical conductivity, resulting from the interruption of the CNF network between the metal connectors. The most likely reason for the cement samples fracturing was the thermal expansion of the metal connectors and the associated development of tensile stresses within the specimens. Figure 4 shows photographs of specimen K1 after the thermal cycling. Two cracks were seen, propagating along the metal connector surfaces. This suggests a loss of connectivity between the metal connector surfaces and the CNF network in the cement. The increase in resistivity upon the fracturing of conductive cement materials has already been reported in the literature and its application to the structural health monitoring of reinforced concrete structures has been suggested [21,22]. Our results suggest that the connector material, form, and size are important factors that have to be considered when designing high temperature sensors based on conductive cements. Sample K1 fractured at a significantly higher temperature compared to K2. This is most likely due to the higher tensile strength of samples with well dispersed CNFs. It has already been shown that the mechanical properties of cements with well distributed CNFs can be significantly improved compared to pure cements [29]. The surfaces of cement samples K1 and K2 that were in contact with the metal connector, as well as the fracture surfaces, were imaged using SEM. Typical images are presented in Figure 5. One striking difference in the appearance of the fracture surfaces between the samples is the presence of CNF fibers visible in sample K1 and their absence in sample K2. The fibers are evenly distributed over the fracture surface of sample K1. Given that the CNF contents are identical in samples K1 and K2, the lack of large numbers of CNFs at the crack surfaces of sample K2 should be attributed to its inhomogeneous distribution of CNFs. Indeed, higher magnification images of the two surfaces, as shown in Figure 6, indicate that the CNFs in sample K1 were homogeneously distributed, while in sample K2 they were present in the form of aggregates. This suggests that the superplasticizer polymer alone was a better dispersant of CNFs than the combination of polymer with a surface-active agent.
It has been shown in the literature that, sometimes, a combination of polymer and surfactant or two surfactants can give better dispersion of CNFs than the surfactant or polymer on their own [28,30]. According to Wang Baomin [28], methyl cellulose polymer, when used together with sodium dodecyl sulfate, results in a more homogeneous dispersion of CNFs in cement than the polymer or sodium dodecyl sulfate alone. The difference in behavior between Wang's system and the one described here lies in the nature of the polymer. Methyl cellulose is a nonionic polymer, while the superplasticizer polymer used in this work is most likely anionic as, according to the supplier, it is a modified polycarboxylate polymer. Although the exact structure of this polymer is not well described by the supplier, it can be expected that the polymer-surfactant interactions [31] in these two polymer-surfactant systems are significantly different, which is the most likely reason for the inconsistency between the observations made by Wang and those made in this paper. On the other hand, some authors report a non-uniform distribution of CNFs in cement when using a polycarboxylate polymer only [32].
This may imply that the modified polycarboxylate superplasticizer polymer used in this work has better potential to disperse CNFs in cement than typical unmodified polycarboxylate superplasticizers. Another reason why the presence of sodium dodecyl sulfate (SDS) surfactant contributes to the larger inhomogeneity of the resulting composite material may be air entrapment. SDS, along with other surfactants, has been shown to be suitable as an air-entraining admixture for cements [33]. Thus, it is likely that SDS enhanced air entrapment in the composite, which led to the large material inhomogeneity as well as the higher electrical resistivity of the K2 composite. It is obvious from the images taken of the surfaces in contact with the metal connector plates that only a small fraction of the metal surface is in direct contact with the cement at the micro level. This suggests that calculations of bulk resistivity that assume the whole surface of the connector to be in direct contact with the cement are burdened with a large error. This also suggests that the type of connection used in the sensor preparation has a large effect on the estimated resistivity values. For example, comparing bulk resistivity values for samples K1 and C2 at 23 °C gives an idea of how significant the effect of the connector type can be. Samples K1 and C2 have identical composition; the only difference is the incorporation of the metal connectors. Whereas in the case of sample K1 the connectors are embedded directly in the cement, sample C2 has connectors attached using a conductive epoxy resin (for details see the Materials and Methods section above). The bulk resistivity for K1 is calculated to be around 70 kΩ·cm, while for C2 the bulk resistivity value is almost two orders of magnitude lower, around 850 Ω·cm. Given that multiple reproducibility tests always resulted in values of the same order of magnitude, the large difference in bulk resistivity for samples K1 and C2 should be ascribed to the differences in the incorporation of the metal connectors. The conductive epoxy resin is filled with silver nanoparticles. Typically, epoxy resins are known for their good penetration into the pore structures of porous materials [34]. Thus, it is expected that the effective surface area of cement and CNFs in contact with the conductive connector is higher for an epoxy connector than for a metal plate connector immersed in cement. This higher effective contact area between the conductive connector and the cement with the CNF network most likely contributes to the significantly lower bulk resistivity values observed in sample C2, compared to sample K1. The suggested mechanism underpinning this effect is illustrated in Figure 7. Summarizing, the contact resistance is important. The measured resistance is the net sum of the resistance of the bulk material, the contact resistance between the cement composite and the electrical connector, and the contact resistance between the electrical connector and the multimeter probes. The resistance measured across the conductive epoxy itself was 0.2 Ω, so the multimeter-to-epoxy connection is regarded as well conducting. If the contact resistance between the connector and the cement material is large compared with the bulk resistance of the material, the contact resistance will be the limiting factor, and detecting small changes in the bulk resistance may thus be difficult.
Effect of w/c Ratio and Pore Water Content on Bulk Resistivity of Cement-CNF Materials

Due to challenges associated with the fracturing of cement samples with embedded metal connectors upon temperature increase, the samples with epoxy-glued connectors were used to study the effect of temperature, water to cement ratio, as well as free water content on cement sensor resistivity. The bulk resistivity changes upon temperature increase up to 180 °C for the prepared samples, as well as for the samples dried for one week at 40 °C, are presented in Figure 8. The results suggest that cement-CNF material bulk resistivity is strongly dependent on both the water to cement ratio and the amount of free water present in the cement pore system (see also Figure S2 in the Supporting Information). The increase in the water to cement ratio at the stage of cement mixing results in increased bulk resistivity. The increased water to cement ratio typically coincides with an increased amount of hydration products.
Indeed, quantitative analysis of the XRD patterns displayed in Figure 9 shows increasing amounts of hydration products (calcium hydroxide, CH) and a decreasing amount of non-hydrated substrates (tricalcium silicate, C3S) with increasing w/c content. Table 2 shows the contents of the most abundant crystalline phases present in the cement samples with respect to corundum as an internal standard. The higher the amount of hydration products, the more disturbed the CNF network preformed at the stage of mixing and molding. Figure 10 schematically presents how the hydration products may affect the CNF network. During mixing and molding, CNFs are distributed among the non-hydrated cement particles and form a connected network. At the moment the cement powder is mixed with water, the hydration processes start. As a result of hydration, hydration products (calcium hydroxide, calcium silicate hydrate) are precipitated within the free spaces between the cement particles. This precipitation most likely contributes to increased separation between the CNFs in a twofold manner: (1) the distances between the CNFs may increase as a result of cement particle volume increase, associated with water binding and the precipitation of hydration products (amorphous calcium silicate hydrate and crystalline calcium hydroxide [35]) on their surface; (2) on the other hand, the non-conductive hydration products may precipitate in the spaces between fibers, leading to the loss of electric contact between them. The two effects may thus explain the increase in bulk resistivity upon w/c ratio increase.
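For background on why a higher w/c ratio translates into more calcium hydroxide, the commonly quoted approximate stoichiometry for the hydration of tricalcium silicate can be written (in cement chemist notation, where C = CaO, S = SiO2 and H = H2O) as

2 C3S + 6 H → C3S2H3 + 3 CH,

i.e., 2(3CaO·SiO2) + 6 H2O → 3CaO·2SiO2·3H2O (calcium silicate hydrate) + 3 Ca(OH)2 (calcium hydroxide). The C-S-H composition is in reality variable, so this idealised reaction is only meant to illustrate that binding more water produces more of the non-conductive hydration products discussed above; it is a textbook approximation, not a stoichiometry determined in this work.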
Figure 10. Schematic illustration of the mechanism explaining the increase in resistivity with increased water to cement ratio. The non-hydrated cement particles react with water. The hydration products (calcium silicate hydrate, calcium hydroxide) contribute to separation between conductive fibers.

The bulk resistivity of the prepared samples was sensitive to temperature changes, as shown in Figure 8. A decrease in bulk resistivity was observed upon a temperature increase from room temperature to 180 °C. However, the samples dried for one week at 40 °C were not significantly responsive to the temperature increase. This suggests that the temperature sensitivity of the prepared samples is most likely due to water loss associated with the temperature increase. The dried samples had lower bulk resistivity values. This suggests that the presence of free water in the CNF-cement pore volume contributes to the electrical connectivity loss between CNFs. This is in line with the observations made by Zhang et al. [26] and Tzounis et al. [36], who hypothesize that water forms an insulating layer between fibers that is removed upon drying, which leads to an increase in conductivity. According to Sihai Wen et al. [37], in dry cement-CNF composites electronic conduction dominates, while in the wet state ionic conduction plays a significant role. In the wet state, highly concentrated electrolytes are present in the cement-CNF pore water [36], which contributes to conductivity. Nevertheless, the conductivity of our samples is higher after drying, and it can thus be concluded that ionic conduction in the wet state is a less efficient conduction mechanism than electronic conduction in the dry state.
Figure 8b compares resistivity changes as a function of temperature for the prepared, dried, and water re-saturated C1 sample. After re-saturation, the resistivity increases, but it does not reach the value measured for the original sample. The reason could be that the water, upon re-saturation, is unable to enter all of the smallest micro- and mesopores from which it was removed upon drying. This could be due to the hydrophobic nature of the carbon nanofibers that, once aggregated, do not allow polar fluids to enter the spaces at their interface [38].

Conclusions

In this work, the electrical response of hybrid, conductive cement-CNF materials to elevated temperatures has been studied. It has been shown that: (1) The electrical response of these materials is related to two types of water present in the cement: (a) water that is mixed with the cement powder and is partially consumed in the cement hydration processes, and (b) free water that is present in cement pores and can be removed by drying. (2) The increase in the water to cement ratio at the stage of cement mixing results in increased bulk resistivity. This has been attributed to the precipitation of larger amounts of hydration products, which leads to a larger separation of the carbon nanofibers and thus larger resistivity. (3) The material response to a stepwise temperature increase up to 180 °C is related to free water release from the cement pores, and the dry materials are relatively insensitive to temperature changes. The re-saturation of pores with water results in a slightly increased resistivity, but the re-saturation process is not entirely reversible. (4) The choice of electrical connector material, form, and size are important factors that have to be considered when designing high temperature sensors based on conductive cements. A very important design parameter that must also be taken into account is the effective contact surface area of the electrical connector material with the cement-CNF matrix. It has been shown that a change in the type of electrical connection can lead to bulk resistivity results that differ by two orders of magnitude for the same material.

Supplementary Materials: The following are available online at http://www.mdpi.com/1996-1944/13/13/2884/s1, Figure S1: TEM image of carbon nanofibers extracted from the cement samples K1 and K2, Figure S2.

Funding: Financial support from the SINTEF Industry strategic project SEP Farawell is gratefully acknowledged. This work was also funded by the European Union's Horizon 2020 research and innovation program, grant agreement number 764531, "SECURe-Subsurface Evaluation of Carbon capture and storage and Unconventional risks".
8,180.2
2020-06-27T00:00:00.000
[ "Materials Science" ]
Natural dualities through product representations: bilattices and beyond

This paper focuses on natural dualities for varieties of bilattice-based algebras. Such varieties have been widely studied as semantic models in situations where information is incomplete or inconsistent. The most popular tool for studying bilattice-based algebras is product representation. The authors recently set up a widely applicable algebraic framework which enabled product representations over a base variety to be derived in a uniform and categorical manner. By combining this methodology with that of natural duality theory, we demonstrate how to build a natural duality for any bilattice-based variety which has a suitable product representation over a dualisable base variety. This procedure allows us systematically to present economical natural dualities for many bilattice-based varieties, for most of which no dual representation has previously been given. Among our results we highlight that for bilattices with a generalised conflation operation (not assumed to be an involution or to commute with negation). Here both the associated product representation and the duality are new. Finally we outline analogous procedures for pre-bilattice-based algebras (so negation is absent).

2010 Mathematics Subject Classification: primary 08C20; secondary 03G10, 03G25, 06B10, 06D50.

Introduction

Bilattices, with and without additional operations, have been identified by researchers in artificial intelligence and in philosophical logic as of value for analysing scenarios in which information may be incomplete or inconsistent. Over twenty years, a bewildering array of different mathematical models has been developed which employ bilattice-based algebras in such situations; [19,23,15,26] give just a sample of the literature. Within a logical context, bilattices have been used to interpret truth values of formal systems. The range of possibilities is illustrated by [2,1,17,18,16,5,27,25]. To date, the structure theory of bilattices has had two main strands: product representations (see in particular [4,11,9] and references therein) and topological duality theory [24,22,8]. In this paper we entwine these two strands, demonstrating how a dual representation and a product representation can be expected to fit together and to operate in a symbiotic way. Our work on distributive bilattices in [8] provides a prototype. Crucially, as in [8], we exploit the theory of natural dualities; see Section 3. In [9] we set up a uniform framework for product representation. We introduced a formal definition of duplication of a base variety of algebras which gives rise to a new variety with additional operations built by combining suitable algebraic terms in the base language and coordinate manipulation (details are recalled in Section 2). This construction led to a very general categorical theorem on product representation [9, Theorem 3.2] which makes overt the intrinsic structure of such representations. The examples we present below all involve bilattice-based varieties, but we stress that the scope of the theorem is not confined to such varieties. Our Duality Transfer Theorem (Theorem 3.1) demonstrates how a natural duality for a given base class immediately yields a natural duality for any duplicate of that class. Moreover, the dualities for duplicated varieties mirror those for the base varieties, as regards both advantageous properties and complexity (note the concluding remarks in Section 4).
By combining the Duality Transfer Theorem with product representation we can set up dualities for assorted bilattice-based varieties (see Section 4, Table 1). In almost all cases the dualities are new. The varieties in question arise as duplicates of B (Boolean algebras), D (bounded distributive lattices), K (Kleene algebras), DM (De Morgan algebras), and DB (bounded distributive bilattices), all of which have amenable natural dualities (see [10] and also [8]). Variants are available when lattice bounds are omitted. We contrast key features of our natural duality approach with earlier work on dualities for bilattice-based algebras. We stress that our methods lead directly to dual representations which are categorical: morphisms do not have to be treated case-by-case as an overlay to an object representation (as is done in [24,22]). Others' work on dualities in the context of distributive bilattices has sought instead, for a chosen class of algebras, a dual category which is an enrichment of a subcategory of Priestley spaces; that is, they start from Priestley duality, applied to the distributive lattice reducts of their algebras, and then superimpose extra structure to capture the suppressed operations. This strategy has been successfully applied to very many classes of distributive-lattice-based algebras, but it has drawbacks. Although the underlying Priestley duality is natural, the enriched Priestley space representation rarely is. Accordingly one cannot expect the rewards a natural duality offers, such as instant access to free algebras. Section 5 focuses on the variety DB´ of (bounded) distributive bilattices with a conflation operation ´ which is not assumed to be an involution or to commute with the negation. This variety has not been investigated before and would not have been susceptible to earlier methods. We realise DB´ as a duplicate of the variety DO of double Ockham algebras and set up a natural duality for DO, whence we obtain a duality for DB´. Both results are new. This example is also a novelty within bilattice theory since it takes us outside the realm of finitely generated varieties without losing the benefits of having a natural duality. In Section 6 we consider the negation-free setting of pre-bilattice-based algebras, and link the ideas of [9, Section 9] with dual representations. Again, a very general theorem enables us to transfer a known duality from a base variety to a suitably constructed duplicate. Here multisorted duality theory is needed. Nonetheless the ideas and the categorical arguments are simple, and the proof of Theorem 3.1 is easily adapted.

The general product representation theorem recalled

We shall assume that readers are familiar with the basic notions concerning bilattices. A summary can be found, for example, in [4] and a bare minimum in [9, Section 2]. Here we simply draw attention to some salient points concerning notation and terminology, since usage in the literature varies. Except in Section 6 we assume that a negation operator is present. An (unbounded) bilattice is an algebra A = (A; ∨t, ∧t, ∨k, ∧k, ¬), where the reducts At := (A; ∨t, ∧t) and Ak := (A; ∨k, ∧k) are lattices (respectively the truth lattice and the knowledge lattice). The operation ¬, capturing negation, is an endomorphism of Ak and a dual endomorphism of At. Bilattice models come in two flavours: with and without bounds. Which flavour is preferred (or appropriate) may depend on an intended application, or on mathematical considerations.
We refer to [8,Section 1] for the formal definition of the terms bounded and unbounded. Here we merely issue a reminder that when universal bounds for the lattice order are not included in the algebraic language for a class of lattice-based algebras then the algebras involved may, but need not, have bounds; when bounds do exist these do not have to be preserved by homomorphisms. A subscript u on the symbol denoting a category will indicate that we are working in the unbounded setting. So, for example, D denotes the category of bounded distributive lattices and D u the category of all distributive lattices. All the bilattices considered in this paper are distributive, meaning that each of the four lattice operations distributes over each of the other three. The weaker condition of interlacing is necessary and sufficient for a bilattice to have a product representation. However varieties of interlaced bilattice-based algebras seldom come within the scope of natural duality theory. Our investigations involve classes of algebras, viewed both algebraically and categorically. We draw, lightly, on some of the basic formalism and theory of universal algebra, specifically regarding varieties (alias equational classes) and prevarieties; a standard reference for this material is [6]. A class of algebras over a common language will be regarded as a category in the usual way: the morphisms are all the homomorphisms. The variety generated by a family M of algebras of common type is denoted VpMq. Equivalently VpMq is the class HSPpMq of homomorphic images of subalgebras of products of algebras in M. The prevariety generated by M is the class ISPpMq whose members are isomorphic images of subalgebras of products of members of M. Usually the algebras in M will be finite. We now recall our general product representation framework [9, Section 3]. We fix an arbitrary algebraic language Σ and let N be a family of Σ-algebras. Let Γ be a set of pairs of Σ-terms such that, for pt 1 , t 2 q P Γ, the terms t 1 and t 2 have common even arity, denoted 2n pt1,t2q . We view Γ as an algebraic language for a family of algebras P Γ pNq (N P N ), where the arity of pt 1 , t 2 q P Γ is n pt1,t2q . We write rt 1 , t 2 s when the pair pt 1 , t 2 q is regarded as belonging to Γ, qua language. For A P VpN q we define a Γ-algebra P Γ pAq " pAˆA; trt 1 , t 2 s PΓpAq | pt 1 , t 2 q P Γuq, in which the operation rt 1 , t 2 s PΓpAq : pAˆAq n Ñ AˆA is given by rt 1 , t 2 s PΓpAq ppa 1 , b 1 q, . . . , pa n , b n qq " pt A 1 pa 1 , b 1 , . . . , a n , b n q, t A 2 pa 1 , b 1 , . . . , a n , b n qq, where n " n pt1,t2q and pa 1 , b 1 q, . . . , pa n , b n q P AˆA. It is easy to check that the assignment A Þ Ñ P Γ pAq (on objects) and h Þ Ñ hˆh (on morphisms) defines a functor P Γ : VpN q Ñ VpP Γ pN qq. We shall also need the following notation. Given a set X the map δ X : X Ñ XˆX is given by δ X pxq " px, xq and π X 1 , π X 2 : XˆX Ñ X denote the projection maps. We are ready to recall a key definition from [9, Section 3], where further details can be found. 
We say that Γ duplicates N and that A " VpP Γ pN qq is a duplicate of B if the following conditions on N and Γ are satisfied: (L) for each n-ary operation symbol f P Σ and each i P t1, 2u there exists an n-ary Γ-term t (depending on f and i) such that π N i˝t PΓpNq˝p δ N q n " f N for each N P N ; (M) there exists a binary Γ-term v such that v PΓpNq ppa, bq, pc, dqq " pa, dq for N P N and a, b P N ; (P) there exists a unary Γ-term s such that s PΓpNq pa, bq " pb, aq for N P N and a, b P N . We now present the Product Representation Theorem [9, Theorem 3.2]. Theorem 2.1. Assume that Γ duplicates a class of algebras N and let B " VpN q. Then the functor P Γ : B Ñ A sets up a categorical equivalence between B and its duplicate A " VpP Γ pN qq. The classes of algebras arising in this section have prinicipally been varieties. In the next section we concentrate on singly-generated prevarieties. The following corollary tells us how the class operators HSP and ISP behave with respect to duplication. It is an almost immediate consequence of the fact that P Γ is a categorical equivalence; assertion (c) follows directly from (a) and (b). Corollary 2.2. Assume that Γ duplicates a class of algebras M. The following statements hold for each A P VpMq: (a) HSPpP Γ pAqq is categorically equivalent to HSPpAq. Natural duality and product representation It is appropriate to recall only in brief the theory of natural dualities as we shall employ it. A textbook treatment is given in [10] and a summary geared to applications to distributive bilattices in [8,Sections 3 and 5]. Our object of study in this section will be a prevariety A generated by an algebra M, so that A " ISPpMq. (Only in Section 6 will we replace the single algebra M by a family of algebras M. We shall then need to bring multisorted duality theory into play.) Traditionally (and in [10] in particular) M is assumed to be finite. This suffices for our applications in Section 4. However our application to bilattices with generalised conflation will depend on the more general theory presented in [12]. Therefore we shall assume that M can be equipped with a compact Hausdorff topology T with respect to which it becomes a topological algebra. When M is finite T is necessarily discrete. Our aim is to find a second category X whose objects are topological structures of common type and which is dually equivalent to A via functors D : A Ñ X and E : X Ñ A. Moreover-and this is a key feature of a natural duality-we want each algebra A in A to be concretely representable as an algebra of continuous structure-preserving maps from DpAq (the dual space of A) into M " , where M " P X has the same underlying set M as does M. For this to succeed, some compatibility between the structures M and M " will be necessary. We consider a topological structure M " " pM ; G, R, Tq where ‚ T is a topology on M (as demanded above); ‚ G is a set of operations on M , meaning that, for g P G of arity n ě 1, the map g : M n Ñ M is a continuous homomorphism (any nullary operation in G will be identified with a constant in the type of M); ‚ R is a set of relations on M such that if r P R is n-ary (n ě 1) then r is the universe of a topologically closed subalgebra r of M n . We refer to such a topological structure M " as an alter ego for M and say that M " and M are compatible. Of course. the topological conditions imposed on G and R are trivially satisfied if M is finite. 
(The general theory in [10] allows an alter ego also to include partial operations, but they do not arise in our intended applications.) We use M " to build a new category X. We first consider structures of the same type as M " . These have the form X " pX; G X , R X , T X q where T X is a compact Hausdorff topology and G X and R X are sets of operations and relations on X in bijective correspondence with those in G and R, with matching arities. Isomorphisms between such structures are defined in the obvious way. For any non-empty set S we give M S the product topology and lift the elements of G and R pointwise to M S . The topological prevariety generated by M " is X :" IS c P`pM " q, the class of isomorphic copies of closed substructures of non-empty powers of M " , with`indicating that the empty structure is included. We make X into a category by taking all continuous structure-preserving maps as the morphisms. As a consequence of the compatibility of M " and M, and the topological conditions imposed, the following assertions are true. Let A P A and X P X. Then ApA, Mq may be seen as a closed substrucructure of M " A and XpX, M " q as a subalgebra of M X . We can set up well-defined contravariant hom-functors D : A Ñ X and E : X T Ñ A; on objects: D : A Þ Ñ ApA, Mq, on morphisms: D : x Þ Ñ´˝x, and on objects: The following assertions are part of the standard framework of natural duality theory. Details can be found in [10, Chapter 2]; see also [12,Section 2]. Given A P A and X P X, we have natural evaluation maps e A : a Þ Ñ´˝a and ε X : x Þ Ñ´˝x, with e A : A Ñ EDpAq and ε X : X Ñ DEpXq. Moreover pD, E, e, εq is a dual adjunction. Each of the maps e A and ε X is an embedding. We say that M " yields a duality on A, or simply that M " dualises M, if each e A is surjective, so that it is an isomorphism e A : A -EDpAq. A dualising alter ego M " plays a special role in the duality it sets up: it is the dual space of the free algebra on one generator in A. This fact is a consequence of compatibility. More generally, the free algebra generated by a non-empty set S has dual space M " S . Assume that M " yields a duality on A and in addition that each ε X is surjective and so an isomorphism. Then we say M " fully dualises M or that the duality yielded by M " is full. In this case A and X are dually equivalent. Full dualities are particularly amenable if they are strong; this is the requirement that the alter ego be injective in the topological prevariety it generates. We do not need here to go deeply into the topic of strong dualities (see [10, Chapter 3] for a full discussion) but we do note in passing that each of the functors D and E in a strong duality interchanges embeddings and surjections-a major virtue if a duality is to be used to transfer algebraic problems into a dual setting. We are ready to present our duality theorem for duplicated (pre)varieties. Our notation is chosen to match that in Theorem 2.1. Theorem 3.1 (Duality Transfer Theorem). Let N be an algebra and assume that Γ duplicates N. If the topological structure N " " pN ; G, R, Tq yields a duality on B " ISPpNq with dual category Y " IS c P`pN " q, then N " 2 yields a duality on A " ISPpP Γ pNqq, again with Y as the dual category. If the former duality is full, respectively strong, then the same is true of the latter. Proof. For the purposes of the proof we shall assume that N , and hence also M , is finite. 
It is routine to check that the topological conditions which come into play when N is infinite lift to the duplicated set-up. We claim that N " 2 acts as a legitimate alter ego for M :" P Γ pNq. Certainly these structures have the same universe, namely NˆN . It follows from the definition of the operations of P Γ pNq that P Γ prq, whose universe is rˆr, is a subalgebra of pP Γ pNqq n whenever r P R is the universe of a subalgebra r of N n . But R N " 2 consists of the relations rˆr, for r P R. Likewise, an n-ary operation g in G gives rise to the same operation, viz. gˆg, of P Γ pNq and in the structure N " 2 . Hence gˆg is compatible with P Γ pNq. We now set up the functors for the existing duality for ISPpNq and for the duality sought for ISPpMq. Since Y " X, the functors D B and D A have a common codomain. Let A P A. By Corollary 2.2, we may assume that A " P Γ pBq, for some B P B. By Theorem 2.1 and the definition of P Γ on morphisms, This proves that e A : A Ñ E A D A pAq is surjective for each A P A, so that we do indeed have a duality for A based on the alter ego M " " N " 2 . We now claim that if N " fully dualises N then M " fully dualises M. To do this we shall show that the bijection η : D B pBq Ñ D A pAq, defined by ηpyq " yˆy for each y P D B pBq, is an isomorphism (of topological structures) from D B pBq onto D A pAq, where, as before, A " P Γ pBq, see [10, Lemma 3.1.1]. Let r be an n-ary relation in N " . For y 1 , . . . , y n P D B pBq, py 1 , . . . ,y n q P r D B pBq ðñ @a P N ppy 1 paq, . . . , y n paqq P rq ðñ @pa 1 , a 2 q P M pppy 1 pa 1 q, y 1 pa 2 qq, . . . , py n pa 1 q, y n pa 2 qqq P rˆrq ðñ py 1ˆy1 , . . . , y nˆyn q P prˆrq D A pAq . A similar argument applies to operations. The map η has compact codomain and Hausdorff domain and hence is a homeomorphism provided η´1 is continuous. To prove this it will suffice to show that each map π b˝η´1 is continuous, where π b denotes the projection from D B pBq, regarded as a subspace of N " This proves the continuity assertion. Finally, since N " is injective in Y if and only if N " 2 is, N " yields a strong duality on B if and only if N " 2 yields a strong duality on A, by [10, Theorem 3.2.4]. The proof of Theorem 3.1 is essentially routine, given the Product Representation Theorem. The theorem should not be disparaged because it is easy to derive. Rather the reverse: almost all the dualities given in Section 4 are new, and obtained at a stroke. Of course, though, Theorem 3.1 is only useful when we have a (strong) duality to hand for the base class ISPpNq we wish to employ. Nothing we have said about natural dualities so far tells us how to find an alter ego N " for N, or even whether a duality exists. Fortunately, simple and well-understood strong dualities exist for the base varieties ISPpNq which support the miscellany of logic-oriented examples presented in Section 4. In all cases considered there, N is a small finite algebra with a lattice reduct. Existence of such a reduct guarantees dualisability [10, Section 3.4]: a brute-force alter ego N " " pN ; SpN 2 q, Tq is available. However this default choice is likely to yield a tractable duality only when N is very small. Otherwise the subalgebra lattice SpN 2 q is generally unwieldy. Methodology exists for slimming down a given dualising alter ego to yield a potentially more workable duality (see [10,Chapter 8]), but it is preferable to obtain an economical duality from the outset. 
This is often possible when N is a distributive lattice, not necessarily finite: in many such cases one can apply the piggyback method which originated with Davey and Werner (see [10,Chapter 7] and [12]). We shall demonstrate its use in Section 5, where we develop a duality for double Ockham algebras, our base variety for studying generalised conflation. Against this background we can appreciate the merits of Theorem 3.1. Suppose we have a class ISPpMq (with M finite) which is expressible as a duplicate of a dualisable base variety ISPpNq. Then |M | " |N | 2 and, on cardinality grounds alone, finding an amenable duality directly for ISPpMq could be challenging, whereas the chances are much higher that we have available, or are able to set up, a simple dualising alter ego N " for N. And then, given N " we can immediately obtain an alter ego M " for M, with the same number of relations and operations in M " as in N " . Examples of natural dualities via duplication We now present a miscellany of examples. All involve bilattices but, as noted earlier, the scope of our methods is potentially wider. We derive (strong) dualities for certain (finitely generated) duplicated varieties given in [9] by calling on wellknown (strong) dualities for their base varieties. A catalogue of base varieties and duplicates is assembled in [9, Appendix, Table 1], with references to where in the paper these examples are presented. Table 1 lists alter egos for dualities for base varieties. These dualities are discussed in [10], with their sources attributed. Natural dualities for the indicated duplicated varieties, also strong, can be read off from the table, using the Duality Transfer Theorem. When specifying a generator for each base variety, we adopt abbreviations for standard sets of operations: we have elected to denote negation in Boolean algebras, De Morgan algebras and Kleene algebras by ", to distinguish it from bilattice negation, . The top row of Table 1 should be treated as a prototype, both algebraically and dually. There the base variety is D, the variety of bounded distributive lattices. The duplicated variety in this case is the variety DB of distributive bilattices. It is generated (as a prevariety) by the four-element algebra in DB. Full details of the natural duality for DB and its relationship to Priestley duality for the base variety D appear in [8]. All This is the situation with negation-by-failure. For the natural dualities recorded in Table 1, we note that, apart from D, the base variety in each case is De Morgan algebras or a subvariety thereof. The alter ego includes a partial order ď known as the alternating order in [10, Theorem 4.3.16]; in the case of DM, the relation ď on universe t0, 1u 2 of the four-element generator 4 DM is the knowledge order. The map g is the involution swapping the coordinates. Only simple modifications are needed to handle the case when the language of a lattice-based variety does not include lattice bounds as nullary operations. It is an old result that Priestley duality for the variety D u can be set up in much the same way as that for D, with the dual category being pointed Priestley spaces, as described in [10, Section 1.2 and Subsection 4.3.1]. Natural dualities for duplicates of D u are derived from those for corresponding duplicates of D simply by adding to the alter ego nullary operations p0, 0q and p1, 1q. 
Compare with [8,Section 4], which provides a direct treatment of duality for DB u ; here, even more than in the bounded case, we see the merit of the automatic process that Theorem 3.1 supplies. A duality for DM u (De Morgan lattices) is obtained by adding the top and bottom elements for the partial order ď to the alter ego for DM. Our transfer theorem then applies to unbounded distributive bilattices with conflation. Bilattices with generalised conflation In this section we break new ground, both in relation to product representation and in relation to natural duality. The bilattice-based variety DB´that we study-(bounded) distributive bilattices with generalised conflation-has not been considered before. Previous authors who have studied product representation when conflation is present have assumed that this operation is an involution that commutes with negation (see [14,Theorem 8.3], [4] and our treatment in [9, Section 5]). We shall demonstrate that neither assumption is necessary for the existence of a product representation. Our focus in this paper is on developing theoretical tools. Nevertheless we should supply application-oriented reasons to justify investigating generalised conflation. We first note that it is often, but not always, natural to assume that conflation be an involution. On the other hand, the justification for the commutation condition is less clear cut. Indeed, both the original definition in [14] and that in [25] exclude commutation, and this is brought in only later. In [25, Section 3] the emphasis is on truth values. The authors' desired interpretation then leads them to consider a special algebra SIXTEEN 3 , in which the conflation operation does commute with negation. In [18,Section 2] conflation is used to study (knowledge) consistent and exact elements of a lattice. The investigations in both [25] and [18] are intrinsically connected to the product representation for bilattices with conflation. Our product representation would permit similar interpretations when commutation fails and/or conflation is not an involution. In a different setting, conflation has been used in [15] to present an algebraic model of the logic system of revisions in databases, knowledge bases, and belief sets introduced in [23]. In this model the coordinates of a pair in a product representation of a bilattice are interpreted as the degrees of confidence for including in a database an item of information and for excluding it. Conflation then models the transformation of information that reinterprets as evidence for inclusion whatever did not previously count as evidence against, and vice versa. That is, conflation comprises two processes: given the information against (for) a certain argument, these capture information for (against) the same argument. In [15] these two transformations coincide, and are mutually inverse. Our work on generalised conflation would allow these assumptions to be weakened so facilitating a wider range of models. The class DB´consists of algebras of the form A " pA; _ t ,^t, _ k ,^k, ,´, 0, 1q, where the reduct of A obtained by suppressing´belongs to DB and´is an endomorphism of A t and a dual endomorphism of A k . Here we elect to include bounds. The variety DBC of (bounded) distributive bilattices with conflation (where by convention conflation and negation do commute) is a subvariety of DB´. However DB´and DBC behave quite differently: even though is an involution, is not. 
As a consequence the monoid these operations generate is not finite, as is the case in DBC. (We note that the unbounded case of generalised conflation could also be treated by making appropriate modifications to the above definition and throughout what follows.) Our product representation for DB´uses as its base variety the class DO of double Ockham algebras. This is a new departure as regards representations of bilattice expansions. A double Ockham algebra is a D-based algebras equipped with two dual endomorphisms of the D-reducts. An Ockham algebra carries just one such operation. The variety O of Ockham algebras, which includes Boolean algebras, De Morgan algebras and Kleene algebras among its subvarieties, has been exhaustively studied, both algebraically and via duality methods, as indicated by the texts [3,10] and many articles. The variety DO is much less well explored. The remainder of the section is accordingly organised as follows. Proposition 5.1 presents the product representation for DB´over the base variety DO. We then set DB´aside while we develop the theory of DO which we need if we are to apply our Duality Transfer Theorem to DB´. This requires us first to identify an algebra M such that DO " ISPpMq (Proposition 5.2). We then set up an alter ego M " for M and call on [12,Theorem 4.4] to obtain a natural duality for DO (Theorem 5.6). This is then combined with Theorem 3.1 to arrive at a natural duality for DB´(Theorem 5.7). To motivate how we can realise DB´as a duplicate of DO we briefly recall from [9, Section 5] how DBC arises as a duplicate of DM. We adopt the notation introduced in [9, Section 4]. Let Σ be a language and f be an n-ary function symbol in Σ. For m ě n and i 1 , . . . , i n P t1, . . . , mu we denote by f m i1¨¨¨in the mary term f m i1...in px 1 , . . . , x m q " f px i1 , . . . , x in q. We can capture the extra operatioń on the generator 16 DBC of DBC using the De Morgan negation ", combined with coordinate-flipping: the family of terms Γ DBC " Γ DB Y tp" 2 2 , " 2 1 qu acts as a duplicator for DM with DBC as the duplicated variety; here Γ DB duplicates bounded lattices. (See [9, Section 5] for an explanation as to why the form of the operations in DBC dictates that DM should be used as the base variety.) We now present our duplication result linking DO and DB´. Proposition 5.1. The set Γ DB´" Γ DB Y tpf 2 2 , g 2 1 qu duplicates DO. Moreover, DB´" V`P Γ DB´p DOq˘, where Σ Γ DB´i s identified with the language of DB´. Proof. Certainly Γ DB´d uplicates DO because pf 2 2 , g 2 1 q P Γ DB´a nd Γ DB is a duplicate for Σ D on D. This theorem gives insight into the effect of reinstating the assumptions customarily imposed on conflation and which we removed in passing from DBC to DB´. From the product representation for DB´, it follows that´is involutive if and only if f and g are. The resulting subvariety of DB´is a duplicate of double De Morgan algebras (that is, algebras in DO such that both unary operations are involutions). Similarly,´commutes with if and only if f " g. This time we obtain a subvariety of DB´which duplicates O. We now want to identify an (infinite) algebra which generates our base variety DO as a prevariety. We take our cue from the variety O of Ockham algebras: O is generated as a prevariety by an algebra M whose universe is t0, 1u N0 , where N 0 " t0, 1, 2. . . 
.u; lattice operations and constants are obtained pointwise from the two-element bounded lattice and, identifying the elements as infinite binary strings, negation is given by a left shift followed by pointwise Boolean complementation on t0, 1u. See for example [12,Section 4] for details. We may view the exponent N 0 as the free monoid on one generator e, with 0 as identity and n acting as the n-fold composite of e. For DO, analogously, we first consider the free monoid E " te 1 , e 2 u˚on two generators e 1 and e 2 and identify it with the set of all finite words in the language with e 1 and e 2 as function symbols, with the empty word corresponding to the identity element 1; the monoid operation¨is given by concatenation. For s P E, we denote the length of s by |s|. For us, DO will serve as a base variety. Accordingly we align our notation with that in Theorem 3.1. We now consider the algebra N with universe t0, 1u E with lattice operations and constants given pointwise. The lattice t0, 1u E is in fact a Boolean lattice, whose complementation operation we denote by c. The dual endomorphisms f and g are given as follows. For a P t0, 1u E we have f paqpsq " cpaps¨e 1 qq and gpaq " cpaps¨e 2 qq for every s P E. This gives us an algebra N :" pt0, 1u E ; _,^, f, g, 0, 1q P DO. For future use we show how to assign to each word s P E a unary term t s in the language of DO, as follows. If s " 1 (the empty word) then t s is the identity map; if s " e 1¨s 1 then t s " f˝t s 1 ; and if s " e 2¨s 1 then t s " g˝t s 1 . Structural induction shows that the term function t N s is given by for every a P N and s P E. # xpt s pcqq if |s| is even, 1´xpt s pcqq if |s| is odd, for c P A and s P E. It is routine to check that ϕ is a D-morphism which preserves f and g. Finally, ϕpcqp1q " xpcq, whence ϕpaq ‰ ϕpbq. We now seek a natural duality for DO which parallels that which is already known for the category O of Ockham algebras. Our treatment follows the same lines as that given for O in [12,Section 4], whereby a powerful version of the piggyback method is deployed. (The duality for O was originally developed by Goldberg [21] and re-derived as an early example of a piggyback duality by Davey and Werner [13].) A general description of the piggybacking method and the ideas underlying it can be found in [12,Section 3]. We wish to apply to DO a special case of [12,Theorem 4.4]. We first make some comments and establish notation. We piggyback over Priestley duality between D " ISPp2q and P " IS c P`p2 " q (where 2 and 2 " are the two-element objects in D and P with universe t0, 1u, defined in the usual way). We denote the hom-functors setting up the dual equivalence between D and P by H and K. The aim is to find an element ω P DpN 5 , 2q which, together with endomorphisms of N, captures enough information to build an alter ego N " of N which yields a full duality, in fact, a strong duality. We now work towards showing that we can apply [12,Theorem 4.4] to DO " ISPpNq, where N is as defined above. We shall take ω : N Ñ 2 to be the projection map given by ωpaq " ap1q. We want to set up an alter ego N " " pt0, 1u E ; G, R, Tq so that in particular N " has a Priestley space reduct N " 5 such that ω P PpN " 5 , 2 " q. Moreover we need the structure N " to be chosen in such a way that the conditions (1)-(3) in [12,Theorem 4.4] are satisfied. We define T to be the product topology on N " t0, 1u E derived from the discrete topology on t0, 1u; this is compact and Hausdorff and makes N into a topological algebra. 
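To make the algebra N and the terms t s concrete, the following small Python sketch models elements of t0, 1u E as functions on words over te 1 , e 2 u; all names in it are ours and are chosen purely for illustration, and the final identity is one the reader can check from the definitions rather than a statement quoted from the source.

# A word of the free monoid E = {e1, e2}* is a tuple of the symbols 1 and 2;
# the empty tuple () is the identity element of the monoid.
# An element a of {0,1}^E is any Python callable sending a word to 0 or 1.

def f(a):
    # dual endomorphism f: (f a)(s) = 1 - a(s . e1)
    return lambda s: 1 - a(s + (1,))

def g(a):
    # dual endomorphism g: (g a)(s) = 1 - a(s . e2)
    return lambda s: 1 - a(s + (2,))

def t(word):
    # the unary term t_s: empty word -> identity, e1.s' -> f o t_{s'}, e2.s' -> g o t_{s'}
    if not word:
        return lambda a: a
    head, rest = word[0], word[1:]
    step = f if head == 1 else g
    return lambda a: step(t(rest)(a))

# example element: a(s) = 1 exactly when the word s has even length
a = lambda s: 1 if len(s) % 2 == 0 else 0

s = (1, 2, 1)             # the word e1 e2 e1
value = t(s)(a)(())       # t_s(a) evaluated at the empty word
# one can check that t_s(a)(w) = a(w.s) when |s| is even and 1 - a(w.s) when |s| is odd
assert value == 1 - a(s)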
We now need to specify G and R. We would expect R to contain an order relation ď such that pt0, 1u E ; ď, Tq P P. For Ockham algebras-where one uses the free monoid on one generator as the exponent rather than E-the corresponding order relation is the alternating order in which alternate coordinates are order-flipped; see [10, Section 7.5] (and recall the comment about De Morgan algebras, a subvariety of O, in Section 4). The key point is that a composition of an even (respectively odd) number of order-preserving selfmaps on an ordered set is order-preserving (respectively order-reversing). Hence the definition of ď in Lemma 5.3 is entirely natural. Lemma 5.3. Let N be as above. Then ď, given by is an order relation making pt0, 1u E ; ď, Tq a Priestley space. Moreover ď is the universe of a subalgebra of N 2 and this subalgebra is the unique maximal subalgebra of pω, ωq´1pďq " t pa, bq P N 2 | ωpaq ď ωpbq u. Proof. Each of 2 " and the structure 2 " B (that is, 2 " with the order reversed) is a Priestley space. It follows that the topological structure pt0, 1u E ; ď, Tq is a product of Priestley spaces and so itself a Priestley space. Take a, b, c, d in N such that a ď b and c ď d and let s P E. Then pa^cqpsq " apsq^cpsq ď bpsq^dpsq " pb^dqpsq if |s| is even, pa^cqpsq " apsq^cpsq ě bpsq^dpsq " pb^dqpsq if |s| is odd. Likewise gpaq ď gpbq. Thus ď is indeed the universe of a subalgebra of N 2 . Now let r be the universe of a subalgebra of N 2 maximal with respect to inclusion in pω, ωq´1pďq. Then, with t s as defined earlier for s P E, we have pa, bq P r ùñ p@s P Eq`pt s paq, t s pbqq P r˘ùñ p@s P Eq`t s paq ď t s pbqù ñ p@s P Eqp@e P Eq`t s paqpeq " 1 ùñ t s pbqpeq " 1˘. But We deduce that r is a subset of ď. In addition a ď b implies ωpaq ď ωpbq: consider s " 1. Maximality of r implies that r equals ď. Consequently ď is the unique maximal subalgebra contained in pω, ωq´1pďq. We now introduce the operations we shall include in our alter ego N " . Let the map γ i : E Ñ E be given by γ i psq " s¨e i . Then we can define an endomorphism u i of N by u i paq " a˝γ i , for i " 1, 2. These maps are continuous with respect to the topology T we have put on N . We define N " :" pt0, 1u E ; u 1 , u 2 , ď, Tq. Then N " is compatible with N. We let Y :" IS c P`pN " q be the topological prevariety generated by N " and by 5 the forgetful functor from Y into P which suppresses the operations u 1 and u 2 . We note that now ω, as defined earlier, may be seen to belong to DpN 5 , 2q X PpN " 5 , 2 " q. The following two lemmas concern the interaction of N, N " and ω as regards separation properties. Lemma 5.4. Assume that N, N " and ω are defined as above. Then, given a ‰ b in N , there exists a unary term u in the language of pN ; u 1 , u 2 q such that ωpupaqq ‰ ωpupbqq. Proof. Let a ‰ b P N. There exists s P E with s ‰ 1 such that apsq ‰ bpsq. Write s as a concatenation e i1¨¨¨¨¨ein , where i 1 , . . . , i n P t1, 2u. For each j " 1, . . . , n, there is an associated unary term u j such that, for all w P E, pu ij paqqpwq " pa˝γ ij qpwq " apw¨e ij q. Write u in˝. . .˝u i1 as u s . Then u s pcqp1q " cpsq for all c P N and hence pω˝u s qpaq " u s paqp1q " apsq ‰ bpsq " u s pbqp1q " pω˝u s qpbq. 5 , then there exists a unary term function t of N such that ωptpaqq " 1 and ωptpbqq " 0. Theorem 5.6 (Strong Duality Theorem for Double Ockham Algebras). Let N " pt0, 1u E ; _,^, f, g, 0, 1q and N " " pt0, 1u E ; u 1 , u 2 , ď, Tq be as defined above. 
Let ω P DpN 5 , 2q X PpN " 5 , 2 " q be given by evaluation at 1, the identity of the monoid E. Let D : DO Ñ Y and E : Y Ñ DO be the hom-functors: D :" DOp´, Nq and E :" Yp´, N " q. Then N " strongly dualises N, that is, D and E establish a strong duality between DO and Y. Moreover DpAq 5 -HpA 5 q in P and EpYq 5 -KpY 5 q in D, for A P DO and Y P Y, where the isomorphisms are set up by Φ A ω : x Þ Ñ ω˝x, for x P DpAq, and Ψ Y ω : α Þ Ñ ω˝α, for α P EpYq. Proof. We simply need to confirm that the conditions of [12,Theorem 4.4] are satisfied. We have everything set up to ensure that all the functors work as the theorem requires. In addition Lemmas 5.3-5.5 tell us that Conditions (1)-(3) in the theorem are satisfied. Some remarks are in order here. We stress that it is critical that we could find a map ω which acts as a morphism both on the algebra side and on the dual side, and has the separation properties set out in Lemmas 5.4 and 5.5. We also observe that for our application of [12,Theorem 4.4], its Condition (3) is met in a simpler way than the theorem allows for: the special form of the f, g (viz. dual endomorphisms with respect to the bounded lattice operations) that forces pω, ωq´1pďq to contain just one maximal subalgebra. We should comment too on how our natural duality for DO relates to a Priestleystyle duality for DO. The latter can be set up in just the same way as that for O originating in [28]. This duality is an enrichment of that between D and P, whereby f and g are captured on the dual side via a pair of order-reversing continuous maps p and q, and morphisms are required to preserve these maps. Theorem 5.6 tells us that, for any A P DO, there is an isomorphism between the Priestley space reduct DpAq 5 of the natural dual of A P DO and the Priestley dual HpA 5 q of the D-reduct of A. Both these Priestley spaces carry additional structure: u 1 and u 2 in the former case and p and q in the latter. When the reducts of the natural and Priestley-style dual spaces of the algebras are identified these pairs of maps coincide. Thus the two dualities for DO are essentially the same and one may toggle between them at will. We have a new example here of a 'best of both worlds' scenario, in which we have both the advantages of a natural duality and the benefits, pictorially, of a duality based on Priestley spaces. See [7,Section 3], [8,Section 6] and [12,Section 4] for earlier recognition of occurrences of this phenomenon: other varieties for which it arises are De Morgan algebras and Ockham algebras. In general it is not hereditary: it fails to occur for Kleene algebras, for example. Combining our results we arrive at our duality for the variety DB´. Theorem 5.7 (Strong Duality Theorem for Bounded Distributive Bilattices with Generalised Conflation). Let N " " pt0, 1u E ; u 1 , u 2 , ď, Tq be as in Theorem 5.6. Then N "ˆN " yields a strong duality on DB´. Moreover the dual category for this duality is Y :" IS c P`pN " q which may, in turn, be identified with the category P DO of double Ockham spaces. To illustrate the rewards derived from a natural duality for F DB´, we highlight the simple description of free objects that follows from Theorem 5.7: for a nonempty set S, the free algebra F DB´p Sq on S has pN " 2 q S as its natural dual space. Hence F DB´p Sq can be identified with the family of continuous structure-preserving maps from pN " 2 q S into N " 2 , with the operations defined pointwise. (Recall the remark on free algebras in Section 3.) 
Dualities for pre-bilattice-based varieties In this final section we consider dualities for pre-bilattice-based varieties. Here we call on the adaptation of the product representation theorem given in [9, Theorem 9.1]. Hitherto in this paper we have worked with dualities for prevarieties of the form ISPpMq, thereby encompassing dualities for many classes of interest in the context of bilattices. However when we drop negation and so move from bilattices to pre-bilattices the situation changes and we encounter classes of the form ISPpMq, where M is a finite set of algebras over a common language. For example, for distributive pre-bilattices M consists of a pair of two-element algebras, one with truth and knowledge orders equal, the other with these as order duals. Fortunately a form of natural duality theory exists which is applicable to classes of the form ISPpMq; this makes use of multisorted structures on the dual side. So in this section we shall consider dualities for pre-bilattice-based varieties. As a starting point we have the treatment of distributive pre-bilattices given in [8, Sections 9 and 10]; a self-contained summary of the rudiments of multisorted duality theory can also be found there or see [10,Chapter 7]. We first recall how [9, Theorem 9.1] differs from Theorem 2.1. We start from a base class VpN q, where N is a class of algebras over a common language Σ. Let Γ and P Γ pN q be as in Section 2. Negation in a product bilattice links the two factors, and condition (P) from the definition of duplication by Γ reflects this. In the absence of negation, (P) is dropped and the following condition is substituted: (D) for pt 1 , t 2 q P Γ with n pt1,t2q " n, there exist n-ary Σ-terms r 1 and r 2 such that t 1 px 1 , . . . , x 2n q " r 1 px 1 , x 3 , . . . , x 2n´1 q and t 2 px 1 , . . . , x 2n q " r 2 px 2 , x 4 , . . . , x 2n q. A product algebra associated with Γ now takes the form P d Γ Q " pPˆQ; trt 1 , t 2 s Pd Γ Q | pt 1 , t 2 q P Γuq, where P, Q belong to the base variety B " VpN q. This construction is used to define a functor d Γ : BˆB Ñ A as follows: on objects: pP, Qq Þ Ñ P d Γ Q, on morphisms: d Γ ph 1 , h 2 qpa, bq " ph 1 paq, h 2 pbqq. We move on to consider dualities for duplicated varieties. For simplicity we shall first assume that the base variety B " ISPpNq has a single-sorted duality with alter ego N " " pN ; G, R, Tq. Our next task is to determine a set of generators for A as a prevariety. We denote the trivial algebra by T. For C P B let fC : C Ñ T be the unique homomorphism from C into T. Lemma 6.2. If B " ISPpNq " VpNq for some algebra N, then Proof. Let A P A and a ‰ b P A. By Theorem 6.1, we may assume that there exist B, C P B such that A " B d Γ C. Let a 1 , b 1 P B and a 2 , b 2 P C such that a " pa 1 , a 2 q and b " pb 1 , b 2 q. By simmetry we may assume that a 1 ‰ b 1 . Then there exists a homomorphism h : Let M " tN d Γ T, T d Γ Nu. We now 'double up' N " in the obvious way. Let N " Z N " " pN 1 9 YN 2 ; G 1 , G 2 , R 1 , R 2 , Tq, based on disjointified universes N 1 and N 2 , such that pN i ; G i , R i , Tae Ni q is isomorphic to N " for i " 1, 2. Identify N 1 with NˆT and N 2 with TˆN and define M " " N " Z N " . We now present our transfer theorem for natural dualities associated with Theorem 6.1 (the single-sorted case). Its proof is largely a diagram-chase with functors. Below, Id C denotes the identity functor on a category C andis used to denote natural isomorphism. Theorem 6.3. 
Let N be a Σ-algebra and assume that Γ satisfies (L), (M) and (D) relative to N. Assume that N " " pN ; G, R, Tq yields a duality on B " ISPpNq " VpNq with dual category Y " IS c P`pN " q. Let M and M " be defined as above. Then M " yields a multisorted duality for A " ISPpMq " VpP d Γ Q | P, Q P VpN qq for which the dual category is X -YˆY. If the duality for B is full, respectively strong, then the same is true of that for A. Proof. Let pX 1 , X 2 q P YˆY " IS c P`pN " qˆIS c P`pN " q. We identify this structure with X 1 Z X 2 " pX 1 9 YX 2 ; G 1 , G 2 , R 1 , R 2 , Tq, where as before 9 Y denotes disjoint union and the topology T is the union of T 1 and T 2 . Morphisms in X are maps f : X 1 9 YX 2 Ñ Y 1 9 YY 2 that respect the structure and are such that f pxq P Y i when x P X i and i P t1, 2u. Hence the assignment: on objects: on morphisms: pf, gq Þ Ñ f 9 Yg sets up a categorical equivalence, Z. Let F : X Ñ YˆY denote its inverse. Identify N d Γ T and T d Γ N with N 1 and N 2 respectively. One sees that M " :" N " Z N " " pN 1 9 YN 2 ; G 1 , G 2 , R 1 , R 2 , Tq is a legitimate alter ego for M. Let D B : B Ñ Y and E B : Y Ñ B, and D A : A Ñ X and E A : X Ñ A be the homfunctors determined by N " and M " respectively. By Theorem 6.1, there exists a Figure 1. Natural duality by duplication functor C : A Ñ BˆB that together with d Γ : BˆB Ñ A determines a categorical equivalence. Take A, B P B and let D A pA d Γ Bq " pX 1 9 YX 2 ; G 1 , G 2 , R 1 , R 2 , Tq. For an n-ary relation r P R, let r Ad Γ B i be the corresponding relation in R Ad Γ B i Ď X n i (i " t1, 2u). So ph 1 , . . . , h n q P r Ad Γ B 1 if and only if h i " pg i , fBq P BpA, Nqt fBu for i P t1, . . . , nu and pg 1 , . . . , g n q P r A . Similarly, a tuple ph 1 , . . . , h n q belongs to r Ad Γ B 2 if and only if h i " pfÅ, g i q P tfÅuˆBpB, Nq for i P t1, . . . , nu and pg 1 , . . . , g n q P r B . The same argument applied to G proves that pX 1 ; G 1 , R 1 , Tae X1 q and pX 2 ; G 2 , R 2 , Tae X2 q are isomorphic to D B pAq and D B pBq, respectively. Thus FpD A pA d Γ Bqq is isomorphic to pD B pAq, D B pBqq in YˆY. Moreover, it is easy to see that the assignment FpD A pA d Γ Bqq Þ Ñ pD B pAq, D B pBqq determines a natural isomorphism between F˝D A˝dΓ and D BˆDB : BˆB Ñ XˆX. Similarly, for each pX, Yq P XˆX, Moreover, the assignment E A pX Z Yq Þ Ñ E B pXq d Γ E B pYq is natural in X and Y, that is, E A˝Z -pE BˆEB q˝d Γ . So (up to natural isomorphism) the diagrams in Figure 1 commute. A symbolchase now confirms that M " dualises M because N " dualises N: Figure 2. Full duality by duplication Assume that N " yields a full duality. Then the diagram in Figure 2 commutes. We can easily prove that D A˝EA -Id X , that is, M " yields a full duality. Moreover, if N " is injective in Y then pN " , N " q is injective in YˆY, or equivalently M " " N " Z N " is injective in X. Hence M " yields a strong duality if N does. Theorem 6.3 applies to the variety pDBu of (unbounded) distributive pre-bilattices. Its members are algebras A " pA; _ t ,^t, _ k ,^kq for which pA; _ t ,^tq P D u and pA; _ k ,^kq P D u . The well-known product representation for pDBu comes from the observation that the set Γ pDBu " tp_ 4 13 ,^4 24 q, p^4 13 , _ 4 24 q, p_ 4 13 , _ 4 24 q, p^4 13 ,^4 24 qu satisfies (L), (M) and (D) [9, Section 9]. Since 2 " u strongly dualises D u , the structure 2 " u Z 2 " u determines a multisorted strong duality for pDBu. This was established by different techniques in [8,Theorem 10.2]. 
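To see condition (D) in action for Γ pDBu, it may help to unpack the first pair explicitly; this is just the term notation of Section 5 applied with n = 2. For pt 1 , t 2 q = p_ 4 13 ,^4 24 q we have
\[
 t_1(x_1,x_2,x_3,x_4)=x_1\vee x_3=r_1(x_1,x_3),\qquad
 t_2(x_1,x_2,x_3,x_4)=x_2\wedge x_4=r_2(x_2,x_4),
\]
so the binary terms $r_1(y_1,y_2)=y_1\vee y_2$ and $r_2(y_1,y_2)=y_1\wedge y_2$ witness (D), the first depending only on the odd-indexed variables and the second only on the even-indexed ones; the remaining three pairs are handled in exactly the same way.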
Theorem 6.3 also yields dualities for distributive trilattices. These are (to the best of our knowledge) new. As with pre-bilattices, we opt for the unbounded case. An unbounded distributive trilattice is an algebra pA; _ t ,^t, _ f ,^f , _ i ,^iq such that pA; _ t ,^tq, pA; _ f ,^f q and pA; _ i ,^iq are distributive lattices. Let DT u denote the variety of (unbounded) distributive trilattices. An algebra pA; _ t ,^t, _ f ,^f , _ i ,^i,´tq is a distributive trilattice with t-involution if pA; _ t ,^t, _ f ,^f , _ i ,^iq P DT u and Γ DT´t " tpp^tq 4 13 , p^tq 4 24 q, pp_ t q 4 13 , p_ t q 4 24 q, pp_ k q 4 13 , p^kq 4 24 q, pp^kq 4 13 , p_ k q 4 24 q, pp^kq 4 13 , p^kq 4 24 q, pp_ k q 4 13 , p_ k q 4 24 q, p 2 1 , 2 2 qu. Then Γ DT´t satisfies (L), (M) and (D) over DBu (see [8,Example 9.4]). In Section 4, we used Theorem 3.1 to prove that p2 " u q 2 yields a strong duality on DBu. Now Theorem 6.3 implies that p2 " u q 2 Z p2 " u q 2 determines a multisorted strong duality for unbounded distributive trilattices with t-involution. We can easily adapt our results to cater for a base variety which admits a multisorted duality rather than a single-sorted one. Predictably this leads to multisortedness at the duplicate level. In the case of Theorem 3.1, one obtains the required alter ego by squaring the base level alter ego, sort by sort; as before, the base variety and its duplicate have the same dual category. The extension of Theorem 6.3 employs two disjoint copies of each sort of the base-level alter ego. The proofs of these results involve only minor modifications of those for the single-sorted case. As an example, the multisorted version of Theorem 6.3 combined with the results in [9, Example 9.4] leads to a strong duality for unbounded distributive trilattices which has four sorts, obtained from the two-sorted duality for pDBu.
13,876.8
2015-07-16T00:00:00.000
[ "Mathematics" ]
Collective dynamics of a dense streamer front We explore the dynamics of dense streamer channel fronts. We introduce a novel, fully three-dimensional, adaptive mesh refinement streamer simulation code, which leverages the power of general-purpose graphical processing units to accelerate computations. Our code enables the simulation of systems comprising several parallel-propagating streamers, using appropriate boundary conditions to emulate an infinitely extended front of positive streamers in ambient air. Our findings reveal that denser streamer packings result in slower front propagation and increased electric field screening within the streamers. To interpret these results and progress towards developing a coarse-grained corona model, we present a streamlined model that effectively approximates the behavior of the comprehensive microscopic system. Introduction A streamer discharge is a weakly ionized filament that propagates by enhancing the electric field at its tip, where electrons gain enough energy to further ionize the embedding medium [1]. Because this mode of propagation is driven by fast electrons and does not require significant heating of the medium, streamers are initiated more readily than other types of gas discharges. They often precede and surround hot discharges such as leaders [2,3] and, in the presence of a spatially extended electric field, streamers can form complex filamentary structures. One example of such structures, in which individual streamers can be optically resolved, is given by the upper-atmospheric discharges called sprites [4][5][6][7]. Fast breakdown occurring within thunderclouds is presumably also composed of streamer channels, possibly numbering around 10 8 [8,9] per event, although in this case individual streamers have never been independently observed. Fast breakdown [10][11][12] is the likely source of narrow bipolar events, which are very low frequency radio pulses with durations of tens of microseconds, associated with high frequency radiation [13][14][15], and also of Blue LUminous Events (BLUEs) [9,16,17], optical emissions emanating from thunderclouds. In sprites, in fast breakdown and in the corona that surrounds a leader there is a large number of simultaneously propagating streamers. One natural question to ask about these phenomena is whether streamer interaction has to be considered and, if so, what its influence is on the global dynamics of the streamer system. The collective dynamics of streamer fronts has been investigated previously with models that treated streamers as one-dimensional advancing conductors [18,19], but a detailed, microscopic model of a streamer front requires expensive, three-dimensional computations that have only recently become within reach [20][21][22].
Here we apply a new code that combines adaptive mesh refinement (AMR) with graphical processing unit (GPU) computations to simulate the interactions between parallel streamers in a sufficiently dense front.Whereas AMR allows us to reduce the number of computations per time step, running these computations in a GPU allows us to perform many of these computations in parallel, achieving a higher throughput (that is, more operations per second).GPUs perform well for our task of solving the partial differential equations for streamer evolution because most of the computations consist in applying relatively simple operations to each cell inside a structured grid, which a GPU can process in parallel. As we describe below, the physical system that we simulate is an infinite, planar front of positive streamers propagating in a common direction.We model this infinite front by a lattice of identical square blocks, each containing a few streamers.In practice only one of these blocks needs to be considered in the simulation, the remaining lattice being taking into account by imposing periodic boundary conditions.The dynamics of the front is partly determined by the surface density of streamers, which we adjust by means of the number of streamers inside the representative lattice cell.We consider cases where the density is high enough that the interaction affects the streamer propagation. This article is divided as follows: in section 2 we describe our microscopic streamer model and geometrical configuration, section 3 reports simulation results that are discussed with the aid of a macroscopic model in section 4 and finally section 5 provides some concluding remarks. Numerical implementation The simulation code uses tree-based structured AMR (SAMR) and it features a total variation diminishing (TVD) method for the drift-diffusion equation based on the monotonized central (MC) flux limiter [23], and full approximation scheme (FAS) full multigrid (FMG) to solve the Poisson and Helmholtz equations [24]. An adaptive mesh was required since streamers are multiscale phenomena that cover a great span of spatial scales.However, as discussed in [25], there are different types of AMR to choose from.The two main classes of AMR are: structured AMR (SAMR), and unstructured AMR (UAMR).From our investigation, we found that UAMR (the common example of which is the finite element method (FEM) meshes) is most suited for a mesh that will change little in time.Streamer behavior is at odds with this requirement because the tip of the streamer (the region that requires the highest resolution) is in constant movement. Still, within SAMR there are two main approaches: treebased SAMR, and patch-based SAMR.Tree-based SAMR usually refers to a quadtree or octree data structure where each node corresponds to a cell.However, here we associate each node to a small square or cubic patch of cells.Patch-based SAMR, uses a base grid to cover the entire domain of the simulation, and then uses a collection of rectangular patches to cover the regions that require more resolution.We found the distinction here to be similar to the one between SAMR and UAMR.The more structured of the two, Tree-Based, is better suited for meshes that have to constantly change in time, as is our case. The final mesh, which we exemplify in figure 1, is also well suited for parallelization, particularly on GPUs.This is because all the patches have the same structure independently of the resolution. 
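As an illustration of the patch layout just described, a tree of fixed-size blocks can be sketched as below. This is a schematic Python sketch, not the authors' CUDA/C++ implementation; all class and attribute names, and the choice of a single ghost layer, are our own assumptions (the paper mentions blocks of 8 × 8 × 8 cells in the caption of figure 2).

import numpy as np

BLOCK = 8  # cells per side in each patch

class Patch:
    """One fixed-size block of a tree-based SAMR hierarchy."""
    def __init__(self, level, origin, spacing):
        self.level = level        # refinement level in the tree
        self.origin = origin      # physical coordinates of the patch corner
        self.spacing = spacing    # cell size at this level
        # one ghost layer on each side, used to exchange data with neighbours
        self.data = np.zeros((BLOCK + 2,) * 3)
        self.children = []        # up to 8 finer patches (octants)
        self.neighbours = {}      # face -> neighbouring patch at the same level

    def refine(self):
        """Split the patch into its eight octants, each again BLOCK^3 cells."""
        half = BLOCK // 2 * self.spacing
        for i in (0, 1):
            for j in (0, 1):
                for k in (0, 1):
                    corner = (self.origin[0] + i * half,
                              self.origin[1] + j * half,
                              self.origin[2] + k * half)
                    self.children.append(Patch(self.level + 1, corner, self.spacing / 2))
        return self.children

root = Patch(level=0, origin=(0.0, 0.0, 0.0), spacing=1.0)
fine = root.refine()   # patches flagged by the refinement criterion would be split like this

Because every patch has the same shape regardless of its level, and each patch stores direct references to the neighbours that fill its ghost layer, one pass over all patches of a level suffices to update the ghost cells, which is what makes the single-kernel-per-level strategy described above possible.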
However, figure 1 only shows a static snapshot of the mesh. During an actual simulation, the mesh has to adapt to the local need for resolution (or lack thereof). To do this we define a criterion, based on local and global characteristics, that determines for each grid cell whether it needs a finer grid or can have a coarser grid. Then, for each quadrant or octant of a patch we create a finer patch if a single cell requires higher resolution. If all cells in the patch can work with lower resolution, that patch is marked for removal. We provide an example of the adaptive grid in one of our simulations in figure 2. When designing a GPU-based application there are usually two main options to choose from: CUDA or OpenCL. In this case, we chose CUDA because of its wider range of numerical libraries, because it supports more C++ code features, and because it is more often available in computing servers. We run all our simulations on Nvidia Tesla P100 accelerator cards (released in 2016, 12 GB of high-bandwidth memory, theoretical peak double-precision performance of 4.7 TFLOPS, i.e. 4.7 × 10 12 floating-point operations per second). Regarding the programming language, two common choices are to develop directly in C++ or to use PyCUDA. Although PyCUDA can reduce the development time by minimizing the code length, it suffers from an outdated interface and misses features from newer CUDA versions. So, we chose to write directly in C++. Performant GPU-based computations rely on methods called kernels that simultaneously apply an operation to a large number of elementary input data (for example, all cells in the domain of a streamer simulation). Launching each kernel has a significant overhead, so ideally one has to minimize the number of launches. Most of the compute-intensive calculations needed throughout one of our simulations can be implemented independently on each patch (figure 3) and therefore a single kernel suffices. However, for this to be possible the communication between patches needs to be as efficient as possible. To achieve this we keep a data structure where each patch has easy access to the patches it needs to fill its ghost cells. We improve performance further by using a single kernel launch per refinement level of the mesh. The kernel covers all the ghost cell boundaries, including direct copy, interpolation, and boundary conditions. Although this uses branching in the code (usually a problem for GPUs), as long as the boundaries have at least 32 cells the cost is reasonably small, because the kernel operations are divided into small subsets (warps) and we ensure that all operations inside a warp are equivalent. Physical model and configuration We use a fluid model for streamers in dry air at atmospheric pressure as previously described in other streamer studies [1,[26][27][28]. We consider electrons that drift and diffuse at rates that depend on the local electric field. The electron number density n e follows a drift-diffusion continuity equation with source terms,
where µ is the electron mobility, D is a scalar electron diffusion coefficient, S l is the net local ionization rate and S ph is the source of free electrons due to photo-ionization. Positive and negative ions, being much heavier than electrons, are considered immobile. We neglect ion-ion or electron-ion recombination, so positive and negative ions can be combined into a single variable n i , which is governed by its own continuity equation. The net charge resulting from electrons and ions generates an electric field E = −∇ϕ, where the electrostatic potential ϕ satisfies Poisson's equation, in which ϵ 0 is the vacuum permittivity and e is the elementary charge. The local ionization source includes impact ionization and attachment, both dissociative and three-body, and is expressed through an effective net ionization rate ν eff . The second source of ionization is photo-ionization, which is modeled through a non-local term; for this we follow Zhelezniak's model [29], which we approximate by solving a set of Helmholtz equations [30] using the two-term fit described by [31]. In order to facilitate the validation of our code (see appendix) and the comparison with existing streamer codes, for the electron mobility µ and diffusion coefficient D, as well as for the net ionization rate ν eff , we use the functional forms described in [28]. The configuration used in our simulations is sketched in figure 4. We model a planar front composed of a large number of streamers propagating approximately in parallel in the z direction. To make this system feasible for numerical simulations we divide the xy plane into an infinite lattice of square cells with a side length L and consider identical dynamics for each cell. With this simplification we reduce the system to the simulation of a single cell on which we impose periodic boundary conditions on each of its four sides. As we see below, each cell contains from one to five streamers. A similar approach to modeling streamer fronts was undertaken by [32], although there the emphasis was on reproducing 2D (planar) analytical results. Modeling the front as a periodic lattice adds a long-range order to the front which is not present in real streamer coronas. The assumption here is that for long-range interactions the precise locations of companion streamers are not important. However, the interaction between close neighbors must be modeled with some detail, which is why several streamers are included in our computational domain.
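Since the displayed equations of the fluid model described above did not survive into this text, we restate here the standard form of the classical fluid model that the paragraph describes in words; this is our reconstruction, and the sign conventions and exact source terms of the original paper may differ in detail:
\begin{align*}
\frac{\partial n_e}{\partial t} &= \nabla\cdot\bigl(\mu \mathbf{E}\, n_e + D\,\nabla n_e\bigr) + S_l + S_{ph},\\
\frac{\partial n_i}{\partial t} &= S_l + S_{ph},\\
\nabla^2 \phi &= -\,\frac{e\,(n_i - n_e)}{\epsilon_0},\qquad \mathbf{E}=-\nabla\phi,\\
S_l &= \nu_{\mathrm{eff}}\bigl(\lVert\mathbf{E}\rVert\bigr)\, n_e .
\end{align*}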
All our simulations share similar initial conditions. The simulation box has dimensions L = 4.096 cm and H = 8.192 cm (respectively 2 12 and 2 13 × 10 µm). A uniform electric field is established by setting a potential ϕ = 0 on the bottom plate (where the streamer seeds are placed) and a potential of ϕ = 204.8 kV on the top plate, generating a background electric field of E = 2.5 MV m −1 , which we selected as representative of a high electric field but lower than the breakdown field. To initiate the streamers, several seeds with equal densities of electrons and positive ions are placed randomly on the bottom plate (the anode, to initiate positive streamers). Each seed is a super-Gaussian cylinder capped with a super-Gaussian semi-sphere, resulting in the following initial electron density for N initial seeds, where x i is the center of the randomly placed ith seed (always in the z = 0 plane), the maximum density is n max = 10 19 m −3 , the seed radius is r = 0.5 mm, and the seed length is l = 2 mm. With our parameters the probability of two seeds being close enough to produce a single streamer was low and we did not observe such an event. Finally, a background ionization n bg = 10 15 m −3 is added throughout the simulation box. One of the big challenges was to create a refinement criterion specific to streamer simulations. We start by defining 3 levels of refinement that take into account the time t. By adapting the highest resolution we limit the number of patches in late stages of the simulation, which require less resolution due to the larger streamer radius. We also ran simulations without this reduction and noticed a negligible difference in the results despite significantly longer simulation times. The staggered lowering of the resolution for each level prevents the simultaneous collapse of multiple resolution levels. The conditions for each level are as follows, where S 1 ph is the first term in the Helmholtz approximation of the photo-ionization as found in [31]. The specific values listed above were selected as a compromise after testing several simulations. We checked our refinement criterion in a small-scale simulation where we compared with a simulation with a homogeneous high-resolution mesh, seeing no significant difference in the results. The small differences that we observed when we reduced the minimum grid size are discussed below. Results With the code and configuration described above we run simulations containing from one to five initial seeds, corresponding to seed surface densities of approximately 0.06 cm −2 to 0.3 cm −2 . In order to collect meaningful statistics, the initial locations of the seeds were randomly selected 4 times for each of these initial densities, resulting in a total of 20 simulations. Table 1 shows the average time taken per simulation depending on the number of streamers, as well as the number of grid points at the end of the simulation. We observe a sub-linear scaling of the computational resources required to simulate increasing numbers of streamers, which we attribute to the overlap of the fine meshes around the streamers. Figure 5 shows a snapshot from one of the simulations with four streamers, obtained at time t = 30 ns after the start time. The streamers have similar lengths but in this case one of them (to the right) has advanced slightly more. Each streamer carries a positive charge that accumulates mostly around its head.
The effect of this charge on other streamers is to enhance the electric field of those that are ahead and screen the electric field of those behind. Therefore the spread in propagation length of all streamers in one simulation increases over time. Let us now focus on the effect that the presence of neighboring streamers has on the speed of the front. For each configuration we measured the length of the streamer that has traveled the furthest away from the inception electrode. The streamer tip location is defined here as the location of the corresponding local maximum of the electric field. Figure 6 shows the average of this maximal length for each streamer density together with a range of variation estimated as the standard deviation of the four simulations. We find that a denser packing of streamers leads to slower propagation. After 30 ns the simulations with five seeds have propagated about 8 mm less than the single-streamer simulations, which amounts to about 15% lower average speed. That a higher density of streamers leads to stronger screening and thus slows down the streamer propagation was already observed in a simplified model of a streamer corona around a spherical electrode [19]. Note however that, as we discussed above, the electric field in the longest streamer is enhanced by the presence of surrounding streamers. The fact that this leading streamer is also slower when surrounded by others is therefore not immediately obvious. Note that in figure 6 a substantial part of the variation between simulations with the same number of streamers is due to numerical errors in our simulations. By reducing the minimum grid size by a factor of 2 in the simulation with a single streamer, the length decreased by 1.4% whereas the spread between simulations decreased by a factor of 0.14. The speeds in our simulations depend also on other, more physical, parameters such as the background ionization density and the length of the gap between the electrodes. We did not perform a parametric study to determine how far our results generalize to different configurations. For example, some of our simulations with a background density of 10 10 m −3 did not finish, presumably because of streamer branching. Avoiding the branching of streamers is one of the motivations for selecting a relatively high background density. Nevertheless, we hypothesize that the underlying physical mechanisms that we discuss below operate also in different scenarios including, for example, streamer branching.
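For reference, the tip-tracking convention used for figure 6 (the tip is the position of the field maximum along a streamer axis) reduces to a one-line diagnostic. The sketch below uses invented array names and takes the global rather than the local maximum, which is adequate only while the tip field dominates the profile.

import numpy as np

def tip_z(E_axis, z):
    # E_axis: electric-field magnitude sampled along a streamer axis
    # z: the corresponding axial coordinates
    # The streamer tip is identified with the position of the field maximum.
    return z[np.argmax(E_axis)]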
To understand better our observations and to obtain a more detailed view of the dynamics of the streamer front, it is useful to define cross-sectional averages of some quantities of interest.The averaged electron number density and charge density per streamer read ne (z) = 1 NL 2 ˆdx dy n e (x, y, z), (6a) where N is the number of streamers in a cell of area L 2 .Rather than averaging the electric field across the simulation volume, it is more informative to measure the electric field that acts on the electrons inside each streamer.Thus, we define an internal field E int by weighting the average with the electron density: ˆdx dy E z (x, y, z)n e (x, y, z).(7) With these definitions the cross-sectional averaged electrical current can be approximated as with the latest approximation being exact for a constant electron mobility.The conservation of charge in the direction of propagation reads Figure 7 shows the evolution of E int , ne (z) and q for our simulations with 2, 4 and 5 streamers.The following features are worth pointing out: (i) The internal electric field is lower in a denser streamer configuration.With two streamers at t = 30 ns the lowest internal electric field is about 8 × 10 5 V m −1 whereas in simulations with five streamers this quantity is about 5 × 10 5 V m −1 .Because the internal electric field depends only on values of the electric field inside the streamers, where the electron density is high, this difference cannot be attributed to the larger volume occupied by five streamers.(ii) A higher streamer density leads to lower cross-sectionally averaged electron density.(iii) As equations ( 8) and (9) show, the evolution of the charge density roughly depends on the product of the internal field E int and the averaged electron density ne .Both the electron density and internal fields are lower for simulations with more streamers, leading to lower charge densities, as seen in the bottom row of the figure. These points are consistent with previous results of macroscopic models such as [19]. Discussion To understand the dynamics described in the previous section it is useful to look at how the presence of other streamers affects the electrostatic interactions between elements.As sketched in figure 8, there are two limiting cases of this interaction, depending on the distance ∆z between a given slice of a streamer containing on average a charge d Q = qdz and the electrons that are affected by this charge. For small ∆z the interaction is dominated by the effect of charges inside the same streamer.The effect on the electric field is proportional to d Q and thus independent of other streamers.At long range (larger ∆z) neighboring streamers come into play.In this limit the electrostatic interaction may be approximated by that created by an uniformly charged plane with a surface charge density dσ = Nd Q (i.e. it is proportional to the streamer surface density N/L 2 ). 
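The cross-sectional diagnostics defined in equations (6)-(9) above translate directly into array reductions. The following is a minimal sketch; the array names, shapes and the normalisation of the charge density are our assumptions, for fields stored on a uniform grid with the z axis last.

import numpy as np

def front_diagnostics(n_e, rho, E_z, N, L, dx):
    """Cross-sectionally averaged electron density and charge density per streamer,
    and the electron-weighted internal field, each as a function of z."""
    cell_area = dx * dx
    ne_bar = n_e.sum(axis=(0, 1)) * cell_area / (N * L**2)   # eq. (6a)
    q_bar = rho.sum(axis=(0, 1)) * cell_area / (N * L**2)    # averaged charge per streamer
    # eq. (7): average of E_z weighted by the electron density
    weight = np.maximum(n_e.sum(axis=(0, 1)), 1e-300)
    E_int = (E_z * n_e).sum(axis=(0, 1)) / weight
    return ne_bar, q_bar, E_int

With these profiles, the axial current of equation (8) is, up to the chosen normalisation, approximately e times mu times E_int times ne_bar, and charge conservation, equation (9), couples its divergence along z to the time derivative of q_bar.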
With these observations in mind, let us return to the behavior of the fastest-propagating streamer discussed above and plotted in figure 6.The interaction of neighboring streamers has two competing effects on the velocity of this leading streamer: (i) The electric field at the tip increases due to the contribution of the positive charges of streamers that have been left behind.(ii) These same charges have the opposite effect of decreasing the electric field in the interior of the leading streamer.This limits the amount of charge transported to the tip, reducing the field at the streamer tip.For point (i) it is mostly the short-range interaction that is important, whereas point (ii) is more dependent on the long-range interaction and therefore it is much more affected by the presence of other streamers.This would explain the slower propagation of denser configurations. Macroscopic one-dimensional model In order to check our understanding of the physics of dense coronas and as a first step in the construction of more rigurous coarse-grained models, we present here a simplified, toy model that can be compared qualitatively with the results of the rigurous, microscopic model.The full source code of our implementation of this 1D model is publicly available [33]. We describe the evolution of the averaged variables defined above, namely q(z, t) and ne (z, t) together with the location of the streamer front, z tip (t).Although in the microscopic model described above the radius of the streamers changes significantly over time, in the simplified model we set a constant radius R = 1 mm for all the streamers.4.1.1.Electrostatic interaction.As we mentioned above, for the long-range part of the electrostatic interaction the contribution of all streamers to the electric field can be approximated by that of an uniformly charged plane.The long-range electric field can thus be written as As our goal is to provide a qualitative understanding of the problem, we employ a simplified model for the shortrange interaction.We assume that electrons are affected by nearby charges as if the charges are uniformy distributed in disks with radius R. Thus, the integrated charge density NL 2 q concentrates in an area π NR 2 .Besides, we give this interaction a limited range R, resulting in the following equation for the short-range potential: from where the short-range electric field derives as We have thus an electric field E L that accounts for interactions in the long-wavelength limit and another one E S for the short-wavelength limit.When we combine them however, we must exercise caution as E L does not vanish in the short-range.We therefore construct a linear combination E L + cE S where the constant c is selected to obtain the proper short-range behaviour, namely which results in where ρ = π NR 2 /L 2 is the fraction of a cross-section covered by streamers.After these considerations, our equation for the internal electric field reads where E B is an additional, externally imposed electric field.For the solution of the one-dimensional Poisson equations for ϕ L and ϕ S we use homogeneous Dirichlet boundary conditions at both sides of the domain.Simulation of a streamer model with a simplified one-dimensional model.We plot results for N = 2 and N = 5 streamers.These results can be compared with figure 7 but note the slightly different snapshot times. Front velocity. 
We take the front velocity to be a function of the peak electric field resulting from (14).Specifically we use the expression proposed by Naidis [34].This results in 4.1.3.Electron density.The evolution of the electron density ne combines two components: in the streamer interior it is determined by electron ionization and attachment whereas close to the tip it is affected by photoionization and impact ionization ahead (two processes that we do not include with any detail in this simplified model).The evolution of electron density and radius of the streamers is outside the scope of this paper so here we fix the average electron density after the passage of the streamer to approximate the results of the microscopic model.We chose a linearly increasing density n0 = 5 × 10 18 m −4 × z tip (see figure 7).We represent the transition between head and body by a Gaussian profile with an e-folding length of R. The resulting equation is ) ) . (16) Here ν eff is an effective rate of ionization that includes ionization, dissociative and three-body attachment.The prefactor in the first term is such that its time integration for a uniform propagation with speed v yields n0 .The erfc function in the second term attenuates the effect of ionization close to the head, where this effect is already included in the first term.As we mentioned above, our definition of the internal electric field E int leads to a simple expression for the evolution of the average charge density q, namely equation (9).The values of the charge density q and the internal electric field E int derived from the simplified model are reasonably close to those of the microscopic simulations (n e is simply selected to match those results).Also the front speed is similar in both cases (for N = 2 we find v ≈ 1.3 × 10 6 m s −1 in the microscopic model against v ≈ 10 6 m s −1 in the onedimensional model).However we have not explored other parameter regimes so we only claim a qualitative similarity between the two models. From our one-dimensional model we conclude that the dynamics of a dense streamer front can be understood by thinking in terms of a stronger long-range interaction between distant points of the system.We propose that this insight should be applied in future, more accurate models of streamer coronas. Conclusions In this study, we have made progress in understanding the dynamics of streamer plasmas, particularly in the context of densely packed fronts.We have introduced a novel numerical code that harnesses the power of modern GPUs for massive parallelization of arithmetic operations in grid cells, while simultaneously optimizing performance through the implementation of a moving, adaptive mesh.This approach has enabled us to simulate streamer fronts with up to five nearly parallel streamers. Our findings reveal that the presence of neighboring streamers within a front slows down the propagation of even the fastest streamer in the group.This phenomenon is accompanied by lower electric fields within each individual streamer.Our one-dimensional model offers insights into these characteristics, suggesting that the long-range effect of charges in a streamer is seemingly enhanced by the presence of other streamers.While this serves as a heuristic explanation, the true underlying cause stems from the combined effects of charges in multiple streamers. 
The increased screening inside the streamer channels is consistent with previous explorations of streamer interactions.In [32], for the case of a periodic configuration of planar streamers, it was argued that the front approaches a state of complete field screening in its wake.These arguments should be also applicable to our configuration but our streamers did not propagate long enough to reach this state.On the other hand, a significant mutual screening was also noticed in [18], where the outcome was named collective streamer front or, informally, streamer of streamers.An important difference between the simulations of [18] and the present work is that in the former the front naturally developed a curved shape, which lead to stronger field enhancement around the center of the corona. The simultaneous propagation of a multitude of streamers occurs in several contexts and our results point to the relevance of streamer interactions when streamers are closely packed.We have investigated fronts at atmospheric pressure with densities about 600 m −2 (for 1 streamer in each 4 cm × 4 cm cell) to about 3000 m −2 (for five streamers).Let us provide context to these numbers: (i) A sprite consists of on the order of 100 streamers within a radius of about 10 km.This results in a density of about 3 × 10 −7 m −2 which, rescaled to atmospheric pressure, gives about 1400 m −2 .(ii) If fast breakdown [10] consists in a single streamer front with roughly 10 8 streamers [8,9] distributed in an circular area of radius 100 m their streamer density would be about 3000 m −2 .(iii) It is difficult to identify single streamers in photographs of laboratory experiments in meter-wide gaps such as those by [35].To obtain order-of-magnitude estimates we may consider an average distance between branchings at atmospheric pressure of about ℓ = 2 cm [36] (see also [37,38] for similar results after scaling pressure) and that the transversal spread of the streamer corona is not very different to its longitudinal span (which is the case, at least, for point-initiated discharges, see e.g.[39] and simulations in [18]).Then after propagating a distance d the front density would be around exp(d/ℓ)/π d 2 .This gives a front density 5000 m −2 after propagating a distance d = 10 cm. Based on these examples, it is likely that a streamer corona often transforms, through branching, into a front densely populated with streamers.These closely packed streamers interact in a way that should be considered in future models.We believe our work represents a step towards developing coarse-grained models capable of investigating the dynamics of extensive streamer coronas, potentially involving hundreds of millions of streamers.Such investigations are currently beyond the reach of existing computational capabilities, but we hope that our progress lays the foundation for future advancements in this field. Figure A1 . Results of our code when reproducing the setup of the first test case of [28].The upper panel shows the electric field in the central axis of the simulation at intervals of 1 ns.The lower two panels plot the location and of the streamer as a function of time, with the lower plot showing differences relative to a constant propagation speed v = 0.05 cm ns −1 .These plots should be compared with figure 5 in [28]. Appendix. 
Code validation We validated our code by comparing with the streamer simulations described by Bagheri et al [28], namely the first case setup in that paper.This is a simulation where a streamer propagates inside a cylindrical domain of height and radius L r = L z = 1.25 cm in which there is a potential difference of 18.75 kV between the lower and upper boundaries.A positive streamer is initiated by means of a spherical non-neutral, positive-ion Gaussian seed with a peak density N 0 = 5 × 10 18 m −3 and a e-folding length σ = 0.4 mm.In this setup photo-ionization is neglected and the streamer propagates due to a background ionization density of 10 13 m −3 .[28] provides further details. In our case, since our code is purely three-dimensional, we replaced the domain geometry by a square prism with side length 2L r and located the initial seed in the center.Figure A1 shows the results of our simulation.The axial field as well as the location of the streamer head as a function of time closely reproduce the results of [28]. Figure 1 . Figure 1.Two-dimensional example of tree-based SAMR grid used in our simulation code.Notice the use of ghost cells which are used to communicate data between the patches. Figure 2 . Figure 2.Example of adaptive mesh refinement (AMR) in our code.For one of our simulations with the system and configuration described in section 2.2 (in this case four streamer seeds) we plot the electric field in the plane x = 1.955 cm at time t = 30 ns, together with the intersections with the oct-tree cube blocks that form the mesh.Each block consists of 8 × 8 × 8 grid cells.The plot represents the same simulation and time instant as figure5. Figure 3 . Figure3.Scheme of the internal representation of the computational grid.At each refinement level grid is divided into blocks with identical size (left) that are then arranged into a tree structure (right).Given a tree structure, we compute the necessary data structures to store the connections between nodes.These structures are used to communicate the boundary conditions for blocks efficiently in a single kernel launch for each computational level. Figure 4 . Figure 4. Sketch of the geometry used in our simulations.We consider an infinite, periodically arranged system of positive streamers propagating almost in parallel between two electrodes separated a distance H.The system is composed by a square cell of side L that repeats indefinitely in the two directions perpendicular to the propagation; this is modeled by considering a simulation domain with size L × L × H on which we impose periodic boundary conditions in the lateral boundaries (left picture).The sketch represents a configuration of two streamers per cell. Figure 5 . Figure 5. State of a simulation with four streamers (a surface density of 4/L 2 ≈ 0.24 cm −2 ) at time t = 30 ns.The upper left panel shows the maximum of the electron density along lines aligned with the x axis.The upper right panel shows the average magnitude of the electric fields along these same lines.The lower two panels contain cuts of the electron density (left) and eclectic field magnitude (right) at horizontal planes indicated by the dashed lines in the upper panels. Figure 6 . 
Figure6.Length of the longest streamer for each configuration.The inset shows the location z of the streamer that has travelled the furthest away from the anode whereas the main plot shows the difference between this distance and the mean distance travelled in simulations with a single streamer.For each configuration we plot the mean within four samples with the same configuration but different streamer starting positions (thick line) as well as the standard deviation of these four samples (shaded areas).Differences between simulations with the same number of streamers are in part due to numerical errors, which depend on the relative location between the seeds and the grid. Figure 7 . Figure 7.Comparison of simulations with three different streamer surface densities, parametrized by the number of streamers inside a periodic cell (2, 4 and 5 streamers).Each plot shows quantities of interest derived from cross-sections perpendicular to the axis of propagation.See the text for precise definitions of each of these quantities. Figure 8 . Figure 8. Different limits of the electrostatic interaction as seen by a slab of electrons in a streamer (green disks).At short distances (a) only charges in the same streamer interact significantly with the streamer: in this range the presence of neighboring streamers is irrelevant.At longer distances (b) electrons are affected by interactions from all streamers and the presence of neighboring streamers becomes more relevant. Figure 9 . Figure 9.Simulation of a streamer model with a simplified one-dimensional model.We plot results for N = 2 and N = 5 streamers.These results can be compared with figure7but note the slightly different snapshot times. Figure 9 shows simulation results from our simplified model for N = 2 and N = 5 streamers in the same domain size (L = 4.096 cm, H = 8.192 cm) and background electric field E B = 2.5 MV m −1 as our detailed, microscopic simulations.Although crudely, our simplified model captures the essential features of the electrostatic interactions in a streamer front.We reproduce the two most remarkable features of the microscopic simulation: that denser fronts are slower and exhibit stronger screening inside the streamers. Table 1 . Performance of the code for different number of streamers in the simulation box.Second columns is the average simulation time in minutes on a Nvidia P100.Third column is the average number of grid points at the end of the simulation.Number of streamers Simulation time (min.)Gridpoints(10 6 )
8,654.8
2023-09-06T00:00:00.000
[ "Physics", "Engineering" ]
New advances in Instrument Detection and Control

Introduction

Recent developments in computation and network technologies have contributed much to the successful handling of complex problems in biology, physics, engineering, mining, economics, etc. It is worth mentioning that recent technological advances have further enhanced the integration of the real-time cyber and physical worlds, which would lead to more reliable, productive, and efficient industries and businesses. Generally, these complex system problems tend to share a number of interesting properties from the theoretical analysis and design viewpoint. The key features of such systems are that the nonlinear interactions among their components can lead to interesting emergent behaviour and that incomplete measurements affect the overall system performance. The main aim of this special issue is to bring together the latest and most innovative knowledge and developments for handling complex systems in the instrument, detection and control domains. Topics include, but are not limited to: (1) control systems theory; (2) networked control and estimation; and (3) system reliability and safety. The solicited submissions to this special issue are from researchers in engineering and mathematics. After a rigorous peer-review process, 19 papers have been selected that provide solutions, or early promises, to modelling, analysis, measurement, detection, control and estimation problems of real-world complex systems, such as time-delay systems, nonlinear systems, power systems, economic systems, electromechanical systems, the gas-liquid two-phase flow, the gas turbine, and so on.

On control systems theory

Stability analysis and control design have long been important topics in both the dynamical systems and control engineering communities. In recent years, new results have been proposed for time-delay systems, nonlinear systems, finance systems, anti-angiogenic systems, etc.
In the paper entitled 'Stability Analysis for Delayed Neural Networks based on a Generalized Free-weighting Matrix Integral Inequality' by Z. Zhao et al., a new augmented Lyapunov-Krasovskii functional is constructed by using more information about the time delay. A generalized free-weighting matrix integral inequality is chosen to estimate the derivative of the single integral terms more accurately. Meanwhile, the Jensen integral inequality and an improved convex combination are combined to estimate integral terms with an activation function. As a result, a novel stability criterion with less conservatism is established.

In the work entitled 'Estimating the Boundary of the Region of Attraction of Lotka-Volterra System with Time Delays' by J. Yang et al., the local stability problem and estimates of the region of attraction (RA) are considered for the Lotka-Volterra (L-V) competitive system with time delays. Based on quadratic system theory, an appropriate Lyapunov-Krasovskii functional and less conservative integral inequalities, a local stability condition is obtained and the estimate of the RA is discussed. Furthermore, the corresponding optimization problem for the estimate of the RA is proposed.

In the paper entitled 'Asymptotic Dynamics of an Anti-angiogenic System in Tumour Growth' by X. Yu and Q. Zhang, the Neumann initial-boundary problem is studied for the anti-angiogenic system in tumour growth. The known results show that the problem possesses a unique global-in-time bounded classical solution for some sufficiently smooth initial data. For the large-time behaviour of the global solution, by establishing some estimates based on semigroup theory, the authors prove that the solution approaches the homogeneous steady state as time tends to infinity.

In the work entitled 'Further Results on Delay-Dependent Robust H∞ Control for Uncertain Systems with Interval Time-Varying Delays' by H. Liu et al., the robust H∞ control problem is considered for uncertain linear systems with interval time-varying delays. The key features of the proposed method are the employment of a tighter integral inequality and the construction of an appropriate Lyapunov-Krasovskii functional. Using the proposed method, delay-dependent conditions with less conservatism are first derived. Then, the robust H∞ controller design and performance analysis are discussed.

In the work entitled 'H∞ Control for a Hyperchaotic Finance System with External Disturbance based on the Quadratic System Theory' by E. Xu et al., using quadratic system theory, an augmented Lyapunov functional, some integral inequalities and rigorous mathematical derivations, a sufficient condition is first established, in terms of linear matrix inequalities, for a hyperchaotic system under a delayed feedback controller, under which the closed-loop system can achieve some desirable performances including boundedness, H∞ performance and asymptotic stability. Moreover, several convex optimization problems are formulated to obtain the optimal performance indices.

On networked control and estimation

Over the past several decades, networked control and estimation problems have gained significant attention in the control and signal processing communities due to their clear engineering background in the hot-rolled strip, the gas turbine, the gas-liquid two-phase flow, etc. In recent years, many important techniques and algorithms have been proposed for networked control and state estimation.
In the work entitled 'Reduced-order Observer-based Interval Estimation for Discrete-time Linear Systems' by Y. Chen et al., the discrete-time linear system is transformed into a reduced-order system via a special equivalent transformation, which depends on an orthogonal procedure on the output matrix. Then, a robust reduced-order observer is developed such that the states of the original discrete-time system can be indirectly estimated, where the H∞ technique is used to attenuate the effects of disturbances. Based on the estimated states provided by the reduced-order observer, the interval estimation can be obtained by reachability analysis.

In the hot rolling process, the mechanical properties of steel materials are important to steel quality, and bendability is one of the key parameters for evaluating the formability of the strip. In the paper entitled 'Causes Detection of Unqualified Bendability of Hot Rolled Strip via Improved RankBoost with Multiple Feature Ranking Algorithms' by F. He et al., a model to find the causes of unqualified bendability of hot-rolled strip is built based on an improved RankBoost with multiple feature selection algorithms using historical data. Firstly, the related process variables and bendability results are collected. Then, seven feature ranking methods are used to rank the significance of the features individually. Finally, to summarize the results of the seven methods, the total importance of every feature is obtained using the improved RankBoost method, and the most important features are selected as the major causes.

The two-phase flow system widely exists in industrial production processes such as petroleum, the chemical industry, nuclear power and metallurgy, and it has been identified that the two-phase flow system often exhibits complex nonlinear characteristics. In the work entitled 'Feature Extraction and Identification of Gas-liquid Two-phase Flow based on Fractal Theory' by C. Fan et al., the authors characterize the fractal characteristics of the two-phase flow system based on fractal theory. The experimental results show that the proposed method can effectively identify signals of different flow patterns, especially the transitional flow pattern, and reflect the complexity of the gas-liquid two-phase flow.

Existing short-term load forecast methods for power systems can suffer from low accuracy or even fail because multi-stage load changes and weather fluctuations are not considered. In the paper entitled 'Electric Short-term Load Forecast Integrated Method based on Time-segment and Improved MDSC-BP' by R. Wang et al., an integrated forecast method based on multi-resource data is proposed for the electric short-term load forecast, which improves the maximum deviation similarity criterion of the time-segment BP neural network. Finally, a load forecast for a certain area shows that the prediction accuracy for different types of days can reach more than 96%.

In the past couple of decades, the problem of direction of arrival (DOA) estimation for narrowband sources has been studied extensively because of its wide application in many fields such as radar, navigation and wireless communication. In the paper entitled 'A New Model for DOA Estimation and Its Solution by Multi-target Intermittent Particle Swarm Optimization' by L. Cui et al., a new method based on the Vector Error Model (VEM) is proposed for estimating the DOAs, which does not need the source number in advance.
The algorithm of multi-target intermittent particle swarm optimization (MIPSO) was adopted to solve the VEM, and the performance of the VEM-MIPSO method was analysed through simulations for a uniform linear array and an L-shaped array, respectively.

On system reliability and safety

Recently, more and more attention has been paid to the reliability and safety of complex systems (such as the gas turbine, the electromechanical system, the electric vehicle alarm system, etc.). The fault detection and diagnosis of a gas turbine are of great significance for guaranteeing that such complicated dynamic systems work normally and safely. Most existing fault diagnosis methods based on convolutional neural networks have certain limitations in extracting correlations among multi-channel data features. In the work entitled 'Fault Diagnosis of Gas Turbine based on Matrix Capsules with EM Routing' by Y. Zhao et al., a fault diagnosis approach based on matrix capsules with EM routing is proposed for a gas turbine. First of all, three channels of data, representing acceleration, pressure and pulse respectively, are integrated into one image to feed into the network. Secondly, network models based on matrix capsules are trained using an input dataset containing fault images and normal images. Finally, the pre-trained capsule model is used to diagnose the state of the testing data.

In the paper entitled 'A Hierarchical Fuzzy Comprehensive Evaluation Algorithm for Running States of an Electromechanical System' by F. Wang et al., a fuzzy hierarchical comprehensive evaluation algorithm, including fuzzy matrix and comprehensive evaluation matrix calculation, is proposed to accurately evaluate the running state of an electromechanical system. The analytic hierarchy process is included in the algorithm, which calculates the state of each subsystem from top to bottom, and these states are used as the evaluation factors of the upper system. The idea of degradation degrees is introduced to standardize the indicators. The experimental results show that the established state evaluation model can accurately judge the operating state of the system without requiring much data.

To address the problem that most electric vehicles (EVs) sound false alarms due to accidental touches, in the work entitled 'Electric Vehicle Regional Management System based on the BSP Model and Multi-Information Fusion' by Z. Zeng et al., an EV alarm regionalization management control system is designed based on multi-information fusion and the BSP model. The proposed system uses multiple sensors to detect the state of the EV, and realizes multi-sensor information fusion by the Lagrange interpolation method, as sketched below. ZigBee networking technology is used to carry out the regional management of EV alarms, establish a BSP model, realize the synchronous transmission of system status detection signals, and finally complete the alarm function. The experimental results show that the false alarm rate of the proposed alarm system performing multi-information fusion is greatly reduced.
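The paper's fusion pipeline is not spelled out in this summary, but the Lagrange interpolation step it mentions is standard and can be sketched. In the toy example below, two sensors sampled at different instants are interpolated onto a common timestamp before being averaged; the function name, sample values and the averaging rule are hypothetical illustrations rather than the published design.

```python
import numpy as np

def lagrange_interpolate(t_samples, y_samples, t_query):
    """Evaluate the Lagrange polynomial through (t_samples, y_samples)
    at t_query: sum_i y_i * prod_{j != i} (t - t_j) / (t_i - t_j)."""
    total = 0.0
    for i in range(len(t_samples)):
        term = y_samples[i]
        for j in range(len(t_samples)):
            if j != i:
                term *= (t_query - t_samples[j]) / (t_samples[i] - t_samples[j])
        total += term
    return total

# Two sensors with unsynchronized sampling instants (hypothetical data).
t1, y1 = np.array([0.0, 0.1, 0.2, 0.3]), np.array([1.00, 1.08, 1.19, 1.27])
t2, y2 = np.array([0.05, 0.15, 0.25, 0.35]), np.array([0.98, 1.10, 1.18, 1.30])

t = 0.22  # common fusion timestamp
fused = 0.5 * (lagrange_interpolate(t1, y1, t) + lagrange_interpolate(t2, y2, t))
print(f"fused estimate at t = {t}: {fused:.3f}")
```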
On other fields

In recent years, many improved algorithms have been proposed for video fingerprinting, object detection, image dehazing, software vulnerability mining, etc. Such algorithms play a vitally important role in the identification, detection and control of complex systems.

In order to reduce computer memory and accelerate retrieval, video fingerprinting has gradually developed into an important part of video copy detection. In the paper entitled 'Compact Video Fingerprinting via an Improved Capsule Net' by L. Wei et al., an end-to-end fingerprinting method via a capsule net is proposed. In order to capture video features, a capsule net based on a 3D/2D mixed convolution module is designed, which maps raw data directly to a compact real-valued vector. A newly designed adaptive margin triplet loss function is introduced, which can automatically adjust the loss according to the sample distance.

In the work entitled 'Video Fingerprinting based on Quadruplet Convolutional Neural Network' by X. Li et al., the authors propose a compact video fingerprinting method based on a quadruplet convolutional neural network. The algorithm consists of four branch networks with shared weights, and each branch network contains feature extraction and quantization coding. The experimental results on a public dataset show that the algorithm can effectively improve robustness and distinctiveness.

In the paper entitled 'An Improved YOLOv3 Model based on Skipping Connections and Spatial Pyramid Pooling' by X. Zhang et al., an improved YOLOv3 model with skipping connections is proposed for object detection. Firstly, a dataset is created by a web crawler and annotated, and then the dataset is clustered to optimize the anchor parameters. Because the DenseNet and SPPNet structures are introduced, deep features are fused with shallow features and the network accuracy is improved. Finally, a multi-objective loss function combining mean square error loss and cross-entropy loss is used to regress and correct the prediction frame, so the accuracy of network detection is improved.

In the work entitled 'Single Image Dehazing Algorithm based on Pyramid Multi-Scale Transposed Convolutional Network' by K. Wang et al., the authors design a fully end-to-end image dehazing network to directly learn the mapping relationship between hazy images and the corresponding clear images. In this network, cascaded feature extraction blocks extract the diversified feature information of the input images through a multi-channel concatenation structure. In order to reconstruct high-quality dehazed images while relieving colour distortion, a multi-scale transposed convolution block is designed to gradually expand the resolution of the obtained feature maps, and skip connections from the feature extraction module are introduced to supplement the detailed information of the feature map pyramid.

Voxel grids are widely used in point cloud segmentation due to their regularity. However, the memory consumption caused by high resolution restricts the performance of voxel grids. In the work entitled 'An Improved Volumetric Grid Deep Network Model for Point Cloud Segmentation' by X. Zhang et al., an improved voxel grid deep network model is proposed to represent more comprehensive point cloud features at the same resolution. Firstly, the point cloud data are structured within a voxel bounding box to correspond with the three-dimensional convolution kernel, and a fixed number of point coordinates are selected to generate the point feature vector. Then, in order to consider the distribution characteristics, a reliability coefficient is used as an equivalent descriptor of the point cloud distribution density. Finally, a corresponding deep network is constructed to deal with the above features.

Fuzzy testing is the most effective method for vulnerability mining; it can deal with complex programmes better than other vulnerability mining techniques and has strong scalability.
However, in large-scale vulnerability analysis tests, the fuzzy test input sample set faces the challenges of low quality, high repeatability, low availability, etc. In the work entitled 'Research on Reducing Fuzzy Test Sample Set based on Heuristic Genetic Algorithm' by Z. Wang et al., a heuristic genetic algorithm is proposed. Using a 0-1 matrix, the genetic algorithm is improved with practical problems taken into consideration, and the execution paths for the sample set are selected and compressed through an approximation algorithm, thus obtaining a smallest sample set and improving the efficiency of fuzzy testing.
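Reading the summary above, selecting a smallest sample set whose rows of a 0-1 sample/path matrix still cover every execution path is an instance of set cover, for which a greedy approximation is the textbook approach. The sketch below illustrates that reading only; the data, function name and greedy rule are assumptions, not the paper's actual algorithm, which combines this idea with a genetic algorithm.

```python
def reduce_sample_set(coverage):
    """Greedy set-cover approximation.

    coverage: dict mapping sample id -> set of execution-path ids it
    triggers (i.e. the rows of a 0-1 sample/path matrix).
    Returns a small subset of samples covering every reachable path.
    """
    uncovered = set().union(*coverage.values())
    selected = []
    while uncovered:
        # Pick the sample covering the most still-uncovered paths.
        best = max(coverage, key=lambda s: len(coverage[s] & uncovered))
        gained = coverage[best] & uncovered
        if not gained:
            break
        selected.append(best)
        uncovered -= gained
    return selected

# Hypothetical 0-1 coverage data: 5 samples, 6 execution paths.
coverage = {
    "s1": {1, 2, 3},
    "s2": {3, 4},
    "s3": {4, 5, 6},
    "s4": {1, 6},
    "s5": {2},
}
print(reduce_sample_set(coverage))   # ['s1', 's3'] covers all six paths
```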
3,749.2
2021-04-01T00:00:00.000
[ "Computer Science" ]
KHNYN is essential for the zinc finger antiviral protein (ZAP) to restrict HIV-1 containing clustered CpG dinucleotides

CpG dinucleotides are suppressed in most vertebrate RNA viruses, including HIV-1, and introducing CpGs into RNA virus genomes inhibits their replication. The zinc finger antiviral protein (ZAP) binds regions of viral RNA containing CpGs and targets them for degradation. ZAP does not have enzymatic activity and recruits other cellular proteins to inhibit viral replication. We found that KHNYN, a protein with no previously known function, interacts with ZAP. KHNYN overexpression selectively inhibits HIV-1 containing clustered CpG dinucleotides and this requires ZAP and its cofactor TRIM25. KHNYN requires both its KH-like domain and NYN endonuclease domain for antiviral activity. Crucially, depletion of KHNYN eliminated the deleterious effect of CpG dinucleotides on HIV-1 RNA abundance and infectious virus production and also enhanced the production of murine leukemia virus. Overall, we have identified KHNYN as a novel cofactor for ZAP to target CpG-containing retroviral RNA for degradation.

Introduction

Cell-intrinsic antiviral proteins are a major component of the innate immune system. These act at multiple steps in viral replication cycles and some are induced by type I interferons (Schneider et al., 2014). Many viruses have evolved mechanisms to evade inhibition by these proteins. First, viruses can encode proteins that counteract specific antiviral factors. Examples of this mechanism in HIV-1 are the accessory proteins Vif and Vpu, which counteract the APOBEC3 cytosine deaminases and Tetherin, respectively (Malim and Bieniasz, 2012). Second, viral protein or nucleic acid sequences can evolve to prevent recognition by antiviral factors. The abundance of CpG dinucleotides is suppressed in many vertebrate RNA virus genomes, and when CpGs are experimentally introduced into picornaviruses or influenza A virus, replication is inhibited (Burns et al., 2009; Gaunt et al., 2016; Karlin et al., 1994; Tulloch et al., 2014). This shows that CpG suppression in diverse RNA viruses is required for efficient replication. CpG dinucleotides are also suppressed in the HIV-1 genome and multiple studies have shown that they are deleterious for replication (Antzin-Anduetza et al., 2017; Kypr et al., 1989; Shpaer and Mullins, 1990; Takata et al., 2017; Theys et al., 2018; Wasson et al., 2017). Recently, the cellular antiviral protein ZAP was shown to bind regions of HIV-1 RNA with high CpG abundance and target them for degradation, which at least partly explains why this dinucleotide inhibits viral replication (Takata et al., 2017).

ZAP (also known as ZC3HAV1) is a component of the innate immune response that targets viral RNAs in the cytoplasm to prevent viral protein synthesis (Li et al., 2015). ZAP inhibits the replication of a diverse range of viruses including retroviruses, alphaviruses, filoviruses, hepatitis B virus and Japanese encephalitis virus, as well as retroelements (Bick et al., 2003; Chiu et al., 2018; Gao et al., 2002; Goodier et al., 2015; Mao et al., 2013; Moldovan and Moran, 2015; Müller et al., 2007; Takata et al., 2017; Zhu et al., 2011). There are two human ZAP isoforms, ZAP-L and ZAP-S (Kerns et al., 2008). Both isoforms contain an N-terminal RNA binding domain containing four CCCH-type zinc finger motifs, but ZAP-L also contains a catalytically inactive C-terminal poly(ADP-ribose) polymerase (PARP)-like domain (Chen et al., 2012; Guo et al., 2004; Kerns et al., 2008).
Importantly, neither isoform of ZAP has nuclease activity and it likely recruits other cellular proteins to degrade viral RNAs. Identifying and characterizing these cofactors for ZAP is essential to understand how it restricts viral replication. ZAP requires the E3 ubiquitin ligase TRIM25 for its antiviral activity against Sindbis virus and HIV-1 with clustered CpGs (Li et al., 2017; Takata et al., 2017; Zheng et al., 2017). While ZAP has been reported to interact with several components of the 5'-3' and 3'-5' RNA degradation pathways, depletion of these proteins did not substantially increase infectious virus production for HIV-1 containing clustered CpG dinucleotides (Goodier et al., 2015; Guo et al., 2007; Takata et al., 2017; Zhu et al., 2011). This suggests that additional proteins may be required for ZAP to inhibit viral replication. Herein, we identify KHNYN as a cytoplasmic protein that interacts with ZAP and is necessary for CpG dinucleotides to inhibit HIV-1 RNA and protein abundance.

eLife digest

Like many viruses, the genetic information of the human immunodeficiency virus (or HIV for short) is formed of molecules of RNA, which are sequences of building blocks called nucleotides. Once the virus is inside human cells, a protein called ZAP can identify viral RNAs by binding to a precise motif, a combination of two nucleotides called CpG. This allows the cell to destroy the viral RNA, thus preventing the virus from multiplying. However, HIV and other viruses that infect mammals are often able to 'hide' from ZAP because their genetic codes have many fewer CpG nucleotides than would be expected by chance. ZAP by itself does not appear to be able to cut up RNA, so it is thought that it recruits other, as yet unidentified, proteins to destroy the genome of viruses. Here, Ficarelli et al. used genetic techniques to identify a new human protein called KHNYN that interacts with ZAP. First, a new version of the RNA genome of HIV was engineered, which contained higher numbers of CpGs: this CpG-enriched virus could be inhibited by ZAP in human cells. The experiments showed that increasing the amount of KHNYN protein led to lower levels of HIV genomes enriched in CpG. However, increasing the levels of KHNYN protein in mutant cells without ZAP had no effect on how well CpG-enriched HIV multiplied. CpG-enriched HIV and another related virus with many CpG nucleotides were able to multiply more successfully in mutant cells lacking the KHNYN protein than in normal cells. Further experiments also suggested that mutating a region of KHNYN which is likely to cut RNA prevented it from inhibiting HIV enriched with CpGs. Artificially manipulating the CpG nucleotide content of viral sequences could help create viruses useful for human health. For instance, weakened viruses could be designed for use in vaccines. Some human tumors have decreased levels of ZAP, and it could therefore be possible to build viruses that healthy cells can destroy, but which could multiply in and kill cancer cells. However, before these approaches can be developed, exactly how ZAP and KHNYN degrade strands of viral RNA needs to be characterized.

KHNYN interacts with ZAP and selectively inhibits HIV-1 containing clustered CpG dinucleotides in a ZAP- and TRIM25-dependent manner

To identify candidate interaction partners for ZAP, a yeast two-hybrid screen was performed for full-length ZAP-S and ZAP-L using prey fragments from a mixed Pam3CSK4-induced and IFNβ-induced human macrophage cDNA library. Candidate interacting proteins were assigned a Predicted Biological Score (PBS) of A to F (Formstecher et al., 2005): A = very high confidence in the interaction, B = high confidence in the interaction and C = good confidence in the interaction. Scores of D to F are low confidence interactions, non-specific interactions or proven technical artifacts.
Some human tumors have decreased levels of ZAP, and it could therefore be possible to build viruses that healthy cells can destroy, but which could multiply in and kill cancer cells. However, before these approaches can be developed, exactly how ZAP and KHNYN degrade strands of viral RNA needs to be characterized. are low confidence interactions, non-specific interactions or proven technical artifacts. For ZAP-S, 11 clones were obtained from 60.4 million tested interactions. 10 of these contained a prey fragment encoding KHNYN (Figure 1) and had a PBS = A. One clone had an insert encoding MARK3 but this was in the antisense orientation and therefore did not receive a score. For ZAP-L, two positive clones were analyzed from 104 million tested interactions. Both of these had an insert encoding KHNYN and had a PBS = C. KHNYN has two isoforms (KHNYN-1 and KHNYN-2) that contain a N-terminal KH-like domain and a C-terminal NYN endoribonuclease domain (Figure 1) (Anantharaman and Aravind, 2006). The selected interaction domain, which is the amino acid sequence shared by all prey fragments matching KHNYN, comprised amino acids 572-719 of KHNYN-2 for the clones identified in both screens. A yeast two-hybrid screen was then performed using the same library with full length KHNYN-2 as the bait. Nine clones were isolated that encode ZAP and these had a PBS = A. The selected interaction domain was amino acids 4-352, which is present in both isoforms (Figure 1). Supporting the reproducibility of this interaction, KHNYN has also been identified as a ZAP-interacting factor in large-scale affinity purification-mass spectrometry and in vivo proximity-dependent biotinylation (BioID) screens (Huttlin et al., 2017;Youn et al., 2018). We first confirmed the interaction between ZAP and KHNYN by co-immunoprecipitation and found both KHNYN isoforms interacted with both isoforms of ZAP (Figure 2A and B). This interaction was RNase insensitive ( Figure 2C). Since ZAP mediates degradation of HIV-1 RNAs with clustered CpG dinucleotides in the cytoplasm (Takata et al., 2017), its cofactors are likely to be localized in this compartment. Therefore, we analyzed the subcellular localization of KHNYN and observed that it localizes to the cytoplasm similar to ZAP ( Figure 2D). Its localization was not affected when ZAP was knocked out using CRISPR-Cas9-mediated genome editing. The mechanisms that allow a virus to escape the innate immune response often have to be inactivated to study the effect of antiviral proteins. For example, HIV-1 Vpu or Vif have to be mutated to allow Tetherin or APOBEC3 antiviral activity to be analyzed (Malim and Bieniasz, 2012). Since CpG dinucleotides are suppressed in HIV-1, endogenous ZAP does not target the wild-type virus (Takata et al., 2017). However, a ZAP-sensitive HIV-1 can be created by introducing CpGs through synonymous mutations into the env open-reading frame in the viral genome. This makes HIV-1 an excellent system to study the mechanism of action of this antiviral protein because isogenic viruses can be analyzed that differ only in their CpG abundance and therefore ZAP-sensitivity (Takata et al., 2017). To determine if KHNYN overexpression inhibited wild-type HIV-1 or HIV-1 with 36 CpG dinucleotides introduced into env nucleotides 86-561 (HIV-1 EnvCpG86-561 ) (Figure 2-figure supplement 1), each isoform was overexpressed in the context of a single cycle replication assay. 
As expected, transfection of the HIV-1 EnvCpG86-561 provirus into HeLa cells yielded substantially less infectious virus than wild-type HIV-1, which was accounted for by reduced expression of the Gag and Env proteins (Figure 2E and F). While KHNYN-1 or KHNYN-2 overexpression decreased wild-type HIV-1 infectivity by ~5-fold, they decreased HIV-1 EnvCpG86-561 infectivity by ~400-fold (Figure 2E). The inhibition of infectivity by KHNYN-1 or KHNYN-2 correlated with decreases in Gag expression, Env expression, and virion production (Figure 2F). Overall, KHNYN appeared to selectively inhibit HIV-1 EnvCpG86-561 infectious virus production.

Figure 1. KHNYN is a ZAP-interacting factor identified by yeast two-hybrid screening. A yeast two-hybrid screen for ZAP-S and ZAP-L interacting factors identified a region in KHNYN-1 and KHNYN-2. The selected interaction domain (SID) is the amino acid sequence shared by all prey fragments and is shown in magenta. A reciprocal yeast two-hybrid screen using KHNYN-2 as the bait identified a region in ZAP-S and ZAP-L.

We then determined whether ZAP is necessary for KHNYN to inhibit HIV-1 with clustered CpG dinucleotides. Control or ZAP knockout cells (Figure 3A) were co-transfected with pHIV-1 or pHIV-1 EnvCpG86-561 and increasing amounts of pKHNYN-1. Wild-type HIV-1 infectious virus production was not affected by ZAP depletion and HIV-1 EnvCpG86-561 infectivity was restored in ZAP knockout cells (Figure 3B, 0 ng of KHNYN-1), confirming that ZAP is necessary to inhibit HIV-1 with CpGs introduced in env (Takata et al., 2017). At low levels of KHNYN-1 overexpression (such as 62.5 ng), there was no substantial decrease in infectivity for wild-type HIV-1, while HIV-1 EnvCpG86-561 infectivity was inhibited in a ZAP-dependent manner (Figures 3B and 4A). The decrease in infectivity for HIV-1 EnvCpG86-561 in control cells transfected with pKHNYN-1 correlated with decreases in Gag expression, Env expression and virion production (Figure 3C).

Next, we analyzed how ZAP and KHNYN regulate HIV-1 genomic RNA abundance in cell lysates and media. As expected (Takata et al., 2017), HIV-1 EnvCpG86-561 genomic RNA abundance was decreased in control cells but was similar to wild-type HIV-1 in ZAP knockout cells (Figure 4B-C, compare GFP samples). In control cells, 62.5 ng of KHNYN-1 or KHNYN-2 inhibited HIV-1 EnvCpG86-561 genomic RNA abundance compared to the GFP control (Figure 4B-C). Importantly, KHNYN-1 and KHNYN-2 did not affect wild-type HIV-1 genomic RNA levels and did not substantially inhibit HIV-1 EnvCpG86-561 genomic RNA abundance in ZAP knockout cells. This demonstrates that KHNYN targets HIV-1 RNA containing clustered CpG dinucleotides in a ZAP-dependent manner.

TRIM25 is required for ZAP's antiviral activity, although the mechanism by which it regulates ZAP is unclear (Li et al., 2017; Zheng et al., 2017). To determine if TRIM25 is necessary for the antiviral activity of KHNYN, 62.5 ng of pKHNYN-1 or pKHNYN-2 was co-transfected with pHIV-1 or pHIV-1 EnvCpG86-561 in control and TRIM25 knockout cells. Both isoforms of KHNYN inhibited HIV-1 EnvCpG86-561 much less potently in TRIM25 knockout cells than in control cells and had no effect on wild-type HIV-1 in either cell line (Figure 5A-B). One possible reason that TRIM25 is necessary for KHNYN antiviral activity could be that it regulates the interaction between ZAP and KHNYN. We pulled down KHNYN-FLAG and western blotted for ZAP in control and TRIM25 knockout cells (Figure 5C).
Both isoforms of KHNYN pulled down ZAP in both cell lines, indicating that TRIM25 is not required for the interaction between these proteins. Interestingly, KHNYN also pulled down TRIM25 in control cells (Figure 5C). Therefore, we analyzed whether KHNYN interacted with TRIM25 in control and ZAP knockout cells and observed that both isoforms of KHNYN-FLAG immunoprecipitated TRIM25 in the presence and absence of ZAP (Figure 5D). In sum, KHNYN requires TRIM25 to inhibit HIV-1 containing clustered CpG dinucleotides, but TRIM25 is not necessary for the interaction between ZAP and KHNYN. Furthermore, ZAP, KHNYN and TRIM25 appear to be in a complex together.

Figure 2 (continued). Post-nuclear supernatants and immunoprecipitation samples were analyzed by immunoblotting for HSP90, KHNYN-FLAG and ZAP. * indicates a non-specific band. (C) Lysates of HEK293T cells transfected with pZAP-L and either pGFP-FLAG, pKHNYN-1-FLAG or pKHNYN-2-FLAG were treated with RNase and then immunoprecipitated with an anti-FLAG antibody. Post-nuclear supernatants and immunoprecipitation samples were analyzed by immunoblotting for HSP90, KHNYN-FLAG and ZAP. * indicates a non-specific band. (D) Panels show representative fields for the localization of KHNYN-1-FLAG or KHNYN-2-FLAG and endogenous ZAP in either 293T Control CRISPR cells expressing a guide RNA targeting LacZ or 293T ZAP guide 1 (ZAP-G1) CRISPR cells. Cells were stained with an anti-FLAG antibody (red), anti-ZAP antibody (green) and DAPI (blue). The scale bar represents 10 μm. (E-F) HeLa cells were transfected with 500 ng pHIV-1 or pHIV-1 EnvCpG86-561 and 500 ng of pGFP-FLAG, pKHNYN-1-FLAG or pKHNYN-2-FLAG. See also Figure 2-figure supplement 1. Culture supernatants were used to infect TZM-bl reporter cells to measure infectivity (E). The bar charts show the average values of three independent experiments normalized to the value obtained for HeLa cells co-transfected with pHIV-1 and pGFP-FLAG. Data are represented as mean ± SD. *p<0.05 as determined by a two-tailed unpaired t-test. p-values for GFP versus KHNYN-1 and KHNYN-2 for wild-type HIV-1 are 2.76 × 10⁻⁹ and 2.20 × 10⁻⁶, respectively. p-values for GFP versus KHNYN-1 and KHNYN-2 for HIV-1 EnvCpG86-561 are 1.50 × 10⁻³ and 1.51 × 10⁻³, respectively. Gag expression in the media as well as Gag, Hsp90, Env, Actin, KHNYN-FLAG and GFP-FLAG expression in the cell lysates was detected using quantitative immunoblotting (F). DOI: https://doi.org/10.7554/eLife.46767.004

The KH-like and NYN domains are necessary for KHNYN antiviral activity

As its name implies, KHNYN contains a KH-like domain and a NYN domain (Figure 6A). The KH-like domain differs from canonical KH domains due to a potential small metal chelating module containing two cysteines and a histidine inserted into the central region of the domain (Anantharaman and Aravind, 2006). Since this has diverged substantially from a standard KH domain, it has also been called a CGIN1 domain and is only known to be present in two other proteins (Marco and Marín, 2009). While most KH domains bind nucleic acids (Nicastro et al., 2015), the insertion in the KH-like domain in KHNYN may disrupt RNA binding and indicate that it has a different function. To analyze the functional importance of the KH-like domain, we deleted it and found that KHNYN-1ΔKH and KHNYN-2ΔKH had reduced antiviral activity compared to the wild-type protein (Figure 6B-C).
These mutant proteins localized to the cytoplasm and formed foci that were not present for wild-type KHNYN-1 or KHNYN-2 (Figure 6-figure supplement 1). NYN domains have endonuclease activity and belong to the PIN nuclease domain superfamily. There are at least eight human proteins with a potentially active NYN domain and these domains have been structurally characterized in several proteins including ZC3H12A and MARF1 (Matelska et al., 2017; Matsushita et al., 2009; Nishimura et al., 2018; Xu et al., 2012; Yao et al., 2018). These domains contain a negatively charged active site with four aspartic acid residues coordinating a magnesium ion, which activates a water molecule for nucleophilic attack of the phosphodiester group on the target RNA. Mutation of these acidic residues inhibits nuclease activity by disrupting the bonds that directly or indirectly interact with the magnesium ion. ZC3H12A (also known as MCPIP1 and Regnase) is an RNA binding protein that, similar to ZAP, contains a CCCH zinc finger domain and degrades cellular and viral RNAs (Takeuchi, 2018). The NYN domain in ZC3H12A has 56% identity to the NYN domain in KHNYN (Figure 6-figure supplement 2A). ZC3H12A containing a D141N mutation in the NYN domain had decreased endonuclease activity and did not degrade RNA containing the IL-6 3' UTR (Matsushita et al., 2009). MARF1 is required for meiosis and retrotransposon silencing in oocytes and a D426A/D427A mutation inhibited its endoribonuclease activity (Nishimura et al., 2018; Su et al., 2012). We made the equivalent mutations in KHNYN (Figure 6A).

KHNYN is necessary for CpG dinucleotides to inhibit HIV-1 RNA and protein expression

To determine if KHNYN is required for CpG dinucleotides to inhibit infectious HIV-1 production, we depleted it using CRISPR-Cas9-mediated genome editing with single-guide RNAs (sgRNAs) targeting two independent sequences in KHNYN. We also analyzed the effect of KHNYN depletion on murine leukemia virus (MLV). While most retroviruses are suppressed in CpG abundance, the degree of this suppression varies between the different genera (Berkhout et al., 2002). HIV-1 NL4-3 is highly suppressed (9 CpGs/kb; 0.2 observed/expected), which is conserved in HIV-1 (Berkhout et al., 2002; Kypr et al., 1989; Shpaer and Mullins, 1990). However, the CpG abundance in MLV is much less suppressed (35 CpGs/kb; 0.5 observed/expected) and ZAP was initially identified as an antiviral protein based on its ability to bind MLV RNA and target it for degradation (Gao et al., 2002; Guo et al., 2004; Guo et al., 2007). To determine if KHNYN inhibits MLV, control, ZAP and KHNYN CRISPR cells were co-transfected with pMLV, p2.87 Vpu (which encodes a highly active HIV-1 Vpu to counteract endogenous Tetherin expression in these cells [Neil et al., 2008; Pickering et al., 2014]) and pGFP. MLV Gag expression and virion production were measured by immunoblotting. Since ZAP is a type I interferon-stimulated gene (Shaw et al., 2017), MLV Gag expression and virion production were also analyzed after type I interferon treatment. There was a small but reproducible increase in MLV Gag expression and virion production in the ZAP and KHNYN CRISPR cells in the absence of type I interferon (Figure 7D). However, after type I interferon treatment, MLV virion production was decreased to almost undetectable levels in the control CRISPR cells but was substantially higher in the ZAP and KHNYN CRISPR cells (Figure 7E).
The Sindbis virus genome is not substantially depleted in CpG dinucleotides (58 CpGs/kb; 0.9 observed/expected) and is restricted by ZAP (Bick et al., 2003). However, unlike retroviruses, the predominant ZAP antiviral activity for alphaviruses is to inhibit viral RNA translation, although there may be an additional effect on RNA stability (Bick et al., 2003; Kozaki et al., 2015). As expected (Bick et al., 2003; Kozaki et al., 2015; Li et al., 2017; Zheng et al., 2017), depletion of TRIM25 and ZAP substantially increased Sindbis virus replication (Figure 7-figure supplement 3). In contrast, there was no substantial increase in Sindbis virus replication in the KHNYN CRISPR cells. Thus, KHNYN appears to be required for the restriction of retroviral genomes, but not of all ZAP-sensitive RNA viruses.

We then analyzed HIV-1 genomic RNA abundance in the KHNYN CRISPR cells. Similar to HIV-1 protein expression and infectivity, HIV-1 EnvCpG86-561 genomic RNA abundance in the cell lysate and media was similar to wild-type HIV-1 (Figure 8A and C), indicating that the CpG dinucleotides no longer inhibited RNA abundance. As expected, nef mRNA abundance was not affected by the introduction of CpG dinucleotides in env or by KHNYN depletion since it is a fully spliced mRNA that does not contain the introduced CpGs (Figure 8B). The wild-type HIV-1 genomic RNA abundance was not altered in the KHNYN CRISPR cells compared to the control cells, further showing the specific effect of KHNYN on viral RNA containing CpG dinucleotides. To determine the specificity of the KHNYN knockdown, we titrated CRISPR-resistant pKHNYN-1 or pKHNYN-2 into the KHNYN CRISPR cells. Even very low levels of KHNYN-1 or KHNYN-2 restored selective inhibition of HIV-1 EnvCpG86-561 in these cells (Figure 9A-B), and KHNYN-1 was consistently slightly more active than KHNYN-2. This shows that both isoforms are capable of inhibiting infectious virus production of HIV-1 containing clustered CpG dinucleotides.

Figure 4 (continued). The data show the average value of five independent experiments normalized to the value obtained for HeLa Control CRISPR cells co-transfected with pHIV-1 and pGFP-GFP. Data are represented as mean ± SD. *p<0.05 as determined by a two-tailed unpaired t-test. p-values for GFP versus KHNYN-1 and KHNYN-2 for HIV-1 EnvCpG86-561 in Control cells are 6.78 × 10⁻³ and 7.20 × 10⁻³, respectively. p-values for GFP versus KHNYN-1 and KHNYN-2 for HIV-1 EnvCpG86-561 in ZAP-G1 cells are 3.22 × 10⁻¹ and 5.33 × 10⁻¹, respectively. RNA was extracted from cell lysates (B) and media (C) and genomic RNA (gRNA) abundance was quantified by qRT-PCR. The bar charts show the average value of five independent experiments normalized to the value obtained for HeLa Control CRISPR cells co-transfected with pHIV-1 and pGFP-GFP. Data are represented as mean ± SD. *p<0.05 as determined by a two-tailed unpaired t-test. For HIV-1 EnvCpG86-561 genomic RNA in Control cell lysates, the GFP versus KHNYN-1 and KHNYN-2 p-values are 2.14 × 10⁻² and 2.30 × 10⁻², respectively. For HIV-1 EnvCpG86-561 genomic RNA in ZAP-G1 cell lysates, the GFP versus KHNYN-1 and KHNYN-2 p-values are 1.01 × 10⁻¹ and 4.33 × 10⁻², respectively. For HIV-1 EnvCpG86-561 genomic RNA in Control cell media, p-values for GFP versus KHNYN-1 and KHNYN-2 are 8.97 × 10⁻⁴ and 9.38 × 10⁻⁴, respectively. For HIV-1 EnvCpG86-561 genomic RNA in ZAP-G1 cell media, p-values for GFP versus KHNYN-1 and KHNYN-2 are 6.09 × 10⁻¹ and 1.87 × 10⁻¹, respectively.
We also analyzed whether KHNYN with the KH-like domain deleted or with the putative catalytic mutations in the NYN domain could inhibit HIV-1 EnvCpG86-561 infectious virus production in the CRISPR cells. Expression of 31.25 ng of KHNYN-1 in the KHNYN CRISPR cells inhibited HIV-1 EnvCpG86-561 (Figure 9C-D) and all of the mutations substantially reduced KHNYN antiviral activity. In sum, endogenous KHNYN is required for CpG dinucleotides to inhibit HIV-1 infectious virus production.

Discussion

Several members of the CCCH zinc finger domain protein family target viral and/or cellular mRNAs for degradation (Fu and Blackshear, 2017). For example, ZC3H12A degrades pro-inflammatory cytokine mRNAs and also inhibits the replication of several viruses, including HIV-1 and hepatitis C virus (Lin et al., 2013; Lin et al., 2014; Liu et al., 2013; Matsushita et al., 2009). It contains a CCCH zinc finger domain as well as a NYN endonuclease domain, which allows it to degrade specific RNAs (Matsushita et al., 2009; Xu et al., 2012). ZAP has four CCCH zinc finger domains and specifically interacts with CpG dinucleotides in RNA (Gao et al., 2002; Guo et al., 2004; Takata et al., 2017). However, it does not contain nuclease activity. While ZAP has been reported to directly or indirectly interact with components of the 5'-3' and 3'-5' degradation pathways including DCP1-DCP2, XRN1, PARN and the exosome, knockdown of several proteins in these pathways did not substantially rescue infectious virus production of HIV-1 containing clustered CpG dinucleotides (Goodier et al., 2015; Guo et al., 2007; Takata et al., 2017; Zhu et al., 2011). Therefore, we hypothesized that ZAP may interact with additional unidentified proteins that regulate viral RNA degradation.

Herein, we have identified that KHNYN is an essential ZAP cofactor that inhibits HIV-1 gene expression and infectious virus production when the viral RNA contains clustered CpG dinucleotides. KHNYN overexpression inhibits genomic RNA abundance, Gag expression, Env expression and infectious virus production for HIV-1 containing clustered CpG dinucleotides. This activity requires ZAP and TRIM25. Furthermore, depletion of KHNYN using CRISPR-Cas9 specifically increased genomic RNA abundance, Gag expression, Env expression and infectious virus production for HIV-1 containing clustered CpG dinucleotides. This indicates that KHNYN is essential for CpG dinucleotides to inhibit infectious virus production. Similarly, KHNYN depletion increased MLV Gag expression and virion production. However, Sindbis virus replication was not substantially increased in the KHNYN knockout cells. The difference between the requirement for KHNYN to inhibit retroviruses versus Sindbis virus may be because the antiviral effect for retroviruses is predominantly at the level of RNA stability while for alphaviruses it is predominantly at the level of translation (Bick et al., 2003; Gao et al., 2002; Guo et al., 2004; Guo et al., 2007; MacDonald et al., 2007).

Figure 5 (continued). Data are represented as mean ± SD. *p<0.05 as determined by a two-tailed unpaired t-test. p-values for GFP versus KHNYN-1 and KHNYN-2 for HIV-1 EnvCpG86-561 in Control cells are 8.95 × 10⁻⁴ and 5.42 × 10⁻⁴, respectively. p-values for GFP versus KHNYN-1 and KHNYN-2 for HIV-1 EnvCpG86-561 in TRIM25-G1 CRISPR cells are 1.78 × 10⁻³ and 1.01 × 10⁻⁴, respectively. Gag expression in the media as well as Gag, Hsp90, Env, Actin, KHNYN-FLAG and GFP-FLAG expression in the cell lysates was detected using quantitative immunoblotting (B).
(C) Lysates of Control and TRIM25 CRISPR HEK293T cells transfected with pGFP-FLAG, pKHNYN-1-FLAG or pKHNYN-2-FLAG were immunoprecipitated with an anti-FLAG antibody. Post-nuclear supernatants and immunoprecipitation samples were analyzed by immunoblotting for HSP90, KHNYN-FLAG, TRIM25 and ZAP. The blots are representative of two independent experiments. (D) Lysates of Control and ZAP CRISPR HEK293T cells transfected with pGFP-FLAG, pKHNYN-1-FLAG or pKHNYN-2-FLAG were immunoprecipitated with an anti-FLAG antibody. Post-nuclear supernatants and immunoprecipitation samples were analyzed by immunoblotting for HSP90, KHNYN-FLAG, TRIM25 and ZAP. * indicates a non-specific band. The blots are representative of two independent experiments. DOI: https://doi.org/10.7554/eLife.46767.008

A mechanistic explanation for why the major antiviral effect of ZAP appears to be promoting RNA degradation for some viruses and inhibiting translation for other viruses remains unclear, although ZAP has been reported to inhibit translation initiation by interfering with the interaction between eIF4A and eIF4G (Zhu et al., 2012). Therefore, KHNYN may not be required for ZAP to inhibit translation.

We hypothesize that a complex containing ZAP and KHNYN binds HIV-1 CpG-containing RNA. ZAP and KHNYN could directly interact to form a heterodimer or there could be other factors mediating this interaction. The interaction between ZAP and KHNYN has been detected using several different assays including yeast two-hybrid, co-immunoprecipitation, affinity purification-mass spectrometry (Huttlin et al., 2017) and BioID. If there is an unknown factor mediating this interaction, it would have to be present in the yeast two-hybrid assay. It remains unclear how TRIM25 regulates ZAP, but it is not required for ZAP and KHNYN to interact. Interestingly, TRIM25 co-immunoprecipitates with KHNYN and the ZAP antiviral complex may simultaneously consist of all three proteins. ZAP and TRIM25 are interferon-stimulated genes while KHNYN is not induced by interferon in human cells (Shaw et al., 2017). Whether KHNYN is regulated by type I interferons or viral infection in a different way, such as by post-translational modification, is not known.

The zinc finger RNA binding domains in ZAP could target KHNYN to CpG regions in viral RNA. This would allow the endonuclease domain in KHNYN to cleave this RNA, thereby inhibiting viral RNA abundance. Conceptually, the ZAP-KHNYN complex could function similarly to ZC3H12A, but with the RNA binding and endonuclease domains divided between the two proteins. The NYN domain in KHNYN could cleave HIV-1 RNA containing CpG dinucleotides similar to how ZC3H12A cleaves a specific site in the 3' UTR of the IL-6 mRNA (Matsushita et al., 2009). While we do not yet have evidence that the NYN domain in KHNYN is an active endonuclease domain, it is highly conserved with the active NYN domain in ZC3H12A and is required for KHNYN antiviral activity. Strikingly, mutation of two conserved aspartic acid residues in the NYN domain predicted to coordinate a magnesium ion necessary for nucleophilic attack of the target RNA eliminated KHNYN antiviral activity. However, biochemical and structural studies will be necessary to determine the specific nature of the interaction between ZAP, KHNYN, TRIM25 and RNA and how these interactions promote viral RNA degradation.
Another important area of future research will be to determine how KHNYN and other cellular proteins that contain a NYN endonuclease domain inhibit the replication of different viruses in different cell types with and without interferon treatment. The interferon-stimulated gene N4BP1, which is a KHNYN paralog (Anantharaman and Aravind, 2006), was recently identified to genetically interact with ZAP in a CRISPR-based screen to identify interferon-induced antiviral proteins targeting HIV-1 (OhAinle et al., 2018). In the monocytic THP-1 cell line, depletion of N4BP1 led to a small increase in wild-type HIV-1 replication. However, N4BP1 depletion did not affect replication of the alphavirus Semliki Forest virus, indicating that it may have a virus-specific effect. While ZAP inhibits a range of viruses in different cell types, it remains unknown whether its cofactor requirements are cell type dependent. In this study, we have analyzed the antiviral activity of ZAP and KHNYN on HIV-1 and MLV in HeLa cells, but the role of NYN domain-containing proteins in targeting viral RNAs for degradation may be an important component of the antiviral innate immune response in a variety of cell types.

It will also be interesting to determine if proteins containing an endonuclease domain other than KHNYN interact with CCCH zinc finger proteins to mediate antiviral activity. There are 57 human CCCH zinc finger proteins (Fu and Blackshear, 2017). At least 15 of these proteins are known to promote RNA decay and, including ZAP, six human CCCH zinc finger proteins are antiviral (Fu and Blackshear, 2017). Identifying the full complement of CCCH zinc finger proteins that inhibit viral replication and determining whether they require proteins containing endonuclease domains such as KHNYN or N4BP1 for this activity will increase our understanding of antiviral responses targeting viral RNA.

Materials and methods

Cell lines

HeLa and HEK293T cells were obtained from the ATCC and were maintained in high-glucose DMEM supplemented with GlutaMAX, 10% fetal bovine serum, and either 20 μg/mL gentamicin or 100 U/ml penicillin and 100 μg/ml streptomycin, and incubated with 5% CO2 at 37°C. BHK-21 cells were obtained from the ATCC and were maintained in GMEM supplemented with 10% fetal bovine serum, 10% tryptose phosphate broth, and penicillin/streptomycin. Their identity has not been authenticated and they are routinely tested for mycoplasma contamination, with all tests negative.

CRISPR guides targeting the firefly luciferase gene, the lacZ gene, and the human TRIM25, ZAP (also known as ZC3HAV1) and KHNYN genes were cloned into the BsmBI restriction enzyme sites in the lentiviral vector genome plasmid lentiCRISPRv2 (Sanjana et al., 2014). (Neil et al., 2008) and 4 μg of pCMV-VSV-G (Neil et al., 2008). Lentiviral vectors encoding guide RNAs targeting ZAP or TRIM25 were produced by transfecting HEK293T cells seeded in a six-well plate using 10 μl PEI with 0.5 μg pVSV-G (Fouchier et al., 1997), 1.0 μg pCMVΔR8.91 (Zufferey et al., 1997), and 1.0 μg lentiCRISPRv2-Guide. Virus-containing supernatant was harvested 48 hr after transfection, rendered cell-free via filtration through 0.45 μm filters (Millipore) and used to transduce HeLa or HEK293T cells, followed by selection in puromycin.

Yeast two-hybrid screen

The yeast two-hybrid screen was performed by Hybrigenics Services, S.A.S., Paris, France (http://www.hybrigenics-services.com). Full-length ZAP-S, ZAP-L and KHNYN-2 were PCR-amplified and cloned into pB27 as C-terminal fusions to LexA.
These constructs were used as baits to screen a random-primed induced-macrophage cDNA library constructed into pP6. pB27 and pP6 derive from the original pBTM116 (Vojtek and Hollenberg, 1995) and pGADGH (Bartel and Fields, 1995) plasmids, respectively. Clones were screened using a mating approach with the YHGX13 (Y187 ade2-101::loxP-kanMX-loxP, matα) and L40ΔGal4 (mata) yeast strains as previously described (Fromont-Racine et al., 1997). His+ colonies were selected on a medium lacking tryptophan, leucine and histidine. Because KHNYN-2 had some autoactivating activity, the selection medium was supplemented with 10 mM 3-aminotriazole. The prey fragments of the positive clones were amplified by PCR and sequenced at their 5' and 3' junctions. The resulting sequences were used to identify the corresponding interacting proteins in the GenBank database (NCBI) using a fully automated procedure. A confidence score (PBS, for Predicted Biological Score) was attributed to each interaction as previously described (Formstecher et al., 2005). The PBS relies on two different levels of analysis. First, a local score takes into account the redundancy and independency of prey fragments, as well as the distribution of reading frames and stop codons in overlapping fragments. Second, a global score takes into account the interactions found in all the screens performed at Hybrigenics using the same library. This global score represents the probability of an interaction being non-specific. The scores were divided into six categories: A (highest confidence) to D (lowest confidence), plus category E, which demarcates interactions involving highly connected prey domains previously found several times in screens performed on libraries derived from the same organism, and category F, which indicates highly connected domains that have been confirmed as false positives. The PBS scores have been shown to positively correlate with the biological significance of interactions (Rain et al., 2001; Wojcik et al., 2002).

Figure 9 (continued). The bar chart shows the average value of three independent experiments normalized to the value obtained for HeLa Control CRISPR cells co-transfected with pHIV-1 and pGFP. Data are represented as mean ± SD. Gag expression in the media as well as Gag, Hsp90, Env, Actin, KHNYN-FLAG and GFP-FLAG expression in the cell lysates was detected using quantitative immunoblotting (B). (C-D) HeLa KHNYN-G1 CRISPR clone B cells were transfected with 500 ng pHIV-1 or pHIV-1 EnvCpG86-561, 468.75 ng of pGFP-FLAG and 31.25 ng of pKHNYN-1-CRG1-FLAG expressing either wild-type or mutant proteins. Culture supernatants from the cells were used to infect TZM-bl reporter cells (C). Each point shows the average value of seven independent experiments normalized to the value obtained for HeLa Control CRISPR cells co-transfected with pHIV-1 and pGFP. Data are represented as mean ± SD. *p<0.05 as determined by a two-tailed unpaired t-test. p-values for HIV-1 EnvCpG86-561 with KHNYN-1-CRG1 versus ΔKH-KHNYN-1-CRG1, D443N-KHNYN-1-CRG1 and D524A/D525A-KHNYN-1-CRG1 are 2.56 × 10⁻⁵, 1.95 × 10⁻⁴, and 2.12 × 10⁻⁴, respectively. Gag expression in the media as well as Gag, Hsp90, Env, Actin, KHNYN-FLAG and GFP-FLAG expression in the cell lysates was detected using quantitative immunoblotting (D).

Transfections

HeLa and HEK293T cells were grown to 70% confluence in six-well plates. HeLa cells were transfected according to the manufacturer's instructions using TransIT-LT1 (Mirus) at a ratio of 3 μl TransIT-LT1 to 1 μg DNA.
HEK293T cells were transfected according to the manufacturer's instructions using PEI (1 mg/mL) (Sigma-Aldrich) at a ratio of 4 μl PEI to 1 μg DNA. For the HIV-1 experiments, 0.5 μg pHIV-1 and the designated amount of pKHNYN-FLAG, pGFP-FLAG or pGFP (Swanson et al., 2010) were transfected for a total of 1 μg DNA. For the MLV experiments, 0.65 μg pMLV, 0.25 μg pCR3.1 2.87 Vpu and 0.10 μg pGFP were transfected. The transfection medium was replaced with fresh medium after a 6 hr incubation (HEK293T) or 24 hr incubation (HeLa).

TZM-bl infectivity assay

Media was recovered approximately 48 hr post-transfection and cell-free virus stocks were generated by filtering the media through 0.45 μm filters (Millipore). The TZM-bl indicator cell line was used to quantify the amount of infectious virus (Derdeyn et al., 2000; Platt et al., 1998; Wei et al., 2002). TZM-bl cells were seeded at 70% confluency in 24-well plates and infected by overnight incubation with virus stocks. 48 hr post-infection, the cells were lysed and infectivity was measured by analyzing β-galactosidase expression using the Galacto-Star System following the manufacturer's instructions (Applied Biosystems). β-galactosidase activity was quantified as relative light units per second using a PerkinElmer luminometer.

Immunoprecipitation assays

HEK293T cells in six-well plates were transfected with 800 ng of pKHNYN-1-FLAG, pKHNYN-2-FLAG or pGFP-FLAG as a control, using 3 μl TransIT-LT1 per 1 μg of DNA added. For the experiments in which lysates were treated with Ribonuclease A (RNase A), 500 ng of pHA-ZAP-L (Kerns et al., 2008) was also added. The cells were lysed on ice in lysis buffer (0.5% NP-40, 150 mM KCl, 10 mM HEPES pH 7.5, 3 mM MgCl2) supplemented with complete EDTA-free protease inhibitor cocktail tablets (Sigma-Aldrich), 10 mM N-ethylmaleimide (Sigma-Aldrich) and PhosSTOP tablets (Sigma-Aldrich). The lysates were sonicated and then centrifuged at 20,000 × g for 5 min at 4°C. 50 μl of the post-nuclear supernatant was saved as the input lysate and 450 μl were incubated with either 18 μg of anti-FLAG antibody (Sigma-Aldrich, F7425) or 4.275 μg of anti-ZAP antibody (Abcam) for one hour at 4°C with rotation. Protein G Dynabeads (Invitrogen) were then added and incubated for 3 hr at 4°C with rotation. The lysates were then washed four times with wash buffer (0.05% NP-40, 150 mM KCl, 10 mM HEPES pH 7.5, 3 mM MgCl2) before bound proteins were eluted with Laemmli buffer and boiled for 10 min. When indicated, RNase A (Sigma-Aldrich) was added to the post-nuclear supernatant and incubated for 30 min at 37°C. Protein expression was analyzed via western blot as described above.

RNA purification and quantitative RT-PCR

Total RNA was isolated from transfected HeLa cells using a QIAGEN RNeasy kit according to the manufacturer's instructions. Viral RNA was extracted from cell supernatants using a QIAGEN QIAamp Viral RNA mini kit according to the manufacturer's instructions. 500 ng of purified cellular RNA was reverse transcribed using random hexamer primers and a High-Capacity cDNA Reverse Transcription kit (Applied Biosystems). Quantitative PCR was performed using a QuantStudio 5 system (Thermo Fisher). For genomic RNA and nef mRNA in the cell lysate, the HIV-1 RNA abundance was normalized to GAPDH levels using the GAPDH TaqMan assay (Applied Biosystems, Cat# Hs99999905_m1). For the genomic RNA in the media, absolute quantification was determined using a standard curve of the HIV-1 provirus DNA plasmid.
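As a numerical illustration of the two quantification modes just described (GAPDH-normalized relative abundance and standard-curve absolute copy number), the following sketch may help. The Ct values, the standard-curve points and the use of the 2^-ddCt form are hypothetical assumptions: the paper states the normalization but not the exact formulas, and this is not the authors' analysis script.

```python
import numpy as np

# Hypothetical Ct values -- illustration only, not data from the paper.

# Relative quantification of cellular HIV-1 gRNA, normalized to GAPDH
# (2^-ddCt form; an assumption, see lead-in).
ct_target, ct_gapdh = 24.1, 17.8            # condition of interest
ct_target_ref, ct_gapdh_ref = 22.6, 17.9    # control condition
ddct = (ct_target - ct_gapdh) - (ct_target_ref - ct_gapdh_ref)
rel_abundance = 2.0 ** (-ddct)
print(f"relative gRNA abundance: {rel_abundance:.2f}")

# Absolute quantification of virion gRNA from a provirus-plasmid
# standard curve: Ct is linear in log10(copies).
std_copies = np.array([1e3, 1e4, 1e5, 1e6, 1e7])
std_ct = np.array([30.2, 26.9, 23.5, 20.1, 16.8])
slope, intercept = np.polyfit(np.log10(std_copies), std_ct, 1)
ct_sample = 22.3
copies = 10 ** ((ct_sample - intercept) / slope)
print(f"standard-curve slope {slope:.2f}; estimated {copies:.3g} copies")
```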
Microscopy

Cells were seeded on coverslips pre-treated with poly-lysine in 24-well plates. HEK293T cells expressing a control guide RNA targeting the LacZ gene or a guide RNA targeting ZAP were transfected with 250 ng of pKHNYN-FLAG. 24 hr post-transfection, the cells were fixed with 4% paraformaldehyde for 15 min at room temperature, washed with PBS, and then washed with 10 mM glycine. The cells were then permeabilized for 15 min with 1% BSA and 0.1% Triton X-100 in PBS. Mouse anti-FLAG (1:500) and rabbit anti-ZAP (1:500) antibodies were diluted in PBS/0.01% Triton X-100 and the cells were stained for 1 hr at room temperature. The cells were then washed three times in PBS/0.01% Triton X-100 and incubated with Alexa Fluor 594 anti-mouse and Alexa Fluor 488 anti-rabbit antibodies (Molecular Probes, 1:500 in PBS/0.01% Triton X-100) for 45 min in the dark. Finally, the coverslips were washed three times with PBS/0.01% Triton X-100 and mounted on slides with ProLong Diamond Antifade Mountant with DAPI (Invitrogen). Imaging was performed on a Nikon Eclipse Ti inverted microscope equipped with a Yokogawa CSU-X1 spinning disk unit, using 60–100× objectives and laser wavelengths of 405 nm, 488 nm and 561 nm. Image processing and co-localization analysis were performed with NIS Elements Viewer and ImageJ (Fiji) software.

Analysis of CpG frequency in HIV-1, MLV and Sindbis virus

The 'analyze base composition' tool in MacVector was used to calculate the CpG frequencies for the HIV-1 NL4-3 genomic RNA (NCBI accession number M19921), MLV genomic RNA (J02255) and Sindbis virus (NC_001547). The CpG frequencies were calculated using the following formula: number of CpG occurrences / (frequency of C × frequency of G), where the frequency of a base is the number of occurrences of that base divided by the total number of bases in the sequence (a computational sketch of this calculation is given after the Sindbis virus section below).

Sindbis virus replication assays

Sindbis virus (SINV), a kind gift from Penny Powell (University of East Anglia), was expanded and titrated in BHK-21 cells (Mazzon et al., 2018). Control, ZAP, TRIM25 or KHNYN HeLa cells were plated at 100,000 cells/well in 12-well plates. The following day, the cells were infected with Sindbis virus at a multiplicity of infection of 0.005 pfu/cell. After 90 min, the infectious media was removed, the cells were washed once with PBS and then incubated with 1 ml of media. The media from the infected cells was harvested at 8, 16, 24 and 32 hr post-infection. 100 µl of serially diluted media from the cells (from 10⁻¹ to 10⁻⁸) were added onto BHK-21 cells in 96-well plates (8,000 cells/well plated the previous day). After 90 min, the media was removed and replaced with fresh media. An MTT assay was carried out on each plate 24 hr later. Briefly, 20 µl of 50 mg/ml thiazolyl blue tetrazolium bromide in PBS were added to the cell media for 2 hr at 37°C, after which the supernatant was removed and replaced with 40 µl of a 1:1 solution of isopropanol and DMSO. 20 min later, 35 µl of the supernatant were transferred to a 96-well plate and the signal was read at 570 nm. Values from this assay were used to determine the TCID50 and pfu/ml.
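As referenced in the CpG frequency section above, the stated formula can be illustrated with a short sketch. This is a minimal stand-in for MacVector's 'analyze base composition' tool; the function name and the example sequence are placeholders, not a viral genome or the vendor's implementation.

```python
# Minimal sketch of the CpG calculation described above:
# CpG frequency = number of CpG occurrences / (frequency of C * frequency of G),
# where the frequency of a base is its count divided by the sequence length.

def cpg_frequency(seq: str) -> float:
    seq = seq.upper().replace("U", "T")   # accept RNA or DNA input
    length = len(seq)
    cpg_count = seq.count("CG")           # CpG dinucleotide occurrences
    freq_c = seq.count("C") / length      # frequency of C
    freq_g = seq.count("G") / length      # frequency of G
    return cpg_count / (freq_c * freq_g)

if __name__ == "__main__":
    # Placeholder sequence for illustration only (not HIV-1, MLV or SINV).
    example = "AUGGCGCCGAUCGGAUACGCGUAACGGAU"
    print(round(cpg_frequency(example), 3))
```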
Statistical analysis

Statistical significance was determined using unpaired two-tailed t-tests calculated in Microsoft Excel. Data are represented as mean ± SD. Significance was ascribed to p-values < 0.05.
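For illustration, an equivalent unpaired two-tailed t-test can be run outside Excel. The sketch below uses SciPy with made-up infectivity values; it shows the type of comparison described above, not the exact spreadsheet calculation used in the study.

```python
# Minimal sketch of an unpaired two-tailed t-test, as an alternative to the
# Excel calculation described above. Values are placeholders, not study data.
from scipy import stats

# Hypothetical normalized infectivity values (% of control) for two groups.
wild_type = [100.0, 92.5, 108.3, 97.1, 101.6, 95.2, 99.8]
mutant    = [61.4, 70.2, 55.9, 66.8, 58.7, 72.1, 63.5]

t_stat, p_value = stats.ttest_ind(wild_type, mutant)  # two-tailed by default
print(f"t = {t_stat:.3f}, p = {p_value:.3g}")
print("significant" if p_value < 0.05 else "not significant")  # study threshold
```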