THE METAPHYSICS OF AUGUSTINE AND THE FOUNDATION OF THE CARTESIAN SCIENCE
The aim of this paper is to show to what extent Descartes can be situated within the Augustinian metaphysical tradition and to what extent he has departed from it. To this end, we will argue that Descartes borrowed the main arguments of his Meditations from Augustine's philosophy. However, in spite of all the factual and textual evidence we will provide against the originality of Descartes' metaphysical discussions, it will be stressed, on the other hand, that in borrowing not only the cogito argument but also some general features of his philosophy from Augustine's works, Descartes intends to frame a metaphysics that will serve as the ground of his new mechanistic physics. With this in mind, we will hold that no claim can be put forward against the originality and far-reaching scope of Descartes' philosophical intentions. Indeed, Descartes' purpose is to build a new science upon a metaphysics, even though this metaphysics is the Augustinian one.
Descartes' relationship with the thought of Augustine, or with the Augustinian tradition, is problematic. Although Descartes asserts that he refuses to follow any philosophical tradition because, he tells us, "[...] since my earliest youth I have accepted many false opinions as true ones" (Descartes, 1996, AT 7 [Meditations], p. 17)1, he nonetheless seems to have employed some philosophical theses already found in the work of the bishop of Hippo. In fact, according to Menn, "[...] Descartes' philosophy bears many resemblances to the thought of Augustine [...]" (Menn, 1998, p. 4); but rather more important than this is the fact that "[...] Descartes did intend to build his new philosophy (including his physics) on the old Augustinian metaphysics [...]" (Menn, 1998, p. 16).
To bring to light the resemblances of Descartes's thought to that of Augustine is the first goal of this paper. Arguably, as we will see, on the basis of these resemblances we could call Descartes an Augustinian author, as we do in the case of Arnauld and Malebranche, for instance.
Next, we will show what Descartes aims at when walking along the Augustinian metaphysical path. It will become manifest, with the claim that Descartes establishes his new mechanistic physics on a metaphysics of Augustinian inspiration, that Descartes is part of a deeper and wider philosophical project in Early Modern Philosophy.
In the message attached to the text of the Meditations, Descartes tells the theologians of the Faculty of Paris that his metaphysics is grounded on the demonstration of the existence of God and of the human soul: "I have always considered that the questions concerning God and the soul were the main ones among those which are to be demonstrated by philosophical rather than by theological argument" (Descartes, 1996, AT 7 [Meditations], p. 1)2. Augustine also attached great importance to these same two issues. As a philosopher and a clergyman, he gave them priority in his theoretical research: "I desire to know God and the soul" (Augustine, Soliloquia, I, 7)3, he says. This being stated, we may reasonably argue that the convergence of Descartes' and Augustine's purposes lies in the fact that both philosophers employ the 'method of introspection'. The 'method of introspection' consists in putting empirical-corporeal considerations aside and concentrating on the analysis of the 'inner man', that is to say, of the human soul. In Augustine's words: "Do not go abroad. Return within yourself. In the inward man dwells truth" (Augustine, De vera religione, XXXIX, 72)4. Descartes, in turn, adopted a similar approach to the philosophical problems he was going to deal with, as suggested by the title of his masterpiece: Meditations on First Philosophy. After being used in the first two Meditations, the 'method of introspection' is also evoked at the beginning of the Third Meditation: "talking just to myself and considering more deeply my own nature, I shall try little by little to reach a better knowledge of and a greater familiarity with myself" (Descartes, 1996, AT 7 [Meditations], p. 34)5. Yet if this evidence is not enough to demonstrate Descartes' real ties with Augustinian philosophy, there is still a great number of other arguments displayed in the Meditations that bear an undeniable resemblance, beyond the aforementioned metaphysical and methodological similarities, to those discussed by the Bishop of Hippo many centuries before. In fact, we have already shown that, in Augustine's work, the act of doubting is a precondition for reaching any certainty. In his work On True Religion, Augustine asserts that "everyone who knows that he has doubts knows with certainty something that is true; he is certain about this truth [that he has doubts]. Hence, everyone who doubts whether there is such a thing as truth has a truth about which he cannot doubt" (Augustine, De vera religione, XXXIX, 73)6. The postulate that the 'natural light' provided by God allows us to reach the truth is also present in Augustine's epistemology: "[...] imbued in some way and illumined by him [God] with intelligible light, [the rational soul] discerns, not with physical eyes, but with its own highest part in which lies its excellence, i.e., with its intelligence, those reasons [...]" (Augustine, De Diversis Quaestionibus Octoginta Tribus, q. 46, 2)7. Augustine also struggled to refute the sceptical arguments concerning dreams, madness, and the denial of the senses as a trustworthy source of knowledge, as we can see in the long quote below:

You will ask me, "Is what you see the world even if you are asleep?" It has already been said that I call 'world' whatever seems to me to be such. But if it pleases him [the Academician] to call 'world' only what seems so to those who are awake or to those who are sane, then maintain this if you can: that those who are asleep or insane are not asleep or insane in the world. For this reason, I state that this whole mass of bodies in which we exist - whether we be asleep, insane, awake, or sane - either is one or is not one. Explain how this view can be false. Now if I am asleep, it might be that I had said nothing; or if the words escaped my mouth while I was asleep, as sometimes happens, it might be that I did not say them here, sitting as I am, before this audience. Yet the claim itself cannot be false. Nor do I say that I have perceived this because I am awake. You can say that this also could seem so to me while I was asleep, and thus it can be very like what is false. If, however, there are one world and six worlds, then, whatever condition I may be in, it is clear that there are seven worlds, and it is not presumptuous of me to affirm that I know this. Accordingly, prove that either this inference or those disjunctions given above can be false because of sleep, madness, or the unreliability of the senses. If I remember them when I wake up, I will admit that I have been beaten. I think it is now sufficiently clear what falsehoods seem to be so through sleep and madness, namely, those that pertain to the bodily senses (Augustine, Contra academicos, III, 11, 25)8.

2 "Semper existimavi duas quaestiones, de Deo et de Anima, praecipuas esse ex iis quae Philosophiae, quam Theologiae ope sunt demonstrandae".
3 "Deum et animam scire cupio".
4 "Noli foras ire, in teipsum redi; in interiore homine veritas habitat".
5 "[...] meque solum alloquendo et penitius inspiciendo, meipsum paulatim mihi magis notum et familiarem reddere conabor".
8 "Etiamne, inquies, si dormis, mundus est iste quem vides? Iam dictum est, quidquid tale mihi videtur, mundum appello. Sed si eum solum placet mundum vocare, qui videtur a vigilantibus vel etiam a sanis; illud contende, si potes, eos qui dormiunt ac furiunt, non in mundo furere atque dormire. Quamobrem hoc dico, istam totam corporum molem atque machinam in qua sumus, sive dormientes, sive furentes, sive vigilantes, sive sani, aut unam esse, aut non esse unam. Edissere, quomodo possit ista esse falsa sententia. Si autem unus et sex mundi sunt; septem mundos esse, quoquo modo affectus sim, manifestum est, et id me scire non impudenter affirmo. Quare vel hanc connexionem, vel illas superius disiunctiones doce somno aut furore aut vanitate sensuum posse esse falsas. Si enim dormio, fieri potest ut nihil dixerim; aut si etiam ore dormientis verba, ut solet, evaserunt, potest fieri ut non hic, non ita sedens, non istis audientibus dixerim: ut autem hoc falsum sit, non potest. Nec ego illud me percepisse dico, quod vigilem. Potes enim dicere, hoc mihi etiam dormienti videri potuisse; ideoque hoc potest esse falso simillimum. [...] et me, si expergefactus ista meminero, victum esse concedam. Credo enim iam satis liquere quae per somnium et dementiam falsa videantur, ea scilicet quae ad corporis sensus pertinent".

In the same way, if we take a careful look at the Meditations, we will see that both the problems and the discussions put forward by Descartes and Augustine are very close to one another. In fact, like Augustine, Descartes fights against the sceptical doctrine. For this reason, he applies his 'method of doubt' to the most traditional arguments delivered by the sceptics. So he challenges the sceptical argument about the fallibility of the senses - "All that up to the present time I have admitted as most true and certain I have learned either from the senses or through the senses; but it has been proved to me that these senses are sometimes deceptive, and it is wise not to trust entirely those by which we have once been deceived"9 (Descartes, 1996, AT 7 [Meditations], p. 18)10 -, the argument about dreaming illusions - "How often has it actually happened to me that in the night I dreamt that I found myself in this particular place, that I was dressed and seated near the fire, while in reality I was lying in bed undressed" (Descartes, 1996, AT 7 [Meditations], p. 20)11 -, so that, he concludes, "[...] I realize that there is never any reliable way of distinguishing being awake from being asleep" (Descartes, 1996, AT 7 [Meditations], p. 19)12 -, and the argument about madness - "How could I deny that these hands and this body are mine? Unless maybe I were to compare myself to certain insane people, whose cerebella are so troubled by the violent vapours of black bile [...]"13.

9 Although it is not our subject here, we agree with Gilson's claim that the First Meditation is aimed at being a critique of scholastic empiricism. Descartes himself hints at this interpretation in the synopsis of the Meditations, when he says of his 'method of doubt' that "[...] the utility of a doubt that is so general does not appear at first; it is nonetheless very great, inasmuch as it delivers us from all prejudices [gotten through or by the senses] and sets out for the mind a way by which it can detach itself from the senses" - "[...] tantae dubitationis utilitas prima fronte non appareat, est tamen in eo maxima quod ab omnibus praejudiciis nos liberet viamque facillimam sternat ad mentem ab sensibus abducendam" (Descartes, 1996, AT 7 [Meditations], p. 12). For more details on this issue, see Gilson, 1951, pp. 184-190.
10 "Nempe quidquid hactenus ut maxime verum admisi, vel a sensibus, vel per sensus accepi; hos autem interdum fallere deprehendi, ac prudentiae est nunquam illis confidere qui nos vel semel deceperunt".
11 "Quam frequenter vero usitata ista, me hic esse, toga vestiri, foco assidere, quies nocturna persuadet, cum tamen positis vestibus jaceo inter strata!".
12 "[...] video nunquam certis indiciis vigiliam a somno posse distingui [...]".
13 "Manus vero has ipsas, totumque hoc corpus meum esse, qua ratione posset negari? Nisi me forte comparem nescio quibus insanis, quorum cerebella tam contumax vapor ex atra bile labefactat [...]".
14 For instance, the important truth that God is not a deceiver comes from the 'natural light': "the natural light teaches us that all fraud and deception necessarily proceed from some defect" - "Omnem enim fraudem et deceptionem a defectu aliquo pendere, lumine naturali manifestum est" (Descartes, 1996, AT [...]).

Among all those elements which could be mentioned in order to demonstrate the similarities between Descartes' and Augustine's argumentation, the one that most impressed his contemporaries was the 'je pense, donc je suis'. Mersenne, after his reading of the Discourse on Method (1637), that is to say, before the Meditations (1641) had been published, called Descartes' attention to the striking resemblance of his so-called 'cogito argument' to the famous Augustinian thesis 'si enim fallor, sum'16. Afterwards, once the Meditations had been printed, it was the turn of Arnauld, a follower of Augustine, to make the same observation to Descartes. Thus, from Arnauld's comment on, began the long history of the affirmation that Descartes had 'borrowed' the cogito argument from Augustine and that, for this reason, he would in some sense be taking up again the philosophy of the Bishop of Hippo. In truth, all the essential questions about the striking and undeniable resemblances between the cogito, ergo sum and the si enim fallor, sum had already been raised by Descartes' contemporaries. The recognition of this fact allowed Etienne Gilson to state that "though other texts of secondary importance have been taken into consideration since that time, nothing has been added to the facts already known" (Gilson, 1951, p. 191).

This can be seen by comparing the Regulae (1628), which we will designate as a 'pre-Augustinian' work, with a 'post-Augustinian' one, like the Meditations (1641). In chapter XII of the Regulae, Descartes discusses the conception of what he calls 'simple natures' (natura simplicissima; res simplex)24. He divides them under three headings: intellectual, material, and common simple natures. Although, as Marion holds25, these concepts give some idea of what the mature metaphysics of Descartes will look like, they would never work as a metaphysical condition by themselves.
To become metaphysical entities they would need a previous metaphysical doctrine, which would unify them and subordinate each one to a real substance. Taking Marion's own examples, one substance would be responsible for unifying one intellectual simple nature, like cogitare (to think) or dubitare (to doubt), with a common simple nature, like existere (to exist), and the result would be the notion of res cogitans. Since the res cogitans is a substance, all the intellectual simple natures would be subordinated to it. The same is true of the material simple natures, which would be unified with and subordinated to another substance, the res extensa. Thus, under a metaphysical background, the doctrine of simple natures undergoes a great deal of simplification and cohesion. This synthesis of the wide range of simple natures under ontological, more fundamental principles is not due, as Marion supposes, to just 'ordering' them 26 . In fact, ordering could by no means change the epistemological notions of simple natures into the ontological conceptions of substance, as the res cogitans and res extensa are thought to be.
In opposition to Marion's theses, we believe that the true cause lying behind the transformation of the doctrine of simple natures into the mature metaphysics of Descartes should be assigned to his assimilation of the philosophy of Augustine, probably through his contact with the Oratorians and Cardinal Bérulle27, as pointed out above. The 'method of introspection' was the principal tool Descartes borrowed from Augustine's thought. All the other striking resemblances between their philosophies can rightly be conceived of as consequences of the application of the 'method of introspection' to the solution of philosophical problems, like those delivered by the sceptics. So before leaving France and arriving in Holland, Descartes already knew how to carry out his project of working out a new philosophy.

24 See Marion, 1992, pp. 115-139.
25 "With the doctrine of the simple natures, the Regulae is already equipped with all the elements required for articulating the first proposition of metaphysics [i.e., the cogito argument]" (Marion, 1992, p. 119).
26 "What is missing is simply the capacity to establish a necessary order between the simple natures to make up the Cogito" (Marion, 1992, p. 119).
Descartes was many times warned by his readers about the similarity between his cogito argument and Augustine's si enim fallor, sum.
For his part, he never categorically denied his acquaintance with the texts of the Bishop of Hippo. His most common reaction to these comments was to emphasize that the purpose and aim of his cogito were thoroughly different from Augustine's28. We already find this attitude in Descartes' response to Mersenne's early observation on this issue, made after his reading of the Discourse on Method (1637): "[...] Saint [...]". This passage makes clear that, for Descartes, the parallel between the two arguments is not meaningful, because, although the utterances are very similar to one another, the meaning assigned to them by each author is quite distinct. In other words, the external resemblance of their 'formulas' conceals intended thoughts that are, in each case, essentially different.

27 These contacts took place after Descartes had worked out his Regulae and before the letter sent to Mersenne on April 15, 1630, in which the importance of metaphysics to his physics is spoken of for the first time.
28 There are at least two reasons that explain why Descartes did not admit his ties with the philosophy of Augustine. First, we can say that, as an author with a 'foundationalist' project who, for this reason, wanted to settle a new beginning in philosophy, the idea of relating his philosophy to that of other philosophers would not suit his purpose. Secondly, and no less important, is the fact that admitting his proximity to Augustine could get Descartes into trouble with the official Aristotelian authorities. Descartes' reaction to Galileo's condemnation shows pretty well his anxiety over the disapproval of his works by the Church (that is why he sent a preliminary version of the Meditations to the theologians of the Faculty of Paris).
Whatever one might say, it is undeniable that Descartes - even though he does not admit it openly - borrowed some fundamental theses from Augustine. For both of them, scepticism has its roots [...]. On the other hand, Descartes seems to be quite right when calling our attention to the fact that Augustine's metaphysical argument and his own do not have the same purposes. As Gilson puts it, "[...] under no circumstances can one expect to find in St. Augustine the je pense as the foundation of a mechanistic physics of the Cartesian type" (Gilson, 1998, p. 194)32. Menn, who agrees with Gilson, reinforces Descartes' point: "the aim of the Meditations is to show that God and the soul are better known than bodies; but this demonstration has not only the religious and moral utility that Descartes stresses to the doctors of the Sorbonne, but also a scientific utility" (Menn, 1998, p. 57). The 'scientific utility' of the Meditations that Menn is talking about comes from the fact that the work is also the foundation of the Cartesian mechanistic physics33.

That is why we side with Descartes when he stresses the fact that he and Augustine have distinct aims in putting forward the cogito argument. We can find further evidence of this difference of purpose in a passage of a letter Descartes sent to Mersenne, in which he asserts the 'scientific utility' of his Meditations as well as their anti-scholastic content: "[...] I will tell you, between us, that these six Meditations contain all the foundations of my Physics. Nevertheless, it should not be said, if you please; for those who favor Aristotle would then find it more difficult to approve of them; and I hope that those who read them will unconsciously acquiesce in my principles, and will recognize their truth before they perceive that they destroy those of Aristotle" (Descartes, 1996, AT 3 [letter to Mersenne], pp. 297-8)34 35.

31 See Gilson, 1951, p. 198.
32 "[...] en aucun cas on ne peut s'attendre à retrouver chez saint Augustin le je pense comme fondement d'une physique mécaniste de type cartésien".
33 The first appearance of this theme dates from a letter Descartes sent to Mersenne on 04/15/1630 (Descartes, 1996, AT 1, p. 144). In our understanding, this letter is extremely important, insofar as it indicates a turning-point in Descartes' philosophy: "[...] To try to know him [God] and to know oneself. It is through this that I have endeavored to begin my studies, and I will tell you that I would not have known how to find the foundations of physics if I had not sought them by this means" - "[...] Tâcher à le [Dieu] connaître et à connaître soi-même. C'est par là que j'ai tâché de commencer mes études, et je vous dirai que je n'eusse su trouver les fondements de la Physique, si je ne les eusse cherchés par cette voie". At this moment, Descartes came to realize that he could 'instrumentalize' Augustine's thought in order to build a new philosophy, and construct a new mechanistic science based on the old metaphysics of the African philosopher. Thus, the 'pre-Augustinian' system of the Regulae is abandoned and we see the new and mature, 'post-Augustinian' thinking of Descartes emerging.

Even though Descartes' mature philosophy has metaphysical foundations, he does not recommend dwelling on them, as he tells Elizabeth: "[...] Finally, as I believe that it is necessary to have understood the principles of Metaphysics once in one's life, because it is they that give us the knowledge of God and of our soul, I also believe that it would be very detrimental to occupy one's understanding too frequently in meditating upon them, since it could not then attend so well to the functions of the imagination and the senses; rather, it is better to content oneself with retaining in one's memory and one's belief the conclusions one has once drawn from them, and then to employ the rest of the time one has for study in thoughts in which the understanding acts with the imagination and the senses" (Descartes, 1996, AT 3 [letter to Elizabeth], p. 695)37.

37 "[...] connaissance de Dieu et de notre âme, je crois aussi qu'il serait très nuisible d'occuper souvent son entendement à les méditer, à cause qu'il ne pourrait si bien vaquer aux fonctions de l'imagination et des sens; mais que le meilleur est de se contenter de retenir en sa mémoire et en sa créance les conclusions qu'on en a une fois tirées, puis employer le reste du temps qu'on a pour l'étude aux pensées où l'entendement agit avec l'imagination et les sens".
Turning upside down the most common interpretation of his philosophy, which tends to emphasize its metaphysical theses, focusing above all on the cogito argument, Descartes stresses, in this passage, what he is mostly concerned with (science), and calls our attention to the 'danger' (detrimental; nuisible) of metaphysics.
Moreover, it is generally taken for granted that in the mind-body distinction carried out in the Second Meditation the main concern of Descartes is to prove that the mind is an immaterial, self-contained entity, which requires no material substrate in order to exist. In fact, the emphasis given throughout the Meditations to the cogito argument leads us to this seemingly obvious conclusion. But, in spite of this appearance, what Descartes really strives to demonstrate in the work as a whole is that the essence of body is material extension and, consequently, that there is no soul intrinsically attached to it.
In other words, Descartes was engaged in breaking with the scholastic doctrine of hylomorphism, since it was an obstacle to his project of establishing a new science based only on the geometrical and mechanical properties of nature. Arguably, breaking with hylomorphism was the only way to legitimate, metaphysically, the foundation of a mathematical physics.
That is why, after laying down the principles of his metaphysics, Descartes does not go on deepening and developing his new conception of the soul - the res cogitans - into a rational psychology; likewise, his proof of God's existence is not pushed towards further considerations which would result in a theology. On the contrary, what we always find in Descartes' works after the presentation of his metaphysics is a turn of his attention to scientific issues. This very plan is displayed in Descartes' most important works: the Discourse on Method, the Meditations, and the Principles of Philosophy. According to Descartes himself, his metaphysics must be followed neither by a science of the soul (rational psychology) nor by a science of God (theology), but rather by a science of body (res extensa), that is to say, a philosophy of nature or a physics. It is in light of these facts that we can assert that the metaphysics of Descartes is a 'propaedeutic' discipline for his mechanistic science.
Bearing all these discussions in mind, we can say that the conclusions drawn by critics of Descartes' philosophy who focus their attention exclusively on the Cartesian metaphysics, particularly on the cogito argument, fall quite short - though they are not false at all - given that such critics do not take into account its most fundamental part, i.e., Descartes' natural philosophy or science. Gaukroger puts forward persuasive reasons to explain why some critics have behaved this way towards Descartes' philosophy: "[...] Descartes' foundationalist metaphysics is so notoriously problematic that it is difficult to get beyond it to what it is supposed to provide the foundation for, and, in any case, if the foundations are not viable, there would seem to be little to be gained in asking what plausible systematic connection there could be between them and what is built upon them" (Gaukroger, 2002, p. 1).
We are in agreement with Gaukroger's claims. But we also believe that any serious attempt to assess a systematic philosophy like Descartes' should contemplate the whole body of his works. This seems to be the most suitable approach for bringing to light the meaning of Descartes' philosophical intentions. Therefore, the great mistake of Descartes' critics is precisely to carry out a partial analysis of his philosophy.
As far as the relationship between Descartes and Augustine is concerned, no fair and reasonable statement can be made by a scholar who proceeds in this way.
Even though we cannot prove that Descartes happened to have direct contact with Augustine's works, we must finally conclude that it seems to us undeniable that he incorporated some arguments and theses of the bishop of Hippo. Facing a sceptical environment, both of them found the weapons to fight this doctrine in the evidence of pure thought, which led them to employ the 'method of introspection': my external senses can always deceive me, but I can never be deceived in thinking that I exist. On this truth, the sceptic can cast no doubt. It is on the basis of this Augustinian argument that Descartes will create his famous cogito argument, the 'Archimedean point' of his philosophy.
But what distinguishes Descartes' purposes from those of Augustine is the fact that the French philosopher does not restrain his investigations within the boundaries of metaphysics. For him, metaphysics is just a first step towards what really matters: natural philosophy or physics. In his view, it is not worth spending one's lifetime reflecting on metaphysical questions. Instead, after having reflected on them, one need only keep the metaphysical conclusions in mind and move on to practical matters, that is to say, science. For these reasons, we have argued that Descartes' philosophical project as a whole is quite distinct from that of Augustine and, except for what relates to metaphysics, cannot be confused with the theological-philosophical project of the bishop of Hippo. In fact, we cannot find in Augustine a metaphysics sustaining a mechanistic system of sciences. That is the point.
"Philosophy"
] |
Study on the effect of raindrops on the dynamic stall of a NACA-0012 airfoil
In this study the pure effect of raindrops on the dynamic stall of a pitching airfoil has been investigated. The simulation was performed at a Reynolds number of $10^6$ with a raindrop diameter equal to $10^{-5}$ m. A couple of multiphase models based on Eulerian and Lagrangian frames of reference have been implemented to simulate the raindrops. In the first step the accuracy of each multiphase model was appraised. As a result, the Lagrangian multiphase model, called the Discrete Phase Model, proved to be the more accurate. It was concluded that, in general, raindrops have a negative effect on the lift coefficient of the pitching airfoil. In addition, a lead in the aerodynamic phenomena was observed due to the presence of water drops. This lead was also observed in the formation and separation of the Leading Edge and Trailing Edge vortices, which come into existence in advance of the dry case. Finally, it was illustrated that the main effect of raindrops is on the phase of the force oscillation rather than on the force amplitude.
Introduction
Owing to their vast industrial applications, the pitching motion and dynamic stall behavior of airfoils have been of great interest to many researchers. To illustrate, the output power of wind turbines can be controlled either by pitch-to-feather or by stall, where the latter, as the name suggests, brings about dynamic stall on the blade [1,2]. Furthermore, airplane maneuvers, MAV motion and turbomachine performance are highly affected by dynamic stall. As a result, in the last decades theoretical [3][4][5], experimental [6][7][8][9] and
numerical [10][11][12][13][14] attempts have been made to investigate dynamic stall. Many researchers have attempted to investigate different aspects of dynamic stall through understanding the effects of geometrical parameters [15][16][17], turbulence [18][19][20] and vortex structure formation [20,21]. For instance, Visbal and Garmann [22] studied the dynamic stall of a 3-D wing by employing the large eddy simulation (LES) method. It was shown that before the formation of the dynamic stall vortex (DSV) a laminar separation bubble (LSB) appears on the leading edge of the airfoil.
Due to the fact that stall leads to a drop in the lift coefficient, with subsequent detrimental effects, the postponement of flow separation by various control methods has also been a topic of interest for researchers in this field. Active flow controllers such as synthetic jets and plasma actuators, as well as passive controllers, have been studied for dynamic stall control in recent years owing to their high performance [23][24][25][26][27][28][29][30]. In one study, Tadjfar and Asgari [31] attempted to find the optimal location of a synthetic jet. It was reported that at the optimum location the drag coefficient decreased significantly. It was also found that the employment of synthetic jets at lower frequencies delays the dynamic stall more effectively than at higher frequencies.
Rain, as a sporadic meteorological phenomenon, can impose penalties on the aerodynamic efficiency of an airborne device which cannot be thoroughly considered in the design process. As a result, the investigation of rain effects on aerodynamic characteristics has been an ongoing issue addressed in different studies. For instance, the works of Rhode [32], Luers [33] and Hess and Spectron [34] can be regarded as the first attempts to shed light on the physics of rain and air flow interaction. In the following years, Bezos et al. [35,36] performed an experimental study on the effect of raindrops on the aerodynamic performance of an airfoil under heavy rain conditions. Thompson et al. [37] derived the correlation between the surface-film behavior and the aerodynamic coefficients for a wing with a NACA-4412 airfoil profile by means of experimental methods. It should be noted that with the advances in computational resources, numerical techniques began to be implemented to describe the effect of raindrops on the aerodynamic behavior of an airborne body. The Eulerian-Lagrangian method is the general framework for modelling raindrops in the literature, and it will be discussed in detail in the following sections. In this regard, the study conducted by Valentine and Decker [38] is one of the very first attempts to numerically simulate the raindrop effect on an airfoil. Wu et al. [39], Wu and Cao [40] and Ismail et al. [41] also implemented numerical studies to find the penalty in the aerodynamic coefficients resulting from the interaction of the raindrops and the airfoil. Fatahian et al. [42,43] studied the performance of Gurney flaps and slatted airfoils under rainy conditions. In another study, Fatahian et al. [44] investigated the effect of rain on the aeroacoustic behavior of an airfoil employing the Volume of Fluid (VOF) method, an Eulerian-Eulerian approach, for the film formation on the airfoil, along with an Eulerian-Lagrangian frame of reference to take the raindrops into consideration. It should be noted that there is a common finding in the literature indicating that the presence of rain leads to a decrease in the aerodynamic performance of an airfoil. Another aspect of rainy conditions which needs to be considered in more detail concerns airfoils used in energy production machines, e.g., wind turbines. Cai et al. [45] developed an Eulerian-Lagrangian method to study the performance of a wind turbine airfoil under rainy conditions. It was reported that the two-phase model successfully simulated the rain effect on the airfoil and the results suggested a considerable penalty on the aerodynamic performance. Aiden and Arastoopour [46] studied the effect of rain and water-film formation on the aerodynamic performance of a wind turbine blade airfoil using the Discrete Phase Model (DPM) and Volume of Fluid (VOF), respectively. It should be noted that DPM is an Eulerian-Lagrangian method which solves the discrete phase in a Lagrangian frame of reference moving along with the particles, e.g., raindrops, while the primary phase is solved in the stationary, Eulerian frame of reference. Wu et al. [47] investigated the effect of rain on the performance of a vertical axis wind turbine. Similar to the previous studies, it was demonstrated that the presence of raindrops degrades the aerodynamic performance of the wind turbine. Nevertheless, the airfoil has not been the only topic of interest for researchers. To illustrate, Jian et al. [48] investigated the effect of raindrops on the liquid film thickness formed on the windshield of a train. Yu et al. [49] investigated the aerodynamics of an Ahmed body under heavy rain conditions.
Generally, most of the research in this field has focused on dynamic stall in dry conditions, and, moreover, the raindrop investigations have mainly aimed at understanding the physics of static aerodynamic bodies. However, moving airborne devices, such as a pitching airfoil, may undergo dynamic stall in rainy conditions as well. Having considered the studies conducted so far, one comes to the conclusion that the effect of raindrops on the characteristics of dynamic stall, which is very common in real-life applications of this phenomenon, has remained unexamined. In fact, the presence of raindrops in the upstream flow can affect the aerodynamic loads and flow characteristics downstream of the pitching airfoil. Because the raindrops add momentum to the upstream flow, and considering the interaction between the raindrops and the turbulent flow structures, some variations in flow characteristics compared to the baseline case can be expected. In the following sections, the effect of rainy conditions on the dynamic stall of a NACA-0012 airfoil and the mechanisms leading to the variation of the aerodynamic characteristics will be investigated by means of numerical methods, namely two-phase flow models and turbulence modelling.
Problem description
In this study the effect of rain on the flow characteristics over a pitching airfoil oscillating around its quarter chord in dynamic stall conditions has been numerically investigated by means of Ansys Fluent v18. It should be noted that the geometry and the pitching motion were set according to those of [24]. The oscillating airfoil has a NACA-0012 profile with a chord length of 0.58 m and is subjected to a uniform flow with an inlet velocity of 24.93 m/s. These parameters result in a Reynolds number equal to $10^6$. In addition, the NACA-0012 airfoil undergoes a sinusoidal motion according to Eq. 1:

$$\alpha(t) = \alpha_0 + A\sin(2\pi f t) \quad (1)$$

where $\alpha_0$ is the initial incidence angle of the airfoil, equal to 14.98°. Moreover, $A$ is the amplitude of the motion, which was set to 10 degrees. As a result, the angle of attack ranges from 4.98° to 24.98°. Also, $f$ is the frequency of the motion; however, as demonstrated in the literature, the reduced frequency, denoted by $k$, is preferred to the frequency, as it also takes the free-stream velocity into consideration and therefore gives a more thorough description of the motion. The reduced frequency is calculated by Eq. 2, and for this study its value was set to 0.15:

$$k = \frac{\pi f c}{U_\infty} \quad (2)$$
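To make the kinematics concrete, the short Python sketch below (not part of the original study, which used Ansys Fluent) evaluates Eq. 1 and recovers the motion frequency and period from the stated reduced frequency; all numerical values are those quoted above.

```python
import numpy as np

# Pitching kinematics (Eq. 1) and reduced frequency (Eq. 2),
# using the values quoted in the text.
c = 0.58          # chord length [m]
U_inf = 24.93     # free-stream velocity [m/s]
alpha_0 = 14.98   # mean incidence angle [deg]
A = 10.0          # pitching amplitude [deg]
k = 0.15          # reduced frequency, k = pi*f*c/U_inf

f = k * U_inf / (np.pi * c)   # motion frequency [Hz]
T = 1.0 / f                   # period [s] (~0.49 s, as reported later)

def alpha(t):
    """Instantaneous angle of attack [deg], Eq. 1."""
    return alpha_0 + A * np.sin(2.0 * np.pi * f * t)

t = np.linspace(0.0, T, 721)
print(f"f = {f:.3f} Hz, T = {T:.3f} s")
print(f"AoA range: {alpha(t).min():.2f} to {alpha(t).max():.2f} deg")
```

Running this reproduces the quoted AoA sweep of 4.98° to 24.98° and a period of about 0.49 s, consistent with the time step study below.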
In addition, because the main non-dimensional number ruling the physics of incompressible flows is the Reynolds number, this number was set to be equal for both the dry and rainy cases. The parameter which needs to be adjusted to make the Reynolds numbers of the wet and dry cases equal is the inlet velocity, as the density and viscosity for both the wet and dry cases are obtained as follows:

$$\rho = \alpha\,\rho_{water} + (1-\alpha)\,\rho_{air} \quad (3)$$

$$\mu = \alpha\,\mu_{water} + (1-\alpha)\,\mu_{air} \quad (4)$$

where $\alpha$, $\rho$ and $\mu$ are the mass fraction of water, the density, and the viscosity, respectively. The simulation was performed at a relatively low liquid water content to avoid a large difference in the inlet velocity, as this study is intended to take only the effect of the water drops into consideration. Therefore, the liquid water content was set to 2.1 g/m³ as suggested in [34]. The lift and drag coefficients were calculated based on the density obtained from Eq. 3, as presented in Eqs. 5-6. It should be noted that the vertical relative velocity component of the raindrops was neglected in order to study the pure effect of the drops interacting with the airfoil and the airflow. The role of the vertical relative velocity component could be investigated in future studies.
Obviously, the mass fraction of water for the dry case was set to zero.
$$C_L = \frac{L}{\frac{1}{2}\rho U_\infty^2 A} \quad (5)$$

$$C_D = \frac{D}{\frac{1}{2}\rho U_\infty^2 A} \quad (6)$$

where $L$ and $D$ correspond to the lift and drag force magnitudes, respectively. Also, the reference area, denoted by $A$, is the surface of a rectangle parallel to the flow confined by the chord and span lines of the airfoil.
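As a quick illustration of Eqs. 3-6, the Python sketch below implements the mass-weighted mixture properties and the force normalization; the air and water property values are standard textbook numbers assumed here, not values taken from the paper.

```python
# Minimal sketch of the mixture properties (Eqs. 3-4) and force
# normalization (Eqs. 5-6). Air/water properties are assumed
# standard values, not quoted from the paper.
rho_air, rho_water = 1.225, 1000.0   # densities [kg/m^3]
mu_air, mu_water = 1.8e-5, 1.0e-3    # dynamic viscosities [Pa.s]

def mixture_properties(alpha):
    """Eqs. 3-4: alpha is the mass fraction of water (0 for the dry case)."""
    rho = alpha * rho_water + (1.0 - alpha) * rho_air
    mu = alpha * mu_water + (1.0 - alpha) * mu_air
    return rho, mu

def force_coefficient(force, rho, U_inf, A_ref):
    """Eqs. 5-6: normalize a lift or drag force magnitude."""
    return force / (0.5 * rho * U_inf**2 * A_ref)

# Dry case: alpha = 0 recovers pure-air properties.
rho_dry, mu_dry = mixture_properties(0.0)
print(rho_dry, mu_dry)
```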
Numerical method
Considering the fact that the Reynolds number lies in the turbulent flow regime, the 2-D unsteady Reynolds-averaged Navier-Stokes (URANS) equations, written for incompressible flow in Eq. 7, were numerically solved with the ANSYS FLUENT Version 18.0 commercial code:

$$\frac{\partial \bar{u}_i}{\partial t} + \bar{u}_j\frac{\partial \bar{u}_i}{\partial x_j} = -\frac{1}{\rho}\frac{\partial \bar{p}}{\partial x_i} + \frac{\partial}{\partial x_j}\left(\nu\frac{\partial \bar{u}_i}{\partial x_j} - \overline{u_i' u_j'}\right) \quad (7)$$
To solve Eq. 7, the k−ω Shear-Stress Transport (SST) model was utilized owing to its widely known capability to precisely localize separation and to predict the flow characteristics in both pre- and post-stall conditions. This method, i.e., k−ω SST, is a two-equation URANS model which resolves the region in the vicinity of the wall with the standard k−ω turbulence model, while the flow variables of the far-field region are obtained with the k−ɛ model, thanks to their respective capabilities in predicting boundary layer separation and free-stream characteristics. In order for the turbulence model to switch between k−ω and k−ɛ, the equations are multiplied by a blending function whose value is one near the wall and zero in the far-field region.
The boundary conditions applied to the system of equations are presented in Fig. 1. The airfoil surface was treated as a wall with the no-slip condition. In addition, at the upper and lower bounds of the numerical domain, as seen in Fig. 1, the gradients of all variables were assumed to be zero; this boundary condition is specified as "symmetry" in Fig. 1. Moreover, to impose the pitching motion on the airfoil, the domain was decomposed into two separate regions, called the inner and outer regions. The pitching motion was imposed on the inner region using the sliding mesh technique. As a result, no mesh stretching/compression or mesh destruction/generation was needed, which makes the solution considerably more accurate. Also, note that the type of the surrounding circle is "interface", meaning that conservation of the flux of the flow variables on either side of the boundary is guaranteed. To solve the above equations, a pressure-based solver with SIMPLE pressure-velocity coupling was employed. All the spatial and temporal terms in the equations presented above were discretized with the second-order upwind scheme and a second-order method, respectively. In addition, the least squares cell-based method was employed to discretize the gradients.
To take the effect of the raindrops into consideration, two multiphase models were employed: DPM and the Dispersed Multiphase (DMP) model, the latter being an Eulerian-Eulerian method which models a dispersed phase (rain) in a primary phase (air), both in a stationary frame of reference. DPM is a Lagrangian-Eulerian method in which the primary phase, i.e., the phase whose volume fraction is far more dominant than that of the other phase, is solved with the Eulerian approach. As a result, the primary phase flow field is obtained by solving Eq. 7 presented earlier. Furthermore, in DPM the behavior of the secondary phase, which is the phase with the considerably lower volume fraction, is described with the Lagrangian approach. To predict the trajectory of the secondary phase, the resulting force acting on the particles needs to be calculated; striking the force balance, the inertial force is set equal to the forces exerted by the primary phase. As a result, Eq. 8 below is the governing equation of the Lagrangian phase, whose solution specifies the particle trajectories:

$$\frac{du_p}{dt} = \frac{f_r}{\tau_r}\left(u - u_p\right) + F_x \quad (8)$$
where the first term on the right-hand side is the drag force and $F_x$ represents additional forces acting on the particles. Here $\tau_r$ is the droplet (particle) relaxation time, obtained as follows:

$$\tau_r = \frac{\rho_p d_p^2}{18\mu} \quad (9)$$

A further discussion of the relaxation time is found in Sect. 4.4.
Many attempts have been made to relate the drag function, $f_r$, to the primary flow characteristics; the drag function is linked to the drag coefficient through the standard relation $f_r = C_D Re/24$. In this study the Schiller and Naumann correlation has been employed to predict the drag coefficient:

$$C_D = \begin{cases} \dfrac{24}{Re}\left(1 + 0.15\,Re^{0.687}\right), & Re < 1000 \\[4pt] 0.44, & Re \ge 1000 \end{cases} \quad (11)$$

In the Discrete Phase Model (DPM) it is assumed that the dispersed phase, which possesses the lower volume fraction, is scattered throughout the flow field. DMP, unlike DPM, consists of an Euler-Euler system of equations, and in each cell the volume fraction is obtained through the continuity equation, which must be solved for each phase. As a result, the set of equations for DMP takes the following form:

$$\frac{\partial (\alpha_i \rho_i)}{\partial t} + \nabla\cdot(\alpha_i \rho_i \mathbf{v}_i) = 0 \quad (12)$$

$$\frac{\partial (\alpha_i \rho_i \mathbf{v}_i)}{\partial t} + \nabla\cdot(\alpha_i \rho_i \mathbf{v}_i \mathbf{v}_i) = -\alpha_i \nabla p + \nabla\cdot(\alpha_i \boldsymbol{\tau}_i) + \alpha_i \rho_i \mathbf{g} + \sum_j \mathbf{F}_{D,ij} \quad (13)$$

where $\alpha_i$, $\rho_i$ and $\mathbf{v}_i$ are the volume fraction, density and velocity of the dispersed phase $i$. Moreover, $\mathbf{F}_{D,ij}$ is the drag force exerted by the primary phase on the dispersed phase $j$. Also, $p$ stands for the static pressure, which includes the buoyancy force.
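The Lagrangian step described above can be illustrated with a minimal sketch: an explicit integration of the drag term of Eq. 8 using the Schiller-Naumann correlation of Eq. 11. The uniform carrier velocity used here is a stand-in; in the actual study the droplets are coupled to the URANS field inside Fluent.

```python
import numpy as np

# One-way-coupled droplet tracking with Schiller-Naumann drag (Eq. 11).
# Values follow the text; the carrier flow is a stand-in uniform stream.
rho_p, d_p = 1000.0, 1e-5      # droplet density [kg/m^3] and diameter [m]
rho_f, mu_f = 1.225, 1.8e-5    # air properties (assumed standard values)
dt = 1e-5                      # particle time step, as in Sect. 4.4

def schiller_naumann(Re):
    """Drag coefficient C_D, Eq. 11."""
    return 24.0 / Re * (1.0 + 0.15 * Re**0.687) if Re < 1000.0 else 0.44

def step(u_p, u_f):
    """Advance the droplet velocity one time step via the drag term of Eq. 8."""
    u_rel = u_f - u_p
    Re = rho_f * np.linalg.norm(u_rel) * d_p / mu_f + 1e-12
    # Relaxation time corrected by the drag function f_r = C_D*Re/24.
    tau = rho_p * d_p**2 / (18.0 * mu_f) * 24.0 / (schiller_naumann(Re) * Re)
    return u_p + dt * u_rel / tau

u_p = np.array([0.0, 0.0])      # droplet initially at rest
u_f = np.array([24.93, 0.0])    # free-stream velocity
for _ in range(200):
    u_p = step(u_p, u_f)
print(u_p)  # the droplet relaxes toward the carrier velocity
```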
Parameters independency and validation
To verify the choice of numerical parameters in the simulation, the parameters affecting the results need to be thoroughly investigated. The aspects studied in this section are mesh and time step independency, validation of the numerical solution against experimental results, and multiphase model accuracy. Having performed these studies, one may be assured of the validity of the numerical solution and draw reasonable conclusions in the following sections.
Mesh independency
To investigate the sensitivity of the numerical solution to the mesh size, three different meshes with 50,000, 150,000 and 250,000 cells were generated, and the variation of the lift and drag coefficients with mesh size is plotted in Fig. 2. It is evident in Fig. 2 that beyond 150,000 cells the variation in the lift and drag coefficients is less than one percent. Therefore, the mesh with 150,000 cells was selected for the subsequent numerical simulations. Moreover, the value of $y^+$ was calculated all over the airfoil and its maximum value is 1.8. In Fig. 3 a graphical, qualitative presentation of the mesh generated around the airfoil can be observed. It should be noted that the Angle of Attack (AOA) is presented in degrees in all the figures of this paper.
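For reference, the reported maximum $y^+$ of 1.8 can be related to a wall-normal first-cell height. The sketch below uses a flat-plate skin-friction correlation as a rough stand-in assumption; the paper does not state how its wall spacing was actually chosen.

```python
import numpy as np

# Rough first-cell-height estimate for a target y+. The flat-plate
# skin-friction correlation is an assumption used only for illustration.
rho, mu = 1.225, 1.8e-5   # air properties (assumed standard values)
U_inf, c = 24.93, 0.58    # free-stream velocity [m/s] and chord [m]
y_plus_target = 1.0

Re_c = rho * U_inf * c / mu              # ~1e6, as in the text
c_f = 0.026 * Re_c ** (-1.0 / 7.0)       # empirical flat-plate estimate
tau_w = 0.5 * c_f * rho * U_inf**2       # wall shear stress [Pa]
u_tau = np.sqrt(tau_w / rho)             # friction velocity [m/s]
dy = y_plus_target * mu / (rho * u_tau)  # first cell height [m]
print(f"Re_c = {Re_c:.2e}, first-cell height ~ {dy*1e6:.0f} micron")
```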
Time step independency
To implement the time step independency study, three different time steps were considered. The time step size was determined with respect to the time the airfoil needs to sweep one period of the oscillating motion. Considering the angular velocity of the airfoil, which is the time derivative of the pitching motion presented in Eq. 1, time steps equal to one period $T$ divided by 180, 360 and 720 were used. The hysteresis curve of the lift coefficient for all time steps is plotted in Fig. 4. It is evident that the discrepancy among the time step lengths is quite negligible, and the results for the finest time steps, i.e., $T/360$ and $T/720$, are so close that the curves lie on top of each other. It should be noted that $T$ stands for the time required for the completion of one period of motion, which is equal to 0.49 s. As a result, the time step sizes for this study are $2.72 \times 10^{-3}$, $1.36 \times 10^{-3}$ and $6.8 \times 10^{-4}$ s. It should also be noted that the simulations for the time step study were performed for the dry case.
Numerical solution validation
The numerical results obtained for the dry case, using the parameters which were shown to satisfy the independency conditions, were compared with both experimental (McAlister et al. [7]) and numerical data (Tadjfar and Asgari [31]), as presented in Fig. 5. It is evident that the numerical results capture the trend of the aerodynamic forces as well as the location of the dynamic stall occurrence properly. It should be noted that there are some non-smooth changes in the experimental results, indicating a lack of sampling resolution. As a result, the experimental data could not capture the lift oscillations at the beginning of the dynamic stall, while both numerical results show some oscillations at the same angles of attack. The largest difference between the numerical results and the experimental data occurs during the downstroke near AoA = 18°. At this point, both numerical studies fail to capture the physics of the dynamic stall. This point corresponds to the separation of the Trailing Edge Vortex (TEV), which will be discussed in detail in the following sections. As discussed in the introduction, DPM is the method mainly employed for rain simulation in different applications. In this work, in order to perform a validation study of the DPM simulation, the wet results of a static NACA-0012 airfoil at different angles of attack were compared with those of the experimental and numerical studies conducted by Bezos et al. [35] and Ismail et al. [41], respectively. In Fig. 6 the aerodynamic coefficients of the airfoil under rainy conditions are presented. As seen in Fig. 6, the wet results of the current simulation are in good agreement with the experimental work of Bezos et al. [35]. It should be noted that the same DPM settings will be used for the investigation of the rain effect on the dynamic stall.
Multiphase model accuracy
As mentioned at the beginning of Sect. 3, in order to find the most accurate multiphase model for predicting the behavior of the pitching airfoil in rainy weather conditions, the DPM and DMP methods were employed. For the implementation of DPM, a two-way coupling method was utilized. The particle relaxation time is defined as follows in Eq. 14 [50]:

$$\tau_p = \frac{\rho_p d_p^2 C_c}{18 \mu_f} \quad (14)$$

where in this study the slip correction factor, $C_c$, is about unity. Taking the droplet density, air density, droplet diameter and air viscosity as $\rho_p$, $\rho_f$, $d_p$ and $\mu_f$, respectively, the relaxation time of the particles is $3 \times 10^{-4}$ s. To be on the safe side, only three percent of the calculated relaxation time was taken as the particle time step, i.e., $\Delta t_p = 10^{-5}$ s. Another parameter which must be determined is the order of coupling. Figure 7 shows the particle coupling regimes in terms of the volume fraction, $\alpha_p$, and the relative relaxation time, $\tau_p/\tau_f$, as presented by Elghobashi [51]. Considering the volume fraction and the relaxation times of the flow and particles, this problem needs to be modeled with two-way coupling. In Fig. 8 the lift, drag and moment coefficients are presented under both rainy and dry conditions.
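The numbers quoted in this paragraph follow directly from Eq. 14, as the short sketch below shows; the reading of the Elghobashi map in the final comment is a simplified paraphrase of the coupling criterion.

```python
# Particle relaxation time (Eq. 14) and the resulting DPM time step,
# reproducing the values quoted in the text.
rho_p, d_p = 1000.0, 1e-5   # droplet density [kg/m^3] and diameter [m]
mu_f = 1.8e-5               # air viscosity [Pa.s] (assumed standard value)
C_c = 1.0                   # slip correction factor, ~1 per the text

tau_p = rho_p * d_p**2 * C_c / (18.0 * mu_f)   # ~3e-4 s
dt_p = 0.03 * tau_p                            # 3% of tau_p -> ~1e-5 s

LWC, rho_water = 2.1e-3, 1000.0                # [kg/m^3]
alpha_p = LWC / rho_water                      # droplet volume fraction ~2e-6
print(f"tau_p = {tau_p:.2e} s, dt_p = {dt_p:.1e} s, alpha_p = {alpha_p:.1e}")
# With alpha_p between 1e-6 and 1e-3, the Elghobashi map indicates the
# two-way-coupling regime, matching the choice made in the text.
```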
Unlike for DPM, there have been few reports of utilizing an Eulerian-Eulerian frame of reference for modelling raindrops in air [52], which may be associated with the disperse nature of water drops. However, one should keep in mind that this method imposes a significantly smaller computational cost. Hence, it is worthwhile to compare the results obtained from the Eulerian-Eulerian and Eulerian-Lagrangian frames of reference for simulating water drops in air. The accuracy of the Eulerian-Lagrangian method, namely DPM, was validated in Sect. 4.3. In addition, based on the literature, the presence of raindrops should not significantly alter the trend of the aerodynamic forces. However, as seen in Fig. 8, the results of the Eulerian-Eulerian method, i.e., DMP, demonstrate a totally different pattern, especially at angles of attack near the dynamic stall. Thus, the DMP method is not appropriate for capturing the physics of the dynamic stall of a pitching airfoil under rainy conditions. Also, as can be observed in the lift coefficient in Fig. 8a, the DMP method fails to capture the physics of dynamic stall. On the other hand, DPM demonstrates an excellent capability to yield physically meaningful results in both pre- and post-stall conditions. Therefore, in the following steps, the DPM method was chosen to study the effect of the raindrops. Also, it should be noted that the liquid water content and droplet diameter were assumed to be 2.1 g/m³ and $10^{-5}$ m, respectively, as suggested in [53]. In addition, as [53] suggests, when the Weber number ranges from five to ten, which is the working range of this study, the interaction between the airfoil and the raindrops is assumed to be of the rebound type.
Results and Discussion
Having evaluated the accuracy of the numerical simulation, we now discuss the effect of raindrops on the aerodynamic characteristics by comparing the lift and drag coefficients with those of the dry case. To this end, the lift and drag coefficient hysteresis curves of the dry and wet cases are presented in Fig. 9. Also, to understand the physics of this phenomenon, the vorticity contours of the dry and wet cases at different angles of attack are presented in Fig. 10. It should be noted that the points in Fig. 9 correspond to the angles of attack in Fig. 10. Also, the upward and downward arrows in Fig. 9 indicate the upstroke and downstroke motion, respectively. As stated in Sects. 2 and 4.4, the raindrop diameter and liquid water content were set to $10^{-5}$ m and 2.1 g/m³.
Overall, both the dry and wet cases follow the same trend, and the overall values of the aerodynamic coefficients of the wet case are smaller than those of the dry case. Nevertheless, it is evident in Fig. 9 that there is a lead in the trend of the wet case, as peaks and valleys occur ahead of those of the dry case. Because both cases show a similar trend in the lift and drag coefficient variation, the aerodynamic phenomena associated with the pitching motion are examined below for both the dry and wet cases. The vorticity contours at different angles of attack during the upstroke and downstroke are presented in Fig. 10. Figure 10a shows that at the beginning of the pitching upstroke motion the vortical structure in both cases is quite narrow. As the angle of attack increases, the vortical structure begins to grow, as seen in Fig. 10b. As the angle of attack rises to about 23 degrees, in Fig. 10c, the first cores of the leading-edge vortex (LEV) start to appear on the leading edge of the airfoil. It should be noted that LEVs are typical of the dynamic stall phenomenon and come into existence due to the difference between the free-stream and leading-edge relative velocities, which leads to a Kelvin-Helmholtz instability. Also, the presence of the LEV benefits the lift growth, and it is the main precursor of the stall, as it causes the velocity to increase on the suction side while decelerating the flow passing the pressure side. Obviously, by increasing the angle of attack the LEV is intensified, thanks to the rise in the relative velocity between the flow and the airfoil leading edge, as demonstrated in Fig. 10d. Therefore, when the angle of attack exceeds a certain limit the LEV detaches and the lift coefficient drops dramatically, as can be seen in Fig. 10e. The detachment of the LEV leads to a sharp drop in the lift coefficient and a sudden rise in the drag coefficient; this observation corresponds to point "e" in Fig. 9. Up to this angle of attack, i.e., the LEV detachment at 25 degrees, the formation of the vortical structures in the dry and wet cases is similar. However, straight after the LEV detachment a lead in the formation of the vortical structures is witnessed. To illustrate, the same vortical structure reported in the wet case at 24.79° (Fig. 10e, right) is observed in the dry case at 22.70° (Fig. 10f, left). Moreover, the same trend is observed until the airfoil reaches very low angles of attack, as illustrated in Fig. 10j. Therefore, in the wet case, the flow separation and the succeeding phenomena happen at a lower angle of attack during the upstroke motion compared to the dry case.
It should also be noted that during the downstroke motion another phenomenon, known as the trailing edge vortex (TEV), gains importance, as the downward motion favors its formation as a result of the velocity gradient at the trailing edge. Contrary to the LEV, the TEV is a counterclockwise (CCW) vortex generated at the trailing edge of the airfoil, and its formation is an adverse factor for lift growth. This is ascribed to the fact that, because of the direction of its rotation, the TEV decelerates the flow on the suction side and accelerates the flow on the pressure side, resulting in a lift drop. In order to better understand the effect of the TEV on the aerodynamic behavior of the airfoil, the vortical structures at the beginning of the TEV formation and at its detachment need to be considered. To begin with, the first signs of the TEV formation can be observed at the beginning of the dynamic stall in Fig. 10e (shown in red), where the LEV starts to detach. A little after the LEV detachment, as demonstrated in Fig. 10f and g, the TEV grows and then detaches. Unlike for the LEV, TEV growth leads to a sharp drop in the lift coefficient, and its detachment results in a rise in the lift coefficient, as marked in Fig. 9a by the points "f" and "g". In other words, the LEV and TEV formation and separation correspond to a local minimum and maximum, respectively. As a result, the oscillations in the lift and drag coefficients, mainly at the higher angles of attack during the downstroke motion, are due to TEV formation and separation.
The main outcome of the comparison between the vorticity contours and the aerodynamic characteristics is that the lift and drag coefficients in the rainy case are slightly lower than in the dry case and that a lead in the aerodynamic phenomena is observed. At first thought, these phenomena could be attributed to the lower Reynolds number of the air in the wet case, leading to a lower lift coefficient and a lead in the TEV separation. However, a numerical simulation was performed for the dry case with the air inlet velocity equal to that of the wet case, and the results showed almost the same values of the aerodynamic coefficients, with no lead or lag in their behavior. As a result, the explanation for these phenomena needs to be sought in the interaction between the primary and secondary phases. Therefore, the contour of the velocity magnitude difference (U_WET − U_DRY) between the wet and the dry case, along with the vectorial difference, is presented in Fig. 11 to give a better understanding of the interaction. It is evident in Fig. 11 that at the leading edge the difference vectors tend downwards, suggesting that the airfoil sees a lower effective angle of attack, which corresponds to a lower lift coefficient. Also, at the trailing edge, the difference vectors show a TEV-like vortical structure. As discussed earlier, the presence of a CCW vortex at the trailing edge has an adverse effect on the lift generation of the pitching airfoil, which is one of the mechanisms leading to a lower lift coefficient in the wet case compared to the dry case. Further, regarding the vector differences, it is evident in Fig. 11 that the flow on the suction side experiences a deceleration, while on the pressure side it accelerates. In other words, an LEV with an unfavorable rotation direction (CCW) is seen at the leading edge. All the observations from Fig. 11 imply that, due to the two-way coupling, the air velocity vectors of the upstream flow are altered by the forces exerted by the secondary phase.
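A minimal sketch of how such a difference field can be formed, assuming both solutions have been sampled (or interpolated) onto a common grid at the same phase of the pitching cycle; the file names and arrays are hypothetical placeholders:

```python
import numpy as np

# Velocity components of the two cases on the same grid (hypothetical files).
u_dry = np.load("u_dry.npy"); v_dry = np.load("v_dry.npy")
u_wet = np.load("u_wet.npy"); v_wet = np.load("v_wet.npy")

du = u_wet - u_dry          # x-component of the difference vectors
dv = v_wet - v_dry          # y-component of the difference vectors
dmag = np.hypot(du, dv)     # magnitude of the vectorial difference

# A downward-pointing difference vector near the leading edge (dv < 0)
# indicates a locally reduced effective angle of attack in the wet case.
```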
Conclusion and summary
In this study the effect of raindrops on the dynamic stall of a NACA-0012 pitching airfoil at a Reynolds number of 10⁶ was investigated. The well-known k-ω SST model was utilized to account for turbulence. In addition, to exploit the maximum accuracy of this model, y⁺ ≈ 1 was imposed at the airfoil boundary. The rain condition was set as suggested in [53]; hence, the raindrop diameter and the liquid water content were set to 10⁻⁵ m and 2.1 g/m³, respectively. To implement an accurate two-phase study, two well-known models were compared, and the more accurate one, the Discrete Phase Model (DPM), was selected as the main two-phase model. It was shown that the Disperse Multiphase (DMP) method is not capable of capturing the physics of the dynamic stall. As a result, DPM was employed to investigate the effect of raindrops on the pitching airfoil. Furthermore, to ensure the accuracy of the numerical results, grid and time-step independence studies were performed. Finally, the numerical solution was validated against an experimental [7] and a numerical [31] study.
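As a rough sanity check on the stated near-wall resolution, a standard flat-plate estimate gives the first-cell height required for y⁺ ≈ 1. The chord length, air properties and skin-friction correlation below are assumptions, since the text states only Re = 10⁶ and y⁺ ~ 1:

```python
import numpy as np

Re  = 1.0e6
c   = 1.0       # m, assumed chord
nu  = 1.5e-5    # m^2/s, approximate kinematic viscosity of air
rho = 1.2       # kg/m^3

U = Re * nu / c                    # freestream speed consistent with Re
cf = 0.026 / Re**(1.0 / 7.0)       # empirical flat-plate skin-friction estimate
tau_w = 0.5 * cf * rho * U**2      # wall shear stress
u_tau = np.sqrt(tau_w / rho)       # friction velocity
y1 = 1.0 * nu / u_tau              # wall distance giving y+ = 1
print(f"U = {U:.1f} m/s, first cell height ~ {y1*1e6:.0f} micron")
```

With these assumptions the first cell must be on the order of a few tens of microns, which illustrates why the mesh near the airfoil has to be strongly refined for a y⁺ ~ 1 treatment.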
The comparison between the lift and drag coefficients revealed both a phase lead and a drop in magnitude in the wet case: wet-condition phenomena occurred in advance of the dry-case ones, with slightly lower magnitudes. Firstly, it was shown that the Reynolds number does not play a pivotal role in the lift reduction or phase lead, as the hysteresis curves of the aerodynamic coefficients for the dry cases with inlet velocity equal to that of the main dry case and that of the wet case were very close in value, and no phase shift was observed. Secondly, the two-way coupling of air and raindrops causes the flow to change direction near the leading edge. The velocity-difference contours showed that, as a result of the two-way coupling, the velocity in the rainy case is altered in an adverse direction, which leads to lower aerodynamic coefficients in general. The vorticity contours of the dry and wet cases showed that the leading-edge vortex (LEV) and trailing-edge vortex (TEV) of the wet case tend to separate earlier than those of the dry case. It was also illustrated that LEV formation contributes to lift augmentation, while the presence of the TEV results in a reduction in the lift coefficient; hence, their separation counteracts their positive or negative effects on the aerodynamic coefficients to a considerable extent. To sum up, the main findings of this study are as follows:
• DPM is a more accurate solution than DMP for modelling raindrop effects on a pitching airfoil, since the latter proved unable to capture the physics of the flow.
• Raindrops cause a reduction in the magnitude of the lift and drag coefficients.
• Raindrops do not alter the main trend of the aerodynamic coefficients, but they introduce a lead in the trend.
• The lower Reynolds number of the wet case was shown not to have a considerable effect on the reduced aerodynamic coefficients or the phase shift.
• The two-way coupling between the primary and the secondary phases was found to change the local angle of attack, affecting the aerodynamics of the airfoil.
• The interaction between the primary and secondary phases was found to be the most important factor behind the observed phenomena.
Funding Open access funding provided by Politecnico di Milano within the CRUI-CARE Agreement.
| 8,087.8 | 2022-04-24T00:00:00.000 | [
"Physics"
] |
Spectral Localization in the Hierarchical Anderson Model
We prove that a large class of hierarchical Anderson models with spectral dimension ${\rm d}\leq 2$ has only pure point spectrum.
Introduction
This paper is devoted to the study of the spectral properties of the hierarchical Anderson model and is motivated by the work of Molchanov [M2]. Before stating our results we recall the definition of the model and its basic properties. For additional information about hierarchical structures and the hierarchical Anderson model we refer the reader to [D, BS, Bo, M1, M2].
Let X be an infinite countable set. Throughout the paper, $\delta_x$ will denote the Kronecker delta function at x ∈ X. A partition P of X is a collection of its disjoint subsets whose union is equal to X. Let $n = (n_r)_{r\ge 0}$ be a sequence of positive integers and $P = (P_r)_{r\ge 0}$ a sequence of partitions of X. The elements of $P_r$ are called "clusters" of rank r. We say that (X, P, n) is a hierarchical structure if the following hold: (1) $n_0 = 1$ and every $Q \in P_0$ has exactly one element.
(2) For r ≥ 1, every $Q \in P_r$ is a disjoint union of $n_r$ clusters in $P_{r-1}$.
(3) Given x, y ∈ X, there is a cluster Q of some rank containing both x and y.
Let us state some immediate consequences of this definition. Every cluster of rank r ≥ 0 has size $N_r := \prod_{s=0}^{r} n_s$. Given x ∈ X and r ≥ 0, there is a unique cluster of rank r containing x. We denote this cluster by $Q_r(x)$. The map $d(x, y) := \min\{r : y \in Q_r(x)\}$ is a metric on X, and $Q_r(x) = \{y : d(x, y) \le r\}$. Note that $Q_r(x) = Q_r(y)$ whenever $d(x, y) \le r$. Given an integer n ≥ 2, a hierarchical structure is called homogeneous of degree n if $n_r = n$ for all r ≥ 1.
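The definitions above are easy to realize computationally. The sketch below builds a homogeneous structure of degree n with points labelled by consecutive integers, so that rank-r clusters are blocks of size n^r; this labelling is one convenient choice for illustration, not part of the definition:

```python
# Minimal sketch of a homogeneous hierarchical structure of degree n.
# Points are labelled 0 .. n**R - 1 (a finite truncation for illustration);
# the rank-r cluster of x is the set of labels sharing the quotient x // n**r,
# so Q_r(x) has size N_r = n**r.
n, R = 2, 10
points = range(n**R)   # finite truncation used only for the demonstration

def Q(x, r):
    """Rank-r cluster containing x, as a range of labels."""
    base = (x // n**r) * n**r
    return range(base, base + n**r)

def d(x, y):
    """Hierarchical metric: smallest rank r with y in Q_r(x)."""
    r = 0
    while x // n**r != y // n**r:
        r += 1
    return r

assert d(0, 0) == 0 and d(0, 1) == 1 and d(0, n) == 2   # checks for n = 2
```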
The free Laplacian on the hierarchical structure (X, P, n) is defined as follows. For each r ≥ 0, let $E_r : l^2(X) \to l^2(X)$ be the averaging operator
$$(E_r\psi)(x) := \frac{1}{N_r}\sum_{y \in Q_r(x)} \psi(y).$$
Let $p = (p_r)_{r\ge 1}$ be a sequence of positive numbers such that $\sum_{r=1}^{\infty} p_r = 1$. In the sequel we set $p_0 := 0$ and $\lambda_r := \sum_{s=0}^{r} p_s$. The hierarchical Laplacian ∆ on $l^2(X)$ is defined by
$$\Delta := \sum_{r=1}^{\infty} p_r E_r.$$
Clearly, ∆ is a bounded self-adjoint operator and 0 ≤ ∆ ≤ 1. A hierarchical model is a hierarchical structure (X, P, n) together with the hierarchical Laplacian ∆. The spectral properties of ∆ depend only on n and p and are summarized in:
Theorem 1.1. (1) The spectrum of ∆ is equal to $\{\lambda_r : r = 0, \dots, \infty\}$. Each $\lambda_r$, r < ∞, is an eigenvalue of ∆ of infinite multiplicity. The point $\lambda_\infty = 1$ is not an eigenvalue.
(2) $E_r - E_{r+1}$ is the orthogonal projection onto the eigenspace of $\lambda_r$, and (3) for every x ∈ X, the spectral measure for $\delta_x$ and ∆ is given by
$$\mu = \sum_{r=0}^{\infty} \left(\frac{1}{N_r} - \frac{1}{N_{r+1}}\right)\delta(\lambda_r),$$
where $\delta(\lambda_r)$ stands for the Dirac unit mass at $\lambda_r$. Note that µ does not depend on x.
The spectral measure µ can be naturally interpreted as the integrated density of states of the operator ∆. Let $x_0 \in X$ be given and consider the increasing sequence of clusters $Q_r(x_0)$, r ≥ 0. Let $P_r$ be the orthogonal projection onto the $N_r$-dimensional subspace $l^2(Q_r(x_0))$, let $\lambda^{(r)}_1, \dots, \lambda^{(r)}_{N_r}$ be the eigenvalues of the restricted Laplacian $P_r \Delta P_r$ acting on $l^2(Q_r(x_0))$, and let
$$\nu_r := \frac{1}{N_r}\sum_{j=1}^{N_r} \delta\bigl(\lambda^{(r)}_j\bigr)$$
be the corresponding counting measure.
Proposition 1.2. The weak-* limit $\lim_{r\to\infty} \nu_r$ exists and is equal to µ.
If the limit
$$d := 2\lim_{\lambda \uparrow 1} \frac{\log \mu([\lambda, 1])}{\log(1-\lambda)}$$
exists, then the number d is called the spectral dimension of ∆. This definition is motivated by the analogy with the edge asymptotics of the density of states of the standard discrete Laplacian on $\mathbb{Z}^d$, for which the spectral and spatial dimensions coincide. The relation $\sum_{y\in X} \langle \delta_x, \Delta \delta_y \rangle = 1$ yields that ∆ generates a random walk on X. We recall that the random walk on $\mathbb{Z}^d$ generated by the standard discrete Laplacian is recurrent if d = 1, 2 and transient if d > 2. The corresponding result for the hierarchical Laplacian is:
Proposition 1.3. Consider a homogeneous hierarchical structure of degree n ≥ 2. Suppose that there exist constants $C_1 > 0$, $C_2 > 0$ and ρ > 1 such that $C_1 \rho^{-r} \le p_r \le C_2 \rho^{-r}$ for r big enough. Then: (1) The spectral dimension of this model is $d(n, \rho) = \frac{2\log n}{\log \rho}$.
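The formula for d(n, ρ) can be checked directly from the explicit spectral measure: with the normalization $p_r = (\rho-1)\rho^{-r}$ (a choice made here only for concreteness) one has $\lambda_r = 1 - \rho^{-r}$ and $\mu([\lambda_r, 1]) = 1/N_r = n^{-r}$, so the edge exponent is the same at every scale. A short numerical verification:

```python
import math

# Check d(n, rho) = 2*log(n)/log(rho) for p_r = (rho - 1)*rho**(-r).
# With lambda_r = sum_{s<=r} p_s = 1 - rho**(-r) and mu([lambda_r, 1]) = n**(-r),
# the ratio 2*log mu([lambda,1]) / log(1 - lambda) equals d at every rank r.
n, rho = 2, 3.0
for r in (5, 10, 20):
    lam_r = 1.0 - rho**(-r)        # eigenvalue lambda_r
    mu_tail = float(n)**(-r)       # mu([lambda_r, 1]) = 1/N_r
    d_est = 2.0 * math.log(mu_tail) / math.log(1.0 - lam_r)
    print(r, d_est, 2.0 * math.log(n) / math.log(rho))   # d_est is constant in r
```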
We now define the hierarchical Anderson model associated to (X, P, n) and the hierarchical Laplacian ∆. Consider the probability space $(\Omega, \mathcal F, \mathsf P)$, where $\Omega := \mathbb R^X$, $\mathcal F$ is the usual Borel σ-algebra in Ω, and P is a given probability measure on $(\Omega, \mathcal F)$.
For ω ∈ Ω, we set $(V_\omega \psi)(x) := \omega(x)\psi(x)$; $V_\omega$ is a self-adjoint (possibly unbounded) multiplication operator on $l^2(X)$. Let
$$H_\omega := \Delta + V_\omega.$$
The family of self-adjoint operators $\{H_\omega\}_{\omega\in\Omega}$ indexed by the events of the probability space (Ω, F, P) is called the hierarchical Anderson model. Concerning the probability measure P, we will need only one technical assumption having to do with the notion of conditional density. Throughout the paper, m will denote the Lebesgue measure on R. For any x ∈ X, Ω can be decomposed along the x'th coordinate as $\Omega = \mathbb R \times \widetilde\Omega$, $\widetilde\Omega = \mathbb R^{X\setminus\{x\}}$. Let $\widetilde{\mathsf P}_x$ be the corresponding marginal of P, defined by $\widetilde{\mathsf P}_x(\widetilde B) := \mathsf P(\mathbb R \times \widetilde B)$, where $\widetilde B \subset \widetilde\Omega$ is a Borel set. Then for $\widetilde{\mathsf P}_x$-a.e. $\widetilde\omega \in \widetilde\Omega$, there is a probability measure $\mathsf P^{\widetilde\omega}_x$ on R such that the conditional Fubini theorem holds: for all $f \in L^1(\Omega, \mathsf P)$ we have
$$\int_\Omega f \, d\mathsf P = \int_{\widetilde\Omega} \left( \int_{\mathbb R} f(v, \widetilde\omega)\, d\mathsf P^{\widetilde\omega}_x(v) \right) d\widetilde{\mathsf P}_x(\widetilde\omega).$$
If for $\widetilde{\mathsf P}_x$-a.e. $\widetilde\omega \in \widetilde\Omega$, $\mathsf P^{\widetilde\omega}_x$ is absolutely continuous (a.c.) with respect to m, then we say that P has a conditional density along the x'th coordinate. P is called conditionally a.c. if for every x ∈ X, P has a conditional density along the x'th coordinate. An important special case of a conditionally a.c. probability measure is the product measure $\mathsf P = \otimes_{x\in X} \mathsf P_x$, where each $\mathsf P_x$ is a probability measure on R a.c. with respect to m.
We denote by $\sigma_{\mathrm{ac}}(H_\omega)$ the absolutely continuous part of the spectrum of $H_\omega$ and by $\sigma_{\mathrm{cont}}(H_\omega)$ the continuous part. Our main result is:
Theorem 1.4. Suppose that $p_r$ and $N_r$ satisfy the hypothesis (1.1). Then: (1) $\sigma_{\mathrm{ac}}(H_\omega) = \emptyset$ for all ω ∈ Ω; (2) if P is conditionally a.c., then $\sigma_{\mathrm{cont}}(H_\omega) = \emptyset$ for P-a.e. ω ∈ Ω.
Remark 1. Theorem 1.4 and Proposition 1.3 allow us to construct hierarchical models with spectral dimension d ≤ 2 that exhibit Anderson localization at arbitrary disorder. If (X, P, n) is a homogeneous hierarchical structure of degree n ≥ 2 and $p_r = C\rho^{-r}$ with ρ > n, then the hypothesis (1.1) is fulfilled for $u_r = r^{1+\varepsilon}$. Given 0 < d < 2, one can adjust ρ > n to make d(n, ρ) = d. If $p_r = Cr^{-3-\varepsilon} n^{-r}$, then the model has spectral dimension d = 2 and (1.1) is verified for $u_r = r^{1+\varepsilon/3}$. One can also construct trivial models with d = 0 by taking $p_r$ to decrease faster than $\rho^{-r}$ for any ρ. We emphasize that homogeneity of the hierarchical structure is not required for Theorem 1.4. Remark 2. In [M2], Molchanov has proven that if the random variables ω(x) are i.i.d. with a Cauchy distribution, then Theorem 1.4 holds under the condition $\sum_{r=1}^{\infty} p_r u_r < \infty$.
In particular, in this case the theorem holds for ∆ of any spectral dimension. Molchanov's argument is based on subtle properties of Cauchy random variables and cannot be directly extended to other probability measures. In contrast, our proof of localization in spectral dimension d ≤ 2 is based on general arguments and is the first step in extending Molchanov's result to a more general class of probability measures. Remark 3. The fractional moments method of Aizenman and Molchanov [AM] allows one to prove localization for $\Delta + \sigma V_\omega$ at large disorder σ or at large energies. One needs an extra decoupling hypothesis on the random variables ω(x) and a condition (1.2) on ∆. The requirement (1.2) on the decay of $p_r$ is comparable to the hypothesis (1.1), while Theorem 1.4 is valid at arbitrary disorder or energy. Remark 4. Part (2) of Theorem 1.4 does not hold for all ω. Our method of proof, combined with the general results of [DMS] and [G], yields that $H_\omega$ will have singular continuous spectrum for some ω's.
The free Laplacian
In this section, we prove Theorem 1.1, Proposition 1.2 and Proposition 1.3.
Proof of Theorem 1.1. For r ≥ 0, let $H_r = \mathrm{Ran}(E_r)$. $H_r$ is the closed subspace of $l^2(X)$ consisting of functions that are constant on each cluster of rank r. Note that $l^2(X) = H_0 \supset H_1 \supset H_2 \supset H_3 \supset \dots$ and that $\bigcap_{r\ge 0} H_r = \{0\}$, since a nonzero function constant on every cluster would have infinite $l^2$ norm. These observations yield that
$$l^2(X) = \bigoplus_{r=0}^{\infty} L_r, \qquad (2.1)$$
where $L_r$ is the orthogonal complement of $H_{r+1}$ in $H_r$. Note that $L_r$ is the infinite-dimensional subspace of functions ψ such that $E_s\psi = \psi$ for 0 ≤ s ≤ r and $E_s\psi = 0$ for s > r. Hence for every $\psi \in L_r$, $\Delta\psi = \lambda_r\psi$, and this proves parts (1) and (2). The spectral measure $\mu_{x,\Delta}$ for $\delta_x$ and ∆ is the unique Borel probability measure on R such that
$$\langle \delta_x, f(\Delta)\delta_x \rangle = \int_{\mathbb R} f \, d\mu_{x,\Delta}$$
for every bounded Borel function $f : \mathbb R \to \mathbb C$. To compute $\mu_{x,\Delta}$, we decompose $\delta_x$ according to (2.1):
$$\delta_x = \sum_{r=0}^{\infty} \left( \frac{1}{N_r} 1_{Q_r(x)} - \frac{1}{N_{r+1}} 1_{Q_{r+1}(x)} \right).$$
Since $\bigl\| \frac{1}{N_r} 1_{Q_r(x)} - \frac{1}{N_{r+1}} 1_{Q_{r+1}(x)} \bigr\|^2 = 1/N_r - 1/N_{r+1}$, (3) follows. The analysis of the density of states of ∆ is facilitated if one introduces the cut-off Laplacians
$$\Delta_r := \sum_{s=1}^{r} p_s E_s.$$
It is technically easier to work with $\Delta_r$ than with $P_r \Delta P_r$. Note that $l^2(Q_r(x_0))$ is an invariant subspace for $\Delta_r$. One can exactly compute the eigenvalues and eigenvectors of the restricted operator $P_r \Delta_r$ acting on $l^2(Q_r(x_0))$. If 0 ≤ s ≤ r, then every $\psi \in L_s \cap l^2(Q_r(x_0))$ is an eigenvector of $P_r \Delta_r$ with eigenvalue $\lambda_s$. The subspace $L_s \cap l^2(Q_r(x_0))$ has dimension $D^{(r)}_s := N_r(1/N_s - 1/N_{s+1})$ for 0 ≤ s ≤ r−1, and the subspace $L_r \cap l^2(Q_r(x_0))$ has dimension $D^{(r)}_r := 1$. Since $\sum_{s=0}^{r} D^{(r)}_s = N_r$, the spectrum of $P_r \Delta_r$ is equal to $\{\lambda_s : s = 0, \dots, r\}$ and each eigenvalue $\lambda_s$ has multiplicity $D^{(r)}_s$.
Proof of Proposition 1.2. Let $\nu^*$ be a weak-* limit point of the sequence $\nu_r$ and let $\nu_{r_k}$ be a subsequence converging to $\nu^*$. We claim that
$$\nu^*(\{\lambda_s\}) = 1/N_s - 1/N_{s+1}$$
for all s ≥ 0. Indeed, let $\delta := \min_{j \ne s} |\lambda_s - \lambda_j|/2$ and 0 < ε < δ/3. Since $\|P_r \Delta P_r - P_r \Delta_r\| \le \sum_{j=r+1}^{\infty} p_j$, we have that $\|P_r \Delta P_r - P_r \Delta_r\| \le \varepsilon$ for all r big enough. For such r, the spectrum of $P_r \Delta P_r$ is contained in $\bigcup_{j=0}^{r} [\lambda_j - \varepsilon, \lambda_j + \varepsilon]$. Let R be the spectral projection of $P_r \Delta P_r$ on $[\lambda_s - \varepsilon, \lambda_s + \varepsilon]$ and T the spectral projection of $P_r \Delta_r$ on the same interval. Let γ be the circle $\{z \in \mathbb C : |z - \lambda_s| = \delta\}$, oriented counterclockwise. Then
$$R - T = \frac{1}{2\pi i} \oint_\gamma \left[ (z - P_r \Delta P_r)^{-1} - (z - P_r \Delta_r)^{-1} \right] dz,$$
and thus $\|R - T\| \le \delta\,(2\delta/3)^{-1}\,\varepsilon\,(2\delta/3)^{-1} \le 3/4 < 1$. It follows that Ran(R) and Ran(T) have the same dimension, and hence $\nu_r([\lambda_s - \varepsilon, \lambda_s + \varepsilon]) = D^{(r)}_s/N_r \to 1/N_s - 1/N_{s+1}$, which proves the claim. Since $\sum_{s=0}^{\infty}(1/N_s - 1/N_{s+1}) = 1$ and $\nu^*$ is a probability measure, we must have $\nu^* = \mu$. Therefore µ is the unique weak-* limit point of the sequence $\nu_r$, and $\lim_{r\to\infty} \nu_r = \mu$.
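The eigenvalue multiplicities of $P_r \Delta_r$ computed above are easy to verify numerically for a small homogeneous structure; the decay law chosen for $p_r$ below is only for illustration:

```python
import numpy as np

# Spectrum of the restricted cut-off Laplacian P_r Delta_r for a homogeneous
# structure of degree n: eigenvalue lambda_s has multiplicity
# D_s = N_r*(1/N_s - 1/N_{s+1}) for s < r, and lambda_r has multiplicity 1.
n, r = 2, 4
rho = 3.0
p = [(rho - 1.0) * rho**(-s) for s in range(1, r + 1)]   # illustrative p_r
N = n**r

def E(s):
    """Averaging operator over rank-s clusters, restricted to a rank-r cluster."""
    J = np.full((n**s, n**s), 1.0 / n**s)
    return np.kron(np.eye(n**(r - s)), J)

Delta_r = sum(p[s - 1] * E(s) for s in range(1, r + 1))
eig = np.sort(np.linalg.eigvalsh(Delta_r))

lam = np.cumsum([0.0] + p)                               # lambda_0, ..., lambda_r
for s in range(r + 1):
    mult = int(round(N * (1.0 / n**s - 1.0 / n**(s + 1)))) if s < r else 1
    # observed multiplicity vs. predicted D_s
    print(s, lam[s], int(np.sum(np.isclose(eig, lam[s]))), mult)
```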
Proof of the localization theorem
This section is devoted to the proof of Theorem 1.4 and is organized as follows. We first derive a hierarchical approximation formula for the resolvent $(H_\omega - z)^{-1}$. Then we use the formula to obtain a bound on the resolvent matrix elements. This bound, combined with the Simon-Wolff localization criterion, yields the statement. Set
$$H_{\omega,r} := \Delta_r + V_\omega.$$
Fix ω ∈ Ω. For any $Q_r \in P_r$, the subspace $l^2(Q_r)$ is invariant for $H_{\omega,r}$. Let $\sigma(\omega, Q_r)$ be the set of the eigenvalues of the operator $H_{\omega,r}$ restricted to $l^2(Q_r)$, and let $\sigma_\omega := \bigcup \sigma(\omega, Q_r)$, where the union is over all clusters of all ranks. Clearly, $\sigma_\omega$ is a countable subset of R. For $z \in \mathbb C \setminus \sigma_\omega$, r ≥ 0, and x, y ∈ X, we set
$$G_{\omega,r}(x, y; z) := \langle \delta_x, (H_{\omega,r} - z)^{-1} \delta_y \rangle.$$
For $z \in \mathbb C \setminus \sigma_\omega$, r ≥ 0 and t ∈ X, let $g_{\omega,r}(t; z)$ be the average of $G_{\omega,r}(\cdot, t; z)$ over the cluster $Q_r(t)$, i.e.
$$g_{\omega,r}(t; z) := \frac{1}{N_r} \sum_{y \in Q_r(t)} G_{\omega,r}(y, t; z).$$
The key step in our proof is:
Theorem 3.2. Suppose that $p_r$ and $N_r$ satisfy (1.1). Let ω ∈ Ω and x ∈ X be fixed. Then for m-a.e. $e \in \mathbb R \setminus \sigma_\omega$,
$$\sup_{r \ge 0}\ \sum_{y\in X} |G_{\omega,r}(x, y; e)|^2 < \infty.$$
Proof. We shall use the following general result, proven in [M2]: Let A be a Hermitian N × N matrix and $v \in \mathbb C^N$. Then for all M > 0,
$$m\bigl(\{ e \in \mathbb R : \|(A - e)^{-1} v\|_2^2 \ge M \}\bigr) \le C \sqrt{N}\, \|v\|_2\, M^{-1/2},$$
where $\|\cdot\|_2$ stands for the $l^2$ norm on $\mathbb C^N$ and C is a universal constant. Since $l^2(Q_r(x))$ is an $N_r$-dimensional invariant subspace for $H_{\omega,r}$ and since $\|1_{Q_r(x)}\|_2 = \sqrt{N_r}$, we have from (3.5) that for $M_r > 0$ the set of energies e with $\|(H_{\omega,r} - e)^{-1} 1_{Q_r(x)}\|_2^2 \ge M_r$ has Lebesgue measure at most $C N_r M_r^{-1/2}$. Let $M_r > 0$ be a sequence satisfying $\sum_{r=1}^{\infty} N_r M_r^{-1/2} < \infty$. By the Borel-Cantelli lemma, for m-a.e. $e \in \mathbb R \setminus \sigma_\omega$ there exists a finite constant $C_e$ such that
$$(3.6)\qquad \|(H_{\omega,r} - e)^{-1} 1_{Q_r(x)}\|_2^2 < C_e M_r \quad \text{for all } r \ge 0.$$
From now on, such an $e \in \mathbb R \setminus \sigma_\omega$ is fixed. Using the representation formula (3.2), we obtain an estimate on the resolvent matrix elements at scale r + 1 in terms of those at scale r; combining (3.7) with (3.9) and (3.8) yields a uniform bound on $\sum_{y\in X} |G_{\omega,r}(x, y; e)|^2$, and the result follows.
Let us recall the Simon-Wolff localization criterion. For x ∈ X and ω ∈ Ω, denote by $\mu^\omega_x$ the spectral measure for $\Delta + V_\omega$ and $\delta_x$, by $\mu^\omega_{x,\mathrm{cont}}$ the continuous part of $\mu^\omega_x$ and by $\mu^\omega_{x,\mathrm{ac}}$ the a.c. part. Define the function $G_{\omega,x} : \mathbb R \to [0, +\infty]$ by
$$G_{\omega,x}(e) := \int_{\mathbb R} \frac{d\mu^\omega_x(\lambda)}{(\lambda - e)^2}.$$
By the theorem of de la Vallée Poussin, the a.c. part $\mu^\omega_{x,\mathrm{ac}}$ is concentrated on the set $\{e : G_{\omega,x}(e) = \infty\}$. Hence, if for a fixed ω ∈ Ω we have that $G_{\omega,x}(e) < \infty$ for m-a.e. e ∈ R, then $\mu^\omega_{x,\mathrm{ac}} = 0$. The Simon-Wolff localization criterion is summarized in:
Theorem 3.3. Assume that P has a conditional density along the x'th coordinate. Let B ⊂ R be a Borel set such that $G_{\omega,x}(e) < \infty$ for P ⊗ m-a.e. (ω, e) ∈ Ω × B. Then $\mu^\omega_{x,\mathrm{cont}}(B) = 0$ for P-a.e. ω ∈ Ω.
very helpful discussions and encouragement during the author's visit at the University of Charlotte, NC. We also benefited from discussions with the following people: Kingwood Chen, Serguei Denissov, Marco Merkli, Juan-Manuel Perez-Abarca and Nicola Squartini. A very special thank-you goes to Kingwood Chen for his hospitality during the author's visit at UNCC.
"Mathematics",
"Physics"
] |
Conference ALC 05: X-ray Emission Measurement from Organic and Insulating Materials with Low Energy Ga Ion Beams
We observed X-ray emission from various types of organic and insulating materials under irradiation with a low-energy (30 kV) gallium ion beam. An energy-dispersive X-ray spectrometer combined with a focused ion beam instrument could detect light elements with low K-shell electron excitation energies, such as carbon, oxygen, fluorine, sodium and silicon. The effect of the gallium ion beam current on the normalized X-ray yield was investigated. It was found that a strong irradiation beam current can effectively enhance the normalized X-ray yield. [DOI: 10.1380/ejssnt.2006.365]
I. INTRODUCTION
Organic and insulating materials are important constituents of the natural world. To study their elemental composition, X-ray emission spectroscopy has been established as a powerful analysis technique. From X-ray emission spectra, the compositions are identified from the characteristic X-ray energy positions, and the concentrations are deduced from their integral intensities. Among the various detection methods based on X-ray emission, ion- or particle-induced X-ray emission has unique advantages. The dominant merit of the particle-induced X-ray emission (PIXE) technique is its high sensitivity, so that it has been established as a routine analytical technique to characterize elemental composition [1][2][3][4][5][6][7][8][9][10]. It has already been confirmed that PIXE analysis has important applications to various materials, such as medieval stained glass [11], magnesium aluminate spinel [12], human skin sections [13], mineral assemblages and base-metal ores [14], proteins [15], trace elements inside plants [16], and so on.
However, conventional PIXE techniques need ions with primary energies of several million volts to bombard the surface of the target samples, especially when light-element gas ions such as protons and helium are used [17][18][19]. Compared to high-energy ion irradiation, low-energy ion irradiation has its own characteristics. Ion bombardment at low kinetic energy is expected to cause only slight damage at the surface of the specimen [22][23][24][25]. The low kinetic energy of the projectile limits the total path and the region in which the energetic ion penetrates the solid surface, and therefore produces a collision cascade different from that of high-energy ion irradiation [26][27][28].
Recently, it was found that heavy ions can induce characteristic X-rays from metal samples even at low primary energy [20]. Furthermore, it has been reported that the signals of characteristic X-rays with energies below about 2 keV can be enhanced if the samples are insulating [21]. This enhancement was attributed to the charge-up effect, but systematic studies and detailed analyses have not been carried out yet.
In the present work, characteristic X-ray emission from organic and insulating materials was measured using a low-energy Ga ion beam with 30 kV primary energy, in order to investigate the X-ray signals of light elements with low K-shell electron excitation energies; the influence of the irradiation beam current on the X-ray yield was systematically examined.
II. EXPERIMENTAL
The ion beam irradiation experiments were carried out using a JEM-9310 focused ion beam (FIB) system equipped with an energy-dispersive X-ray spectrometer (EDS) detector (EX-94013JNU analysis system with an energy resolution of 0.138 keV). X-ray emission measurements were carried out in the main chamber at a base pressure of 1 × 10⁻⁴ Pa. The gallium ion beam of the FIB, with working voltages ranging from 5 kV to 30 kV and emission currents ranging from 0.8 × 10⁻¹² A to 6 × 10⁻⁹ A, was used to irradiate different types of organic and insulating samples. The intensity counts of each X-ray spectrum were normalized by the dose of gallium ions (beam current multiplied by irradiation time).
We chose a series of organic and insulating specimens for X-ray analysis. Leaf and wood samples were picked from a fir tree and an ilex tree, respectively, and then dried at 100 °C for 12 hours. Teflon, SiO₂, Al₂O₃ and glass specimens were purchased commercially. The glass specimen consists of silicon, oxygen and sodium. Each sample was cut into a rectangular shape using a slice saw. The organic samples and the glass have dimensions of 2 mm (width) × 2 mm (length) × 1 mm (thickness); the SiO₂ and Al₂O₃ specimens have dimensions of 2 mm (width) × 2 mm (length) × 0.2 mm (thickness). Figures 1(a) and 1(b) show the experimental setup of the JEM-9310 FIB system and a schematic diagram of its basic working principle, respectively. The gallium emitter is a Ga liquid-metal ion source (LMIS). All of the ion-beam-induced X-ray emission measurements shown here were performed using this FIB-based system. The working distance used here is 40 mm, and the angle between the specimen surface and the EDS detector is 40°.
III. RESULTS AND DISCUSSION
Figures 2(a) to 2(f) show the results of X-ray measurements on teflon, wood, leaf, SiO₂, Al₂O₃ and glass using a 30 kV Ga ion beam. Each X-ray spectrum consists of several characteristic X-ray peaks superimposed on a background due to various atomic bremsstrahlung processes. The beam current used here was 1.185 × 10⁻⁹ A and the acquisition time was 50 s (live time) for each sample. Thus, the X-ray counts of each spectrum could be normalized by dividing by a factor of 1.185 × 50 = 59.25 × 10⁻⁹ C (the dose of gallium ions).
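The normalization described here is just a division of the raw counts by the delivered Ga⁺ charge; a minimal sketch:

```python
# Normalization of raw EDS counts by the gallium-ion dose (current x time).
beam_current = 1.185e-9   # A
live_time    = 50.0       # s
dose = beam_current * live_time   # = 59.25e-9 C, as in the text

def normalize(raw_counts):
    """Counts per coulomb of delivered Ga+ charge."""
    return raw_counts / dose
```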
As shown in Figs. 2(a)-2(f), a carbon-K peak at 0.27 keV, an oxygen-K peak at 0.53 keV and a fluorine-K peak at 0.67 keV could be found for Teflon; a carbon-K peak at 0.27 keV and an oxygen-K peak at 0.53 keV for wood and leaf; an oxygen-K peak at 0.53 keV and a silicon-K peak at 1.74 keV for the SiO₂ sample; an oxygen-K peak at 0.53 keV and an aluminum-K peak at 1.49 keV for the Al₂O₃ sample; and an oxygen-K peak at 0.53 keV, a sodium-K peak at 1.04 keV and a silicon-K peak at 1.74 keV for the glass sample. These data are summarized in detail in Table I. The weak carbon-K signals in the EDS spectra of SiO₂, Al₂O₃ and glass might come from possible contamination.
Hence, X-ray emission could be observed from various types of organic and insulating materials under irradiation with a low-energy (30 kV) gallium ion beam. These EDS results clearly show that light elements in organic and insulating samples, especially those with low K-shell electron excitation energies such as carbon, oxygen, fluorine, sodium and silicon, can be detected.
To obtain satisfactory EDS spectra with a high signal-to-background ratio, the beam current, an important factor affecting the X-ray yield, was investigated; the results are shown below.
When the gallium ion beam bombards the surface of insulating samples, the current intensity of the irradiation beam is an important parameter determining the X-ray yield. To study the relationship between beam current and X-ray yield, we applied three different beam currents: 0.402 × 10⁻⁹ A, 1.185 × 10⁻⁹ A and 3.09 × 10⁻⁹ A. We kept the irradiation time (50 seconds) and the size of the scanned area constant, while only the irradiation current was changed. The X-ray counts of each data set were divided by the corresponding ion dose, i.e., 0.402 × 50 = 20.1 × 10⁻⁹ C, 1.185 × 50 = 59.25 × 10⁻⁹ C and 3.09 × 50 = 154.5 × 10⁻⁹ C, respectively. Hence, the normalized X-ray yields of the three spectra can be compared reasonably, and the effect of the current intensity on the normalized X-ray yield can be determined. Figure 3(a) is a combined X-ray spectrum with normalized X-ray yield, which shows the detection results for all three irradiation beam currents listed above. Based on Fig. 3(a), the integral intensities of the carbon-K peak and fluorine-K peak of each spectrum were calculated, and the results are illustrated in Fig. 3(b). Briefly speaking, Fig. 3(b) shows the X-ray yield as a function of ion beam current. The integral is, in fact, an accumulated sum of intensity counts in the range from 0.18 to 0.36 keV for C-K and from 0.58 to 0.76 keV for F-K, after subtracting an average background. An average background is used for background subtraction when we calculate the integral X-ray yield of a characteristic peak such as C-K or F-K, because the characteristic peak shown in the EDS spectrum is superimposed on the background. For example, to compute an average background for the C-K peak, we first accumulate the intensity values from 0.09 keV to 0.13 keV and from 0.41 keV to 0.45 keV and obtain an average value by dividing by the total channel number (10, in this case). Second, after summing the intensity values from 0.14 keV to 0.40 keV for the C-K peak, a background value obtained by multiplying the average background by the channel number (27, in this case) is subtracted. Thus, the actual integral area covered by the characteristic peak is obtained, with the background contribution excluded.
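The background-subtraction procedure just described translates directly into a short routine. In the sketch below, energy and counts are assumed to be arrays of calibrated channel energies (keV) and raw intensity counts; the function is a schematic rendering of the described steps, not the authors' code:

```python
import numpy as np

def integrate_peak(energy, counts, peak_lo, peak_hi, bg_windows):
    """Net peak area: sum over the peak window minus an average background
    estimated from side windows, following the procedure in the text."""
    bg_mask = np.zeros_like(energy, dtype=bool)
    for lo, hi in bg_windows:
        bg_mask |= (energy >= lo) & (energy <= hi)
    avg_bg = counts[bg_mask].mean()                  # per-channel background
    pk_mask = (energy >= peak_lo) & (energy <= peak_hi)
    return counts[pk_mask].sum() - avg_bg * pk_mask.sum()

# C-K net intensity with the windows quoted in the text (keV):
# area = integrate_peak(energy, counts, 0.14, 0.40, [(0.09, 0.13), (0.41, 0.45)])
```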
It is found from Figs. 3(a) and 3(b) that as the irradiation current increases, the X-ray yields of both the carbon-K peak and the fluorine-K peak increase correspondingly. For example, as the beam current increases from 0.402 × 10⁻⁹ A to 1.185 × 10⁻⁹ A and 3.09 × 10⁻⁹ A, the integral counts of the C-K peak increase from ∼658 to ∼1645 and ∼2070, and the integral counts of the F-K peak increase from ∼825 to ∼1690 and ∼2525, respectively. Thus, based on the above experimental data, it is concluded that the current intensity of the Ga ion beam affects the X-ray yield and that an intense ion beam can induce strong X-ray emission from the specimen. The possible mechanism behind this current effect of the gallium ion beam on the X-ray yield will be addressed in the future.
IV. CONCLUSIONS
In summary, X-ray emission was observed from the surfaces of various types of organic and insulating materials under irradiation with a low-energy (30 kV) gallium ion beam. Light elements with low K-shell electron excitation energies, such as carbon, oxygen, fluorine, sodium and silicon, could be detected by an energy-dispersive X-ray spectrometer combined with a focused ion beam instrument. The influence of the irradiation beam current on the normalized X-ray yield was studied. It was concluded that a strong beam current evidently enhances the normalized X-ray yield from organic and insulating specimens.
"Physics"
] |
Crystal structures of hibiscus acid and hibiscus acid dimethyl ester isolated from Hibiscus sabdariffa (Malvaceae)
The isolation and crystal structures of the title compounds from Hibiscus sabdariffa (Malvaceae) are described. Hibiscus acid dimethyl sulfoxide monosolvate forms a two-dimensional hydrogen-bonded motif, while hibiscus acid dimethyl ester (Z′ = 2) forms a one-dimensional hydrogen-bonded motif.
Chemical context
Lactone-acid-producing plants, including Hibiscus sabdariffa (Malvaceae), have been documented to have significant potential in the traditional treatment of various diseases. H. sabdariffa Linn is a species of hibiscus from the Malvaceae family, commonly known as 'Karkade' or 'red sorrel'. It is used in traditional medicine in the form of herbal teas or cold drinks for its hypotensive and diuretic effects and to lower body temperature and blood viscosity (Ali et al., 2005; Da-Costa-Rocha et al., 2014). Little attention has been paid to organic acids from H. sabdariffa, specifically hibiscus acid. However, studies have documented the activity of hibiscus acid and hibiscus acid methyl ester; these report an inhibitory effect against enzymes such as α-amylase and α-glucosidase (Hansawasdi et al., 2000, 2001). As these compounds are not available commercially, and to enable a study of their biological activities, we report on the extraction of hibiscus acid and hibiscus acid dimethyl ester from H. sabdariffa (Malvaceae), and on their purification and characterization. The crystal structures of the acid, as the dimethyl sulfoxide monosolvate, (I), and of the diester, (II), are reported herein.
Structural commentary
The crystal structures of the 1:1 dimethyl sulfoxide (DMSO) solvate of hibiscus acid, (I), and of hibiscus acid dimethyl ester, (II), are shown in Figs. 1 and 2. The COOR (R = H or Me) groups lie in equatorial positions on their rings, and the absolute configuration of both species is confirmed by the Flack parameter values (Parsons et al., 2013) for arbitrarily named atoms in (I) [C2(R),C1(S), 0.00 (4)] and both arbitrarily named equivalent atoms in (II) [C3(R),C4(S) and C11(R),C12(S), 0.08 (17)] (Table 1). The absolute configuration found thus agrees with that originally proposed by Boll et al. (1969) for hibiscus acid. The structure of garcinia lactone, an epimer of hibiscus acid, has been reported (Mahapatra et al., 2007), and the molecular geometries of (I) and its epimer are closely similar. The five-membered ring of (I) adopts an envelope conformation, with the OH-bearing C2 atom 0.582 (6) Å out of the plane defined by the other four atoms.
The structure of (II) contains two crystallographically independent molecules (A and B) (Z′ = 2), whose molecular geometries differ only by small deviations in torsion angles; for example, C3-C5-O5-C6 in A is 175.1 (4)°, whilst the equivalent angle in B (C11-C13-O12-C14) is 180.0 (4)°. As with structure (I), the five-membered rings adopt envelope conformations, with the OH-bearing C atoms lying out of the plane of the other four atoms, here by 0.505 (5) and 0.530 (5) Å for molecules A and B, respectively.
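The out-of-plane deviations quoted above are point-to-plane distances; the following sketch computes such a deviation from hypothetical Cartesian atomic coordinates, using a least-squares plane through the four remaining ring atoms:

```python
import numpy as np

def deviation_from_plane(ring_atoms, probe_atom):
    """Distance of probe_atom from the least-squares plane through ring_atoms.
    ring_atoms: (4, 3) array of Cartesian coordinates; probe_atom: (3,)."""
    centroid = ring_atoms.mean(axis=0)
    # Plane normal = singular vector belonging to the smallest singular value.
    _, _, vt = np.linalg.svd(ring_atoms - centroid)
    normal = vt[-1]
    return abs(np.dot(probe_atom - centroid, normal))

# e.g. deviation_from_plane(four_ring_atoms_xyz, c2_xyz) -> ~0.58 A for (I);
# the coordinate arrays here are placeholders, not deposited values.
```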
Figure 1: The molecular structure of compound (I), with the atom labelling and 50% probability displacement ellipsoids.
Supramolecular features
Despite containing two carboxylic acid functionalities, the structure of (I) does not feature the classic R₂²(8) carboxylic acid dimer motif. Instead, each of the three potential hydrogen-bond donors of the acid molecule forms interactions with a total of three separate neighbouring molecules (Fig. 3). The H atom of the carboxylic acid group (O3-H) adjacent to the ether forms a bifurcated hydrogen bond that is accepted by the ROH and C=O functions (i.e. O4^i and O6^i) of one neighbour, whilst the other two donors, the second carboxylic acid (O5-H) and the hydroxy group (O4-H), form hydrogen bonds with atoms O8^ii and O8 of DMSO solvent molecules, respectively (Table 2). These interactions combine to give a two-dimensional hydrogen-bonded layered structure, with DMSO and acid layers alternating along the c-cell direction (Fig. 4).
Both independent molecules in the structure of (II) donate single hydrogen bonds through their OH groups, but only one molecule (A) acts as a hydrogen-bond acceptor (O3-H···O4^i and O10-H···O2^ii; Table 3). That a total of four carbonyl O atoms do not act as acceptors is probably related to the low ratio of classic hydrogen-bond donors to acceptors in this compound. In (II), the hydrogen bonding combines to give a four-molecule-wide one-dimensional ribbon of linked molecules that propagates parallel to the a axis (Fig. 5). Related structures have been reported earlier (Glusker et al., 1972), as has that of the diastereomer mentioned previously (Mahapatra et al., 2007). The closest relative of (II) to have been structurally described is a derivative with additional OH and Me substituents on the five-membered ring (Evans et al., 1997).
Figure 2: The molecular structures of the two independent molecules comprising the asymmetric unit of (II), with the atom labelling and 50% probability displacement ellipsoids.
Figure 4: The crystal packing of compound (I), viewed along the a axis.
Table 2: Hydrogen-bond geometry (Å, °) for (I).
Table 3: Hydrogen-bond geometry (Å, °) for (II).
Synthesis and crystallization
Dried H. sabdariffa calyces were crushed to a powder (500 g) and extracted in a Soxhlet apparatus using 2500 ml each of hexane, ethyl acetate and methanol. The methanol extract was dried and concentrated at 313 K by rotary evaporation, yielding about 125 g (25%) of crude extract. The methanol extract (2 g) was dissolved in about 2 ml of methanol and subjected to gel filtration chromatography (GFC) using a glass column packed with a wet slurry of 30 g of Sephadex LH20 in methanol. Vials (5 ml each) were collected after elution with 100% methanol, which led to the isolation of pure hibiscus acid (0.5%). Crystals of (I) were obtained by recrystallisation from DMSO.
Refinement
Crystal data, data collection and structure refinement details are summarized in Table 1. For all structures, C-bound H atoms were placed in their expected geometrical positions and treated as riding, with C-H = 0.95-0.99 Å and Uiso(H) = 1.5Ueq(C) for methyl C atoms and 1.2Ueq(C) for the other H atoms. The absolute configuration was determined for the molecules in both the acid (I), for arbitrarily named atoms [C2(R),C1(S), Flack parameter 0.00 (4)], and the ester (II). Programs used: structure solution with SIR92 (Altomare et al., 1993); refinement with SHELXL2014 (Sheldrick, 2015); molecular graphics with Mercury (Macrae et al., 2008); preparation of material for publication with SHELXL2014 (Sheldrick, 2015). Special details. Geometry: all e.s.d.s (except the e.s.d. in the dihedral angle between two l.s. planes) are estimated using the full covariance matrix. The cell e.s.d.s are taken into account individually in the estimation of e.s.d.s in distances, angles and torsion angles; correlations between e.s.d.s in cell parameters are only used when they are defined by crystal symmetry. An approximate (isotropic) treatment of cell e.s.d.s is used for estimating e.s.d.s involving l.s. planes.
Fractional atomic coordinates and isotropic or equivalent isotropic displacement parameters (Å²)
"Chemistry"
] |
Entity Tracking Improves Cloze-style Reading Comprehension
Recent work has improved on modeling for reading comprehension tasks with simple approaches such as the Attention Sum-Reader; however, automatic systems still significantly trail human performance. Analysis suggests that many of the remaining hard instances are related to the inability to track entity-references throughout documents. This work focuses on these hard entity tracking cases with two extensions: (1) additional entity features, and (2) training with a multi-task tracking objective. We show that these simple modifications improve performance both independently and in combination, and we outperform the previous state of the art on the LAMBADA dataset by 8 pts, particularly on difficult entity examples. We also effectively match the performance of more complicated models on the named entity portion of the CBT dataset.
Introduction
There has been tremendous interest over the past several years in Cloze-style (Taylor, 1953) reading comprehension tasks, datasets, and models (Hermann et al., 2015; Hill et al., 2016; Kadlec et al., 2016; Dhingra et al., 2016; Cui et al., 2016). Many of these systems apply neural models to learn to predict answers based on contextual matching, and they have inspired other work in long-form generation and question answering. The extent and limits of these successes have also been a topic of interest (Chu et al., 2017). Recent analysis by Chu et al. (2017) suggests that a significant portion of the errors made by standard models, especially on the LAMBADA dataset (Paperno et al., 2016), derive from the inability to correctly track entities or speakers, or from a failure to handle various forms of reference.
Figure 1: A LAMBADA example where the final word "julie" (with reference chain in brackets) is the answer, y, to be predicted from the preceding context x. A system must know the two speakers and the current dialogue turn; simple context matching is not sufficient. Here, our model's predictions before and after adding the multi-task objective are shown.
This work targets these shortcomings by designing a model and training scheme targeted towards entity tracking. Specifically, we introduce
two simple changes to a stripped-down model: (1) simple, entity-focused features, and (2) two multi-task objectives that target entity tracking. Our ablation analysis shows that both independently improve entity tracking, which is the primary source of the overall model's improvement. Together they lead to state-of-the-art performance on the LAMBADA dataset and near state-of-the-art on the CBT dataset (Hill et al., 2016), even with a relatively simple model.
Background and Related Work
Cloze-style reading comprehension uses a passage of word tokens x = x_{1:n} (the context), with one token x_j masked; the task is to fill in the masked word y, which was originally at position j. These datasets aim to present a benchmark challenge requiring some understanding of the context to select the correct word. This task is a prerequisite for problems like long-form generation and document-based question answering.
A number of datasets in this style exist, with different emphases. Here we considered the LAMBADA dataset and the named-entity portion of the Children's Book Test dataset (CBT-NE). LAMBADA uses novels; examples consist of 4-5 sentences, and the last word, x_n, is masked for prediction. The dataset is constructed carefully to focus on examples where humans needed the context to predict the masked word. CBT-NE examples, on the other hand, include 21 sentences where the masked word is a named entity extracted from the last sentence, with j ≤ n, and the dataset is constructed in a more automated way. We show an example from LAMBADA in Figure 1. In CBT, as well as in the similar CNN/Daily Mail dataset (Hermann et al., 2015), the answer y is always contained in x, whereas in LAMBADA it may not be. Chu et al. (2017) showed, however, that training only on examples where y is in x leads to improved overall performance, and we adopt this approach as well.
Related Work
The first popular neural network reading comprehension models were the Attentive Reader and its variant, the Impatient Reader (Hermann et al., 2015). Both were the first to use bidirectional LSTMs to encode the context paragraph and the query separately. The Stanford Reader is a simpler version with fewer layers for inference. These models use an encoder to map each context token x_i to a vector u_i. In this terminology, explicit reference models calculate a similarity measure s_i = s(u_i, q) between each context vector u_i and a query vector q derived for the masked word. These similarity scores are projected to an attention distribution α = softmax({s_i}) over the context positions 1, ..., n, which are taken to be candidate answers. The Attention Sum Reader (Kadlec et al., 2016) is a further simplified version. It computes u_i and q with separate bidirectional GRU (Chung et al., 2014) networks, and s_i as a dot product. It is trained to minimize L₀(θ) = −log Σ_{i : x_i = y} α_i, where θ is the set of all parameters associated with the model and y is the correct answer. At test time, a pointer-sum attention mechanism is used to predict the word type with the highest aggregate attention as the answer. The Gated Attention Reader (Dhingra et al., 2016) leverages the same mechanism for prediction and introduces an attention gate to modulate the joint context-query information over multiple hops.
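The pointer-sum prediction step can be summarized in a few lines; this is a schematic NumPy sketch of the prediction step only (the GRU encoders are omitted), not the original implementation:

```python
import numpy as np

def attention_sum(u, q, tokens, candidates):
    """Pointer-sum attention: dot-product scores, softmax over positions,
    then aggregate attention mass per candidate word type."""
    s = u @ q                                 # s_i = <u_i, q>
    a = np.exp(s - s.max())
    a /= a.sum()                              # attention distribution alpha
    scores = {c: sum(a[i] for i, t in enumerate(tokens) if t == c)
              for c in candidates}
    return max(scores, key=scores.get)        # word type with highest mass
```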
The Recurrent Entity Network (Henaff et al., 2016) uses a custom gated recurrent module, a dynamic memory, to learn and update entity representations as new examples are received. Its gate function combines (1) a similarity measure between the input and the hidden states, and (2) a set of trainable "key" vectors which can learn any attribute of an entity, such as its location or the other entities it is interacting with in the current context. The Query Reduction Network (Seo et al., 2016) is also a gated recurrent network; it tracks state in a paragraph and uses a hidden query vector to keep pointing to the answer at each step. The query is successively transformed with each new sentence into a reduced state that is easier to answer given the new information.
Model
In this work, we were particularly interested in the shortcomings of simple models and in exploring whether or how much entity tracking could help, since Chu et al. (2017) pointed out this weakness. As a result, we adapt a simplified Attention Sum (AttSum) reader throughout all experiments. Our version uses only a single bidirectional GRU for both u_i and q. This GRU is of size 2d, using the first d states for the context and the second d for the query. Formally, the context and query representations are read off the forward and backward GRU states. For datasets using the last word, the query is constructed from the states adjacent to the final position; when the masked word can be anywhere, the query is constructed from the states surrounding the masked position j. Our main contribution is the extension of this simple model to incorporate entity tracking. Other authors have explored extending neural reading comprehension models with linguistic features, particularly Dhingra et al. (2017), who use a modified GRU with knowledge such as coreference relations and hypernymy. In Dhingra et al. (2018), the most recent coreferent antecedent for each token is incorporated into the update equations of the GRU unit to bias the reader towards coreferent recency. In this work, we instead use a much simpler approach.
Learning to Track Entities
Analysis of reading comprehension has indicated that neural models are strong at matching local context information but weaker at following entities through the discourse (Chu et al., 2017). We consider two straightforward ways of extending the Attention Sum baseline to better track entities.
Method 1: Features
We introduce a short list of features in Table 1 to augment the representation of each word in x. These features are meant to help the system identify and use the relationships between words in the passage.¹ Features 2-5 apply only to words tagged PERSON by the NER tagger. Features 6-7 apply only to words between opening and closing quotation marks. Feature 6 indicates the index of the quote in the document, and Feature 7 gives the assumed speaker of the quote using some simple rules; we provide the rules in the Supplementary Material. Though most of these features are novel, they are motivated by recent analysis (Wang et al., 2015).
All features are incorporated into a word's representation by embedding each discrete feature into a vector of the same size as the original word embedding, adding the vectors as well as a bias, and applying a tanh nonlinearity.
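A minimal sketch of this feature-combination step, with hypothetical embedding tables (one per discrete feature):

```python
import numpy as np

def featurize(word_vec, feat_ids, feat_tables, bias):
    """Combine a word embedding with same-sized embeddings of its discrete
    features: sum them together with a bias and apply tanh, as described above."""
    v = word_vec + bias
    for table, idx in zip(feat_tables, feat_ids):
        v = v + table[idx]      # one hypothetical embedding table per feature
    return np.tanh(v)
```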
Method 2: Multitasking
We additionally encourage the neural model to keep track of entities by multitasking with simple auxiliary entity-tracking tasks. Examples such as Figure 1 suggest that keeping track of which entities are currently in scope is useful for answering reading comprehension questions. There, amy and julie are conversing, and being able to track that amy is the speaker of the final quote helps to rule her out as a candidate answer. We consider two tasks. For Task 1 (L₁) we train the same model to predict repeated named entities. For all named entities x_j such that there is an x_i = x_j with i < j, we attempt to mask and predict the word type x_j. This is done by introducing another Cloze prediction, but now setting the target y = x_j, reducing the context to the preceding words x_{1:j-1} with u_i = →h_i, and the query q = →h_{j-1}. (Note that, unlike above, both of these use only the forward states of the GRU.) We use a bilinear similarity score s_i = qᵀ Q u_i for this prediction, where Q is a learned transformation in R^{2d×2d}. This task is inspired by the antecedent ranking task in coreference (Wiseman et al., 2015, 2016). For Task 2 (L₂) we train to predict the order index in which a named entity has been introduced. For example, in Figure 1, julie would be 1, amy would be 2, marsh would be 3, etc. The hope here is that learning to predict when entities reappear will help the model track their reoccurrences. For the blue-labeled julie, the model would aim to predict 1, even though it appears later in the context. This task is inspired by the One-Hot Pointer Reader used on the Who-did-What dataset (Onishi et al., 2016). Formally, letting idx(x_j) be the predicted index for x_j, we minimize a softmax cross-entropy loss L₂(θ) over these indices, parameterized by W ∈ R^{|E|×2d}, where E is the set of entity word types in the document. Note that this is a simpler computation, requiring only O(|E| × n) predictions per x, whereas L₁ requires O(n²).
¹ POS tags are produced with the NLTK library (Bird et al., 2009), and NER tags with the Stanford NER tagger (Finkel et al., 2005). We additionally found it useful to tag animate words as PERSONs on the CBT-NE data, using the animate word list of Bergsma and Lin (2006).
The full model minimizes a multi-task loss: L₀(θ) + γ₁L₁(θ) + γ₂L₂(θ). Using L₁ and L₂ simultaneously did not lead to improved performance, however, so either γ₁ or γ₂ is always 0. We believe that this is because, while the learning objectives for L₁ and L₂ are mathematically different, they are both designed to track the entities mentioned so far in the document in a similar way and thus do not provide complementary information to each other.
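As a concrete illustration of the Task 2 targets, the hypothetical helper below assigns order-of-first-mention indices to named-entity tokens, reproducing the julie/amy/marsh example above:

```python
def entity_order_indices(tokens, entity_types):
    """Order-of-first-mention index for each named-entity token: the first
    distinct entity gets 1 everywhere it appears, the second gets 2, etc."""
    first_mention = {}
    labels = {}
    for j, t in enumerate(tokens):
        if t in entity_types:
            if t not in first_mention:
                first_mention[t] = len(first_mention) + 1
            labels[j] = first_mention[t]   # target idx(x_j) for the L2 loss
    return labels

# entity_order_indices("julie amy marsh amy julie".split(),
#                      {"julie", "amy", "marsh"}) -> {0: 1, 1: 2, 2: 3, 3: 2, 4: 1}
```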
We found it useful to have two hyperparameters per auxiliary task governing the number of distinct named-entity word types and tokens used in defining the losses L₁ and L₂. In particular, per document, these hyperparameters control, in top-to-bottom order, the number of distinct named-entity word types we attempt to predict, as well as the number of tokens of each type considered.
Experiments
Methods
This section highlights several aspects of our methodology; full hyperparameters are given in the Supplementary Material. For the training sets, we exclude examples where the answer is not in the context. The validation and test sets are not modified, however, and the model with the highest accuracy on the validation set is chosen for testing. For both tasks, the context words are mapped to learned embeddings; importantly, we initialize the first 100 dimensions with the 100-dimensional GloVe embeddings (Pennington et al., 2014). Named-entity words are anonymized, as is done in the CNN/Daily Mail corpus (Hermann et al., 2015) and in some prior experiments. The model is regularized with dropout (Srivastava et al., 2014) and optimized with ADAM (Kingma and Ba, 2014). For all experiments we performed a random search over hyperparameter values (Bergstra and Bengio, 2012), and report the results of the models that performed best on the validation set. Our implementation is available at https://github.com/harvardnlp/readcomp.
Results and Discussion
Table 2 shows the full results of our best models on the LAMBADA and CBT-NE datasets, and compares them to recent, best-performing results in the literature. For both tasks the inclusion of either entity features or multi-task objectives leads to large, statistically significant increases in validation and test score, according to the McNemar test (α = 0.05) with continuity correction (Dietterich, 1998). Without features, AttSum + L₂ achieves the best test results, whereas with features AttSum-Feat + L₁ performs best on CBT-NE. The results on LAMBADA indicate that entity tracking is a very important overlooked aspect of the task. Interestingly, with features included, AttSum-Feat + L₂ appears to hurt test performance on LAMBADA and leaves CBT-NE performance essentially unchanged, amounting to a negative result for L₂. On the other hand, the effect of AttSum-Feat + L₁ is pronounced on CBT-NE, and while our simple models do not increase the state-of-the-art test performance on CBT-NE, they outperform "attention-over-attention" with reranking (Cui et al., 2016) and are outperformed only by architectures supporting "multiple-hop" inference over the document (Dhingra et al., 2016). Our best model on the CBT-NE test set, AttSum-Feat + L₁, is very close to the current state-of-the-art result. On the validation sets for both LAMBADA and CBT-NE, the improvements from adding features to AttSum + Lᵢ are statistically significant (for full results refer to our supplementary material). On LAMBADA, the L₁ multi-tasked model is a 3.5-point increase on the state of the art.
Table 2 (excerpt):
(Dhingra et al., 2017) 51.10 51.60
MAGE (64) (Dhingra et al., 2017) 52.10 51.10
GA + C-GRU (Dhingra et al., 2018)
(Dhingra et al., 2016) 78.50 74.90
EpiReader (Trischler et al., 2016) 75.30 69.70
DIM Reader (Liu et al., 2017) 77.10 72.20
AoA (Cui et al., 2016) 77.80 72.0
AoA + Reranker (Cui et al., 2016) 79...
Our method also employs fewer parameters than richer models such as the GA Reader (Dhingra et al., 2016). More specifically, in terms of the number of parameters, our models are very similar to a 1-hop GA Reader. In contrast, all published experiments with the latter use 3 hops, where each hop requires 2 separate Bi-GRUs, one to model the document and one for the query. This constitutes the largest difference in model size between the two approaches. Table 3 considers the performance of the different models based on a segmentation of the data. Here we consider examples where: (1) Entity: the answer is a named entity; (2) Speaker: the answer is a named entity and the speaker of a quote; (3) Quote: the answer is found within quoted speech. Note that the Speaker and Quote categories, while mutually exclusive, are subsets of the overall Entity category. We see that both the additional features and the multi-task objectives independently result in a clear improvement in all categories, but the gains are particularly pronounced for named entities and specifically for Speaker and Quote examples. Here we see sizable increases in performance, particularly in the Speaker category. We see larger increases on the more dialog-heavy LAMBADA task.
As a qualitative example of the improvement afforded by multi-task training, in Figure 1 we show the different predictions made by our model with and without L₁ (colored blue and red, respectively). Note that amy and julie are both entities that have been repeated twice in the passage. In addition to the final answer, our model with the L₁ loss was also able to predict these entities (at the colored locations) given the preceding words. Further qualitative analysis reveals that these augmentations improved the model's ability to eliminate non-entity choices from predictions. Some examples are shown in Figure 2.
Conclusion
This work demonstrates that learning to track entities with features and multi-task learning significantly increases the performance of a baseline reading comprehension system, particularly on the difficult LAMBADA dataset. This result indicates that higher-level word relationships may not be modeled by simple neural systems, but can be incorporated with minor additional extensions. This work hints that it is difficult for vanilla models to learn long-distance entity relations, and that these may need to be encoded directly through features or possibly with better pre-trained representations. | 4,008.2 | 2018-10-05T00:00:00.000 | [
"Computer Science"
] |
A Post-Mortem Evaluation of an IT Project: A Case Study of a Process Enhancement IT Project in a Maintenance, Repair and Overhaul Company
The present work represents a post-mortem evaluation of an SAP IT project. It focuses on critical success factors (CSF) in order to establish an appropriate guideline for the project evaluation. A review of contemporary project management literature identifies general project CSF and SAP-project-specific CSF, provides a brief theoretical overview of the purpose of project reviews, and points out difficulties regarding performance measurement. The review of the project is a qualitative evaluation based on selected CSF. Project evaluation is an ongoing, multidimensional process and can be used to measure success and to learn from previous experience. The CSF used to measure project success need to be well defined and constantly checked, as they can change over time. For project success, good communication with all stakeholders involved is fundamental. Constant review becomes necessary in today's complex project world in order to engage in total quality management and continuous improvement.
BACKGROUND OF THE CASE STUDY PROJECT
In 2004, a one-year IT project was initiated at a business unit of a German MRO company in order to improve the overall material process. The project had a budget of about €1 million (£0.68 million) and aimed at the conversion of previous IT solutions into the SAP standard system to simplify, enhance and condense the material ordering process. The project not only introduced a new IT system but also meant organizational change. The main stakeholders of the project were the Engineering Department, Disposition, Purchasing, Production and Warehousing.
AIM AND STRUCTURE OF THE STUDY
The aim of the present study is to conduct a post-mortem analysis and final project evaluation of the case project. A focus is placed on the determinants of project success and failure (critical success factors, CSF). First, a review of the contemporary project management literature on CSF and project evaluation is given. This includes a definition of projects and the project lifecycle theory. An appropriate framework is presented in order to understand CSF, to enable a qualitative evaluation of the project, and to narrow down the complexity of the topic.
Second, the framework for evaluating project success is applied to the project. This analysis consists of a personal review and of feedback collected from previous colleagues and project stakeholders.
Finally, some of the learning outcomes of the post-mortem evaluation are summarized and recommendations for future project work are derived.
The author was involved in the case study project from the start-up phase to the close-out phase and used the newly introduced IT system for approximately one year after project closure. The analysis is qualitative and reflects the author's personal opinion.
PROJECTS AND THEIR CRITICAL SUCCESS FACTORS (CSF)
Traditionally a project is defined as "an undertaking to achieve a defined objective", and it is further stated that "generally all projects evolve through a similar 'lifecycle' sequence during which there should be recognized start and finish points" [Turner and Cochrane, 1993].
This and similar definitions are based on the assumption that the project objectives are clearly defined. Project success can then be measured against the agreed objectives, which are usually centered on the iron triangle of project management: quality, cost and time (see Figure 5-1).
"Prior to the 1980s it was common to focus exclusively on project performance, which was defined narrowly as meeting cost and time objectives and adhering to a product specification" [Bryde, 2003].But project success is multidimensional [Bryde, 2003, p.229] and "in the late 1980s, after the introduction of TQM [Total Quality Management], a project was considered to be a success by not only meeting the internal performance measures…."[Tukel and Rom, 2001, p.400]."For example, in Wateridge's [1995] study of the impact of success criteria on a number of information technology (IT) projects, he concludes that the customer and other stakeholders, such as users, will define what they mean by quality" [Bryde, 2003, p.230].
A very general framework for analyzing performance, and thereby linking to key dimensions of project success, is the EFQM model [Bryde, 2003, p.232], which originates in quality management concepts. The model is visualized in Figure 5-2; organizations use it to evaluate quality aspects of processes and leadership and for project reviews.
Project management embraces various schools of thought; thus many different ways of approaching a project review and of evaluating project success can be found in the academic literature. An overview of the development of CSF research is given in Table 5-6. The present research focuses on approaches that establish a CSF list and are appropriate for analyzing the project performance of the case study. A different angle on project success (the strategic approach) was investigated by Jugdev and Müller [2005], suggesting that a successful project must add product, service and strategic value to the company. Some literature also distinguishes clearly between project performance and project manager performance. The present work recognizes this approach but does not make this differentiation, in order to simplify the analysis and to provide a holistic view of project success factors. Rather, it treats leadership performance as one CSF for evaluating project success. Again, "methods and techniques for evaluating projects have appeared in the literature for at least 40 years in hundreds of articles. Approaches tend to be either quantitative or qualitative, ranging from rigorous operations research to social-science-based interactive techniques (Henriksen and Traynor, 1999; Danila, 1989; Schmidt and Freeland, 1993). […] It is a tremendous task to evaluate the value of a project in detail. […] It should be noted that the first step in implementing project evaluation is to determine the factors against which the projects are evaluated" [Liang, 2003, p.446]. Table 5-1, Table 5-2 and Table 5-3 list and further describe CSF. Some of the criteria that can be evaluated from the author's observations are then used to review the present case study project (see chapter 0). The tables of CSF overlap, illustrate the complexity of this topic and show links to the EFQM quality model (see Figure 5-2).
The initial definition of a project (see page 58) also includes the project life cycle, which is visualized in Figure 5-3. "Previous research results indicate that the relative importance of several of the critical factors changes significantly, based on life-cycle stages (Pinto & Prescott, 1988)" [Hyväri, 2006]. This is also indicated in the tables (Table 5-1 and Table 5-2), as certain CSF (e.g. Project Schedule and Plan) belong to certain project stages. Gardiner [2005, p.297] emphasizes the wide variety of project types: "Consequently, any list of success or failure factors should be used as a guiding principle only and modified according to the nature and context of each project …" Therefore, and in order to evaluate the case project with adequate and specific variables, the succeeding chapter 0 includes some CSF for SAP projects.
CSF FOR SAP PROJECTS
The study by Vidyaranya [2005] analyzed 44 published articles on companies that implemented the SAP system. He "identifies six common factors that are indicative of successful or non-successful SAP implementations. It has been found that the lack of appropriate culture and organizational (internal) readiness [was] the most important factor contributing to failure of SAP implementations in 15 companies." A summary of the six CSF for SAP implementations is compiled in Table 5-3.
THE PURPOSE OF PROJECT REVIEW AND EVALUATION
"The processes of review and evaluation are applied at different stages throughout a project …" [Gardiner, 2005, p.296].Types of project evaluation are [Cicmil, 2007, b]: (1) Pre-project evaluation (2) On-going project evaluation (3) Project completion evaluation (4) Post-project evaluation (5) Post-mortem evaluation "The end of a project marks the last major milestone and provides an important opportunity to capture lessons learned during the project…."This is the motivation for the present work."It is also an opportunity to revisit the project's critical success factors" [Gardiner, 2005, p.296].The idea of the review also includes the continuously improvement approach."Evaluation is an objective, periodic stock taking to determine the status of a project in relation to its specific goals, taking into account project success criteria and recommendations for improvements of ongoing and future projects" [Cicmil, 2007, b].
Although distinctions between project control and project evaluation can be found, Figure 5-4 visualizes the project evaluation/control cycle (compare also Figure 5-5 and Figure 5-6). Project control and evaluation are indispensable for project success, as the planning can only ever be a "good guess".
The next part identifies some general difficulties with (performance) measurements, which have to be taken into account for project evaluation. The following chapter (chapter 0) then analyses and evaluates the case study project against the CSF from the preceding literature review. Conclusions are then drawn from this post-mortem evaluation, containing learning outcomes and future managerial implications.
THE TROUBLE WITH (PERFORMANCE) MEASUREMENTS
This subchapter refers to Hammer's [2007] article "The 7 Deadly Sins of Performance Measurement" and provides fundamental criteria for effective and objective measurements. The findings are useful for identifying suitable CSF and for evaluating project success appropriately.
According to Hammer [2007], the seven most common measurement mistakes are:
(1) Vanity: "measures that will inevitably make the organization, its people and especially its managers look good"
(2) Provincialism: "measuring narrowly in organizational boundaries"
(3) Narcissism: "measuring from own point of view rather than from customer/stakeholder point of view"
(4) Laziness: "assuming one knows what is important to measure without giving it adequate thought or effort"
(5) Pettiness: "measuring only a small component of what matters"
(6) Inanity: "Many companies seem to implement metrics without giving any thought to the consequences of these metrics on human behavior and ultimately on enterprise performance."
(7) Frivolity: "not being serious about measurements, passing the blame to others"
Summarizing, one can say: identifying the right CSF and measuring and evaluating them requires great effort but is indispensable for project success. Creating a measurement-friendly culture and the right metrics is a further challenge for a project manager.
Evaluation of the Case Study Project
The three project goals of the case study project were: (1) optimization of the overall process of materials allocation; (2) conversion of all past IT solutions to the SAP standard by February 2005; (3) continuous illustration of the materials allocation process in SAP, from the parts list to the supply stock storage. These three project goals were achieved within the time frame and the budget and with appropriate quality. However, to evaluate the overall success of the project, some critical success factors have to be reviewed (see Table 5-4).
Table 5-4 represents a personal, qualitative review of the case study project. For the evaluation, ten appropriate CSF identified in the preceding literature review were selected. A descriptive evaluation is then given for each individual CSF, and the performance level of each factor is rated on a scale from one to ten, ten meaning that the CSF was fulfilled 100%.
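As a minimal illustration of this scoring scheme, the sketch below averages per-CSF ratings into an overall score; the factor names and ratings are made up for illustration and are not the study's actual data.

```python
# Hypothetical CSF ratings on the 1-10 scale described above.
csf_ratings = {
    "Cost": 6,
    "Quality": 5,
    "Time": 6,
    "Leadership": 7,
    "Communication": 4,
}

# Overall project performance score = mean of the individual CSF ratings.
overall = sum(csf_ratings.values()) / len(csf_ratings)
print(f"Overall performance score: {overall:.2f} / 10")
```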
Finally, an overall evaluation of project success is provided and the personal opinion is compared with feedback from colleagues who are currently working with the IT system.
Overall, one can say that the case study project was successful. The project objectives were met within the iron triangle (cost, quality, time) and most of the CSF reviewed (see Table 5-4) were considered during project execution. However, an average performance score of 5.23 out of a possible 10 reveals that not all of the project's potential was exploited. The project had a difficult, delayed start and was executed in a difficult environment of uncertainty, missing trust, unclear requirements and low commitment from the Engineering Department (part of the stakeholders).
Due to time pressure, adequate testing and extensive training on the SAP system were curtailed, which explains today's difficulties in using the system among the stakeholders (see feedback in Table 5-5). Communication across the various departments and stakeholders is still far from optimal, although the new SAP layout simplified the processes (see feedback in Table 5-5).
The change in the external environment and the lack of a continuous improvement program, including teaching and system adaptation, cause frustration and a blame culture among the users (see feedback in Table 5-5).
Compared to the previous "IT system maze" and the complex material ordering process of this MRO company, the new SAP system certainly increased the performance and quality of the internal processes and of material traceability, which is the strongest argument for the project's success.
On the other hand, a majority of the stakeholders are still either not familiar with the system or annoyed by its limitations and inflexibility, which indicates that not all CSF were met or regarded with the same importance.
Conclusion
Projects are used to manage all different kinds of change. Critical success factors can be used as a framework to measure project success and are a very useful tool for project managers to manage projects effectively. CSF change over time, can require high skills and expertise, and furthermore depend on the type of project. Project evaluation is a process going through all phases of the project life cycle (project control) and can also be used in a post-mortem evaluation to learn from previous experience and to engage in a continuous improvement process (TQM).
Important lessons learned from the literature review and future implications can be summarized as follows (see also [Jugdev and Müller, 2005, p.29]):
(1) Define a CSF framework in order to be able to measure project success throughout the various phases of the project cycle.
(2) Identify key project stakeholders and allocate them to a certain category of the CSF.
(3) Project success is multidimensional, and CSF need to include efficiency and effectiveness measurements regarding all project phases and all stakeholders.
(4) CSF may change over time between the initial phase and the closure phase.
(5) A good relationship and good communication with all stakeholders, including teamwork, is essential for project success.
The SCOPE post-mortem evaluation clarified the importance of breaking a project down into certain aspects (CSF) in order to evaluate the overall project success. Achieving time, cost and quality objectives does not necessarily mean that all stakeholders are satisfied with the project. Likewise, a project that is called "successful" does not automatically mean that all requirements are met.
The complexity of today's projects and the constantly changing environment create a situation in which it is fundamental to have a set of critical factors (clearly defined goals, milestones, objectives, CSF) against which the project success can be measured.
Constant review and evaluation become necessary in order to establish a continuous improvement process (TQM) for the organization and for the project manager himself.
Note 1: "Project managers who employ transformational leadership and, more specifically, idealized influence, in conjunction with a relationship-oriented approach enjoy more project success …" [Prabhakar, 2005, p.57]
Note 2: "Effective project manager leadership is an important success factor on projects. The capabilities of the people involved in resolving extraordinary situations and unforeseen problems are an important key for project success…" [Prabhakar, 2005, p.53]
Technical Tasks
Availability of the required technology and expertise to accomplish the specific technical action steps.
Client Acceptance
The act of 'selling' the final project to its ultimate intended users
Monitoring and Feedback
Timely provision of comprehensive control information at each phase in the implementation process
Communication
The provision of an appropriate network and necessary data to all key factors in the project implementation
Troubleshooting
The ability to handle unexpected crises and deviations from the plan
Four additional factors 'beyond the control of the project team'
Characteristics of the Project Leader
Competence of the project leader (administrative, interpersonal and technical) and the amount of authority available to perform his/her duties
Power and Politics
The degree of political activity within the organization and perception of the project as furthering the self-interests of an organization's members
Environmental Events
The likelihood of external organizational factors impacting on the operations of the project team, either positively or negatively
Urgency
The perception of the importance of the project or the need to implement the project as soon as possible.
Table 5-3: Six CSF for SAP implementations
1. SAP functionality/maintained scope
- How well are the requirements defined?
- Clean-up operations to implement "Vanilla SAP"
- Ability to maintain scope, related to the planning
2. Project team/management support/consultants
- A successful project team is cross-functional and must be dedicated solely to the project
- High-level executives must have a strong commitment to the project
- Incentives for the team members and open internal communication channels
- Technical and people goals must be met
3. Internal readiness/training
- People element and training aspect; long-run effects, difficult to measure
- Employees must be trained on the system for day-to-day operations
- Managers must know the implications of the system (enthusiasm)
- Reinforcement of a "team environment" is critical to the overall success
- Readiness for change (cultural change through the new system, control etc.)
4. Dealing with organizational diversity
- Individual branches, individual procedures in different departments; diversity can be an obstacle to success
- Processes must be re-engineered and idiosyncrasies removed, both cultural and procedural
- Before any company can be linked effectively to world-class supply chains, its internal processes must be world-class (Ptak, 2000)
- Many large companies, Amoco and Chevron, for example, successfully re-engineered their business and overcame the problem of organizational diversity.
5. Planning/development/budgeting
- A complex task with enormous potential costs
- Major expenses were incurred by companies that were unable to fully develop a comprehensive plan
- Planning should be closely identified with maintaining scope during an implementation
- Some companies in the midst of an implementation were forced to scuttle the operations and make quick fixes to their legacy systems
- Developmental delays can also lead to resource attrition, which in turn affects the learning curve and completes the vicious cycle by creating additional obstacles to cut-over
- Budget plan: only one-sixth of projects are completed on time and within budget (May, 1998)
6. Adequate testing
- The key element of success for some companies, and a direct cause of failure for others; long-run effects
- Risk: an attitude of "just finish it", project tiredness
- Testing and red flags ignored, pressure to meet timelines; top management support needed!
Cost
The case project was closed within the agreed budget, but it also included unnecessary costs (e.g. personnel costs …). Score: 6
Quality
The system fulfilled all the initial requirements; yet because of urgency, lack of testing and changing requirements (see below), the full system potential was not exploited and bugs remained.
Time
The planned project start was delayed by about two months because of "doubts" in the review board. However, the project was finished and the IT system put into use on the agreed date.
Leadership
The project leader demonstrated great administrative, interpersonal and technical competence. He was committed, experienced, built up trust and had good communication skills; his situational management was also excellent. However, due to power and politics in the company (see below), and perhaps confidence, he lacked authority and could not accomplish all the goals. Score: 7
Project Team/Personnel
The project team was cross-functional, as the stakeholders came from various departments. However, for capacity reasons, not all of them were solely committed to the project.
The team included many students, who were highly motivated and committed but lacked project experience and skills. Incentives for the team members were created and there was good, open communication within the team. However, the team also included stakeholders with very low interest, which slowed down the project, annoyed other members and increased the project costs. Technical knowledge and expertise were provided by consultants and programmers.
Organizational Factors
The company was involved in many projects during the 1990s as part of the restructuring of the airline and various cost-cutting programs; overall, the company proved ready for change. However, this particular business unit, owing to its pride and unsuccessful previous projects, did not show much interest in or motivation for the project. The IT implementation was further complicated by entrenched, obsolescent and very bureaucratic processes.
External Environment
The requirements for the IT system changed during and after the project; a quick adaptation was impossible. The biggest change was that the company changed its mode of production, and the SAP material ordering system was not designed or prepared for this change in the "external environment". Score: 2
Client Acceptance
Client acceptance was, and is, very mixed:
- Engineering: low acceptance, because of the dislike of a further IT system and the fear of being controlled
- Management: high acceptance, but little interest in learning the system themselves; most of the management (lower and higher) does not know how the new system works
- Disposition: this position was newly created as a link between Engineering and Purchasing; highest acceptance and a key position in the new system
- Purchasing: high acceptance, as the new SAP system simplified and structured their work compared to the previous processes
Power and Politics
The company is marked by a great deal of politics and bureaucracy. It is very difficult to implement change and to accomplish goals as a new leader without much power. Furthermore, the project included people working only towards their personal aims. Score: 3
Urgency
The project was perceived as highly important by the management and there was enormous pressure to finish on time. Because of the start delayed by the review board, the project had to be accelerated, and testing and "change management", for example, suffered in the end. The project leader provided a risk analysis for finishing the project on time, and the decision was made to stay on schedule and accept bugs and teething problems.
SAP Functionality/ Requirements/ Testing
The system requirements were not clearly defined by the clients, and many meetings were necessary to gather the information. The specification book was written by a student, more as a summary than as a guideline for the programmers and consultants. Training and a training book of fairly good quality were provided for the stakeholders. However, system testing effectively only started once the finished system was switched on.
Communication
Communication within the project team was very good (team-building events!). Communication outside the team suffered somewhat from a lack of trust, respect and commitment towards the project team and its new members.
(This changed when, for example, the students proved their competencies.) Score: 7
Table 5-5: Some Interview Answers from Previous Research | 5,030.2 | 2009-02-10T00:00:00.000 | [
"Computer Science"
] |
Hyperspectral imaging for small-scale analysis of symptoms caused by different sugar beet diseases
Hyperspectral imaging (HSI) offers high potential as a non-invasive diagnostic tool for disease detection. In this paper leaf characteristics and spectral reflectance of sugar beet leaves diseased with Cercospora leaf spot, powdery mildew and leaf rust at different development stages were connected. Light microscopy was used to describe the morphological changes in the host tissue due to pathogen colonisation. Under controlled conditions a hyperspectral imaging line scanning spectrometer (ImSpector V10E) with a spectral resolution of 2.8 nm from 400 to 1000 nm and a spatial resolution of 0.19 mm was used for continuous screening and monitoring of disease symptoms during pathogenesis. A pixel-wise mapping of spectral reflectance in the visible and near-infrared range enabled the detection and detailed description of diseased tissue on the leaf level. Leaf structure was linked to leaf spectral reflectance patterns. Depending on the interaction with the host tissue, the pathogens caused disease-specific spectral signatures. The influence of the pathogens on leaf reflectance was a function of the developmental stage of the disease and of the subarea of the symptoms. Spectral reflectance in combination with Spectral Angle Mapper classification allowed for the differentiation of mature symptoms into zones displaying all ontogenetic stages from young to mature symptoms. Due to a pixel-wise extraction of pure spectral signatures a better understanding of changes in leaf reflectance caused by plant diseases was achieved using HSI. This technology considerably improves the sensitivity and specificity of hyperspectrometry in proximal sensing of plant diseases.
Background
The reflectance of leaves is the result of multiple interactions between incoming irradiation and the biophysical (e.g. leaf surface, tissue structure) and biochemical characteristics (e.g. content of pigments and water) of plants [1][2][3]. Several studies have described the prospects of sensing leaf reflectance in the visible (VIS, 400-700 nm), near-infrared (NIR, 700-1000 nm) and short-wave infrared (SWIR, 1000-2500 nm) for detecting changes in plant vitality, with emphasis on fungal plant diseases, using non-imaging spectroradiometers [4][5][6][7]. Disease symptoms result from physiological changes in plant metabolism due to activities of pathogens [8]. The impact on the physiology and phenology of plants varies with the type of host-pathogen interaction and may cause modifications in pigments, water content, and tissue functionality of plants or the appearance of pathogen-specific fungal structures [9,10]. All these factors may change the spectral characteristics of plants.
Knowledge on the effects of pathogens on the metabolism and structure of plant tissue is therefore essential for hyperspectral discrimination of healthy and diseased leaf and canopy elements [11].
Hyperspectral imaging is an innovative technology with high potential for non-invasive sensing of the physiological status of vegetation [12][13][14] and may allow an objective and automatic assessment of the severity of plant diseases in combination with appropriate data analysis methods [15]. In addition to the spectral information provided by non-imaging spectroradiometers, hyperspectral cameras capture both spectral and spatial information about objects of interest. Hyperspectral imaging is expected to improve disease detection through a better examination of host-pathogen interactions [15,16]. Imaging sensor systems allow a pixel-wise attribution of disease-specific symptoms and healthy tissue and improve both the specificity and the sensitivity of disease detection by technical sensors [13].
In most studies using hyperspectral imaging, the spectral signature of tissue colonized by a pathogen is compared to the spectral signature of healthy tissue and plant canopies. Bravo et al. [17] used in-field spectral images for an early detection of yellow rust in wheat, Nansen et al. [18] analyzed hyperspectral data cubes for the detection of insect-induced stress in wheat plants, and Polder et al. [19] combined different optical sensors for the detection of tulip breaking virus. Hyperspectral imaging has recently become more common in monitoring the quality and safety of fruit and food. Balasundaram et al. [20] and Qin et al. [21] developed hyperspectral imaging approaches to detect canker lesions on citrus fruits. Disease assessment of plants by technical sensors may be differentiated into detection (i.e. deviation from healthy), identification (i.e. diagnosis of specific symptoms among others, differentiation of various diseases) and quantification (i.e. measurement of disease severity, e.g. percentage leaf area affected). The type and amount of sensor information vary with the objective. Sensors have to be sensitive to the effects of fungal colonization of plant tissue during pathogenesis [8].
The advancement from non-imaging spectroradiometry to hyperspectral imaging enables the pixel-wise attribution of spectral signatures suitable for the assessment of modifications on a small scale, typical for early stages of plant diseases. Significant changes in spectral signature of host tissue may be detected not only for the tiny spots of primary symptoms and during further disease stages, but also for localized effects in pre-symptomatic stages and different effects on plant tissue. This information may be linked to fundamental processes of plant biology and fungal pathogenesis.
The potential of hyperspectral imaging for small-scale analysis of symptoms of plant diseases was explored using three foliar diseases of sugar beet as a model system. Cercospora leaf spot (CLS), powdery mildew (PM) and sugar beet rust (SBR) are caused by the fungal pathogens Cercospora beticola (Sacc.), Erysiphe betae (Vanha) Weltzien and Uromyces betae (Persoon) Lev., respectively. Spectral signatures of disease-specific symptoms at different developmental stages and of different regions of typical symptoms were evaluated on a per-pixel basis. The detection, differentiation and quantification of the diseases were realized using an automatic classification algorithm. Morphological changes of the leaves due to pathogen colonisation were described microscopically, and leaf structure was linked to spectral reflectance patterns.
Results
Effect of fungal colonization on structure and reflectance of leaves
Symptoms differed between the foliar diseases and within their developmental stages during pathogenesis. First symptoms of CLS were small grey and sunken spots (Figure 1A). With on-going pathogenesis these spots became necrotic and a reddish-brown margin was formed (Figure 1D, G). Primary symptoms of SBR were tiny chlorotic spots (Figure 1B). At later stages the epidermis was ruptured by amber uredospores (Figure 1E, H). The first symptoms of PM were small mycelial colonies on the upper side of sugar beet leaves (Figure 1C). These colonies expanded rapidly over the leaf surface, and mycelial density increased during pathogenesis (Figure 1F, I).
Light microscopy visualized the modifications in the tissue structure of sugar beet leaves resulting from the activities of the fungal pathogens C. beticola, E. betae, and U. betae; obvious differences from the morphology of healthy leaves occurred (Figure 2). Cercospora beticola penetrated the leaf through stomata. Intercellular hyphae were formed and pseudostromata developed in the substomatal leaf tissue. At the edge between CLS lesions and healthy tissue, deep splits and sulcate leaf tissue occurred (Figure 2B). The CLS lesion was visibly sunken; lysed cells with little intercellular space accumulated in the necrotic centre of CLS symptoms.
Superficial mycelia and chains of conidia were characteristic symptoms of PM. Thin-sections of E. betae infected sugar beet leaves showed minor influence of the pathogen on tissue structure (Figure 2C). Mycelium and conidia were observed on the upper leaf surface and less frequently on the lower leaf surface. The superficially growing pathogen penetrates the epidermal cell wall after appressoria formation and produces haustoria within epidermal cells; the formation of a new haustorium requires another appressorium and the penetration of the epidermal layer.
In cross sections of SBR pustules produced by Uromyces betae, a swelling of leaf tissue, caused by initial spore accumulation under the epidermis, was observed (Figure 2D). In advanced stages of pathogenesis, accumulated urediniospores breached the epidermal layer. The roundish urediniospores were released and spread onto the neighbouring leaf area. Intercellular hyphae filling the intercellular space of the mesophyll were detected next to the pustule.
Role of spatial resolution in the detection and identification of leaf diseases
The spatial resolution of a sensor system is crucial for the detection and identification of leaf diseases. A spatial resolution of 0.2 mm per pixel was optimal to visualize characteristic leaf spots caused by C. beticola (Figure 3). With decreasing spatial resolution, the number of pixels with mixed information increased and the differences in reflectance decreased. At a spatial resolution of 3.1 mm, characteristic symptoms were no longer detectable; the spectral signal was made up of both healthy and diseased tissue. The signal at a spatial resolution of 17.1 mm was similar to that measured with a non-imaging spectroradiometer. Essential information was lost by averaging the reflectance of healthy and diseased tissue over the measuring area.
Spectral signatures of characteristic regions of a mature symptom
Spectral signatures from the centre of mature symptoms differed between the diseases (Figure 4). Reflectance of Cercospora leaf spot diseased pixels increased in the VIS and decreased in the NIR compared to healthy pixels. Powdery mildew caused an overall increase of reflectance, whereas sugar beet rust caused minor changes in the VIS from 550 to 700 nm and a decrease in the NIR. The spectral signatures from transects through healthy leaf tissue are plotted in Figure 5A; each spectrum belongs to one pixel of the transect. Spectral reflectance of healthy leaf tissue from adjacent pixels over a leaf segment was quite homogeneous. Minor variation was due to slight heterogeneity of the leaf tissue, the surface structure of sugar beet leaves and its interplay with irradiance. Spectral reflectance from transects through mature CLS symptoms showed obvious differences depending on the region of the symptom (Figure 5B). The margin of leaf spots had higher reflectance in the VIS and lower reflectance in the NIR. Spectra from the necrotic centre were characterized by increased reflectance in both the VIS and the NIR region. Reflectance of leaf tissue covered by PM increased throughout the spectrum, depending on the density of the fungal mycelium on the leaf surface (Figure 5C). Spectral reflectance of pixels from the margin of PM colonies was characterized by a strong increase in the VIS and a minor increase in the NIR. Denser mycelium in the centre of the colonies caused a more pronounced increase in reflectance. Spatial changes in spectral signatures caused by SBR were less obvious, owing to the small size of rust colonies and the less destructive interaction with the host plant (Figure 5D). The transition area from healthy tissue to rust pustules was characterized by a general decrease in reflectance, whereas the centre of rust pustules had lower reflectance around the green peak (550 nm).
Spatiotemporal dynamics of spectral patterns during pathogenesis
Since subareas of characteristic, mature symptoms of sugar beet leaf diseases varied in their spectral signature, changes in the spectral reflectance of infected leaf tissue in time and space were investigated during pathogenesis. As exemplified for CLS in Figure 6, reflectance of infected leaf segments was analyzed at several time points after the appearance of the first visible symptoms, covering the time span of the development of typical disease symptoms. The process of symptom development was monitored using the mean reflectance of symptoms at different stages as endmembers in SAM classification. The spectral signature of CLS (tiny spots of discoloured tissue) one day after first appearance showed only marginal differences from healthy leaf tissue. With further symptom development, this tissue reaction and spectral signature spread at the head of the growing leaf spot, whereas the spectral signature and reaction of the primarily infected tissue changed to that of dry, necrotic tissue (centres of leaf spots in Figure 6). Typical CLS symptoms 20 dai comprised almost concentric rings of tissue differing in spectral signature; the signatures turn from one into the other with time.
Similar spatiotemporal developments were detected also for PM and SBR. For PM, reflectance of diseased tissue in the VIS and NIR consistently increased. Reflectance of the centre of colonies three days after first appearance in the VIS was significantly higher than that of the margin of the colonies and of colonies one day after appearance.
Classification and differentiation of symptoms and quantification of disease using the Spectral Angle Mapper
Based on hyperspectral imaging data, the spectral angle classification algorithm was used for the identification of different subareas of disease-specific symptoms and their quantification (Tab. 1). Results of the SAM classification, summarized in Table 1, were validated using confusion matrices (provided as additional file 1, Tables S1-S3). For CLS classification, the three classes 'healthy tissue', 'margin', and 'necrotic centre' of CLS were chosen. On the first day of the measuring period 99.89% of the total leaf area was classified as healthy leaf tissue with an overall classification accuracy of 99.88%. The very high kappa coefficient underlines the agreement between ground truth and classification result. Eight days after inoculation, only 1.1% of healthy leaf tissue was unclassified, with an overall accuracy of 98.90% (kappa coefficient 0.99). Classification accuracy decreased to 89.58% (kappa coefficient 0.52) 11 dai, because part of the margin of CLS was falsely classified as healthy (11.18%). With higher disease severities and mature symptoms, classification accuracy increased again (96.58%, kappa coefficient 0.92, 14 dai). Seventeen days after inoculation, 79.05% of the leaf area was quantified as healthy by the SAM, and 20.95% as CLS.
Figure 2: Thin-sections of healthy and infected sugar beet leaves stained with toluidine blue: healthy leaf with intact leaf structure, chloroplasts (chl) located peripherally in the cells; leaf infected with Cercospora beticola at the border of tissue without macroscopic symptoms (marked with arrows) to symptomatic necrotic tissue (marked with asterisks) in the centre (c) of a mature symptom; Erysiphe betae infected leaf with mycelium (my) and conidia (co) on the upper leaf surface, where the fungus has penetrated the epidermal cell wall and appressoria (a) and haustoria (h) were formed; leaf with a ruptured pustule of Uromyces betae, in which the upper epidermis (ue) is detached from the parenchyma, urediniospores (us) were released, and intercellular hyphae (ih) appear in the intercellular space of the mesophyll.
For the classification and quantification of powdery mildew, the three classes 'healthy tissue', 'light mycelium' and 'dense mycelium' were chosen (Figure 7; additional file 1, Table S2). Leaves were classified as 100% healthy in hyperspectral images taken just before inoculation (0 dai); classification accuracy was 100% (kappa coefficient = 1.00; Tab. 1). Visible powdery mildew colonies appeared 8 dai in the right middle of the sugar beet leaf and next to the branching leaf veins (Figure 7). Overall classification accuracy was 94.34% (kappa coefficient 0.88) at this time. During the time of symptom appearance and colony growth, the results of the SAM classification coincided with visually assessed disease symptoms (Figure 7). The SAM gave high classification accuracies 11, 14 and 17 dai, respectively (Tab. 1). The differentiation between light and dense mycelium of powdery mildew, however, proved to be difficult. Due to the small size of SBR symptoms and the low disease severity during the measuring period, classification of SBR by the SAM algorithm was difficult (additional file 1, Table S3). First rust pustules became visible 20 days after inoculation; before this day, no classification of SBR-inoculated sugar beet leaves was possible (Tab. 1). The post-classification based on ground truth data yielded an overall classification accuracy of 61.70% with a low kappa coefficient of 0.56, with 15.98% unclassified healthy leaf area and 15.12% unclassified symptoms of SBR. A satisfying differentiation between healthy leaf parts and symptoms of SBR was also not feasible: 20.67% of the rust pustules were classified as healthy and, vice versa, 4% of the healthy tissue was classified as SBR.
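The overall accuracy and kappa coefficient reported above can be computed from a confusion matrix with a few lines of code. The sketch below is a generic NumPy implementation of the two statistics, not the ENVI routine used in the study, and the example matrix is hypothetical.

```python
import numpy as np

def accuracy_and_kappa(confusion):
    """Overall accuracy and Cohen's kappa from a confusion matrix.

    confusion[i, j] = number of pixels of ground-truth class i assigned
    to class j by the classifier.
    """
    confusion = np.asarray(confusion, dtype=float)
    total = confusion.sum()
    observed = np.trace(confusion) / total            # overall accuracy
    # Expected agreement by chance, from the row and column marginals
    expected = (confusion.sum(axis=0) * confusion.sum(axis=1)).sum() / total**2
    kappa = (observed - expected) / (1.0 - expected)  # corrects for chance
    return observed, kappa

# Hypothetical 3-class example (healthy / margin / necrotic centre)
acc, kappa = accuracy_and_kappa([[950, 30, 20],
                                 [40, 150, 10],
                                 [10, 20, 170]])
```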
Discussion
Hyperspectral imaging proved to be highly suitable for the detection, identification and quantification of fungal diseases on the leaf level. Each disease influenced the spectral reflectance of sugar beet tissue in a specific way, resulting in disease-specific spectral signatures. Similar effects have been described previously for foliar and soil-borne diseases of sugar beet by Mahlein et al. [5] and Hillnhütter et al. [22] using non-imaging hyperspectrometry.
Since the portion of the signal from diseased tissue in a mixed signal depends on disease severity, the sensitivity and specificity of non-imaging spectroradiometers are limited. Especially at low disease severities, spectra are dominated by reflectance from healthy tissue, with only a small portion of symptomatic tissue causing changes in the spectrum. Diseased plants or leaves may be detected using non-imaging spectrometry; however, only imaging techniques with high spatial resolution, i.e. operating in proximity to the objects, allow for the detection, identification, and quantification of disease symptoms. Disadvantages of non-imaging spectroradiometers, including the inability to separate mixed infections with two or more diseases on the same leaf or plant, can be overcome by the use of HSI technology.
A pixel-wise attribution of disease-specific symptoms and healthy tissue makes it possible to observe spectral reflectance patterns of foliar diseases in detail. Some disease symptoms can only be distinguished from other diseases and stresses when hyperspectral imaging with high spatial resolution is used [17,14]. The detection limit of a non-imaging spectroradiometer was 10% diseased leaf area for CLS and powdery mildew and 20% for SBR, respectively [5]. In contrast, single symptoms can be detected and identified using HSI systems, since pure signals from pixels of diseased tissue are recorded.
High spatial resolution is crucial in particular for the detection of leaf diseases with discrete, roundish symptoms like CLS or SBR. The spatial resolution of the hyperspectral camera used in this study provided information even on subareas of disease symptoms. Nevertheless, the tiny uredinia of U. betae and the limited spatial resolution of the sensor resulted in a high proportion of mixed pixels in the SBR experiments. Depending on the shape of the symptoms, pixel size should be smaller than the object of interest by a factor of 2 to 5 [23,24]. This rule from remote sensing restricts the sensing of plant diseases to proximal sensing technologies.
The sugar beet diseases differed in their temporal and spatial development as well as in their effects on plant tissue, which are associated with distinct reflectance characteristics. The spectral impact of sugar beet diseases on leaf reflectance was previously described in detail by Mahlein et al. [5]. In hyperspectral imaging cubes, transects through CLS symptoms revealed a continuum from healthy tissue over newly colonized tissue, discoloured (reddish-brown) tissue, chlorotic cells and dying cells to dead cells in the centre of mature symptoms, associated with characteristic changes of reflectance in the VIS and NIR, the latter being especially sensitive to modifications of the tissue structure [25,26]. Boyer et al. [27] described similar effects in senescent leaves of the northern pin oak.
The biotrophic pathogens U. betae and E. betae are less destructive; both pathogens largely rely on the integrity of host cells and on a functional metabolism of their host plant. Structural changes of infected leaf tissue and modifications in pigment content were smaller than for CLS-diseased leaves and resulted in only slight changes in VIS and NIR reflectance. The number of chloroplasts was not visibly affected at the time of the appearance of mature symptoms. The whitish mycelium of E. betae on the leaf surface increased tissue reflectance over the full range of the sensor. An unambiguous detection of powdery mildew in early stages is challenging, since the dust-like cover results in a parallel shift of reflectance with minor influence on the shape of the reflectance curve. The reddish-brown urediniospores of Uromyces betae, in contrast, influenced tissue reflectance similarly to the reddish-brown margin of CLS symptoms (minor decrease from 450 to 500 nm, increase from 550 to 700 nm, decrease in the NIR, Figure 4). High concentrations of carotenoids and melanin-like pigments, causing the characteristic brown-orange colour of urediniospores, are well documented for many rust fungi [28]. As stated by Gitelson et al. [29], carotenoids and chlorophyll have overlapping absorption bands in the blue range around 520 nm. Cercospora leaf spots cause an increase in reflectance from 400 to 550 nm, whereas sugar beet rust causes no increase in reflectance in this range; reflectance remains roughly constant at about 0.05. It is assumed that the carotenoids described for sugar beet rust urediniospores counteract the effect of the chlorophyll loss. Nevertheless, the small size of SBR colonies impeded detection in early stages or at low disease severity.
Figure 5: Reflectance continuum from healthy tissue and disease-specific symptoms. Spectral signatures were extracted pixel-wise from a transect through characteristic leaf tissue in hyperspectral images: reflectance of (A) healthy tissue and of mature symptoms of (B) Cercospora leaf spot, (C) powdery mildew, and (D) sugar beet rust. Characteristic symptoms are shown as RGB images beneath their corresponding reflectance spectra; the pixels from which the spectra were extracted are indicated with a black rectangle.
Spatial patterns of discrete symptoms of sugar beet diseases could be investigated by pixel-wise assignment of spectral signatures. Modifications of spectral reflectance at different developmental stages were displayed in the spectral signatures of different subareas of the symptoms. For instance, the reflectance of new, immature symptoms was similar to that of the margin of fully developed lesions. The results for powdery mildew and SBR generally confirm the principle that maturing, but still growing, disease symptoms include all developmental stages up to that point.
Specific effects of diseases, disease stage, and the impact of disease severity on spectral characteristics of plants are complex, but may allow for new insights into host-pathogen interactions [30]. Similar to Ustin and Gamon [31], who classified different plant functional types into 'optical types' based on morphological and physiological traits using reflectance measurements, spectra of subareas of infected tissue were categorised during disease development in a similar way. Hyperspectral imaging revealed the various stages of sugar beet diseases as a continuum rather than as discrete classes. Gradients of reflectance exist between healthy/asymptomatic and symptomatic tissue, which may impede the classification between healthy and diseased leaf areas. The development of patterns in time and space, recorded by hyperspectral imaging, may help to identify diseases or stresses influencing crops on the canopy level [32] and on the tissue level [30].
Figure 6: Spatial and temporal dynamics in the hyperspectral signature of Cercospora leaf spot during pathogenesis. RGB images of infected sugar beet tissue (first row); regions of interest (ROI) for the extraction of characteristic spectral signatures for the classes healthy tissue (light green) and leaf spot 1 (light green), 3 (blue), 7 (yellow) and 11 (red) days after appearance, respectively (second row); Spectral Angle Mapper (SAM) classification based on spectral signatures from the ROIs (third row); spectral signatures for each class (last row).
Given that the spectral patterns of healthy and diseased tissue are known, supervised classification was the method of choice to detect, identify, and quantify diseased tissue of sugar beet leaves. Since SAM classification is based on defined endmember spectra, the detection of leaf colonization prior to the occurrence of visible symptoms was not feasible with this approach, but visible symptoms were classified with high accuracy. Benefits of the SAM algorithm for disease detection are its insensitivity to heterogeneities of surface topography and illumination, because the angle between two vectors is invariant with respect to the length of the vectors [33]. Leaf veins and differences in growth rates cause a characteristic undulated, grooved topography of sugar beet leaves, depending on the genotype. Heterogeneities in reflectance intensity occur, as radiation is not reflected uniformly by these surfaces. Although the classification accuracy of SAM was satisfying, it should be mentioned that this classification algorithm uses the average spectrum of each endmember class (e.g. healthy tissue and different symptom types). The spectral variability within each endmember class, denoted intra-class variability, is not retained. Luc et al. [34] obtained a higher overall classification accuracy for Belgian coastline regions by modifying the common SAM into an optimized SAM preserving the intra-class variability. This approach may also resolve problems in disease classification, e.g. the lower accuracy for early disease stages when only immature symptoms occur. Similar to the problems in the tomato - P. infestans system described by Zhang et al. [35], low disease levels of SBR resulted in lower accuracy of the SAM algorithm in this study.
Conclusions
This study is the first analysis of characteristic symptoms of various sugar beet diseases and of their development in time and space using HSI. This kind of pathogen 'life-logging' with HSI is of high interest for various applications, from basic research on the cellular level to large-scale applications in agricultural fields. New insights from hyperspectral disease detection on sugar beet contribute to a better understanding of plant optical properties during pathogenesis. The analysis methods and sensor specifications can be transferred and generalized to other plant-pathogen systems. The technology enables the development of precise high-throughput screening systems for plant diseases in resistance breeding and fungicide development.
Plant material
Sugar beet seeds (Beta vulgaris L., cv. Pauletta, KWS, Einbeck, Germany) were pre-grown in small pots, and seedlings were pricked out when the primary leaves had fully developed. Seedlings were transferred into a commercial substrate (Einheitserde Topf, Klasmann-Deilmann, Geeste, Germany) in plastic pots (Ø 170 mm) at 23/20°C (day/night), 60% relative humidity (RH) and a photoperiod of 16 h. Plants were watered daily, fertilized weekly with 100 ml of a 0.2% solution of Poly Crescal (Aglukon, Düsseldorf, Germany), and used for the experiments after reaching growth stage (GS) 16 [36].
Culture and inoculation of pathogens
Conidia of C. beticola were harvested from diseased sugar beet leaves, sampled from fields in autumn and incubated in a moist chamber for 12 h. Cercospora beticola was inoculated by spraying a spore suspension (4 × 10⁴ conidia ml⁻¹) onto the leaves using a hand sprayer. Subsequently, plants were covered with plastic bags to maintain 100% RH at 25/20°C for 48 h.
Urediniospores of U. betae were brushed off diseased leaves and stored at -19°C. Suspensions of U. betae (4 × 10⁴ urediniospores ml⁻¹) were sprayed onto leaves before covering the sugar beets with plastic bags and incubating them for 48 h at 19/16°C. For further incubation, the plants inoculated with C. beticola and U. betae were transferred to the greenhouse at 23/20°C and 60 ± 10% RH. Plants heavily infested with PM were used as the inoculum source of E. betae. Healthy plants were inoculated in a chamber where a ventilator ran for 25 seconds in order to distribute E. betae conidia evenly on the leaves. Plants were left overnight and afterwards transferred to the greenhouse. Non-inoculated plants were kept as healthy controls at 23/20°C and 60 ± 10% RH in the greenhouse.
Technical setup and hyperspectral image acquisition
For image acquisition, sugar beet plants were placed on mobile tables (0.8 m × 0.8 m, four plants per table) 2 days after inoculation (dai). According to Chaerle et al. [37], the fifth fully developed leaf pair of each sugar beet plant was fixed horizontally on a frame between a grid pattern made of two layers of rubber-laminated mesh wire. Frame and grid pattern were coated with black, matte colour to reduce reflectance of the material. The mesh wire largely prevented movements of the leaves, which were subdivided into equally sized squares (20 × 20 mm) on the images. The hyperspectral imaging system combines an imaging spectrograph and a mirror scanner. The line-scanning spectrograph ImSpector V10E (Spectral Imaging Ltd., Oulu, Finland) has a spectral range from 400 to 1000 nm and a spectral resolution of up to 2.8 nm. The maximal image size of the 30 μm sensor slit results in 1600 pixels per line with a sensor pixel size of 0.0074 mm. Given the distance between target and sensor system (0.60 m), a spatial resolution of 0.19 mm per pixel was obtained. A mirror scanner (Spectral Imaging Ltd.), with a maximal field of view of 80°, mounted in front of the objective lens provided the second spatial dimension of the images. The hyperspectral sensor system was mounted on a manual positioning XY-frame, surrounded by six ASD Pro-Lamps (Analytical Spectral Devices Inc., Boulder, USA) radiating a near-solar light spectrum. The distance between lamps and leaves was 0.5 m with a vertical orientation of 45°. Imaging data were recorded in a dark chamber in order to achieve optimal, reproducible illumination and constant measurement conditions. Hyperspectral images were taken daily from 2 dai until 21 dai.
Using the software SpectralCube (Spectral Imaging Ltd., Oulu, Finland), the angle of the mirror scanner as well as the spectral and spatial resolution were adapted to the object. Images on the leaf level were taken with spectral binning 4 and spatial binning 1. Frame rate and exposure time were adjusted to binning and object. The sensor system was focused manually on a barium sulphate calibration bar (Spectral Imaging Ltd., Oulu, Finland) with black rhombi on a white background, placed at the same distance from the camera as the leaves. For the subsequent calculation of reflectance, three images were grabbed: a dark current image, recorded by closing an internal shutter of the camera, followed by an image of a white reference bar (Spectral Imaging Ltd., Oulu, Finland) with the same horizontal size and on the same level as the object area, both with the same exposure time; subsequently, an image of the leaf area was recorded with improved exposure time. Experiments were conducted at least twice.
Figure 7: Automatic classification of powdery mildew on sugar beet leaves using the spectral angle mapper (SAM) algorithm. The three classes 'healthy' (green), 'light mycelium' of powdery mildew (yellow), and 'dense mycelium' of powdery mildew (red) were separated at different disease severity stages with a maximum angle threshold of 0.1°. RGB images and false-colour SAM classification images 8 dai, 11 dai, and 14 dai.
Normalization and pre-processing of hyperspectral images
Calculations of reflectance relative to the white reference bar and the dark current measurement were performed using the software ENVI 4.6 + IDL 7.0 (ITT Visual Information Solutions, Boulder, USA). After this normalization, the Savitzky-Golay filter [38] was applied to smooth the signals of the hyperspectral images. The parameters for the smoothing process were 5 supporting points to the left and right, respectively, and a fifth-degree polynomial. The pre-processed images were used for further analysis in ENVI 4.6 + IDL 7.0.
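These two pre-processing steps can be sketched as follows. This is an illustrative Python/SciPy implementation, not the ENVI + IDL code used in the study; the array layout (rows × columns × bands) and the broadcasting of the reference measurements are assumptions about the data format.

```python
import numpy as np
from scipy.signal import savgol_filter

def normalize_and_smooth(raw, white, dark):
    """Convert raw sensor counts to relative reflectance and smooth spectra.

    raw:   (rows, cols, bands) hyperspectral image
    white: (cols, bands) white-reference measurement
    dark:  (cols, bands) dark-current measurement
    """
    # Relative reflectance: (raw - dark) / (white - dark)
    reflectance = (raw - dark) / (white - dark + 1e-12)
    # Savitzky-Golay smoothing along the spectral axis, mirroring the
    # parameters in the text: 5 points on each side (window of 11)
    # and a fifth-degree polynomial.
    return savgol_filter(reflectance, window_length=11, polyorder=5, axis=-1)
```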
Reduction of spatial resolution by cubic convolution
The resampling function 'cubic convolution' was applied to hyperspectral images to reduce the spatial resolution of the original data. New pixel values were calculated by weighing the 16 surrounding pixels. From a primary spatial resolution of 0.19 mm pixel size, reduced spatial resolutions of 0.8 mm, 3.1 mm and 17.1 mm pixel size were calculated.
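A rough equivalent of this resampling step is sketched below, under the assumption that cubic spline interpolation is an acceptable stand-in for the cubic convolution resampling used in the study (SciPy's spline of order 3 also weighs a neighbourhood of surrounding pixels, but it is not the ENVI implementation).

```python
from scipy.ndimage import zoom

def reduce_resolution(cube, old_px=0.19, new_px=3.1):
    """Resample a (rows, cols, bands) hyperspectral cube to a coarser
    pixel size with cubic (order-3) interpolation on the spatial axes,
    leaving the spectral axis untouched."""
    factor = old_px / new_px  # < 1 shrinks the spatial dimensions
    return zoom(cube, (factor, factor, 1.0), order=3)
```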
Disease-specific spectral signatures
Spectral signatures of pixels from characteristic regions of fully developed disease symptoms were extracted. Twenty fully developed symptoms of the same developmental stage were analysed for each disease. As the characteristics of symptoms vary during pathogenesis, spectral signatures of symptoms at different stages were collected and averaged. Ten symptomatic areas were analysed daily for each disease. Spectral signatures from infested areas were extracted from regions of interest (ROIs). Additionally, RGB images of sugar beet leaves were taken, and the leaf infection for each pathogen was evaluated visually and expressed as the percentage of diseased leaf area.
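Extracting a characteristic signature from an ROI reduces to averaging the spectra of the masked pixels. A minimal sketch, assuming a boolean ROI mask of the same spatial shape as the image:

```python
import numpy as np

def roi_mean_spectrum(cube, mask):
    """Average spectral signature over an ROI.

    cube: (rows, cols, bands) reflectance image
    mask: (rows, cols) boolean array, True inside the ROI
    """
    return cube[mask].mean(axis=0)  # (bands,) mean spectrum of the ROI
```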
Spectral Angle Mapper (SAM) classification
Automatic classification known from remote sensing image analysis was applied to hyperspectral images of diseased sugar beet leaves for the differentiation of diseases. The Spectral Angle Mapper method (SAM, [33]) was performed using the software ENVI 4.6 + IDL 7.0. Spectral classification approaches assign each pixel to one of several known categories or classes (endmembers) through a statistical approach. Spectrally unique signatures of pure image components, i.e. endmembers, have to be defined, and specific classification algorithms are then used to classify the pixels. For CLS classification the endmembers 'healthy', 'margin' of a leaf spot, and 'centre' of a leaf spot were chosen; for PM 'healthy', 'light mycelium' and 'dense mycelium'; and for classification of SBR 'healthy' and 'rust'. The data set was divided into a set of training data and a set of test data to train the classifiers. The classification decomposes the hyperspectral image into a false-colour image containing thematic information on the previously selected classes. SAM calculates the spectral similarity between measured and reference spectra as the spectral angle between the two spectra in an n-dimensional space, where n depends on the number of spectral bands. The output of SAM is an angular difference for each pixel, which can be illustrated in a false-colour image; small spectral angles correspond to high similarity, large spectral angles to low similarity [33]. Because the analysed spectra are treated as vectors, variable illumination due to the surface structure and veins of sugar beet leaves is attenuated (darker pixels plot along the same vector, but closer to the origin). The SAM result was validated by the overall accuracy, quantifying the percentage of correctly classified cases, and by the kappa coefficient, which accounts for the effects of chance agreement.
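For a pixel spectrum t and a reference spectrum r, SAM computes the angle θ = arccos((t · r) / (‖t‖ ‖r‖)). A compact NumPy sketch of this classification rule is shown below; it is a generic implementation, not the ENVI routine used in the study, and the threshold value is illustrative.

```python
import numpy as np

def spectral_angle_mapper(cube, endmembers, max_angle=np.deg2rad(5.0)):
    """Classify each pixel by its spectral angle to reference spectra.

    cube:       (rows, cols, bands) reflectance image
    endmembers: (classes, bands) mean spectrum per class
    Returns an integer class map; -1 marks unclassified pixels whose
    smallest angle exceeds the threshold.
    """
    pixels = cube.reshape(-1, cube.shape[-1])
    # cos(theta) = (t . r) / (|t| |r|): invariant to illumination scaling,
    # since scaling a spectrum moves it along the same vector.
    dots = pixels @ endmembers.T
    norms = (np.linalg.norm(pixels, axis=1)[:, None]
             * np.linalg.norm(endmembers, axis=1)[None, :])
    angles = np.arccos(np.clip(dots / (norms + 1e-12), -1.0, 1.0))
    best = angles.argmin(axis=1)          # most similar endmember
    best[angles.min(axis=1) > max_angle] = -1
    return best.reshape(cube.shape[:2])
```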
Microscopic investigations
Sugar beet leaf tissue from non-infected leaves and leaves infected with the pathogens (3-5 mm × 3-5 mm) was sampled for histological analysis. Specimens were fixed with 8% paraformaldehyde and 8% glutaraldehyde in 0.2 M sodium cacodylate buffer (pH 7.3) under vacuum for 4 h at room temperature [39]. Samples were washed three times in cacodylate buffer for 20 min each, dehydrated in a graded ethanol series, and embedded in London Resin White medium. The embedded tissue was semi-thin sectioned with a diamond knife on an ultramicrotome (Reichert Ultracut E; Leica Microsystems, Nussloch, Germany) and stained with 1% toluidine blue. Stained samples were observed with a Leitz DMR 6000B photomicroscope. Digital photos were taken using a digital camera (JVC, KY-F75U) and the software Discus 4.6 (Technical Office Hilgers, Königswinter, Germany). Confusion matrices (Tables S1-S3) for the SAM classification of Cercospora leaf spot, powdery mildew and sugar beet rust diseased leaves are provided in the supplementary material for each measuring date.
"Environmental Science",
"Mathematics"
] |
MethylCal: Bayesian calibration of methylation levels
Abstract Bisulfite amplicon sequencing has become the primary choice for single-base methylation quantification of multiple targets in parallel. The main limitation of this technology is a preferential amplification of an allele and strand in the PCR due to methylation state. This effect, known as 'PCR bias', causes inaccurate estimation of the methylation levels, and calibration methods based on standard controls have been proposed to correct for it. Here, we present a Bayesian calibration tool, MethylCal, which can analyse jointly all CpGs within a CpG island (CGI) or a Differentially Methylated Region (DMR), avoiding 'one-at-a-time' CpG calibration. This enables more precise modeling of the methylation levels observed in the standard controls. It also provides accurate predictions of the methylation levels not considered in the controlled experiment, a feature that is paramount in the derivation of the corrected methylation degree. We tested the proposed method on eight independent assays (two CpG islands and six imprinting DMRs) and demonstrated its benefits, including the ability to detect outliers. We also evaluated MethylCal's calibration in two practical cases, a clinical diagnostic test on 18 patients potentially affected by Beckwith-Wiedemann syndrome, and 17 individuals with celiac disease. The calibration of the methylation levels obtained by MethylCal allows a clearer identification of patients undergoing loss or gain of methylation in borderline cases and could influence further clinical or treatment decisions.
INTRODUCTION
DNA methylation is an epigenetic mark associated with a broad range of disorders including cancer (1), autoimmunity (2), aging (3) and imprinting (4). This mechanism implies the addition of a methyl group to the 5-carbon of cytosine in a CpG dinucleotide to form 5-methylcytosine (5-mC) (5). Modifications in DNA methylation could affect gene expression, as reported in several types of diseases (6)(7)(8)(9).
To validate epigenome associations, identify regions of interest or clinically relevant biomarkers and create new diagnostic tests, it is crucial to develop fast, cheap and accurate DNA methylation assays (10). In this sense, bisulfite amplicon sequencing is an ideal choice for its capacity to analyse multiple targets in parallel with high accuracy, concordance and low cost (11). However, this method critically requires the amplification of bisulfite-converted DNA for the discrimination between un-methylated and methylated cytosines. The bisulfite conversion modifies un-methylated cytosines into uracil (U) while maintaining methylated cytosines as cytosines (C). The result of the conversion is single-stranded, fragmented DNA whose strands are no longer complementary. A preferential amplification of an allele and strand in the PCR is called 'PCR bias' (12). In order to obtain accurate results, it is important to minimize its effect as much as possible. To this end, investigators (13,14) have proposed to redesign primers by looking at strand-specific as well as bisulfite-specific flanking primers, but this solution is expensive and time consuming and might not solve the problem completely. Instead, PCR bias can be calculated and corrected in silico (12,15) by using standard controls with known methylation levels. Specifically, the best-fit hyperbolic (12) or cubic polynomial (15) curve obtained from the apparent level of methylation after PCR in standard controls is used to correct the observed methylation levels in the case and control samples.
In this work, we propose a new Bayesian calibration method that overcomes the limitations of the existing tools. In particular, our method analyses jointly all CpGs within a CGI or a DMR, avoiding 'one-at-a-time' CpG calibration or the calibration of the average methylation level across CpGs, which neglects the variability across CpGs (15,16). To test the proposed method, we designed eight independent assays in two CGIs located on the SDHC gene promoter and six imprinted DMRs, see Table 1 for details. After bisulfite conversion of genomic DNA, each target region was amplified by specific primers, and the specific amplicons were sequenced on a MiSeq. Each assay was run on five standard controls with known methylation percentages (0%, 25%, 50%, 75% and 100%) to determine the specific calibration curve through MethylCal. Compared to existing calibration tools (12,15), our method is able to capture with precision the variability of the apparent level of methylation observed after amplification at different actual methylation percentages. We demonstrate this feature and the benefits of our method when deriving the calibration curves in all the assays analysed.
When applied to a data set consisting of 18 patients potentially affected by Beckwith-Wiedemann syndrome (BWS) (17), the calibration curves obtained by our new method permit a more precise correction of the observed levels of methylation in two target regions (KCNQ1OT1 and H19/IGF2), with a clearer identification of patients undergoing loss or gain of methylation. We also validated MethylCal in a second data set regarding patients with celiac disease (16,18). Our method achieved better calibrations and more reliable corrections of the methylation levels in three target regions that have been associated with susceptibility to celiac disease. These features are important in clinical practice, since the accurate calibration of the methylation levels obtained by more sophisticated statistical methods could influence treatment decisions or further actions.
Samples
For standard controls, we used the Human methylated and non-methylated DNA set from Zymo (Zymo, CA, USA). The non-methylated DNA was purified from HCT116 DKO cells knockout for both DNA methyltransferases DNMT1 (-/-) and DNMT3b (-/-). The methylated DNA was purified from the same HCT116 DKO cells and was enzymatically methylated by M.SssI methyltransferase. Five actual methylation percentages (0%, 25%, 50%, 75% and 100%) were prepared by mixing different ratios of non-methylated and methylated human control DNA (Zymo, CA, USA), bisulphite converted (MethylEdge Bisulfite Conversion System, Promega). Additionally, we collected DNA from 18 potential BWS patients and 15 healthy controls. Genomic DNA was extracted from peripheral blood using the Gentra Puregene Blood Kit (Qiagen) and DNA quality was determined by Qubit 2.0 (Invitrogen, ThermoFisher). Appropriate human subject approvals and written informed consent were obtained from all participants. Bisulphite conversion of genomic DNA was performed in all samples at the same time with the MethylEdge Bisulfite Conversion System from Promega.
PCR amplification
We designed eight assays to quantify the methylation level at each CpG site in two CGIs located on the SDHC gene promoter and six imprinted DMRs, see Table 1 for details. For the design of the primers we used Bisulfite Primer Seeker 12S, a tool developed by Zymo (http://bpsbackup.zymoresearch.com/). The primer parameters were: 20-32 bp primer length, 150-220 bp product length, 55-57°C Tm, allowing 1 CpG in the first 1/3 of the primer, with a minimum of 4 CpGs per product. All designs were tested for primer dimers with the Multiple Primer Analyzer software (ThermoFisher). To allow sequencing through the Nextera XT kit (Illumina), we added overhang sequences to each primer: forward overhang 5'-TCGTCGGCAGCGTCAGATGTGTATAAGAGACAG-3' and reverse overhang 5'-GTCTCGTGGGCTCGGAGATGTGTATAAGAGACAG-3'. All standard controls were bisulfite converted at the same time and the eight specific PCRs were run at the same time using the same standard controls. To determine the conversion rate, we examined the conversion of cytosines to thymidine at non-CpG sites in the non-methylated control DNA (0%), showing a conversion rate higher than 98% (19,20). The sequences of the PCR primers used are listed in Table 1. The PCR reactions were carried out in 25 µl with ZymoTaq Premix (Zymo) using 1.2 µl of bisulphite-converted DNA. The amplification program was 95°C for 10 min, then 40 cycles at 95°C for 30 s, 58°C for 40 s and 72°C for 1 min, with an elongation step at 72°C for 7 min. PCR products were purified with the QIAquick PCR Purification Kit (Qiagen). To attach dual indices and Illumina sequencing adapters, we performed a second PCR on the purified products using the Nextera XT index kit (Illumina) following the manufacturer's recommendations. The second PCR was purified with AMPure XP beads (Beckman Coulter), quantified by Qubit 2.0 (Invitrogen) and normalized to 4 nM.
MethylCal
Model outline. MethylCal is a fully Bayesian mixed additive regression model. It predicts the apparent level of methylation observed after amplification based on the actual methylation percentages (AMP), borrowing information across all CpGs within a CGI or a DMR. MethylCal's regression model can be described as

$$y_{ij} = \beta_0 + \beta_1 x_{ij} + \beta_2 x_{ij}^2 + \beta_3 x_{ij}^3 + RE_{ij} + \varepsilon_{ij}, \qquad (1)$$

where y_ij ∈ [0%, 100%] is the apparent level of methylation after PCR at the ith AMP (i = 1, …, l) and the jth CpG (j = 1, …, m), x_ij is the ith AMP (x_1j = 0% and x_lj = 100%), which is constant across CpGs (in our experimental design 0%, 25%, 50%, 75% and 100% actual methylation are the same for all CpGs), and β_0, …, β_3 are the coefficients of the polynomial regression. Finally, ε_ij ∼ N(0, σ²). MethylCal is based on Moskalev's cubic polynomial regression (CPR) (15), given its simplicity, flexibility and effectiveness in calibrating methylation data. However, instead of fitting a distinct CPR for each CpG, in (1) CpGs are jointly analysed using all n = l × m observations at once. The second key feature of our model is the inclusion of the random effects RE_ij (i = 1, …, l, j = 1, …, m) that capture distinct effects at each AMP or CpG or a combination of both. Depending on how RE_ij is defined, different models can be derived from (1). In the next section, we present the specifications of RE_ij that we found useful to model accurately the apparent level of methylation after PCR in standard controls.
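To make the joint structure concrete, here is a minimal NumPy sketch of the fixed-effects part of model (1), fitting one cubic across all n = l × m standard-control observations; the random effects RE_ij and the Bayesian priors (handled by INLA in the actual R package) are omitted, and the data are simulated:

```python
import numpy as np

# l = 5 standard controls (AMPs) and m CpGs give n = l * m observations.
rng = np.random.default_rng(1)
amps = np.array([0.0, 25.0, 50.0, 75.0, 100.0])
m = 12
x = np.tile(amps, m)                                   # x_ij, same AMPs for all CpGs
y = 100 / (1 + np.exp(-(x - 50) / 12)) + rng.normal(0, 2, x.size)  # toy PCR-biased data

# One shared cubic design matrix for all CpGs: a joint fit of beta_0..beta_3,
# in contrast to one separate cubic regression per CpG.
X = np.column_stack([np.ones_like(x), x, x**2, x**3])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("beta_0..beta_3:", beta.round(4))
```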
Random-effects specification. MethylCal includes four regression models that differ in the specification of the random effects RE_ij; the model that fits the data best is selected by the Deviance Information Criterion (DIC) (21). The regression models considered in MethylCal are

$$M_1: RE_{ij} = \mathrm{AMP}_i, \qquad (2)$$
$$M_2: RE_{ij} = \mathrm{AMP}_i + \mathrm{CpG}_j, \qquad (3)$$
$$M_3: RE_{ij} = \mathrm{AMP}_i + \mu_j, \qquad (4)$$
$$M_4: RE_{ij} = \mathrm{AMP}_i + \mathrm{CpG}_j + \mathrm{CpG}^{*}_j\, x_{ij}. \qquad (5)$$

Besides the fixed-effects polynomial regression terms, in (2) the random effects AMP_i are introduced to model the variability of the apparent level of methylation after PCR at different AMPs not explained by the CPR. In (3) the crossed random effects (22) CpG_j are added to capture the heterogeneity of the apparent level of methylation across CpGs. In (4) the latent Gaussian field (LGF) μ = (μ_1, …, μ_m)^T (23) replaces the crossed random effects CpG_j to model the dependence of the apparent levels of methylation across CpGs. Finally, in (5) the random slopes (22) CpG*_j are added to model the larger (smaller) variability of the apparent level of methylation after PCR across CpGs at lower (higher) AMPs. The opposite scenario, with a smaller (larger) variability at lower (higher) AMPs, is also covered by (5). For identifiability, in all models considered we assume that Σ_i AMP_i = 0, Σ_j CpG_j = 0 and Σ_j CpG*_j = 0. Supplementary Figure S.1 provides a schematic representation of MethylCal's regression models, highlighting the role of the CPR, the crossed random effects AMP_i and CpG_j, the LGF μ and the combined effect of the random intercepts CpG_j and random slopes CpG*_j in predicting the apparent level of methylation after PCR.
Priors set-up. Since MethylCal is a fully Bayesian model, a prior distribution is specified for each unknown parameter. The fixed-effects regression coefficients follow a non-informative normal prior distribution, β_1, β_2, β_3 ∼ N(0, 10³), whereas for the intercept an improper flat prior distribution is used, π(β_0) ∝ 1. The crossed random effects AMP_i and CpG_j follow a normal distribution, AMP_i | τ ∼ N(0, τ⁻¹) and CpG_j | υ ∼ N(0, υ⁻¹), with non-informative prior precisions τ ∼ Gam(1, 0.1) (E(τ) = 10 and Var(τ) = 100) and υ ∼ Gam(1, 0.1). For the LGF, we follow (24) and model μ as a Random Walk of order 1 (RW1) (25), μ_j − μ_{j−1} | κ ∼ N(0, (p_j − p_{j−1}) κ⁻¹), with p_j and p_{j−1} the chromosomal positions of two consecutive CpGs (and p_0 = 0). With this specification the dependence between methylation levels depends on the distance between the corresponding CpGs, i.e., the closer the CpGs, the stronger the dependence. Finally, for the random-intercepts/slopes model, we specify a Normal-Wishart prior, (CpG_j, CpG*_j)^T | Ω ∼ N(0, Ω⁻¹) with Ω ∼ Wishart(r, R). The default values for the Wishart hyperparameters are r = 4 and R = I_2. The prior set-up is completed by the specification of a proper but relatively uninformative prior on the error variance, σ² ∼ InvGam(10⁻¹⁰, 0.001).
Advantages of the proposed model. MethylCal has several advantages compared to existing calibration tools. First, CpGs are jointly analysed using all n = l × m observations at once, avoiding unrealistic assumptions of independence of the methylation levels at nearby CpGs. Second, MethylCal is more parsimonious, with fewer parameters to estimate (five for the main effects, including the error variance, and l + 1, l + m + 2 and l + 2m + 4 random-effects coefficients for models M_1, M_2-M_3 and M_4, respectively, in contrast to the 5m coefficients required by Moskalev's CPR, where m is the number of CpGs in a DMR or CGI). Combined with a larger sample size, this allows narrower credible intervals for the coefficients and smaller prediction credible intervals, i.e., less model uncertainty. Third, differently from a simple fixed-effects model, the specification of different random effects allows MethylCal to adequately account for the patterns of variances and correlations of the methylation levels. While in Moskalev's method Var(Y_ij) = σ²_j is constant across AMPs, MethylCal allows a more complex variance structure. In model M_1, Var(Y_ij | σ², τ) = σ² + τ⁻¹, where τ⁻¹ models the variability of the apparent level of methylation after PCR across AMPs. In model M_2, Var(Y_ij | σ², τ, υ) = σ² + τ⁻¹ + υ⁻¹, with υ⁻¹ the additional variability of the apparent level of methylation across CpGs, whereas in model M_3 the RW1 induces an autoregressive covariance structure across CpGs through the covariance matrix of the latent Gaussian field. Finally, in contrast to Moskalev's CPR, MethylCal is able to capture the dependence between the observations and, in particular, the dependence of the methylation levels across CpGs (26), i.e., Cov(Y_ij, Y_i'j') ≠ 0, with Y_ij and Y_i'j' the observations from two distinct AMPs and CpGs and x_ij and x_i'j' the corresponding actual methylation percentages.
Inference. Inference on MethylCal's parameters is performed using the INLA R package (http://www.r-inla.org/). INLA is a probabilistic language that performs approximate Bayesian inference by means of integrated nested Laplace approximations (27) and numerical integrations. The main advantage of INLA is its simplicity, since a known practical impediment of Markov chain Monte Carlo (MCMC) methods in real applications is the large computational burden. Instead, INLA only requires the specification of the regression model, similarly to other regression packages in R (https://www.r-project.org/). A second advantage is its computational speed, since no sampling from the posterior densities is required. This is particularly important in model M_3, since LGF posterior inference is rather difficult using MCMC.
Note that steps 2 and 3 of the INLA procedure are only required for model M_3. Despite the Laplace approximations and numerical integrations, INLA provides results that are very close to those obtained by exact MCMC methods. Details about the INLA procedure can be found in (28) and (23).
Given the additive structure of MethylCal, the predictive values are derived straightforwardly. For example, in model M_3 the predicted value is

$$\hat{y}_{ij} = x_{ij}^{T} E(\beta \mid y) + E(\mathrm{AMP}_i \mid y) + E(\mu_j \mid y),$$

with x_ij^T = (1, x_ij, x_ij², x_ij³), E(β|y) the posterior mean of the fixed effects, E(AMP_i|y) the posterior mean of the random effects AMP at the ith level and E(μ_j|y) the posterior mean of the LGF at the jth CpG. Similar expressions hold for the other models.
Predictive measures
We compare the predictive ability of the MethylCal model selected by the DIC with Moskalev's CPR. In particular, we report the following 'in-sample' and 'out-of-sample' predictive measures:

- Residual Sum of Squares, $\mathrm{RSS} = \sum_{ij} (y_{ij} - \hat{y}_{ij})^2 / \sum_{ij} (y_{ij} - \bar{y})^2$, where ŷ_ij is the fitted value and ȳ the overall mean;
- Mean Squared Error of Prediction, $\mathrm{MSEP} = \frac{1}{n}\sum_{ij} \big(y_{ij} - E(Y_{ij} \mid x_{\setminus (ij)})\big)^2$, where E(Y_ij | x_{\(ij)}) indicates the prediction of y_ij when the observation corresponding to the ith AMP and jth CpG is excluded from the regression. We also consider the case E(Y_ij | x_{\j}) when the jth CpG is removed and E(Y_ij | x_{\i,j}) when the ith AMP is excluded from all CpGs. MSEP is computed when x_{\(ij)} or x_{\j} are removed from the regression; the case x_{\i,j} is not considered;
- CV-index, $\mathrm{CV} = 1 - \sum_{ij} \big(y_{ij} - E(Y_{ij} \mid x_{\setminus (ij)})\big)^2 / \sum_{ij} (y_{ij} - \bar{y}_{\setminus j})^2$, where ȳ_{\j} is the average apparent level of methylation after PCR without the measurements corresponding to the jth CpG.
The RSS ∈ [0, 1] is a measure of 'in-sample' fit and it is well known that over-parameterized models usually achieve better RSS. The MSEP ∈ [0, +∞) is instead a measure of 'out-of-sample' prediction based on leave-one-out cross-validation. A model with lower MSEP should be preferred, since it predicts more accurately the apparent level of methylation after PCR for unobserved values of the actual methylation percentages, a feature that is important in the derivation of the corrected methylation degree. Finally, the CV-index ∈ (−∞, 1] is similar to the MSEP, but it aims at comparing the 'out-of-sample' prediction of the proposed model with a simpler non-parametric model that predicts the apparent level of methylation after PCR by using the average value of all other observations. A negative CV-index is in favour of the simpler non-parametric model versus the more sophisticated parametric one.
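A small sketch of the three measures as defined above; the exact normalisations are our reading of the text, and `yhat_loo`/`ybar_loo` stand for leave-one-out predictions that the fitted model and the non-parametric mean predictor would supply:

```python
import numpy as np

def rss(y, yhat):
    """'In-sample' residual sum of squares, normalised so that RSS is in [0, 1]."""
    return np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)

def msep(y, yhat_loo):
    """'Out-of-sample' mean squared error of prediction (leave-one-out)."""
    return np.mean((y - yhat_loo) ** 2)

def cv_index(y, yhat_loo, ybar_loo):
    """1 minus the ratio of the model's LOO error to that of the
    non-parametric mean predictor; negative values favour the latter."""
    return 1.0 - np.sum((y - yhat_loo) ** 2) / np.sum((y - ybar_loo) ** 2)
```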
Corrected methylation degree
Given an observed level of methylation, measured in an individual (either in the case or in the control group) at a particular CpG within a DMR or a CGI, the corrected methylation degree can be obtained. However, differently from (12), where it can be calculated analytically by inverting the equation that describes the calibration curve, both Moskalev's CPR and MethylCal require a numerical procedure to perform the PCR-bias correction. In (15), the corrected methylation degree is obtained by solving

$$\hat{x}_j = \arg\min_{x_j \in [0\%,100\%]} \Big( y^{\mathrm{obs}}_j - \big(\hat\beta_{0j} + \hat\beta_{1j} x_j + \hat\beta_{2j} x_j^2 + \hat\beta_{3j} x_j^3\big) \Big)^2, \qquad (9)$$

where x̂_j ∈ [0%, 100%] is the corrected methylation degree, y^obs_j ∈ [0%, 100%] is the observed level of methylation at the jth CpG, and β̂_j is the maximum likelihood solution of the CPR for the jth CpG based on the apparent level of methylation after PCR in the standard controls. The existence of a unique solution depends on β̂_j, but in the examples considered Moskalev's CPR is a strictly increasing function. Thus, the objective function (9) admits only one solution, which can be obtained by the R function optimize.
The derivation of the objective function for MethylCal's mixed additive regression model is slightly more complicated, since only a few known values of the actual methylation percentage are usually tested in a calibration experiment. This is a typical problem in linear mixed models when predictions are made for new observations, as these predictions are conditional on an unobserved level of the random effect (31). To overcome this problem, we consider η_i = E(AMP_i | y), the posterior mean of the random effects AMP at the ith level. Note that η_i is also the predicted value of the random effects AMP at the same ith level of the observation x_ij. A cubic spline interpolation is then fitted on the posterior means η_i (i = 1, …, l) and, by doing so, a new value η̂(x_j) can be predicted for any value of the actual methylation percentage x_j, see the Supplementary Material. For each CpG (j = 1, …, m), the PCR-bias corrected methylation degree x̂_j ∈ [0%, 100%] is the solution of

$$\hat{x}_j = \arg\min_{x_j \in [0\%,100\%]} \Big( y^{\mathrm{obs}}_j - \big( x_j^{T} E(\beta \mid y) + \hat\eta(x_j) + C_j \big) \Big)^2, \qquad (10)$$

where E(β|y) is the posterior mean of the fixed effects, η̂(x_j) is the cubic spline predicted value of the random effects AMP at the new observation x_j, and C_j = 0, C_j = E(CpG_j | y), C_j = E(μ_j | y) and C_j = E(CpG_j | y) + E(CpG*_j | y) x_j for models M_1, M_2, M_3 and M_4, respectively. In (10) the existence of a unique solution depends on the combined effect of E(β|y), η̂(x_j) and C_j, but in the examples analysed MethylCal's calibration curve is strictly increasing, allowing for a unique solution. For each CpG, the numerical value of x̂_j is then obtained by using the R function optimize, specifying (10) as the objective function.
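Mirroring the use of R's optimize on objectives (9) and (10), the numerical inversion can be sketched in a few lines of Python with SciPy; the logistic toy curve stands in for the fitted calibration curve and is purely illustrative:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def corrected_degree(y_obs, calib):
    """Numerically invert a strictly increasing calibration curve on [0, 100],
    i.e. minimise (y_obs - calib(x))^2 as in objectives (9) and (10)."""
    res = minimize_scalar(lambda x: (y_obs - calib(x)) ** 2,
                          bounds=(0.0, 100.0), method="bounded")
    return res.x

# toy monotone calibration curve standing in for the fitted model
calib = lambda x: 100 / (1 + np.exp(-(x - 50) / 12))
print(corrected_degree(70.0, calib))          # corrected degree, roughly 60.2
```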
RESULTS
The assays presented in Table 1 were analysed using MethylCal and Moskalev's CPR. First, for each specific assay, we derived the calibration curves using five standard controls with known methylation percentages. Second, we checked the goodness of fit of the calibration curves obtained by MethylCal and compared the results with those obtained by Moskalev's CPR. Third, we corrected the observed methylation degree based on the estimated calibration curves in two specific target regions (KCNQ1OT1 and H19/IGF2) important for their BWS clinical diagnostic value and in three target genes that have been associated with susceptibility to celiac disease. The three steps are detailed below.
Derivation of the calibration curves
We obtained a specific calibration curve for each assay using the proposed method and compared the results with Moskalev's CPR. Figure 1 shows the level of methylation of two assays, KCNQ1OT1 (top panels) and H19/IGF2 (bottom panels), predicted by Moskalev's CPR, whereas Figure 2 shows the results obtained by MethylCal. Moskalev's CPR is an over-parameterized model: when the actual methylation percentages (AMP) at 0%, 25%, 50%, 75% and 100% are used in the calibration experiment, the number of estimated parameters for each CpG (four for the regression coefficients and one for the residual variance) equals the number of observations, leaving no degrees of freedom. Thus, the 95% prediction confidence interval is extremely wide, see Figure 1B-E, with a large uncertainty regarding the estimated model. In contrast, Figure 2B-E highlights the parsimony of MethylCal. Using all n = l × m observations at once, with fewer parameters to estimate and thus more degrees of freedom, the 95% prediction credible intervals are much narrower than Moskalev's CPR. Moreover, using MethylCal, the predicted levels of methylation are very close to the apparent levels of methylation after PCR. This is evident from Figure 2A-D, where models M_4 and M_3 were selected by the DIC for the assays KCNQ1OT1 and H19/IGF2, respectively. In both assays, MethylCal interpolates the apparent level of methylation observed after amplification remarkably well, despite the fact that the data show a more complex pattern than the previously reported hyperbolic (12) or cubic polynomial (15) shape when the apparent level of methylation after PCR is plotted as a function of the actual methylation percentage. In contrast, in Figure 1A-D, Moskalev's CPR is not able to interpolate the data with the same precision, in particular for the inner values of the AMPs (25%, 50%, 75%). Finally, Figures 1C-F and 2C-F show the impact of the interpolation on the PCR-bias correction. For each CpG-AMP combination, (9) and (10) are used to correct the apparent level of methylation after PCR. If the correction were perfect, the corrected methylation degrees would coincide with the AMPs used in the calibration experiment. Overall, MethylCal's correction is more precise than Moskalev's adjustment due to its ability to interpolate adequately the apparent level of methylation after PCR at different AMPs.

By visual inspection of MethylCal's results presented in Figure 2F, some measurements seem less well calibrated at 75% actual methylation. A closer look at Figure 2E reveals that the apparent level of methylation after PCR for CpG 12 is outside the posterior predictive interval [l, u] for outlier detection, with l = Q1 − 1.5 IQR and u = Q3 + 1.5 IQR, where IQR = Q3 − Q1 and Q3 and Q1 are the 75th and 25th percentiles of the posterior predictive density. A second CpG outside the posterior predictive interval for outlier detection is also present at 100% actual methylation, although in this case, given the shape of the PCR-bias correction curve, the impact on the calibration is less pronounced. Under the fitted MethylCal model, these observations can either be regarded as outliers, and thus removed from the analysis, or the data-generation process, including biological and biochemical factors, should be further investigated to understand the possible causes of this unusual pattern. This conclusion highlights a further feature of MethylCal, i.e., its ability to pinpoint specific CpG-AMP combinations as potential outliers that do not fit with the bulk of the data and need to be further checked.

The predicted level of methylation and the PCR-bias correction for the rest of the assays analysed, the two CGIs (SDHC CpG:17 and SDHC CpG:27) located on the SDHC gene promoter and the remaining four imprinted DMRs (including PLAGL1, GRB10 and MEST; see Table 1), are shown in the Supplementary Material.
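The outlier rule described above is the standard quartile criterion applied to draws from the posterior predictive density; a minimal sketch, with simulated draws standing in for INLA output:

```python
import numpy as np

def predictive_outlier(obs, pp_draws):
    """Flag an observation outside [Q1 - 1.5 IQR, Q3 + 1.5 IQR] of the
    posterior predictive draws, as in the interval rule above."""
    q1, q3 = np.percentile(pp_draws, [25, 75])
    iqr = q3 - q1
    return obs < q1 - 1.5 * iqr or obs > q3 + 1.5 * iqr

draws = np.random.default_rng(2).normal(75.0, 2.0, 5000)   # toy predictive draws
print(predictive_outlier(83.0, draws))                     # True -> flagged
```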
Goodness of fit
MethylCal's superior performance compared to Moskalev's CPR is also apparent when the 'in-sample' and 'out-of-sample' goodness-of-fit measures are considered. Table 2 shows the predictive performance for two of the assays tested, KCNQ1OT1 and H19/IGF2. The best MethylCal model selected by the DIC performs better than Moskalev's CPR when the Residual Sum of Squares (RSS) is considered. These results demonstrate that, although Moskalev's CPR is an over-parameterized model that should attain better 'in-sample' prediction, it is not suitable for calibration when the data do not show the previously reported hyperbolic or cubic polynomial shapes. The same conclusions can be drawn for the other assays presented in Supplementary Table S.1: MethylCal shows better predictive performance in all assays tested, being only marginally worse for the GRB10 assay. MethylCal also performs better in the PLAGL1 assay, which is the most favourable case for Moskalev's CPR given the cubic polynomial shape of the data.
Our comparisons also consider the 'out-of-sample' prediction, and three possible scenarios are examined. In the first one, the cross-validation is performed by removing a data point that corresponds to a specific CpG-AMP combination. In the second scenario, each CpG is excluded one-at-a-time, while in the last scenario each AMP is removed separately in the cross-validation. Since Moskalev's method cannot predict the AMPs of the CpGs that have been removed, in the second scenario the 'out-of-sample' prediction is obtained by averaging Moskalev's predicted values of the two flanking CpGs, each of them weighted by the distance (in bp) between the excluded CpG and each flanking CpG.
In all assays tested, the Mean Squared Error of Prediction (MSEP) of the best identified MethylCal model is lower than Moskalev's CPR by several orders of magnitude when the cross-validation is performed across CpG-AMP combinations or across CpGs, see Table 2 and Supplementary Table S.1. When the cross-validation is performed across AMPs, the difference between MethylCal and Moskalev's CPR is less evident. Since in this scenario an AMP has been removed from all CpGs, MethylCal cannot borrow information about the excluded AMP across CpGs. Nonetheless, MethylCal has a lower MSEP than Moskalev's method across all assays analysed, with a gain ranging between 1% and 5%. The improvement for the GRB10 assay (∼43%) is particularly high, since in this case the exclusion of a calibration sample does not hurt the estimation of the random effects AMP_i, see Supplementary Figure S.8E. Taken together, these results suggest that MethylCal should also be preferred when the number of calibration samples is reduced from five to four.
We also evaluated MethylCal's performance by using the CV-index. Interestingly, the CV-index for Moskalev's CPR is always negative when a CpG-AMP combination is removed in the cross-validation. Thus, a non-parametric model that predicts the CpG-AMP combination by using the remaining observations performs better than Moskalev's CPR. This is also true when a CpG is excluded in the cross-validation, except for the GRB10 assay. In contrast, when looking at the CV-index, the best MethylCal model selected by the DIC is always better than Moskalev's CPR, with an inferior CV-index performance only in one case (KCNQ1OT1 assay) for the prediction of the CpG-AMP combination and in another one (SDHC CpG:17) for the CpG 'out-of-sample' prediction.
Finally, Supplementary Table S.2 summarizes MethylCal's goodness-of-fit measures across all assays tested and compares them with Moskalev's CPR. MethylCal's best model selected by the DIC performs better than Moskalev's CPR in the 'in-sample' prediction in all but a single assay. In the 'out-of-sample' prediction, MethylCal's best model is always better than Moskalev's CPR (with the exception of the GRB10 assay), considering either the MSEP or the CV-index measures. Moreover, MethylCal's best model has a non-negative CV-index in 14 out of 16 cases.
Application in clinical diagnostic of Beckwith-Wiedemann syndrome
BWS is caused by genetic and epigenetic abnormalities on chr11p15.5-11p15.4 that produce an increase of IGF2 growth factor levels and/or a reduction of CDKN1C growth suppressor protein levels. The loss of methylation of the maternal KCNQ1OT1 and the gain of methylation of the maternal H19/IGF2 are the most frequent defects in BWS. In addition, the frequency of mosaicism is high in BWS, introducing the problem of borderline cases that are difficult to diagnose.
The observed methylation levels of 15 healthy controls and 18 potential BWS patients were corrected using the calibration curves obtained by MethylCal and Moskalev's CPR in the KCNQ1OT1 and H19/IGF2 assays. Patients with an average corrected methylation level below a 3SD confidence interval were considered to undergo loss of methylation, and those with a level above the 3SD confidence interval were considered to experience gain of methylation, see Figures 3A-B and 4A-B for the assays KCNQ1OT1 and H19/IGF2, respectively, using Moskalev's CPR (left panels) and MethylCal (right panels). To avoid false positives, in clinical practice a ±3SD confidence interval is usually chosen, since it guarantees a low type-I error (α = 0.0027). Moreover, the confidence interval should be large enough to contain the control samples' corrected methylation degrees across all CpGs. Figure 3A and B presents the results of the corrected methylation degree for the KCNQ1OT1 assay in the healthy control group. MethylCal has a larger confidence interval (28.012-86.71) compared to that obtained by using Moskalev's CPR (37.26-81.73). This is due to the effect of the calibration curve estimated by Moskalev's CPR, which shrinks the corrected methylation degrees for observed methylation levels greater than 50%, while the opposite happens for observed methylation levels lower than 50%, see Figure 1C. The joint effect of a larger healthy controls' confidence interval and a more accurate calibration of the methylation degree in the patient group permits the reclassification of patient B5B37 as normally methylated, in contrast to Moskalev's CPR, which classifies the same patient as having undergone loss of methylation, see Figure 3C and D. Moreover, with MethylCal, patients B5B38 and B5C41 are well within the healthy controls' confidence interval (including the range of the corrected methylation degree across CpGs), with less uncertainty about their classification. Figure 4C and D shows the results of a second assay, H19/IGF2, used in the classification of patients. While both methods detected gain of methylation in patients B5A42 and B5B38, thus affected by BWS, patient B5B37 is also identified as having undergone gain of methylation by MethylCal. However, in contrast to the KCNQ1OT1 assay, in the H19/IGF2 assay there is more uncertainty regarding the classification: for both patients B5B37 and B5B38, the range of the corrected methylation degree across CpGs intersects the upper bound of the healthy controls' confidence interval, while the normally methylated patients B5C01 and B5C06 show the same uncertainty at the bottom of the confidence interval.
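The classification logic reduces to a control-group interval test; a hedged sketch following the ±3SD rule described above, with toy data in place of the corrected methylation degrees:

```python
import numpy as np

def classify_patient(patient_mean, control_levels, n_sd=3.0):
    """LOM/GOM call against the healthy-control interval mean +- n_sd * SD."""
    mu = control_levels.mean()
    sd = control_levels.std(ddof=1)
    if patient_mean < mu - n_sd * sd:
        return "loss of methylation"
    if patient_mean > mu + n_sd * sd:
        return "gain of methylation"
    return "normal methylation"

controls = np.random.default_rng(3).normal(55.0, 5.0, 15)  # toy corrected levels
print(classify_patient(30.0, controls))                    # -> loss of methylation
```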
Finally, the patients' classification depends upon the choice of the length of the healthy controls' confidence interval. However, when a less conservative test is chosen (α = 0.01), MethylCal's results do not change. This is not true when Moskalev's CPR is employed, as shown in Supplementary Figure S.9. This is due to Moskalev's less precise calibration curve and its shrinkage effect on the corrected methylation degrees, for which a small difference in the level of significance has a large impact on the patients' classification.
Application in clinical diagnostic of celiac patients
We applied MethylCal to a second data set containing human genomic control DNA measured at eight distinct AMPs (0%, 12.5%, 25%, 37.5%, 50%, 62.5%, 87.5% and 100%) in eight NFkB-related and Toll-like receptor genes (16). It also contains the uncorrected methylation levels on the same target regions of 13 controls and 17 celiac patients at the time of diagnosis, with patient data pyrosequenced in three runs (18). In our analysis we focused on the NFKBIA gene, as well as on the RELA and TNFAIP3 genes that, similarly to NFKBIA, have been associated with susceptibility to celiac disease. Figure 5 shows the calibration curves of the NFKBIA assay and the corrected methylation degrees in celiac patients using Moskalev's CPR (left panels) and MethylCal (right panels). Our method confirms its ability to automatically detect outliers. For example, in Figure 5B-D, several methylation levels in CpG 5 are detected as outliers (black dots), since they show an apparent level of methylation at 37.5%, 50% and 62.5% actual methylation that is lower than at 25%. Similarly, there is an outlier in CpG 3, where the apparent level of methylation at 50% actual methylation is as high as at 62.5%. Outliers were also detected between 37.5% and 62.5% actual methylation in the RELA and TNFAIP3 assays, see Supplementary Figures S.10B-D and S.11B-D, respectively. Rather than relying on a difficult visual inspection of the data, MethylCal identifies specific CpG-AMP combinations that do not fit with the bulk of the data and accounts for them when deriving the calibration curves. See also Supplementary Table S.3 for the comparison of the goodness of fit between MethylCal and Moskalev's CPR on the NFKBIA, RELA and TNFAIP3 assays and the overall better performance of the proposed tool.
A different estimation of the calibration curves may have a large impact on the correction of the case/control samples and the classification of the patients. Figure 5E-F exemplifies this case: Moskalev's CPR classifies only patient 16D as normally methylated, while MethylCal, besides patient 16D, also identifies patients 09D and 12D as normally methylated. In particular, MethylCal estimates an average corrected methylation level for patient 09D (11.34) that is more than double the level obtained by Moskalev's CPR (5.24). Further investigation confirms that patient 09D is always classified by Moskalev's CPR as having undergone loss of methylation, irrespective of the level of significance of the test (α ≤ 0.10).
DISCUSSION
Bisulfite amplicon sequencing is an ideal platform for the detection of methylation changes on multiple targets in parallel due to its low cost and efficiency in single-base quantification (32). The main limitation of this technology is a preferential amplification of an allele and strand in the PCR due to methylation state (12). This effect causes inaccurate estimation of the methylation levels, and in silico calibration tools have been proposed to minimize it.
In this work, we proposed a new Bayesian calibration tool that is able to analyse jointly all CpGs within a CGI or a DMR, avoiding 'one-at-a-time' CpG calibration. MethylCal has several benefits compared to existing methods (12,15), including a better 'out-of-sample' prediction, which is particularly important in the derivation of the corrected methylation degree, and the ability to detect CpG-AMP combinations that should be regarded as outliers, and therefore removed or further checked. Our approach is also very general and applicable irrespective of the locus analysed (CGIs or DMRs), the type and degree of PCR bias to be recovered (large towards the un-methylated allele as in the PLAGL1 assay, small towards the methylated allele as in SDHC CpG:17), the number of CpGs per locus (few as in the MEST assay, many as in the PLAGL1 assay) and the number of calibration samples.
MethylCal includes four different models, each with a different random-effects combination. In the analysis of the BWS data, M_4 is the preferred model, since it allows the specification of the correlation of the apparent level of methylation between CpGs and the AMPs. This behaviour is particularly evident in two assays, GRB10 and MEST, that show higher than expected methylation levels at 0% and 25% actual methylation, an effect that gradually disappears at higher AMPs. This pattern cannot be explained by the expected error associated with standard controls (5% for un-methylated DNA and an extra 5% for fully-methylated DNA). Given that the same calibration samples were used in all reactions and the high conversion rate (>98%), this phenomenon might be due to region-specific resistance to bisulfite conversion. A possible explanation is the formation of stable secondary structures around the CpG site that makes the region more resistant to denaturation and subsequent conversion (33).
The application of MethylCal for the calibration of the observed methylation levels of the KCNQ1OT1 and H19/IGF2 assays in a real data set of possible BWS cases shows the importance of the accurate quantification and correction of the PCR bias to distinguish borderline cases. We considered gain of methylation in a region when the level of methylation is above 3SD from the average methylation level detected in the control group, and loss of methylation when the level of methylation is below 3SD. Using MethylCal, we classified patient B5B37 as having undergone gain of methylation in the target region H19/IGF2, in contrast to Moskalev's method, which identified the same patient with a loss of methylation in the target region KCNQ1OT1. In the analysis, we applied a very conservative threshold, ±3SD (around a 0.3% false-positive error in the diagnostic), but MethylCal's results do not change if the level of significance is increased to a less restrictive 1%, demonstrating that its corrections are less influenced by the choice of the level of the test. Finally, the benefits of the proposed method, i.e., better calibrations and more reliable corrections, are also shown in a second case/control data set related to pyrosequenced methylation levels in three target regions associated with susceptibility to celiac disease.
In both real data applications, the improvement in accuracy observed after calibration determines the diagnosis, but it could also influence clinical or treatment decisions or further actions. Moreover, the accuracy of the calibration method is critical in disorders with mosaicism such as BWS, but not exclusively, since the same problem will affect, for example, circulating tumor DNA samples, which will have extensive application in cancer diagnostics in the near future.
In conclusion, MethylCal learns the presence, location and size of the PCR bias better than existing methods and adjusts for it in the correction step, allowing the identification of loss or gain of methylation in difficult cases with less uncertainty. Its availability as a user-friendly R package will also permit its routine application in clinical diagnostic and research laboratories.
SOFTWARE
Written in R and available as an R package at https://github.com/lb664/MethylCal.
"Biology",
"Computer Science"
] |
Quench-drive spectroscopy and high-harmonic generation in BCS superconductors
In pump-probe spectroscopies, THz pulses are used to quench a system, which is subsequently probed by either a THz or optical pulse. In contrast, third-harmonic generation experiments employ a single multicycle driving pulse and measure the induced third harmonic. In this work, we analyze a spectroscopy setup where both a quench and a drive are applied and two-dimensional spectra as a function of time and quench-drive delay are recorded. We calculate the time evolution of the nonlinear current generated in the superconductor within an Anderson-pseudospin framework and characterize all experimental signatures using a quasiequilibrium approach. We analyze the superconducting response in Fourier space with respect to both the frequencies corresponding to the real time and the quench-drive delay time. In particular, we show the presence of a transient modulation of higher harmonics, induced by a wave mixing process of the drive with the quench pulse, which probes both quasiparticle and collective excitations of the superconducting condensate.
I. INTRODUCTION
The superconducting state of matter is characterized by a zoo of collective modes: these include, among others, Higgs, Leggett, bi-plasmon, and Bardasis-Schrieffer modes [1][2][3][4][5][6][7][8][9][10][11]. The study of these modes is currently being established as a new field of collective mode spectroscopy, in the sense that bosonic excitations of the condensate reveal information about the underlying superconducting ground state and symmetry properties of the condensate itself [7,10,12,13]. The Higgs mode, for instance, can be used as a spectroscopic tool to distinguish between different gap symmetries of unconventional superconductors [10].
The experimental study of collective superconducting modes poses significant challenges. Due to particle-hole symmetry, they generally cannot couple linearly to electromagnetic fields in the spatially homogeneous limit [3,14]. Instead, they are activated in a two-photon Raman-like process [15,16]. Thus, the main signature of collective modes consists of a renormalization of the nonlinear susceptibility, which can be probed by nonlinear spectroscopic techniques, such as high-harmonic generation, optical Kerr effect, and nonlinear optical conductivity measurements [17][18][19].
Generally, two distinct approaches exist to excite collective oscillations of a superconductor: the first is to apply a short-duration quench pulse, τΔ < 1 (τ being the pulse duration and Δ the energy gap), to suddenly shrink the superconducting gap and excite the system into an out-of-equilibrium state [10]. The second approach uses a single longer pulse, τΔ ≫ 1, to drive the material into a quasi-steady excited state [10,18,20].
Higher-dimensional spectroscopy techniques have been used to study the nonlinear response of various materials [21][22][23][24], but they have rarely been applied to superconductors [25]. Instead, for superconductors, most time-resolved spectroscopies and theoretical studies have focused only on either short quench pulses or a multicycle driving pulse [5,26], without mixing them simultaneously. In the present work, we study the full evolution of the nonlinear current in the BCS superconducting state subjected to a spectroscopic setup where both a quench and a drive pulse are applied (Fig. 1(a)). We show how the spectroscopic data can be clearly analyzed in 2D Fourier space (ω, ω_Δt), where the two frequency variables correspond to conjugates of the real time t and the pump-probe delay Δt, respectively. While these spectra reduce to the aforementioned pump-probe and THG experiments in certain limits, we argue that quench-drive spectroscopy provides a comprehensive way to experimentally extract information on the nonlinear optical susceptibility and the spectrum of superconducting collective modes. We stress that one of the experimental advantages of the proposed quench-drive spectroscopy is that the pulse frequencies need not be continuously scanned across a range of frequencies to probe the system, in contrast to simple driving protocols [27]. Instead, to achieve frequency resolution and collective mode resonance, only the time delay between quench and drive pulses needs to be swept.
To solve the equations of motion for a superconductor we employ a pseudospin model, extended to describe the new quench-drive setup. This allows for the simulation of the evolution of the order parameter as well as a calculation of the current induced in the superconductor [3,7]. In addition, we present a diagrammatic approach to systematically interpret the two-dimensional spectra.
The paper is organized as follows: In Section II we introduce the quench-drive spectroscopy mechanism and explain its features diagrammatically. In Section III we describe the theoretical background. We use the pseudospin model to solve the Heisenberg equation of motion in the presence of an external field for the time-dependent order parameter, and then calculate the generated nonlinear current. In Section IV we show the numerical results of the nonlinear current for the quench-drive setup in a BCS superconductor. The discussion of the results is provided in Section V. Finally, we give a summary and outlook on future applications and perspectives of quench-drive spectroscopy in Section VI.
II. QUENCH-DRIVE SPECTROSCOPY
We consider here a clean BCS superconductor without impurity scattering. We are interested in the nonlinear response of the superconductor, which is of third order in the external light field in materials with inversion symmetry. In particular, the quasi-equilibrium third-order current is determined by the diamagnetic coupling to light, and its time-dependent expression reads [7,26,28]

$$j^{(3)}(t) \propto A(t)\,\big(\chi_{\rho\rho} * A^2\big)(t), \qquad (1)$$

where (B ∗ C)(x) = ∫ dy B(y) C(x − y) denotes a convolution integral. The function χ_ρρ is the effective density-density response of the density-like operator ρ = Σ_{kσ} (∂²ε_k/∂k_x²) ĉ†_{kσ} ĉ_{kσ} that couples diamagnetically to A². For ease of notation, we have assumed that all applied electromagnetic pulses are polarized along the x-direction, i.e., A = A x̂. In the general case, where A can have arbitrary polarization, the density-density response becomes a tensor whose structure encodes additional information about material properties. A specific case of cross-polarized pulses is discussed in Appendix C.
Within the BCS approximation, the gauge-invariant response for a single-band model has been computed in various references [3,15,26,28], and a general framework for pump-probe experiments based on a quasi-equilibrium effective action formalism has been developed in Refs. [5,26]. The density-density response is found to be peaked at the resonance frequency 2Δ of the Higgs mode, where 2Δ is the single-particle superconducting spectral gap. It was pointed out, however, that the resonance peak in the density-density response is the result of both single-particle contributions, stemming from quasiparticle excitations, and collective mode excitations. Importantly, it was shown that quasiparticles generally give the dominating contribution to the 2Δ-peak in the clean limit, making observation of the Higgs mode difficult [15]. Other collective modes of the condensate, such as Leggett [5,29,30], Bardasis-Schrieffer [9,31], or other relative phase modes [13], do contribute significantly to the density-density response and may even persist below the gap. Additionally, the Higgs mode may acquire a sizable signal in the presence of impurities [30,32-35] or due to additional processes [36,37]. In the present work, we will not focus on the attribution of spectral weight of the nonlinear response to its various origins and instead discuss spectroscopic measurement of the density-density response χ_ρρ as a whole.
In Fourier space, Eq. (1) can be expressed as

$$j(\omega) \propto \int d\omega_1\, d\omega_2\, d\omega_3\; \delta(\omega - \omega_1 - \omega_2 - \omega_3)\, A(\omega_1)\, \chi_{\rho\rho}(\omega_2 + \omega_3)\, A(\omega_2)\, A(\omega_3), \qquad (3)$$

where the δ-function is a manifestation of energy conservation, i.e., the three photon frequencies ω_i have to sum up to the frequency ω of the induced current. The susceptibility χ_ρρ in Eq. (3) enters with a functional dependence on the frequency variables ω_2 + ω_3. In general, the integration over these variables scrambles the resonance spectrum of χ_ρρ and no direct signature of collective modes can be recovered from j(ω).
Two approaches to circumvent this problem exist. First, one may choose A as a multicycle THz pulse of the form

$$A(t) = A_0\, e^{-t^2/(2\tau_d^2)} \cos(\omega_d t), \qquad (4)$$

with τ_d ω_d ≫ 1 and ω_d ∼ Δ, such that it has a narrow frequency spectrum of width τ_d⁻¹ centered around ±ω_d. Then, the integration variables are constrained to ω_i ≈ ±ω_d and the susceptibilities are mostly evaluated at χ_ρρ(0) and χ_ρρ(±2ω_d) to yield the first or third harmonic, j(±ω_d), j(±3ω_d). To map out the functional dependence of χ_ρρ one has to vary the driving frequency ω_d. This, however, is not easily achievable experimentally. Instead, most current experiments fix the driving frequency ω_d and instead attempt to shift the resonance energies contained in χ_ρρ. For a superconducting mode, this is simply achieved by varying the temperature in the window (0, T_c). The clear disadvantages of this method are that (1) knowledge of the temperature dependence of the resonances of χ_ρρ is required, (2) only modes above 2ω_d are visible, and (3) thermal broadening effects are substantial.
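As a quick illustration of the narrow-band condition τ_d ω_d ≫ 1, the following Python snippet builds an Eq. (4)-type multicycle pulse and its spectrum; the parameters are illustrative, not those of Section IV:

```python
import numpy as np

dt, T = 0.005, 40.0                            # time step and window (ps)
t = np.arange(-T / 2, T / 2, dt)
omega_d = 2 * np.pi * 4.3                      # drive frequency (rad/ps)
tau_d = 2.0                                    # envelope width (ps): tau_d*omega_d ~ 54
A = np.exp(-t**2 / (2 * tau_d**2)) * np.cos(omega_d * t)

# spectrum: two narrow peaks of width ~1/tau_d centred at +-omega_d
A_w = np.fft.fftshift(np.fft.fft(A))
w = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(t.size, d=dt))
```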
The second approach consists of a pump-probe setup. Here, we consider a novel pump-probe setup where, in addition to a broadband quench pulse A_q, the multicycle drive pulse A_d is utilized. The quench A_q has the same form as Eq. (4), but with τ_q ≪ 1/Δ. Further details of the pulse shapes are given in Appendix B. The two pulses are delayed with respect to each other by Δt, yielding the total field A(t) = A_q(t + Δt) + A_d(t) (see Fig. 1 and Fig. 9). In frequency space, this results in a phase shift,

$$A(\omega) = \sum_{\alpha = q,d} A_\alpha(\omega)\, e^{i\omega \Delta t\, \delta_{\alpha,q}}, \qquad (5)$$

where we have introduced the notation δ_{α,q} = 1 for α = q and zero for α = d.
In a nonlinear THz experiment, the current j(t) is electro-optically sampled as a function of t and Fourier transformed numerically to obtain j(ω). Multiple such traces are recorded for varied Δt to assemble the 2D spectrum j(ω; Δt). Inserting Eq. (5) into Eq. (3), we obtain

$$j(\omega; \Delta t) \propto \sum_{\{\alpha_i\}} \int d\omega_1\, d\omega_2\, d\omega_3\; \delta\big(\omega - \textstyle\sum_i \omega_i\big)\, e^{i \Delta t \sum_i \delta_{\alpha_i, q}\, \omega_i}\, A_{\alpha_1}(\omega_1)\, \chi_{\rho\rho}(\omega_2 + \omega_3)\, A_{\alpha_2}(\omega_2)\, A_{\alpha_3}(\omega_3), \qquad (6)$$

where we sum over all combinations {α_i} of quench and drive pulses, A_q and A_d, respectively. We perform a Fourier transform in the parametric time delay Δt to obtain

$$j(\omega, \omega_{\Delta t}) \propto \sum_{\{\alpha_i\}} \int d\omega_1\, d\omega_2\, d\omega_3\; \delta\big(\omega - \textstyle\sum_i \omega_i\big)\, \delta\big(\omega_{\Delta t} + \textstyle\sum_i \delta_{\alpha_i, q}\, \omega_i\big)\, A_{\alpha_1}(\omega_1)\, \chi_{\rho\rho}(\omega_2 + \omega_3)\, A_{\alpha_2}(\omega_2)\, A_{\alpha_3}(\omega_3). \qquad (7)$$

We can represent the various terms in the sum over {α_i} diagrammatically, as depicted in Fig. 2. Here, the current is represented by a red wiggly line, and quench and drive pulses are depicted by blue and black wiggly photon lines, respectively. The density-density susceptibility χ_ρρ is represented by a fermionic bubble whose internal frequency we have labeled ω_n.
All external photon lines carry frequencies with profiles determined by the experimental bandwidth of the pulses A_{α_i}(ω_i), where the directionality is marked by the arrows of the photon lines. The drive pulse constrains the external frequencies to ∼ ±ω_d. In fact, we will be assuming a sufficiently narrow-band drive pulse, τ_d ≫ 1/Δ, such that we can approximate

$$A_d(\omega) \approx A_d\, \big[ \delta(\omega - \omega_d) + \delta(\omega + \omega_d) \big]. \qquad (8)$$

The key advantage of the pump-probe geometry lies in the appearance of the second δ-function in Eq. (7), which introduces the experimentally accessible variable ω_{Δt}. We can see this as follows. When α_1 = d and α_2 = α_3 = q, the δ-function presents the constraint δ(ω_{Δt} + ω_2 + ω_3) and the density-density correlation function is evaluated at χ(ω_2 + ω_3) = χ(−ω_{Δt}). Thus, χ can be pulled out of the integral in Eq. (7) and the measured current is directly proportional to the χ_ρρ response, whose frequency dependence can be mapped out by sweeping ω_{Δt}. The diagrammatic representation of this process is shown in Fig. 2(a). Making use of approximation (8), it follows from energy conservation that the current is non-zero only along the lines ω = ±ω_d − ω_{Δt} in 2D frequency space (ω, ω_{Δt}), where j is given by

$$j(\pm\omega_d - \omega_{\Delta t}, \omega_{\Delta t}) \propto A_d\, \chi_{\rho\rho}(-\omega_{\Delta t})\, (A_q * A_q)(-\omega_{\Delta t}). \qquad (9)$$

A similar discussion applies to the process depicted in Fig. 2(b). Here, α_1 = α_3 = d and α_2 = q. The susceptibility is evaluated as χ(±ω_d − ω_{Δt}) and determines the current along the lines ω = ±2ω_d − ω_{Δt} and ω = −ω_{Δt}. Explicitly, the current is

$$j \propto A_d^2\, A_q(-\omega_{\Delta t})\, \chi_{\rho\rho}(\pm\omega_d - \omega_{\Delta t}). \qquad (10)$$

Figure 2(c) describes the usual THG process that is independent of the quench. In 2D Fourier space it yields a signal at ω_{Δt} = 0 at the first and third harmonic frequencies of the drive, ω = ±ω_d, ±3ω_d, where the density-density susceptibility is evaluated at the fixed values 0, ±2ω_d. In Fig. 2, the wiggly lines represent photons: the black ones correspond to the driving field A_d, the blue lines correspond to the quench pulse A_q, and the red line represents the generated current. Solid lines denote fermionic bubbles that represent the nonlinear susceptibility χ_ρρ. Due to the δ-function constraint in Eq. (7), all blue quench frequencies have to add up to −ω_{Δt}. Energy conservation demands that all incoming frequencies add up to the frequency ω of the induced current.
The remaining diagrams of Fig. 2 can be separated into two classes. In Fig. 2(d), χ_ρρ only depends on ω_d and the discussion of the THG case applies. In Fig. 2(e-f), the dependence of χ_ρρ on the integration variables ω_i cannot be removed. As a consequence, the resonance structure of χ_ρρ is washed out by the integration and one obtains a mostly constant signal for frequencies much smaller than the bandwidth of the quench pulse.
In summary, we have shown that the signal j(ω, ω_{Δt}) falls onto the diagonal lines ω = ±nω_d − ω_{Δt} with n ∈ {0, 1, 2} in 2D Fourier space. From the 2D spectra, one can extract the density-density response according to

$$\chi_{\rho\rho}(\pm m\omega_d - \omega_{\Delta t}) \propto j(\pm n\omega_d - \omega_{\Delta t}, \omega_{\Delta t}) - c_i, \qquad (11)$$

where m ∈ {0, 1} and the c_i denote the background signal that is mostly constant in the limit of a broadband quench pulse.
III. MICROSCOPICAL DESCRIPTION
Having discussed the phenomenological structure of the nonlinear response in a quench-drive experiment, let us now microscopically investigate the response current of a conventional clean superconductor subject to quench and drive pulses. The solution is obtained by solving the Bloch equations derived from the pseudospin model of the BCS Hamiltonian.
A. Equations of motion with quench-drive pulses
We write the BCS Hamiltonian using the pseudospin formalism [3,7,38,39] as

$$H = \sum_k \mathbf{b}_k \cdot \boldsymbol{\sigma}_k,$$

with the pseudospin vector σ_k = ½ Ψ†_k τ Ψ_k, which is defined in Nambu-Gor'kov space, with the spinor Ψ_k = (ĉ†_{k,↑}, ĉ_{−k,↓})^T and the Pauli matrices τ = (τ_1, τ_2, τ_3). The pseudo-magnetic field is defined by the vector

$$\mathbf{b}_k = \big(-\Delta' f_k,\; -\Delta'' f_k,\; \varepsilon_k\big),$$

where ε_k = ξ_k − μ, ξ_k being the fermionic band dispersion and μ the chemical potential. The superconducting order parameter is determined self-consistently,

$$\Delta = \Delta' + i\Delta'' = V \sum_k f_k\, \langle \hat{c}_{-k,\downarrow}\, \hat{c}_{k,\uparrow} \rangle.$$

Here, V is the pairing strength, and f_k the form factor of the superconducting order parameter. For s-wave pairing one has f_k = 1.
In the presence of an external field represented by the vector potential A(t), the pseudospin changes in time according to

$$\boldsymbol{\sigma}_k(t) = \boldsymbol{\sigma}_k(0) + \delta\boldsymbol{\sigma}_k(t),$$

with σ_k = ⟨σ̂_k⟩ and δσ_k(t) = (x_k(t), y_k(t), z_k(t)). The external electromagnetic field is included in the pseudo-magnetic field by means of the minimal substitution k → k − eA(t) in the fermionic energy, resulting in

$$\mathbf{b}_k(t) = \Big(-\Delta'(t) f_k,\; -\Delta''(t) f_k,\; \tfrac{1}{2}\big[\varepsilon_{k-eA(t)} + \varepsilon_{k+eA(t)}\big]\Big).$$

The Heisenberg equation of motion for the pseudospin can be written in Bloch form, ∂_t σ_k = 2 b_k × σ_k, providing the set of differential equations

$$\partial_t \sigma^x_k = 2\big(b^y_k \sigma^z_k - b^z_k \sigma^y_k\big), \quad \partial_t \sigma^y_k = 2\big(b^z_k \sigma^x_k - b^x_k \sigma^z_k\big), \quad \partial_t \sigma^z_k = 2\big(b^x_k \sigma^y_k - b^y_k \sigma^x_k\big), \qquad (18)$$

where δΔ(t) = δΔ'(t) + i δΔ''(t) is the time-dependent variation of the order parameter induced by the external field, such that Δ(t) = Δ + δΔ(t). Here, we assumed a real order parameter at the initial time t = 0, so that y_k(0) = 0. The solution of Eq. (18) provides the time-dependent evolution of the pseudospins, from which the time-dependent order parameter Δ(t) and the generated current j(t) can be calculated. A detailed derivation of the equations of motion is given in Appendix A.
B. Nonlinear current
The current generated by the superconductor in this quench-drive setup is given by the general expression j(t, ∆t) = e Σ_k v_k(t) n_k(t), where the velocity is calculated as v_k(t) = ∂ε_{k−eA(t)}/∂k, and the charge density is obtained from the pseudospins as n_k(t) = 1 + 2⟨σ³_k(t)⟩. We can expand the velocity as a function of the vector potential A(t, ∆t) and expand the current in powers of the external field. The first non-vanishing term of the nonlinear current j_NL(t, ∆t) generated by the driving pulse is the third-order component, which involves z_k(t, ∆t), the third component of the pseudospin vector σ_k(t, ∆t); it contains the information on the state of the system as perturbed by the quench pulse. The unit vector r_i, i = x, y, represents the two directions along which the output current is measured.
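A corresponding sketch of the current evaluation, under the same assumed conventions (group velocity at the shifted momentum, occupation obtained from z_k), could look as follows; both the prefactor and the occupation convention are assumptions.

```python
# Hedged sketch of j(t) ~ e * sum_k v_k(t) n_k(t); the occupation
# convention n_k = 1 + 2 z_k and the overall prefactor are assumptions.
import numpy as np

def current(t, sigma, kx, ky, A, veps, e=1.0):
    vk = veps(kx - A(t), ky)        # band group velocity along polarization
    nk = 1.0 + 2.0 * sigma[:, 2]    # occupation from the z pseudospin
    return e * np.sum(vk * nk)
```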
IV. NUMERICAL RESULTS
We now present the results obtained from the numerical implementation of the time-dependent Bloch equations described in the previous section, solved by means of a fourth-order Runge-Kutta algorithm without linearization or further analytical approximations. We used the fermionic band dispersion ε_k = −2t(cos k_x + cos k_y) at half-filling, setting the lattice constant a = 1. The point µ = 0 is special in the sense that it has perfect particle-hole symmetry as well as a van Hove singularity at the Fermi level. From the viewpoint of collective modes, however, this symmetry point does not bear any special significance other than presenting a local minimum of the Higgs contribution compared to single-particle excitations in the nonlinear response [15].
We used the value t = 125 meV for the nearest-neighbor hopping energy, an s-wave order parameter ∆_0 = 15.8 meV (corresponding to a frequency of 3.82 THz), and a summation over the full Brillouin zone with a square sampling and a total number of N_k = 10^6 points. For the time-dependent evolution we used a time step of δt = 3 × 10^−4 ps, and for the quench-drive delay δ∆t = 2.5 × 10^−2 ps. For the quench we used a few-cycle pulse with central frequency ω_q = 4.77 THz, while for the driving we used an asymmetric pulse with central frequency ω_d = 4.3 THz, so that both satisfy the condition ω_q(d) < 2∆. Both pulses were linearly polarized along the x-direction, and the maximum amplitudes of the electric field of the quench and drive pulses were E_q = 10.5 kV/cm and E_d = 4.7 kV/cm, respectively, assuming a value for the lattice constant of a = 3 Å. For more details on the pulses used, see Appendix B. In Appendix C we analyze the 2D spectra for the case of cross polarization of the two pulses, while in Appendix D we show additional results computed using a symmetric Gaussian driving pulse instead of an asymmetric one.
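The exact pulse envelopes are specified in Appendix B; purely for illustration, a plausible parametrization consistent with the quoted parameters (single-cycle quench, asymmetric fast-rise/slow-decay drive) is sketched below, with the envelope shapes being explicit assumptions.

```python
# Illustrative pulse shapes (assumed functional forms; the actual envelopes
# are given in Appendix B). Times in ps, frequencies in THz.
import numpy as np

def quench_pulse(t, E_q=10.5, f_q=4.77, t0=0.0, tau=0.2):
    # single-cycle THz pulse: carrier under a short Gaussian envelope
    return E_q * np.exp(-((t - t0) / tau)**2) * np.cos(2*np.pi*f_q*(t - t0))

def drive_pulse(t, E_d=4.7, f_d=4.3, t0=1.0, tau_rise=0.2, tau_fall=1.0):
    # asymmetric multi-cycle drive: fast rise, slower decay
    env = np.where(t < t0, np.exp(-((t - t0)/tau_rise)**2),
                   np.exp(-((t - t0)/tau_fall)**2))
    return E_d * env * np.cos(2*np.pi*f_d*(t - t0))
```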
A. Two-dimensional quench-drive spectroscopy
We solved the time-dependent equations of motion in the quench-drive spectroscopy setup using the pseudospin model and calculated the nonlinear current generated by the condensate as described in the previous section. The result, plotted as a function of the real-time evolution t and the quench-drive delay time ∆t (measured as the interval between the maximal peaks of the envelopes of the two pulses; see Appendix B for more details on the pulse shapes), is shown in Fig. 3. We notice that the diagonal line signifies the arrival of the quench pulse, which overlaps with the drive for times t = ∆t in the range t, ∆t ∈ [0, 2] ps. In this region the response is modulated as a function of the delay time ∆t. Next, we Fourier transform the real-time variable t into the frequency ω. In Fig. 4 we show the generated nonlinear current intensity |j_NL(ω, ∆t)| as a function of the quench-drive delay time ∆t. We notice that both the first and the third harmonic of the fundamental driving frequency ω_d are modulated in the delay time ∆t, with maximum intensity in the interval 0 ps ≤ ∆t ≤ 2 ps, which corresponds to the range of interference between the quench and the drive pulses, as shown in Fig. 3. Additionally, the signal intensity does not vanish away from ω_d and 3ω_d, where we instead observe a striped pattern, with each intensity line tilted towards the central time ∆t = 1 ps. These features can be more readily interpreted by plotting the 2D Fourier transform of the current, i.e., as a function of the frequency ω and the delay-time frequency ω_∆t, shown in Fig. 5. In particular, we notice the first and third harmonics as strong peaks on the central vertical line at ω_∆t = 0, which corresponds to the equilibrium response to the driving pulse in the absence of the quench. Note that it is sufficient to plot two quadrants of the nonlinear current in 2D frequency space, since it follows from j(t, ∆t) ∈ ℝ that j(−ω, −ω_∆t) = j*(ω, ω_∆t). The modulations in ∆t appear here as broad diagonal lines in 2D frequency space, as expected from Eq. (11). These features correspond to a dynamically generated four-wave-mixing signal due to both the quench and the drive pulses.
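The two Fourier transforms behind Figs. 4 and 5 amount to FFTs along the two time axes; a minimal sketch is given below, where the array name and file are placeholders for the simulated current data.

```python
# Sketch of the transforms t -> omega (Fig. 4) and Delta_t -> omega_dt
# (Fig. 5) on a simulated current array j(t, Delta_t); names are assumed.
import numpy as np

dt, d_delay = 3e-4, 2.5e-2                      # ps, as quoted above
j_t = np.load("j_time_delay.npy")               # hypothetical (Nt, Ndelay)
j_w = np.fft.fft(j_t, axis=0)                   # -> j(omega, Delta_t)
j_w_wdt = np.fft.fft(j_w, axis=1)               # -> j(omega, omega_dt)
omega = 2*np.pi*np.fft.fftfreq(j_t.shape[0], d=dt)
omega_dt = 2*np.pi*np.fft.fftfreq(j_t.shape[1], d=d_delay)
```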
B. High-harmonic generation and transient excitation of the superconductor
To clearly distinguish the shape and position of the peaks observed in the 2D frequency plot, we performed one-dimensional cuts of Fig. 5 along various lines. Fig. 6 shows the plot along the vertical line at ω_∆t = 0. This corresponds to equilibrium high-harmonic generation due to the driving pulse only. We observe a first-harmonic peak at ω_d and a third-harmonic signal at 3ω_d. Measurement of the temperature dependence of the THG peak would correspond to a usual THz THG experiment. The intensities of the fundamental and third harmonics (continuous line) are of the same order, since only the third order is plotted. The total current has a dominant linear first-harmonic response (dashed line). In addition to the first and third harmonics, however, we notice the presence of additional shoulder peaks at the frequencies ω = 2∆ + ω_d and ω = 2∆ − ω_d. These are the result of the intrinsic Higgs and quasiparticle resonance at 2∆ [40] in conjunction with a wave-mixing process with a driving photon of frequency ±ω_d. In Fig. 7 we show the prototypical case of a horizontal cut in Fig. 5 along ω = 3ω_d. The peak at ω_∆t/ω_d = 0 is the equilibrium third harmonic as visible in Fig. 6, while the smaller peaks at ω_∆t/ω_d = −1, −2, −3 stem from the modulation of the third harmonic due to the quench. These smaller peaks also appear in Fig. 4 as a modulation of the third harmonic in the delay time ∆t, giving rise to the characteristic striped pattern.
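The cuts in Figs. 6 and 7 then amount to selecting one column or row of the 2D array; continuing the FFT sketch above (grid indices again assumed):

```python
# Continuing the previous sketch: vertical cut at omega_dt = 0 (Fig. 6)
# and horizontal cut at omega = 3*omega_d (Fig. 7).
import numpy as np

w_d = 2*np.pi*4.3                           # driving frequency, THz
i0 = np.argmin(np.abs(omega_dt))            # omega_dt = 0 column
i3 = np.argmin(np.abs(omega - 3*w_d))       # omega = 3*omega_d row
hhg_equilibrium = np.abs(j_w_wdt[:, i0])    # cf. Fig. 6
thg_modulation = np.abs(j_w_wdt[i3, :])     # cf. Fig. 7
```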
V. DISCUSSION
We can understand all features in the spectrum shown in Fig. 5 by considering each of the diagrams in Fig. 2, which represent the induced current expanded to third order in various combinations of powers of A_d and A_q. The equilibrium THG signal proportional to A_d³ has to be independent of ∆t and therefore falls onto the vertical line ω_∆t = 0 in the 2D spectrum j(ω, ω_∆t). This is represented by the diagram in Fig. 2(c), where only the driving field acts on the condensate, and the current spectrum can be described as a function of the real-time evolution as in Fig. 6. Interestingly, we also notice that in addition to the aforementioned fundamental and third harmonic, a shoulder peak of the third harmonic at a frequency ω_d + 2∆ appears when the driving pulse is asymmetric rather than Gaussian-shaped (see Appendix D for the data with the symmetric envelope). This is a direct consequence of the effective quench induced by the driving, which launches free Higgs oscillations alongside the quasiparticle contribution and enhances the intensity of the nonlinear susceptibility at ω_H = 2∆ [3,7,36]. In principle, a THG experiment at a single temperature would suffice to identify the collective-mode resonance. However, this approach strictly relies on the condition 2∆ ≈ 2ω_d and is specific to the asymmetric pulse shape [36] (see also Appendix D). The processes described by diagrams (a,b,d-f) in Fig. 2, which involve at least one photon of the quench, are responsible for the signals along the diagonal lines ω = ±nω_d − ω_∆t with n ∈ {0, ±1, ±2}. The spectral window in which these lines can be observed is related to the bandwidth of the quench pulse. Here, we differentiate between even and odd n: odd diagonal lines show a peak at ω_∆t = −2∆, while even diagonals are peaked at ω_∆t = ω_d − 2∆. This is expected from Eq. (11), since the susceptibility χ_ρρ is peaked at 2∆ for the modelled single-band superconductor. From Eq. (11) we would additionally expect a peak at ω_∆t = −ω_d − 2∆ for the line at n = −2. However, while the diagonal n = −2 line is in principle present, its spectral weight is negligibly small within the corresponding frequency range. Note that diagonal lines at ±n are related by frequency inversion, j(−ω, −ω_∆t) = j*(ω, ω_∆t).
By inspecting the 2D spectrum in Fig. 5, it is now straightforward to extract the resonances of the nonlinear susceptibility χ_ρρ. In our case, we observe four peaks on the diagonal lines from which we extract the value 2∆. If the superconducting condensate supports additional collective modes that couple nonlinearly to the electromagnetic response, their mode frequencies can be readily extracted as well.
VI. CONCLUSION AND OUTLOOK
In this work we proposed and analyzed from a new perspective a pump-probe spectroscopy setup on conventional clean superconductors with a combination of a single-cycle THz quench pulse and a multi-cycle THz driving (probe) field. We used a numerical approach based on the Anderson pseudospin model to solve the equations of motion and to calculate the generated nonlinear current. In addition, we investigated the nonlinear optical processes by means of a diagrammatic approach to interpret and explain the obtained results.
In particular, we showed that, in addition to the usual third-harmonic generation measured in driving experiments, new features appear in the two-dimensional spectrum of the generated nonlinear current. These features are manifest as diagonal lines in the 2D frequency space of the real time and the pump-probe delay and allow for a direct extraction of resonances in the nonlinear susceptibility. The susceptibility encodes the intrinsic superconducting response of the quasiparticles and the resonances of the transiently excited Higgs mode.
The advantage of a two-dimensional analysis of quench-drive spectroscopy lies in the possibility of scanning a wider frequency spectrum at once, with fixed quench and drive pulse parameters, by scanning the quench-drive delay time. In addition, in the present setup the quench pulse pushes the system out of equilibrium, quenching and shrinking the superconducting gap, and thereby allows the driving pulse to probe different states of the superconductor, resulting in different peak profiles and positions in the 2D frequency spectra.
It is also interesting to examine the possibility of extending the quench-drive spectroscopy framework to cuprates, which exhibit a different symmetry of the order parameter in momentum space as well as preformed, phase-incoherent Cooper pairs [41]; this can reveal more information on the competing orders and their symmetries.
All in all, we believe that this work can pave the way towards coherent time-dependent multi-dimensional spectroscopy on superconductors in the THz regime. A full two-dimensional pump-pump-probe spectroscopy with coherent pulses will be the focus of future work. Its possibilities range from coherent control of superconductors to the study of competing orders, such as superconductivity, charge-density waves, and bi-plasmons, among others [21,42]. Other systems where the Higgs response is known to be enhanced, such as cuprates, could be interesting to investigate with this spectroscopic approach, to efficiently study the transient non-equilibrium response of quasiparticles and the Higgs mode and to unveil the features of their rich phase diagram.

In Appendix D we repeated the same calculations of nonlinear current generation in the quench-drive spectroscopy setup as in the main text, but using a symmetric Gaussian envelope for the drive pulse. The results are shown in Figs. 12-13 and correspond to those in Figs. 5-8 in the main text, where the asymmetric drive was used. In the top panel of Fig. 13, the equilibrium high-harmonic generation does not include the shoulder peak at ω = 2∆ + ω_d, ω_∆t = 0, since that peak was generated by the initial effective quench of the asymmetric driving field. However, all other features of high-harmonic modulation and transient excitation at ω_∆t = 2∆ are still present, since they originate from the wave mixing of the quench and the drive pulses, independently of their shape.
Figure 1. Quench-drive spectroscopy. (a) Quench-drive spectroscopy setup. A single-cycle quench pulse and a multi-cycle drive pulse excite both the Higgs mode and quasiparticles, resulting in third-harmonic generation (THG) and a dynamical modulation of higher harmonics. In addition, the driving pulse effectively quenches the system, launching Higgs oscillations. In this illustration we show the asymmetric driving pulse. (b) Representation of the frequency spectrum of the quench pulse A_q(ω) (orange), centered at ω = ω_q, and the sum-frequency (SFG) and difference-frequency generated (DFG) pulses A²(ω) (green), centered at ω = 2ω_q and ω = 0, respectively. The grey vertical line marks the position of the critical value 2∆. The inset shows the real-time quench pulse.
Figure 2. Diagrammatic representation of the nonlinear processes. The wiggly lines represent photons: the black ones correspond to the driving field A_d, the blue lines correspond to the quench pulse A_q, and the red line represents the generated current. Solid lines denote fermionic bubbles that represent the nonlinear susceptibility χ_ρρ. Due to the δ-function constraint in Eq. (7), all blue quench frequencies have to add up to −ω_∆t. Energy conservation demands that all incoming frequencies add up to the frequency ω of the induced current.
Figure 3. Time-delay two-dimensional plot of the generated nonlinear current. 2D plot of the nonlinear output current j_NL(t, ∆t) generated by the driven superconductor, as a function of real time t and the quench-drive delay time ∆t. The narrow diagonal stripes are generated by the quench, while the vertical ones are the response to the drive pulse. Their intersection produces a wave-mixing pattern for t, ∆t ∈ [0, 2] ps. Here we used the asymmetric drive pulse, with quench and drive pulse frequencies ω_q = 4.77 THz and ω_d = 4.3 THz, respectively.
Figure 4. Frequency-time-delay two-dimensional plot of the generated nonlinear current. 2D plot of the nonlinear current j_NL(ω, ∆t) generated by the driven superconductor, as a function of the relative frequency ω/ω_d (with driving frequency ω_d = 4.3 THz) and the quench-drive delay time ∆t. This corresponds to a Fourier transform in the horizontal direction in Fig. 3. The fundamental (ω = ω_d = 4.3 THz) and the third harmonic (ω = 3ω_d = 12.9 THz) are both modulated in the delay time ∆t.
Figure 5. Two-dimensional Fourier-transformed plot of the nonlinear current. 2D plot of the generated nonlinear output current intensity j_NL(ω, ω_∆t) as a function of the real frequency ω and the quench-drive delay frequency ω_∆t. It corresponds to the two-dimensional Fourier transform of the data in Fig. 3. The vertical response at ω_∆t = 0 corresponds to the quench-free superconducting signal, namely the high-harmonic generation due to the driving field. The diagonal lines, instead, represent the transient modulation of the higher harmonics due to the quench-drive wave mixing.
Figure 6. Driven high-harmonic generation. Plot of the generated nonlinear (continuous line) and total (dashed line) current as a function of frequency, j_NL(ω) and j_tot(ω), respectively. This plot corresponds to a vertical cut of Fig. 5 along ω at ω_∆t = 0. The peaks at ω = ω_d = 4.3 THz and ω = 3ω_d = 12.9 THz correspond to the fundamental and third harmonic, respectively. The smaller peak at ω = 12 THz corresponds to the transient excitation of Higgs and quasiparticles with ω = ω_d + 2∆, due to the asymmetric driving pulse.
Figure 7. Transient modulation of higher harmonics. Frequency-resolved spectral weight of the third-harmonic current as a function of ω_∆t, j_NL(ω_∆t, ω = 3ω_d). This corresponds to a horizontal cut of the 2D Fourier-transform plot in Fig. 5 along ω_∆t at ω = 3ω_d. It is possible to identify a transient modulation at the frequencies −ω_d, −2ω_d and −3ω_d, respectively.
Fig. 8 corresponds to a diagonal line in the 2D plot of Fig. 5 passing through the point ω_∆t = 0, ω = ω_d, projected along the ω_∆t axis. The peak at ω_∆t = 0 is the signal of the first harmonic. Of particular interest is the peak at ω_∆t = −2∆, which is a direct consequence of the quasiparticle resonance at ±2∆, represented by the process in Fig. 2(d). Due to the wave mixing of the quench and the drive, we have here isolated the intrinsic superconducting response with the characteristic frequency of 2∆. Moreover, the peaks in Fig. 5 along the diagonals placed at ω_∆t = −2∆ + ω_d result from the process represented by the diagram in Fig. 2(b), and they disappear when quench and drive have perpendicular polarizations, since the corresponding interaction vertex vanishes (see Appendix C).
Figure 9. Quench and drive pulses. Plot of the amplitude of the vector potential A corresponding to the quench pulse (top left, frequency ω_q = 4.77 THz) and the drive pulses used in the calculations: the asymmetric drive pulse (center left) and the Gaussian-shaped drive pulse (bottom left), both with a frequency ω_d = 4.3 THz. On the right, their Fourier spectra in frequency are shown: the asymmetric drive has a sharp peak in frequency with a slower decay, while the Gaussian-shaped drive is narrower in frequency.
Figure 11. Nonlinear current along the x-axis: traces from the 2D Fourier transform. Left: nonlinear current as a function of the frequency ω, obtained from a trace along the vertical axis of the left plot in Fig. 10. Right: nonlinear current as a function of ω_∆t with the constraint ω = ω_d − ω_∆t, obtained from a diagonal cut passing through the fundamental harmonic of the left plot in Fig. 10.
Figure 12. Two-dimensional Fourier-transformed plot of the nonlinear current. 2D plot of the generated nonlinear output current intensity j_NL(ω, ω_∆t) as a function of the real frequency ω and the quench-drive delay frequency ω_∆t, computed with the symmetric Gaussian-shaped driving pulse (the analog of Fig. 5). The vertical response at ω_∆t = 0 corresponds to the quench-free superconducting signal, namely the high-harmonic generation due to the driving field. The diagonal lines, instead, represent the transient modulation of the higher harmonics due to the quench-drive wave mixing.
Figure 13. High-harmonic generation and transient modulation with the Gaussian drive. Top: nonlinear current j_NL(ω) at ω_∆t = 0. In contrast to Fig. 6, no peak at ω = 2∆ + ω_d is present in this case. Center: nonlinear current modulation as a function of the quench-drive frequency ω_∆t, j_NL(ω_∆t, ω = 3ω_d). This corresponds to Fig. 7, using the Gaussian envelope for the drive pulse. Bottom: j_NL(ω_∆t) obtained with the condition ω = −ω_∆t + ω_d, equivalent to Fig. 8.
"Physics"
] |
Two-component jets from 3-dimensional magnetohydrodynamic jet simulations of disk winds at sub-parsec scales
Introduction
Astrophysical jets are commonly observed in many astrophysical settings involving a central object surrounded by an accretion disk (protostars, active galactic nuclei (AGNs), X-ray binaries, etc.). These jets all have in common an accretion disk threaded by large-scale poloidal field lines. The jets are believed to result from the centrifugal acceleration of disk material by the magnetic field [1], which also acts as the collimating agent. The role of the magnetic field has been demonstrated in numerical simulations (e.g. [8]), where it was also shown that a universal scaling from YSO jets to AGN jets is feasible. Others (e.g. [11]) also find that the intrinsic jet acceleration mechanism is similar in both AGN and YSO systems, and that jets in AGN, for example, can be regarded as scaled-up, relativistic versions of the jets in YSOs. The simulations discussed here can naturally be rescaled by assuming different central objects. In the context of AGNs, our results should be interpreted in the context of slowly rotating (to non-rotating) black-hole systems. We highlight and focus on the two-component jet solutions found in our simulations, which may be needed to reconcile the unification scheme of BL Lacs and FR I radio galaxies.
Methods
In all simulations the initial density profile is that of a hydrostatically stable accretion-disk corona, with ρ ∝ r^(−3/2). The disk follows a similar density profile, but it is a factor of 100 denser, as imposed by pressure balance between the disk surface and the overlying corona. An initially current-free (∇ × B = 0) magnetic field is set up in the corona that extends into the disk [5]. On the surface of the disk, the poloidal magnetic field strength falls off as a power law with disk radius: B_p ∝ r^(µ−1). In order to investigate the role of the initial magnetic geometry in launching and collimating disk winds, we set up four different cases: µ = −0.01 ([8], hereafter OP), the self-similar configuration of Blandford-Payne ([1], hereafter BP) with µ = −0.25, and a µ = −0.5 configuration ([10], hereafter PP). In addition, we investigated the case µ = −0.12, which is intermediate between the OP and BP cases. These range from the most gradually falling and initially somewhat collimated fields to fields that fall off steeply with disk radius and have much more open magnetic geometries.
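A minimal sketch of these initial profiles is given below; the normalizations ρ_0 and B_0 are arbitrary placeholders and not values from the simulations.

```python
# Sketch of the initial corona/disk density and surface poloidal field.
import numpy as np

def corona_density(r, rho0=1.0):
    return rho0 * r**-1.5                    # rho ~ r^(-3/2)

def disk_density(r, rho0=1.0):
    return 100.0 * corona_density(r, rho0)   # factor 100 from pressure balance

def surface_Bp(r, B0=1.0, mu=-0.25):
    return B0 * r**(mu - 1.0)                # B_p ~ r^(mu-1); mu = -0.25 is BP
```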
We use a Cartesian grid. The initial conditions consist of a 10^8 M_⊙ non-spinning black hole, treated as a point mass at the center of a Keplerian disk, which provides the boundary conditions for the simulation. The inner edge of the accretion disk surrounding this black hole is at 10 r_g, with r_g = 2GM/c² the Schwarzschild radius, where G is the gravitational constant, M is the mass of the black hole, and c is the speed of light. The velocity at the inner edge of the disk, in Keplerian rotation at this distance, is 6.7 × 10^4 km/s, well below the speed of light. The disk has an outer radius of 800 r_g, far exceeding the size of the disk participating in the outflow.
The grid size is 1536 zones in the x_1 direction, corresponding to 3 × 10^4 r_g (∼ 0.29 pc), and 500 zones in each of the x_2 and x_3 directions, corresponding to ±9000 r_g (∼ 0.09 pc). This particular box size was carefully chosen to contain the jet within the simulated domain; incorrect results may be obtained if too much mass leaves the grid. Once the front of the jet reaches the outer edge of the grid, we stop the simulation. We use ZeusMP to run our simulations. We refer to [12] for more details on the setup.
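As a quick consistency check of the quoted scales, the following sketch reproduces r_g, the Keplerian speed at the inner disk edge, and the box size; standard SI constants are assumed.

```python
# Unit bookkeeping for the setup above (SI units; values rounded).
import numpy as np

G, c, Msun, pc = 6.674e-11, 2.998e8, 1.989e30, 3.086e16
M = 1e8 * Msun
r_g = 2 * G * M / c**2                # Schwarzschild radius, ~3e11 m
v_K = np.sqrt(G * M / (10 * r_g))     # ~6.7e7 m/s = 6.7e4 km/s at 10 r_g
box_x1 = 3e4 * r_g / pc               # ~0.29 pc along x_1
box_x23 = 9e3 * r_g / pc              # ~0.09 pc along x_2, x_3
```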
Results
In Figure 1 we show the four jets that we have simulated, just before they leave the box. The OP jet evolves the fastest, while the jets evolve more slowly for more negative µ. All jets have a strong bow shock at the front of the jet. Although the OP jet propagates forward the fastest, the different magnetic field structures mean that a more negative µ leads to a higher Mach number at the bow shock in front of the jet. Therefore, the bow shock is bent backwards more sharply for more negative µ, explaining why the cocoon remains closer to the jet axis. This cocoon is only visible in the front part of the jet for OP and µ = −0.12, as it has been pushed off the grid farther back. For the BP and PP simulations, it remains on the grid for the entire length of the jet.
In all simulations, the jet is rotating (see Fig. 2). The part of the jet close to the axis in the OP, µ = −0.12, and BP simulations rotates with a Keplerian profile similar to the disk. The part of the PP jet closest to the disk also rotates with a Keplerian profile. Further from the source in the PP jet, as well as at large distances from the source in the BP, µ = −0.12, and OP jets, a kink mode develops, forming a spiral-like structure that is able to wash out the Keplerian rotation profile. While the kink mode appears to grow farther from the source, it does not appear to grow out of control and destroy the jet. Farther from the axis (in OP, µ = −0.12, and BP), the jet rotates faster than would be expected if the whole jet were rotating with a single Keplerian profile, showing that these jets have two components: an inner and an outer jet component.
In Fig. 1 we see that in all cases the magnetic field wraps up tightly around the thin inner jet. We can also see from Fig. 1 that there is a helical field associated with the outer jets in the OP, µ = −0.12, and BP simulations. This, however, appears to be more loosely twisted than the field around the inner jet. This helical field associated with the outer jet is clearly separated from the inner jet, another indication that the two components are dynamically and physically separated. There is no helical field twisting up outside of the thin jet in the PP simulation, as expected since there is no outer jet in this case. The outer jet component in the BP configuration is quite weak, and it terminates soon upon interacting with the cocoon of material pushed aside by the jet.
For a 10^8 M_⊙ black hole, we find jet rotation velocities of up to 55 × 10³ km/s for the inner OP jet and up to 40 × 10³ km/s for the inner BP jet. The outer jet is clearly visible in the plot of the rotational velocity in Fig. 2, with rotational velocities of up to 20 × 10³ km/s in the OP jet. The simulations with more negative µ have slower-rotating outer jets; in the BP simulation the outer jet rotates at only a few thousand km/s.
In Fig. 3 we plot, for all four jets, the specific angular momentum, which is conserved along a field line and is given by l = r v_φ − r B_φ/κ, where r is the distance from the axis, v_φ is the toroidal velocity (in cylindrical coordinates), B_φ is the toroidal magnetic field, κ = ρ v_p / B_p is the conserved mass loading along a given field line, ρ is the mass density, v_p is the poloidal velocity, and B_p is the poloidal magnetic field strength [9]. Note in particular that in the region of high toroidal velocity (Fig. 2) there is a strongly enhanced specific angular momentum. This is because more of the angular momentum is being carried by the twisted toroidal field than by the rotating fluid. The BP jet only shows a weak enhancement in l associated with its outer jet, consistent with that outer jet being weak. The PP jet does not show any such enhancement, showing that the r B_φ/κ term does not dominate there. It appears, as is somewhat expected, that the self-similar configuration of BP separates the one-component from the two-component regime.
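A direct translation of this diagnostic into code is sketched below; the input arrays stand in for simulation output and are assumed names.

```python
# Sketch of the specific angular momentum l = r*v_phi - r*B_phi/kappa,
# with kappa = rho*v_p/B_p the conserved mass loading per field line.
def specific_angular_momentum(r, v_phi, B_phi, rho, v_p, B_p):
    kappa = rho * v_p / B_p
    return r * v_phi - r * B_phi / kappa
```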
The jets have an onion-like velocity structure [12], with the fastest velocities found in the inner jet close to the axis, while the outer jet is much slower. This is reminiscent of the fast-spine, slower-sheath structure reported for several jets, for instance M87 (e.g. [6]).
The role of the geometry becomes apparent when looking at the momentum vectors in Fig. 4, taken in a slice through the middle of the grid. In all cases, some of the wind from the disk is collimated into a thin (inner) jet. But with the more open field configurations (where the field lines at the disk make a larger angle with the jet axis), some of the wind flows out at very large angles to the jet axis. This is especially noticeable in the PP simulation, where far from the disk axis, but close to the disk boundary, the momentum vectors are aligned almost perpendicular to the jet axis. As a consequence, a cavity is cleared out very near the disk boundary. In the less open field configurations, the vectors make a smaller angle with the axis, and the material flowing out in that direction ends up being collimated into the outer jet. There is, however, a bit of the same cavity being cleared out in these cases as well, but it is narrower. This cavity acts as a separation between the inner and outer jets.
The inner jet in the OP and µ = −0.12 simulations appears to open up with a more or less constant opening angle. We find that the opening angle in the OP jet is 7.0°, while in the µ = −0.12 jet it is 5.6°. The inner BP jet and the PP jet appear to re-collimate further out in the jet.
Summary
We have presented the results of four 3D MHD simulations of winds from accretion disks around non-spinning black holes. Initially all models are set up the same way, the only difference between the simulations being the opening angle of the magnetic field on the disk, or in other words how fast the poloidal magnetic field on the disk drops off with radius. We find that a two-component jet develops in all cases except the simulation in which the magnetic field drops off the fastest. The outer jet component is dynamically and physically separated from the inner jet.
The two-component jets we find in our simulations could be of relevance to AGN jets, where it has been suggested that a two-component jet model may be needed to reconcile the viability of the unification scheme of BL Lacs and FR I radio galaxies (e.g. [4]). In the suggested two-component structure of AGN jets, a fast spine is surrounded by a slow (but still relativistic) layer, not unlike what we are finding, assuming the scaling in velocities is justifiable. However, while proposed models seem to combine the Blandford-Znajek ([3]) and BP processes, we instead suggest that this is not necessarily the case, since a two-component jet can naturally arise from the disk itself. Our results might also be of relevance to GRBs, which seem to require a two-component explosion involving two jets (e.g. [2]). Finally, we note that a three-component jet structure is thus feasible according to our findings if one includes the Blandford-Znajek component.
Figure 1. The density and magnetic field structure of the four jets resulting from the four increasingly open initial magnetic geometries for the disk/halo magnetic field configurations that we have simulated. The magnetic field is drawn on only one side to better illustrate the density structure. Upper right: OP configuration; upper left: µ = −0.12 configuration; lower left: BP configuration; lower right: PP configuration. Notice the clear inner and outer jet structure in the OP and µ = −0.12 jets, and the weak outer jet structure in the BP jet. The BP and PP jets are surrounded by a cocoon of material pushed aside by the jet. This cocoon can be seen near the front of the OP and µ = −0.12 jets; further back it has been pushed off the side of the simulation box.
Figure 2. Contour plot of the toroidal velocity component in a slice through the middle of the grid for the OP simulation (top left) and the BP simulation (bottom left). The right panels show a cut taken across the jet at 10^4 r_g from the disk. The black hole mass is 10^8 M_⊙. We only plot the OP and BP simulations to save space, as these illustrate the general trend of lower toroidal velocity in the outer jet for more negative µ.
Figure 3. Specific angular momentum in a slice through the middle of the grid. The outer jet is clearly associated with a very strong specific angular momentum; in the OP simulation it is more than 10^28 cm²/s (blue color), compared to ∼10^27 cm²/s (red color) in the rest of the jet. The PP jet, without an outer jet, has a low specific angular momentum of ∼10^27 cm²/s throughout the whole jet.
Figure 4. The momentum vectors overlaid on the density structure in a cut through the middle of the grid for the four simulations. The vectors are normalized for each panel individually, so the lengths of the vectors cannot be compared between the different simulations.
"Physics"
] |
Light Absorption Enhancement and Laser-Induced Damage Ability Improvement of Aluminum Alloy 6061 with Non-Porous Alumina/CdSe@Al2O3/SiO2 Functional Gradient Films
Numerical calculations of ultraviolet to near-infrared absorption spectra by cadmium selenide quantum dots (CdSe QDs) doped in anodic aluminum oxide pores were performed using a finite-difference time-domain model. The height, diameter, and periodic spacing of the pores were optimized. Light absorption by the dots was enhanced by increasing the height and decreasing the diameter of the pores. When the height was less than 1 μm, visible light absorption was enhanced as the spacing was reduced from 400 nm to 100 nm. No enhancement was observed for heights greater than 6 μm. Finally, the optical mode coupling of the aluminum oxide and the quantum dots was enhanced by decreasing the pore diameter and periodic spacing and increasing the height. Laser ablation verified the light absorption enhancement by the CdSe QDs. The experiments verified the improvement in the laser-induced damage ability under a nanosecond laser at a wavelength of 355 nm after aluminum alloy 6061 was coated with functional films fabricated on the basis of the numerical calculations.
Introduction
With the shortage of traditional fossil fuel resources such as coal, oil, and natural gas, coupled with the serious environmental pollution and greenhouse effect they cause, finding new energy sources to replace traditional fossil fuels has become a major issue for contemporary technology. For oil, the world's total proven reserves exceed 1373 billion tons, and the reserves-to-production ratio is about 43 years. For natural gas, the world's total reserves are 141 trillion cubic meters, and the reserves-to-production ratio is about 66 years. For coal, the world's total reserves are 1043.8 billion tons; at current output, it is estimated that coal can be mined for 235 years [1]. Among energy generation methods, the fusion energy produced by laser-driven, controlled inertial confinement fusion (ICF) is valued by all countries for its abundant fuel, clean materials, and the safety of fusion reactors. A representative example is the large-scale laser deuterium-tritium fusion device built at Lawrence Livermore National Laboratory (LLNL) and initiated by the U.S. Department of Energy in 1995, the "National Ignition Facility" (NIF) [2][3][4]. A new milestone was reached in August 2021 with an energy yield from the target of more than 1.3 MJ, representing around 70 percent of the energy that the laser pulse had delivered to the fuel capsule in its sights, "generating more than 10 quadrillion watts of fusion power for 100 trillionths of a second", according to the NIF [5]. The French Atomic Energy and Alternative Energy Commission approved the "Laser Megajoule" (LMJ) plan; the Rutherford Appleton Laboratory of the UK Science and Technology Facilities Council used a petawatt laser for the first time in the world; and the first petawatt laser in Asia, the GEKKO-XII high-energy nanosecond laser, was built at the Institute of Laser Engineering of Osaka University in Japan. A next-generation petawatt laser facility is part of the PEARL-X facility in Russia. In China, the ultra-high-power laser facility at the Shanghai Institute of Optics and Fine Mechanics also has world-leading equipment [6,7].
Aluminum alloy is used as the terminal optical component bracket due to its excellent mechanical properties. If stray light, ghost images, etc., are improperly handled, or due to the effect of the beam transmission system diaphragm, the laser will irradiate the surface of the aluminum alloy and damage it [8,9]. In addition, the splashed metal particles may adhere to the surfaces of the optical elements, causing secondary pollution and reducing the laser-induced damage threshold (LIDT) of the optical elements by about 60% [10][11][12][13]. Although we can continuously improve the processing and manufacturing accuracy of optical components, the influence of system stray light on the aluminum alloy "framework" is inevitable. Therefore, to alleviate damage to optical components, stray light absorption management must be studied. While ensuring the processing quality of optical components, the study of laser-induced damage mechanisms and damage protection of aluminum alloy frames is also a key technical issue that cannot be ignored.
The absorption of stray light [14] is a key technical issue in nuclear energy systems. Aluminum alloy frames normally require surface treatment to eliminate stray reflections. Zhu et al. [15] fabricated a broadband plasmonic absorber with an average absorbance of ≈99% from 400 nm to 10 µm through the assembly of Au nanoparticles onto a nanoporous template. In addition, Zhang et al. designed and demonstrated an ultrathin Ag nanocomposite absorber which could eliminate over 90% of stray light at 400-600 nm wavelengths. However, the low preparation efficiency due to a complicated and time-consuming fabrication technology, combined with the high expense of noble metals such as Au and Ag, restricts their application in aluminum alloy frames. This calls for light-absorbing materials in the pores of anodic aluminum oxide (AAO), which is a commonly used chemical conversion film formed on the surface of aluminum alloys. CdSe quantum dots (QDs) have been used to absorb light at relatively low cost [16,17]. Recently, Kohnehpoushi et al. [17] described visible light absorption enhancement in a CdSe-QD-sensitized TiO2 periodic nanorod array. The enhancement mechanism was related to the diameter, height, and periodic spacing of the TiO2 nanorods. Baffou et al. [18,19] extended the discrete dipole approximation and Green dyadic tensor method to simulate the thermodynamics of laser-irradiated plasmonic nanostructures, imaged the heat-source distribution in light absorption systems (such as plasmonic nanostructures) by a molecular fluorescence anisotropy method, and verified the general physical rules controlling heat generation in plasmonic structures.
Here, we examined light absorption enhancement in AAO nanopores after the incorporation of CdSe QDs (see Figure 1). In our design, CdSe QDs were filled into AAO pores, rather than into semiconductors such as TiO2, ZnO, or Si, because they can be used for eliminating stray light in ICF systems. Two-dimensional finite-difference time-domain (FDTD) calculations were used to solve Maxwell's equations. The QD diameters were 10 nm and the grid size in the X- and Y-directions was 1 nm, while that in the Z-direction was 10^−4 nm. Periodic boundary conditions were applied in the X-direction and perfectly matched boundary-layer conditions were used in the Y-direction. Ultraviolet (UV) to near-infrared light (200-1000 nm) in the p-polarization plane was incident in the forward Y-direction. In the FDTD simulations, each electric field component (Ex, Ey, Ez) was calculated at a different location within a Yee cell to determine the absorption profile, as given by Equation (1): P_abs = ½ ω |E|² Im(ε_ω). (1)
Here, P_abs is the power absorbed per unit volume at each position, ω is the angular frequency, E is the total electric field amplitude, and ε_ω is the permittivity of the material. The particles generated by laser irradiation were counted with a particle counter, which was first used to evaluate the LIDT of the materials. An Nd:YAG laser was used to irradiate the material, and the ablation craters were examined by scanning electron microscopy (SEM).
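Evaluating Equation (1) on exported FDTD field data is straightforward; a hedged sketch is given below, where the field-array format and the permittivity map are assumptions about the solver's output.

```python
# Sketch of Eq. (1): P_abs = 0.5 * omega * |E|^2 * Im(eps) per grid point.
import numpy as np

def absorbed_power_density(Ex, Ey, Ez, eps, omega):
    E2 = np.abs(Ex)**2 + np.abs(Ey)**2 + np.abs(Ez)**2
    return 0.5 * omega * E2 * np.imag(eps)
```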
Theories and Simulation
Semiconductor nanocrystals (SNCs) such as cadmium selenide (CdSe) quantum dots offer wide applications in the fields of photovoltaics, solar energy harvesting, nanophotonics, imaging, sensing and others [20][21][22][23][24][25]. Since incident radiation causes excitation of free electrons, metal nanoparticles (MNPs) can generate intense electric fields in their vicinity [26]. When SNCs are held in the close vicinity of MNPs, the high electric fields induced by the MNPs can lead to enhanced absorption in the SNCs. The absorption phenomena in the two types of nanostructure (MNPs and SNCs) are described by different theories. Plasmons originate from collective oscillations of free conduction electrons [27], while an exciton is a bound state of an electron-hole pair [28], which makes the modeling of these hybrid heterostructures very difficult, as two different theories must be combined effectively to describe the characteristics of the system accurately. The models which have been used in practice to describe heterostructure systems are complex. A common approach regards the nanostructures as independent units (applicable in the weak-coupling regime) with independent dielectric functions, whose classical electrodynamic interactions can be computed within a finite-difference time-domain (FDTD) or discrete dipole approximation (DDA) framework [29,30].
The numerical study of the proposed functional films was conducted using a commercial-grade simulator based on the FDTD method. (The basic idea of FDTD is to sample and discretize each electromagnetic field quantity, E and H, alternately in time and space. Each E (H) component is surrounded by four H (E) components. Using this discretization, the Maxwell curl equations with time variables can be transformed into a set of difference equations that can be easily computed.) [31,32]. Once the initial conditions, boundary conditions and material parameters are set, the electromagnetic field distribution over the whole space can be calculated step by step, iterating along the time axis. In the FDTD calculation, the field component E (H) at a sampling point in space is associated with the surrounding field components H (E), and the medium parameters in the material equations are assigned to each Yee cell to reflect the role of the medium in the propagation of the electromagnetic wave. Therefore, this method can handle problems of target radiation and electromagnetic scattering for non-uniform, complex shapes and structures. Here, the nanostructured films were set as three functional films (a non-porous alumina isolating film, a CdSe@Al2O3 nanocomposite absorption film, and a SiO2 dielectric sealing protective film). Figure 1 shows the simulated unit cell of the periodic model. In the simulation procedure, the heights of the Al2O3 ellipsoids were 8 µm, 6 µm, 4 µm and 2 µm. The diameters of the nanopores were 60 nm, 70 nm, 80 nm and 100 nm, with periodic spacings of 100 nm, 200 nm, 300 nm and 400 nm. Above the top layer, ultraviolet (UV) to near-infrared light (200-1000 nm) in the p-polarization plane was incident in the forward direction of the y-axis. Finally, the specific absorption was calculated using Equations (2) and (3). The dielectric functions of the materials used in the simulation were fitted to experimental data to ensure that the simulated results agree with the measured ones. Figure 2 plots the absorption spectra of AAO pores with embedded QDs for various height-to-diameter aspect ratios and periodic spacings. In Figure 2a, the light absorption was enhanced for wavelengths of less than 550 nm with increasing pore height and decreasing diameter. In the infrared region, decreased height and diameter enhanced light absorption. This was consistent with light absorption as a function of periodic pore spacing over the range 200-400 nm (Figure 2b-d). The results for 200 nm and 300 nm spacings were close, and the light absorption decreased sharply for a 400 nm spacing. Figure 2a-d revealed shifts of the absorption peak in the 300-500 nm range toward lower wavelengths with increasing pore spacing, which indicates an increase in the AAO bandgap. The enhanced light absorption was largely related to the height of the pores. Thus, in the following, the heights of the pores embedded with QDs were increased to optimize absorption.
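The geometry sweep described above can be organized as a simple nested loop over the quoted heights, diameters, and spacings; in the sketch below, `run_fdtd` is a hypothetical stand-in for the commercial solver call and returns a placeholder spectrum.

```python
# Sketch of the geometry sweep; run_fdtd is a hypothetical placeholder
# for a call into the FDTD solver returning an absorption spectrum.
import numpy as np
from itertools import product

wavelengths = np.linspace(200, 1000, 401)    # nm

def run_fdtd(height_nm, diameter_nm, spacing_nm):
    # stand-in: a real run would return absorption vs wavelength
    return np.zeros_like(wavelengths)

results = {}
for h, d, s in product([8000, 6000, 4000, 2000],   # heights, nm
                       [60, 70, 80, 100],          # diameters, nm
                       [100, 200, 300, 400]):      # spacings, nm
    results[(h, d, s)] = run_fdtd(h, d, s)
```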
Figure 3 plots the light absorption enhancement in AAO pores with embedded QDs for 100 nm, 200 nm, 300 nm, and 400 nm spacings. To optimize the spacing and height, the pore diameter-to-height ratios were 60:8000, 70:6000, 80:4000, and 100:2000. In Figure 3a-d, visible light absorption was enhanced with increasing spacing over the range 100-300 nm. However, when the spacing was 400 nm, the light absorption began to decrease. In the infrared region, the absorption enhancement at the 100 nm spacing was far greater than at the other spacings. When the diameter-to-height ratios were 60:8000 and 70:6000, the enhancement at the 100 nm spacing was more than 95% (Figure 3a,b). When the height was 2000 nm, the enhancement was about 90%, while those for the other spacings were less than 60%. In the UV band, the absorption enhancement increased with increasing height and decreasing diameter. However, in Figure 3a, the enhancement was not apparent for heights over the range 6-8 µm.
Experiment Procedure
In our studies, AA 6061 served as the substrate, with a length, width and thickness of 10 mm, 25 mm, and 3 mm, respectively. The chemical composition of AA 6061 and the laser parameters can be found in ref. [9]. An Nd:YAG laser operating at 355 nm was used (a schematic of the laser ablation testing equipment is shown in Figure S1). The 355 nm laser pulse duration was 6 ns, the area of the Gaussian laser spot was 0.7 mm², and the repetition rate was 1 Hz. The instruments used in our experiments include the Nd:YAG laser, a collimated light source, a focusing lens, splitter wedges, an EPM2000 energy calorimeter, a sample carrier (two-dimensional, adjustable, with a stepping accuracy of 10 µm), an optical microscope and a computer for control and data acquisition. Laser ablation craters were studied with scanning electron microscopy (SEM), and a particle counter was used to record the number of particles of various diameters that were created. The laser device was operated in an environment with organic contaminant levels from A/10 to A, meaning that the nonvolatile residue ranged from 0.1 µg/cm² to 1 µg/cm². In addition, the particulate contamination level was between 50 and 300, meaning that the density of particles greater than 5 µm was between 166 and 170,000 per square foot.
All of the films were prepared via anodic oxidation. The process included degreasing, alkali etching, neutralization, anodization, electrolytic deposition, and sealing. The degreasing solution was composed of aqueous solutions of sodium salts or bases such as Na2SO4 and NaOH; the treatment was at 50-65 °C for 5-10 min. The alkali etching solution included NaOH at a concentration of 50-80 g/L and commercial additives, with a bath temperature of 50-75 °C. The natural oxide film on the surface of the aluminum alloy was removed by soaking for 10-15 min. The surface was then immersed in a neutralizing solution with HNO3 as the main component, and the metallic luster was restored. At this point the metal matrix was completely exposed and easily corroded; thus, if oxidation and the subsequent protection steps could not be performed immediately, the samples were soaked in deionized water to avoid contact with air. The anodization processes were conducted at 0 °C and 12 V for various times to obtain different film heights. CdSe nanoparticles were then prepared inside the AAO pores, and a commercial sealing reagent was applied for the sealing step.
Results and Discussion
To verify the accuracy of the simulations and to demonstrate that the surface treatment of aluminum alloy 6061 (AA 6061) increases light absorption, spectra were acquired before and after surface treatment. The height of the absorption layer was about 8 µm and the pore diameters were about 60 nm. Cross sections of the functional films are shown in Figure 4a, and Figure 4b-f show the elemental distributions of the functional films. The experimental results in Figures 4 and 5 indicate that, regardless of the surface quality or the particles generated following surface anodizing, the AAO pores with embedded QDs exhibited higher light absorption than untreated AA 6061. In most cases, more absorption indicates greater damage. However, the absorbed light was diffused into deeper regions and the SiO2 acted as a protective layer for the AA 6061 substrate. The melting and vaporization points of AAO are 2327 K and 3253 K, respectively; those of SiO2 are 2273 K and 2973 K, and those of AA 6061 are 923 K and 2740 K, respectively. These differences indicate that the protective effects were better than the absorptive effects. Hence, the functional layers absorbed more and experienced less damage because of the AAO pores embedded with QDs. The QDs thus absorbed the light, and the higher absorption values of the AAO and SiO2 films provided the AA 6061 with greater protection against laser-induced damage. The thickness of the SiO2 film is about 200 nm, so only silicon and oxygen are detected in the surface distribution. Figure 5 shows the surface morphology of AA 6061 after 15 laser pulses with a 480 µm spot radius. In Figure 5a, for AA 6061 coated with the non-porous alumina isolating layer, the CdSe@Al2O3 nanocomposite absorption layer, and the SiO2 dielectric sealing protective layer, the radius of the heat zone was less than 100 µm, and there was a cluster of ablation pits after solidification. The ablation zone cannot be seen, and it was significantly smaller than the 0.7 mm² laser spot size (an enlarged view of the ablation area is shown in Figure 5b). This indicates that the surface treatment with the functional layers absorbed the light and protected the AA 6061 substrate. However, Figure 5c shows the morphology of untreated AA 6061 after exposure to 15 laser pulses (an enlarged view of the ablation area is shown in Figure 5d). The areas of the heat zones were greater than 0.5 mm², which indicates damage at the edge of the Gaussian laser spot at a low power density. The enlarged figure in the upper right corner depicts the loose layered structure after cooling for several days; it was in an unstable state and could be peeled away. To further verify that the AAO pores with embedded QDs absorbed the light, we used a particle counter to record the number of particles of various diameters created by the laser irradiation. Greater light absorption indicates stronger protection by the upper-layer membrane and less particle contamination.
Figure 6 shows the particles generated by fifteen pulses of laser irradiation (the specific particle counts are given in Tables S1 and S2 in the Supplementary Materials). The particle numbers and their diameters were recorded after each laser shot. The total number of particles for the AAO pores with embedded QDs was 1239; 57.6% of the 0.3 µm-diameter particles were produced during the first three laser pulses, and nearly 70% of the remainder were generated thereafter (Figure 6a). By contrast, 31,300 particles were generated when the AA 6061 surface was not treated with AAO pores with embedded QDs, and more than 90% of the particles of all diameters were produced during the first three laser pulses. This indicates that, without treatment, the surface is easily damaged by laser irradiation. There were 101 particles with diameters of 5 µm; these could significantly contaminate an ICF system and affect optical transmission (Figure 6b). The energy levels of molecules mainly comprise electronic energy levels, vibrational energy levels corresponding to the relative motion of atomic nuclei within molecules, and rotational energy levels corresponding to the overall rotation of molecules (see Figure S2). For most molecules, the spacings of the vibrational and rotational energy levels correspond to the energy of infrared photons.
Therefore, when most substances are bathed in infrared light, their molecules can absorb a large number of infrared photons and transition to higher energy levels with faster vibration or rotation. This microscopic acceleration of molecular vibration or rotation corresponds, macroscopically, to heating of the object. The energy of ultraviolet photons, by contrast, mostly corresponds to the electronic energy levels mentioned above, and a UV laser can often make electrons transition from one atom to another so as to change the structure of the whole molecule, that is, photochemical ablation.
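As a quick plausibility check on this argument, the photon energy follows from E = hc/λ with hc ≈ 1240 eV·nm; the 10 µm wavelength below is an illustrative mid-infrared value chosen for comparison, not a parameter from the experiment:

```latex
% Photon energy E = hc/\lambda, with hc \approx 1240\ \mathrm{eV\,nm}
E_{\mathrm{UV}} = \frac{1240\ \mathrm{eV\,nm}}{355\ \mathrm{nm}} \approx 3.5\ \mathrm{eV},
\qquad
E_{\mathrm{IR}} = \frac{1240\ \mathrm{eV\,nm}}{10\,000\ \mathrm{nm}} \approx 0.12\ \mathrm{eV}
```

Roughly 3.5 eV at 355 nm is on the order of electronic transitions and chemical bond energies, consistent with photochemical ablation, whereas ~0.1 eV matches typical vibrational level spacings, consistent with simple heating.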
Conclusions
In conclusion, enhanced light absorption in AAO pores with embedded CdSe QDs was investigated using FDTD-based simulations. Visible light absorption increased when the pore height was increased and the diameter was decreased, but it was not enhanced further when the height exceeded 8 µm. The absorption edge shifted toward UV wavelengths, which indicated an increased equivalent bandgap for the AAO pores with embedded QDs. In the near-infrared region, a large periodic spacing (p = 400 nm) led to a higher absorptivity with increasing height, relative to that observed for the 100 nm, 200 nm, and 300 nm spacings. Experiments (documented further in the Supplementary Materials) indicated that AAO pores with embedded QDs improved the resistance of AA 6061 to laser-induced damage at a wavelength of 355 nm, which can be explained as follows: first, the functional gradient films composed of non-porous alumina/CdSe@Al2O3/SiO2 absorb stray light well; second, the multilayer films smooth the defects of the rough AA 6061 surface; and third, the vaporization temperature of SiO2, which served as the protection layer, is 2973 K (higher than that of AA 6061, 2740 K), raising the laser-induced damage threshold. The damage zone and ejected particles were much less pronounced than for untreated material. The simulations agreed well with the experimental results and demonstrated that anodic aluminum oxide pores with embedded CdSe QDs enhance the light absorption of AA 6061. Overall, the non-porous alumina/CdSe@Al2O3/SiO2 functional gradient films can effectively absorb 355 nm UV stray light and improve the resistance of AA 6061 to laser-induced damage.
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/nano12030559/s1. Figure S1: Schematics of the laser irradiation damage testing equipment; Figure S2; Table S1: Particles generated by different laser irradiation times at a laser fluence of 0.5 J/cm² for AA 6061 without functional layers; Table S2: Particles generated by different laser irradiation times at a laser fluence of 0.5 J/cm² with functional layers.
"Materials Science",
"Physics"
] |
A Scientific Research Information System via Intelligent Blockchain Technology for the Applications in University Management
The scientific research information system plays an essential role in improving management efficiency and promoting technological innovation in universities. With the increasing computational demand for human-centric research management, blockchain technology, with its distributed storage, consensus sharing, and security traceability, has efficiently assisted research information systems in dealing with issues such as big-data scale, information security, interconnection, rapid response, and privacy protection. A novel scientific information system framework based on intelligent blockchain technology is proposed to promote the informatization level and management efficiency of university scientific research. Moreover, four smart data contracts, covering data collection, verification, sharing, and supervision, are custom-designed for an efficient scientific research information system. These smart contracts provide reliable data security and traceability algorithms to guarantee the practical application of the system. The results show that the constructed system can relieve the centralized storage pressure of scientific research information and overcome the obstacles to secure cross-subject sharing of massive data among different systems. The system thereby increases the transparency of scientific evaluation and realizes credible supervision of scientific research information, providing a way to promote the innovative application of blockchain technology in scientific research management in colleges and universities.
Introduction
In recent years, computational intelligence technology has formed a new human-centered field in which people are the primary target and service object of intelligent information systems. In the field of educational applications, human-centered computational intelligence technology and the establishment of intelligent educational information systems can transform personnel from operating subjects into served objects. Universities are an essential part of the national scientific and technological innovation system, and their scientific output and its transformation into applications play an essential role in promoting economic and social development [1,2]. Universities' scientific research information systems are the storage and operation carriers for the management of scientific research projects, scientific research achievements, scientific and technological evaluation, etc. [3][4][5].
With the expansion of scientific research fields and project sources, as well as the increasing demand for data enrichment and information sharing, scientific research management faces the challenges of an expanding data scale, increasingly complex management processes, demands for improved work efficiency, and growing interconnection requirements [6]. Scientific research management in universities has developed from paper materials through semi-informatization to full informatization [7][8][9]. However, the scientific research information systems currently applied in university management and analysis still rely on a separate personal-account pattern [10,11]. Under this framework, there are mainly three types of accounts, namely university administrators, scientific research secretaries in secondary management departments, and a large population of researcher and teacher users, all participating in the entire scientific research information management process from collection and review to statistics, release, and submission [12][13][14]. Problems such as error-prone data, long execution cycles, untimely responses, highly duplicated work, and difficulty in sharing often arise in the management process [15]. Some colleges and universities have optimized their scientific research information systems [16,17], and sharing and real-time performance have improved to a certain extent, but scientific research management information technology has not yet achieved a breakthrough due to the barriers of traditional information technology [18,19].
As a novel decentralized architecture and distributed computing paradigm, blockchain was initially used in the fields of digital currency and information sharing [20]. Since blockchain-based code automatically records the whole process according to preset business rules, blockchain provides effective solutions for information-transfer issues related to centralization, security, and traceability, with significant advantages in network-wide records, low cost, high efficiency, safety, and reliability [21][22][23]. Ling et al. proposed a trusted permissionless IoT access protocol using blockchain and a radio access network [24]. Singh et al. proposed a blockchain-based cyber-physical system security mechanism to ensure the secure transmission of information between UAVs [25]. In recent years, blockchain technology has also been introduced into the scientific management of universities and research institutes to explore practical applications in the management of intellectual property rights, laboratory security, and library resources [26,27]. Especially in some European and American countries with developed education systems, education management systems based on blockchain technology have been widely explored and have achieved remarkable results. For example, at the Massachusetts Institute of Technology and Holberton College, blockchain technology is used for university degree management, learning-result evaluation, degree records, certificate storage and access, etc. [28]. Given the characteristics and advantages of blockchain technology, applying blockchain to scientific research management in universities can effectively eliminate information asymmetry, achieve information synchronization between nodes, and enhance the credible sharing of information [29]. It provides a feasible way to solve the problems of traditional scientific research information systems and has important practical significance for promoting the management efficiency and scientific and technological innovation of colleges and universities. However, many universities still stop at refining the original management model to improve the efficiency and level of scientific research management. Some have deployed customized functions with the support of their information technology departments and professional technology companies, while other universities have directly developed independent scientific research information systems, which have played an essential role in supporting discipline construction and scientific management [30,31].
There are several difficulties in building a university scientific research information system using blockchain technology. Firstly, internal relevance is poor. Layered architectures are currently adopted by university scientific research information systems, such as B/S, C/S, MVC, SOA, and SSH architectures [32]. Although a layered architecture can meet the management needs of a department within a specific time frame, it is not easy to exchange data with other departments, which leads to isolated islands of information between departments, resulting in redundancy and duplication of business work, and inevitable risks such as information irregularities and errors.
Secondly, it is difficult to communicate with the outside world. Due to the requirements of data format standardization and network security, most scientific research information systems are limited to internal use within universities.
There is little information interaction with external systems, and the system cannot connect with competent departments, other universities, cooperative units, etc. External reporting of information is still mainly manual, which leads to problems such as narrow coverage and poor timeliness of information transmission.
Thirdly, data storage is limited, with poor traceability. Studies have found that the amount of scientific research information kept is increasing by 127% every year [33,34]. University scientific research data face tremendous storage pressure, and historical data have poor traceability. There is no unified standard for data storage formats at different stages because scientific research policies, rules and regulations, and evaluation standards are regularly updated. A lot of time and energy is spent on data calls, and data loss even occurs.
Fourthly, security is difficult to guarantee. One reason why scientific research information systems are difficult to interconnect is that existing system architectures have deficiencies in privacy security, identity authentication, and authority management and control. Unlawful access to or public disclosure of scientific research data can cause irreparable losses. In addition, there are problems such as the difficulty of opening information channels for the transformation of scientific and technological achievements and the time lag between updates of scientific research data and the needs of management departments.
To address the above problems, this study proposes a novel way to integrate blockchain technology into university scientific research management, aiming to improve the work efficiency and intelligence level of management systems. With a reasonable framework design and smart contract optimization, our approach can achieve better administration performance for massive scientific research information in terms of security, traceability, and robustness, making it more suitable for the complex practical applications of universities and research institutes. The rest of the article is organized as follows: in Section 2, related research on blockchain technology for university scientific research information systems is introduced.
The details of the proposed system architecture design and smart contract algorithms are explained in Section 3. Section 4 presents contrastive experimental results and performance evaluation. Finally, Section 5 concludes the work with future research prospects.
Intelligent Blockchain Technology Development.
Intelligent blockchain originated from Bitcoin and is the underlying support technology of Bitcoin; it became widely known with the white paper "Bitcoin: A Peer-to-Peer Electronic Cash System" published in 2008 [35]. Blockchain technology has the characteristics of decentralization, transparency, openness, autonomy, and information immutability, and it encompasses point-to-point transmission, distributed storage, cryptography, and consensus algorithms [36,37]. According to network scope, blockchains are mainly divided into public, alliance, and private chains. The public chain is represented by Bitcoin: nodes are independent of each other, and trust between nodes is maintained through a consensus mechanism [38]. A private chain is a blockchain system used internally by an enterprise or organization; read and write administrators control the permissions of the blocks within the organization. Although a private chain uses blockchain technology, it still has the characteristic of "centralization" in essence. An alliance chain is a blockchain controlled by a consortium composed of multiple enterprises or organizations; in an alliance chain system, only nodes authorized by the alliance have the power to keep accounts [39]. This kind of blockchain technology is essentially "partially decentralized" or "multi-centralized" to ensure information security and operational efficiency [40]. Compared with traditional centralized management methods, it is safer and more reliable.
Currently, blockchain technology is divided by academia into three stages. The blockchain 1.0 era applied blockchain technology in the currency field, that is, programmable currency for payments. The blockchain 2.0 era combined digital currency with smart contracts. In the blockchain 3.0 era, blockchain technology is no longer limited to the financial field and has gradually expanded to applications in all walks of life. Nowadays, blockchain technology is considered a result of the fourth industrial revolution, after steam engines, electricity, and the Internet, and a large number of experts and scholars around the world are exploring and researching it [41].
Blockchain Realization on University Scientific Research
Information System. In recent years, many universities and research units have attempted to apply blockchain and other advanced technologies to solve problems in scientific research information storage, sharing, intelligent analysis, and decision management. For example, Altowaijri suggested applying blockchain technology, given its tremendous technical and systemic payoff, to improve the sharing of management experience across the Saudi higher education system, particularly at Qassim University [1]. Alammary et al. pointed out that blockchain can bring significant benefits to education, including providing a secure platform to share student data, reducing costs, and increasing trust and transparency [5]. Daraghmi et al. designed UniChain to improve the current management system, as it provides interoperable, secure, and efficient access to EARs by students, universities, and other third parties while maintaining student privacy [6]. From the perspective of design ideas and implementation means, these system construction works for universities' scientific information management roughly reflect the following aspects: (1) The distributed storage technology of blockchain can alleviate the pressure of centralized storage of scientific research data. Distributed storage in a blockchain is essentially account data maintained by multiple nodes, different physical addresses, or multiple members; it is thereby used to realize a decentralized database for data sharing, synchronization, and value assignment. A method combining on-chain and off-chain storage is adopted for university scientific research data: the scientific research information index and the data summaries of crucial information are first hashed and then stored on the chain, while complete scientific research materials are stored in an off-chain database that maps to the chain. Smart contracts are used to verify the authenticity of scientific research data and to realize data verification, query, and invocation. (2) Blockchain cryptography is used to ensure the security of scientific research data. Cryptography is the core of blockchain privacy security; the cryptographic algorithms involved include asymmetric encryption algorithms, Merkle trees, and hashing algorithms. Different encryption technologies are used to ensure the reliability and security of scientific research data transmission, submission, and sharing. In addition, cryptography provides technical support for direct cross-departmental information connections, such as project fund revenue and expenditure associated with the financial department, teaching project management associated with the educational administration department, student science and technology competitions associated with the student department, and teacher performance evaluation associated with the personnel department. Cryptography thus fully realizes the secure collaboration of scientific research data. (3) The blockchain architecture model can realize data sharing across the primary scientific research information system. A competent authority can plan the scientific research chain within a region and form the distributed nodes of the blockchain. The scientific research chain differs from a conventional blockchain in that it only stores scientific research data abstracts and data specifications. Scientific research management departments can serve as nodes to establish an alliance chain.
The front-end processor of the scientific research chain is used by the provincial science and technology commissions (bureaus), education commissions (bureaus), science and technology associations, and other government departments to realize the data association between access requirements and the node scientific research information systems. The university scientific research information system realizes selective, top-down sharing of scientific research information among different subjects. Cross-chain technology is used to interconnect the scientific research data of universities in multiple regions. (4) The blockchain consensus mechanism can be used to realize transparency in scientific and technological review and evaluation. In a blockchain consensus mechanism, multiple hosts form a network cluster through asynchronous communication; state replication between hosts ensures that every host reaches a consistent state consensus, thereby safely updating the data state in the distributed network. Election consensus algorithms such as PBFT and Kafka are used by university scientific research information systems to realize paperless and anonymous processes such as peer review of scientific research results, scientific and technological project review, and scientific research funding audits, ensuring fair and impartial review and evaluation. Furthermore, traceability based on the chain structure ensures that the whole review and evaluation process is well documented.
(5) Smart contract algorithms can expand the functions of scientific research information systems in universities. Smart contracts have the characteristics of self-verification, decentralization, and automatic execution, with preset execution conditions and automatically triggered operations; they can be introduced to implement functions such as information interaction and value transfer for users in the blockchain network. Applying smart contracts to verify the identities of personnel in various departments is a significant improvement over the traditional manual approval verification in scientific research information systems. Smart personnel contracts in different departments trigger different conditions so that the corresponding personnel are matched to the corresponding permissions.
(6) Through the customized deployment of smart contracts, functions that are difficult to achieve with traditional scientific research information systems can be realized. For example, remote identification of scientific research projects can be realized through smart contracts and digital signature technology.
Then, the transformation and docking of scientific research results can be realized through smart contracts and encryption technology.
In addition, blockchain technology can realize credible supervision of scientific research information in universities. The supervisory department can comprehensively grasp information on scientific research projects, scientific research results, fund execution, scientific research personnel, etc. Once a problem is discovered, relevant departments and personnel can supervise the entire process through the chained storage of data, contract execution records, and system operation logs. At the same time, a scientific research information system built with blockchain technology can provide real-time scientific research statistics to the competent or regulatory department and assist in grasping the status quo and development trends of scientific research among universities. In summary, the application of blockchain technology effectively solves the problems in the construction of universities' scientific research information systems and overcomes the shortcomings of existing information technology, offering broad application prospects. However, existing information systems based on blockchain technology have the following problems. Firstly, most existing information systems take "data" as the first consideration, and there is a lack of people-oriented information system research. Secondly, existing research on blockchain-based information systems is mostly theoretical, and specific implementation methods are lacking. Thirdly, the storage capacity of blockchains is limited, and the latency and scalability of information systems based entirely on blockchain are poor. Therefore, this study builds a people-oriented university scientific research information system, aiming to break through the barriers of information exchange between universities and between departments through blockchain, and customizes smart contracts to serve user needs. This study adopts an "on-chain + off-chain" dual-mode storage mechanism to alleviate the high latency of the system and enhance its scalability.
Blockchain-Based Architecture of University Scientific
Research Information System. A typical blockchain architecture consists of six important parts: the application, contract, incentive, consensus, network, and data layers. When blockchain technology is applied to the construction of scientific research information systems in universities on top of this general architecture, a certain degree of adjustment and optimization is needed. For example, admitted subjects need to perform identity verification, and an election consensus is adopted as the consensus mechanism. Figure 1 shows the architecture of a university scientific research information system based on blockchain technology.
This study proposes a university scientific research information system architecture based on blockchain technology, which includes a data resource layer, a network consensus layer, a business logic layer, and an application service layer. At the data resource layer, this study innovatively applies an "on-chain + off-chain" data storage method to realize dual-mode data storage and reduce the complexity of storing data on the blockchain. Through the combination of on-chain and off-chain storage, the layer serves the safe and efficient interaction of scientific research information in universities. In the network consensus layer, this study adopts the electoral consensus algorithm PBFT, based on the demands of scientific research information exchange in colleges and universities, to serve the consensus of user nodes in the system. In the business logic layer, this study customizes four smart contracts for data collection, verification, sharing, and supervision, and uses asymmetric encryption to secure the data flow of the university's scientific research information system. In the application service layer, the proposed system serves scientific researchers, management departments, other universities, government departments, etc., and truly realizes reliable sharing of university scientific research information both between and within universities.
Data Resource
Layer. The data resource layer includes three parts, namely the data collection unit, the on-chain processing unit, and the data storage unit, which are mainly used to realize the collection, storage, integration, and release of scientific research data. The data collection unit uses manual entry, automatic push, batch import, and network capture to collect various forms of scientific research data. The on-chain processing unit includes the chain structure, timestamps, hash calculation, Merkle trees, etc. In particular, the chain structure defines the system's on-chain storage method; the timestamp forms a chain along the timeline to support the traceability of scientific research information; hash calculations are mainly used for encrypting scientific research information, associating adjacent blocks, and ensuring information integrity; and the Merkle tree can effectively prevent malicious tampering with information thanks to its searchable tree structure. The blockchain network structure of the on-chain processing unit is shown in Figure 2. The data storage unit includes an off-chain database, a file system, and a kv database. Off-chain databases generally use conventional relational or non-relational databases, which store complete scientific research information. File systems generally use distributed file systems such as the InterPlanetary File System (IPFS), which store essential information, data summaries, and smart contracts. The kv database is a key-value database that stores data as key-value pairs; for example, LevelDB, developed by Google, stores data indexes and provides mapping associations for the interaction between various types of data.
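To make the "on-chain + off-chain" dual-mode storage idea concrete, here is a minimal, self-contained Go sketch (Go is one of the development languages named later for the system). The in-memory maps merely stand in for the off-chain database and the on-chain key-value index, and all identifiers are illustrative, not taken from the system:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// Record stands in for a complete scientific research record kept off-chain.
type Record struct {
	ID      string
	Payload string // full project/achievement data
}

// offChain simulates the off-chain database (e.g., a relational DB or IPFS).
var offChain = map[string]Record{}

// onChain simulates the on-chain key-value index: record ID -> SHA-256 digest.
var onChain = map[string]string{}

// store saves the full record off-chain and anchors only its digest on-chain.
func store(r Record) {
	offChain[r.ID] = r
	sum := sha256.Sum256([]byte(r.Payload))
	onChain[r.ID] = hex.EncodeToString(sum[:])
}

// verify recomputes the digest of the off-chain copy and compares it with
// the on-chain anchor, detecting any tampering with the off-chain record.
func verify(id string) bool {
	r, ok := offChain[id]
	if !ok {
		return false
	}
	sum := sha256.Sum256([]byte(r.Payload))
	return onChain[id] == hex.EncodeToString(sum[:])
}

func main() {
	store(Record{ID: "proj-001", Payload: "grant application, v1"})
	fmt.Println(verify("proj-001")) // true

	// Simulate off-chain tampering; the on-chain digest exposes it.
	offChain["proj-001"] = Record{ID: "proj-001", Payload: "grant application, FORGED"}
	fmt.Println(verify("proj-001")) // false
}
```

The design point is that the chain never stores the bulky record itself, only a fixed-size digest, so storage pressure stays off-chain while tamper-evidence stays on-chain.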
Network Consensus Layer.
The network consensus layer includes a network layer and a consensus layer. Firstly, the network layer is the basis of distributed storage, since it encapsulates the blockchain networking mode, message dissemination mechanism, and data verification mechanism; the message dissemination and data verification mechanisms can be customized according to application requirements. Secondly, the consensus layer guarantees mutual trust between blockchain nodes because it encapsulates various consensus mechanisms. From the perspective of security mechanisms, the PBFT (Practical Byzantine Fault Tolerance) consensus algorithm can be used by university scientific research information systems. The requester sends a request to the controller node, and the controller node broadcasts the request to the agent nodes after receiving it. After receiving the request, an agent node records it and broadcasts it again. If a node receives more than a certain number of identical requests, it enters the next stage; once a node has received more than the required number of identical requests, it sends feedback to the requesting end. In addition, election consensus algorithms such as Kafka (distributed queue) can also be adopted to optimize network request efficiency. Furthermore, tolerating faults in up to 1/3 of the nodes meets the security requirements. Compared with reward-based consensus, this approach requires less computing power and offers higher efficiency. For the interaction of internal data in colleges and universities, this study designs the system's internal consensus algorithm based on the PBFT consensus mechanism. A schematic diagram of the PBFT consensus algorithm is shown in Figure 3.
Consensus among the internal nodes of the blockchain-based university scientific research information system is divided into five stages, corresponding to Request, Pre-prepare, Prepare, Commit, and Reply in the PBFT consensus mechanism. The specific flow of each stage is as follows. Request: a faculty member of the university's scientific research information system initiates a request C from the system client and sends it to the system master node, represented here as node 0. The master node is not fixed: when an error occurs on the master node, the system replaces it with a new node as master. The consensus process then enters the Request stage. Pre-prepare: after receiving the request, the system master node immediately broadcasts the message through the P2P network to all other nodes in the system, expressed here as nodes 1, 2, and 3. The consensus process enters the Pre-prepare stage. Prepare: after receiving the request, all other nodes in the system first record the content of the request and broadcast it again via the P2P network to all nodes except themselves, expressed as 1->0, 2, 3 and 2->0, 1, 3. To prevent master node 0 from sending requests that differ from the faculty member's intention to the other nodes, node 3 is set so that it cannot perform a P2P broadcast of the received request C. The consensus process enters the Prepare stage. Commit: when the nodes of the system receive identical requests from more than 2/3 of all nodes in the Prepare stage, the consensus process enters the Commit stage. Reply: when a node receives identical requests from more than 2/3 of all nodes in the Commit stage, the consensus process enters the Reply stage, and the consensus result is fed back to the faculty member.
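The quorum arithmetic behind these stage transitions can be sketched as follows. This is a simplified counting model of textbook PBFT (n >= 3f + 1, quorum 2f + 1), offered for illustration rather than as the authors' implementation:

```go
package main

import "fmt"

// maxFaulty returns f, the largest number of Byzantine nodes a PBFT
// cluster of n nodes can tolerate: n >= 3f + 1, hence f = (n-1)/3.
func maxFaulty(n int) int { return (n - 1) / 3 }

// quorum returns the number of matching messages (2f + 1) a node must
// collect before advancing from Prepare to Commit, or Commit to Reply.
func quorum(n int) int { return 2*maxFaulty(n) + 1 }

// advance reports whether a node holding `matching` identical messages
// in an n-node system may move to the next consensus stage.
func advance(n, matching int) bool { return matching >= quorum(n) }

func main() {
	for _, n := range []int{4, 5, 10, 15, 20} {
		fmt.Printf("n=%2d  tolerates f=%d  quorum=%d\n", n, maxFaulty(n), quorum(n))
	}
	// With the four nodes of Figure 3 (f = 1), a node needs three
	// matching Prepare messages before it may commit.
	fmt.Println(advance(4, 3)) // true
}
```

With f = 1 in the four-node example above, a single muted node (node 3) cannot block consensus, since the remaining three nodes still reach the 2f + 1 quorum.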
Business Logic Layer. The business logic layer includes the contract units and the data encryption interaction mechanism.
The data encryption interaction mechanism adopts asymmetric encryption for data transmission. Specifically, the system generates a unique public/private key pair for each user: the public key is broadcast to the blockchain network, and the private key is kept secret by the user. When data are used, the data owner encrypts them with the data applicant's public key before transmission, and the data applicant decrypts them with his or her own private key.
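A minimal Go sketch of this encrypt-with-public-key/decrypt-with-private-key flow, using the standard library's RSA-OAEP primitives, is given below; the key size, sample record, and names are illustrative only. In practice, a hybrid scheme (RSA wrapping a symmetric key) would be used for records larger than one RSA block:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/sha256"
	"fmt"
)

func main() {
	// The system would generate one key pair per user; the public key is
	// broadcast to the network, the private key stays with the user.
	applicantKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}

	// Data owner encrypts the record with the applicant's PUBLIC key.
	record := []byte("performance evaluation, dept. of physics, 2022")
	ciphertext, err := rsa.EncryptOAEP(sha256.New(), rand.Reader,
		&applicantKey.PublicKey, record, nil)
	if err != nil {
		panic(err)
	}

	// Only the applicant's PRIVATE key can recover the plaintext.
	plaintext, err := rsa.DecryptOAEP(sha256.New(), rand.Reader,
		applicantKey, ciphertext, nil)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(plaintext))
}
```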
The various script codes, algorithms, and smart contracts of the blockchain system are encapsulated in the contract unit, which is the core extension technology of blockchain network applications. The business logic layer is built on smart contracts, which ensure the system's efficient operation while invoking the data encryption interaction mechanism to protect the privacy and security of data throughout the system's life cycle. According to the business needs of university scientific research information system construction, contracts for data collection, data verification, data sharing, data supervision, and other purposes can be adopted. The logical design of the smart contracts is shown in Figure 4.
The collection of scientific research data, scientific research personnel information, achievement data, and financial data is mainly realized through the data collection contract. It mainly includes manual entry, network capture, batch import, automatic push, and other methods, ensuring the traceability of the data. This study presents a customized design for the data collection contract, as shown in Algorithm 1.
The authenticity of the data, verification against the original, and verification of the data caller are mainly realized through the data verification contract, which ensures the security of the data. This study presents a customized design for the data verification contract, as shown in Algorithm 2.
The data sharing contract sets the procedural flow of data interconnection. The purpose of the contract is to realize multi-platform data sharing, break the information barriers of traditional scientific research systems, and eliminate information islands. This study presents a customized design for the data sharing contract, as shown in Algorithm 3.
The supervision contract is necessary for all-round supervision of the entire scientific research system. Relying on the immutability of smart contracts, supervision contracts can realize credible supervision of the data of the entire system. In this study, the supervision contract is customized as shown in Algorithm 4.
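Algorithms 1-4 themselves are not reproduced in this excerpt. Purely to illustrate how such contracts are commonly written for the Hyperledger Fabric platform adopted later in the paper, the following Go chaincode sketches hypothetical collection and verification methods; the method and variable names are ours, not the paper's:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"

	"github.com/hyperledger/fabric-contract-api-go/contractapi"
)

// ResearchContract sketches data-collection and data-verification
// contract methods; it is illustrative, not the paper's Algorithms 1-2.
type ResearchContract struct {
	contractapi.Contract
}

// CollectRecord anchors the SHA-256 digest of an off-chain record on the
// ledger, keyed by record ID (data collection contract).
func (c *ResearchContract) CollectRecord(ctx contractapi.TransactionContextInterface, id, payload string) error {
	sum := sha256.Sum256([]byte(payload))
	return ctx.GetStub().PutState(id, []byte(hex.EncodeToString(sum[:])))
}

// VerifyRecord recomputes the digest of a presented record and compares
// it with the ledger copy (data verification contract).
func (c *ResearchContract) VerifyRecord(ctx contractapi.TransactionContextInterface, id, payload string) (bool, error) {
	stored, err := ctx.GetStub().GetState(id)
	if err != nil || stored == nil {
		return false, fmt.Errorf("record %s not found: %v", id, err)
	}
	sum := sha256.Sum256([]byte(payload))
	return hex.EncodeToString(sum[:]) == string(stored), nil
}

func main() {
	cc, err := contractapi.NewChaincode(&ResearchContract{})
	if err != nil {
		panic(err)
	}
	if err := cc.Start(); err != nil {
		panic(err)
	}
}
```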
Application Service Layer.
The application service layer provides complementary services in the form of web pages and mobile apps to scientific researchers, management departments, other universities, government affairs departments, etc., and also provides data interaction services according to users' different permissions, including project management, result management, performance evaluation, and data sharing. Project management includes modules for project declaration, initiation, change, acceptance, funding, and more. Achievement management includes modules for papers, scientific and technological awards, achievement appraisal, intellectual property rights, etc. Performance evaluation includes sub-item and comprehensive evaluation of teachers' scientific research performance. Finally, data sharing realizes the interactive sharing of information such as scientific research projects, technological achievements, and performance evaluations with other universities or government departments.
Results and Analysis
3.1. System Implementation. Blockchains are usually divided into public, consortium, and private chains. A public chain is entirely open, and anyone can conduct transactions on it; it is represented by Bitcoin, Ethereum, etc. A consortium (alliance) chain is composed of several enterprises or organizations, and joining and exiting must be approved by the organizations on the chain; it is partially decentralized and is represented by Hyperledger. A private chain is generally used within a company or organization, with operation authority on the chain controlled by one organization, and is mainly used to manage internal work. Public chain consensus is based on PoW (Proof of Work), which is unsuitable for non-financial fields such as supply chains, where frequent transactions and information interaction are the mainstays. At the same time, as a consortium chain platform, Hyperledger has clear advantages in high availability, high performance, and privacy protection. Therefore, this study chooses the Fabric platform under Hyperledger to build the university scientific research information system. The system is based on network technology using the TCP/IP protocol, combined with blockchain, database, software engineering, and data coding techniques, with Hyperledger Fabric as the blockchain platform, MySQL as the cloud database, and Go, JavaScript, HTML, and CSS as the development languages. The system is developed using frameworks such as Node.js and Bootstrap, and data are processed and sent uniformly in JSON format. The system adopts a browser/server (B/S) structure and runs on wide area and local area networks. The blockchain-based university scientific research information system mainly provides users with a more secure data storage mode, cross-platform data sharing, and highly transparent review and evaluation, and it dramatically enhances the system's scalability. The login interface of the prototype system is shown in Figure 5(a). The system provides a variety of login channels for different personnel, and each user selects the corresponding category to be matched with the corresponding functional authority. The system uses account-and-password login; before logging in for the first time, users can register an account through the link at the bottom of the website. Taking the management department as an example, the system consists of five submodules: homepage, project management, result management, performance evaluation, and data sharing. The homepage of the prototype system is shown in Figure 5(b). At the top of the interface is a carousel of campus scenery; the middle part uses tables and charts to present statistics and visual displays of the school's semiannual scientific research results; below is a graph of the dynamic changes in department performance and the latest notifications. The project management interface is divided into five functions, namely project declaration, project establishment, project change, project acceptance, and project funding, following the steps of project declaration, as illustrated in Figure 5(c). Users can fill in the relevant information to declare a project. As shown in Figure 5(d), results management contains four essential management functions: theses and works, scientific and technological awards, achievement appraisal, and intellectual property rights.
Users can select the corresponding link and enter the corresponding management interface.
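Since all system data are processed and sent uniformly in JSON, a project-declaration payload might be modeled as in the hedged Go sketch below; the paper does not specify the schema, so every field name here is hypothetical:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// ProjectDeclaration is a hypothetical JSON payload for the project
// declaration module; the field names are illustrative only.
type ProjectDeclaration struct {
	ProjectID  string   `json:"project_id"`
	Title      string   `json:"title"`
	Department string   `json:"department"`
	Applicant  string   `json:"applicant"`
	FundingCNY float64  `json:"funding_cny"`
	Keywords   []string `json:"keywords"`
}

func main() {
	p := ProjectDeclaration{
		ProjectID:  "2022-0042",
		Title:      "Blockchain-based research data sharing",
		Department: "School of Computer Science",
		Applicant:  "Zhang San",
		FundingCNY: 300000,
		Keywords:   []string{"blockchain", "smart contract"},
	}
	out, _ := json.MarshalIndent(p, "", "  ")
	fmt.Println(string(out)) // uniform JSON payload sent to the B/S front end
}
```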
Experimental Performance Analysis.
We carried out numerical analysis of the university scientific research information system in three respects: fault tolerance rate, consensus consumption, and latency. In terms of fault tolerance, this study applies the PBFT consensus mechanism to the system. In the model, suppose there are f faulty nodes and f problem nodes; in the worst case, the faulty nodes and the problem nodes are distinct. According to the principle that the minority obeys the majority, the normal nodes must outnumber the faulty and problem nodes combined, so f + 1 normal nodes are needed. With n total nodes in the model, this gives n >= f + f + (f + 1) = 3f + 1, so the maximum number of faulty nodes this model can tolerate is (n - 1)/3.
In terms of consensus consumption, this study uses the number of nodes as a variable. When the total number of nodes is 5, 10, 15, and 20, the time it takes for the nodes to reach consensus is shown in Figure 6. As the number of nodes increases, the number of requests broadcast by each node in the model increases and the response time of each node to a request grows, resulting in increased consensus time consumption.
In terms of data latency, we count the time interval between when a request is initiated and when it receives a response. Taking the number of requests as a variable, when the number of requests is 50, 100, 200, 300, 400, and 500, the data read latency is shown in Figure 7. Since the model adopts the "on-chain + off-chain" dual-mode storage mechanism and the PBFT consensus mechanism, the data read latency increases as the number of requests increases.
Application Case Analysis.
After conducting on-the-spot research at several universities, we initially chose to apply the system at a university in Beijing, China, to verify its effectiveness. The university previously adopted a centralized storage mechanism, so data security could not be guaranteed; information island barriers existed between the scientific research information of faculty and students, making it difficult to share scientific research data; and the transparency of the university's research evaluation process was low, so the credibility of the evaluation results was also low. Therefore, the university scientific research information system developed in this study was used for optimization.
To verify the actual application of the system, we collected statistics on the data interactions within the system over one month, as shown in Figure 8. After we used blockchain technology to optimize the university's scientific research information system, usage by scientific research personnel was initially low for reasons of publicity and popularization. Over time, the number of data interactions within the system increased. On this basis, we classified the data request types in the system, as shown in Figure 9. Scientific research data sharing, departmental information exchange, and scientific research data management were the main interactive contents, which played a good demonstration role in the digital transformation of scientific research information in universities.
The blockchain-based university research information system can enhance the ability of school administrators to manage research data. We collected statistics on the data stored on the blockchain one month after the system was deployed, as shown in Figure 10. The data stored on the blockchain grew incrementally with time; by the 20th day of use, the amount of data on the blockchain already exceeded the amount of data accumulated in the traditional system over nearly six months.
We counted the management behaviour of school administrators over six months, as shown in Figure 11(a). There is a gap between the total number of management requests and the total number of actually completed requests, and the system achieves trustworthiness and transparency in research management. In the second month of testing, we tested the tamper-resistance of the system, as shown in Figure 11(b). We performed more than 1200 attacks on the system in one month, and the system was able to identify and block them accurately.
Conclusions
Based on a summary of the existing problems in universities' scientific research information systems, we elaborated the advantages of applying blockchain technology to construct such systems. We analyzed how to implement blockchain technology in universities' scientific research information systems and proposed a general framework for university research information systems based on blockchain technology. Our research provides an innovative scientific information system framework based on intelligent blockchain technology to promote the informatization level and management efficiency of university scientific research.
There are two major contributions to solving the challenges faced by the current scientific
research information system: firstly, the formulation of scientific research management blockchain technical specifications and their top-down implementation ensure the interconnection and intercommunication of cross-subject scientific research information and give full play to the advantages of blockchain technology. Secondly, four smart data contracts, covering data collection, verification, sharing, and supervision, are custom-designed to optimize the scientific management process, which no longer relies solely on staff operations and also offers solutions to strengthen the professional requirements of scientific research managers. Validation experiments and analysis demonstrate the improved efficiency and robustness of the proposed information system, which meets the practical demands of scientific research management in different university and research institute applications. In the future, the approaches proposed in this study can be combined with other advanced information technologies, such as artificial intelligence and big-data mining algorithms, to study pattern recognition problems in linear and nonlinear systems, and can be applied to other fields such as time-series forecasting and engineering application systems [42][43][44][45][46][47].
Data Availability
The authors declare that the data supporting the findings of this study are available from the authors.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this article.
"Computer Science"
] |
Devonian-Carboniferous boundary sections in Iran
Many sections in Iran exhibit sediments across the Devonian-Carboniferous (D-C) boundary. In contrast to the majority of published D-C sections worldwide, which come from pelagic/hemipelagic environments, successions in Iran are mainly composed of shallow-water sediments. Correlation with hemipelagic or pelagic palaeoenvironments remains difficult due to biostratigraphic uncertainties in most sections and/or hiatuses. On the other hand, only a limited number of sections covering shallow-water facies settings at this particular time interval are known from Iran, and further research is necessary. Several sections in the Alborz Mountains provide an excellent opportunity to study successions across the D-C boundary in shallow-water facies. In Iran, protognathoids are represented by Protognathodus meischneri and Protognathodus collinsoni; the two biostratigraphically important protognathoids (Protognathodus kuehni and Protognathodus kockeli) have either not been reported or do not occur before the late Tournaisian. Early siphonodellids were described instead. In the frame of an Iranian/German research project, we study different palaeoenvironments to reduce the serious palaeoenvironmental and palaeogeographical sampling bias which may limit our knowledge of the Hangenberg Event, particularly in shallow-water facies. We present a summary of published D-C sections in Iran (Ghale-Kalaghu, Howz-e-Dorah 1, Howz-e-Dorah 2 and Shahmirzad) and of sections under study (Mighan, Chelcheli and Khoshyeilagh) at the time of writing.
Introduction
Given recent discussions of climate change, the study of extinction events and of the dynamics and causes of environmental and climatic changes in Earth's history is of fundamental importance. The Devonian-Carboniferous (D-C) transition is one of the most interesting time slices in Earth's history, as this period was characterised by extreme climatic and faunal changes which led to the end-Devonian biodiversity crisis. Based on the ecological severity index of McGhee et al. (2013), the end-Devonian extinction ranks as the fourth most severe mass extinction in Earth's history. This first-order mass extinction eliminated nearly 20% of marine invertebrate genera and reduced the long-term biodiversity of all vertebrates by about 50% (Sepkoski 1996; Walliser 1996; Sandberg et al. 2002). As shown in the review paper by Kaiser et al. (2016), these estimates are poorly constrained for many fossil groups, and much more work is necessary (for instance, in shallow-water realms) to gain a better understanding of the complex interactions between palaeoclimate dynamics, palaeoecosystem changes and faunal diversity.
The D-C transition is characterised by several transgressive/regressive cycles, and widespread ocean anoxia has been recognised along continental margins and in epicontinental basins, known as the Hangenberg Black Shale (HBS) Event. Close to the D-C boundary, a major sea-level fall (Hangenberg Sandstone (HSS) Event) of more than 100 m can be recognised in many sections around the world. The deposition of these black shales and sandstones represents the early and middle phases of the Hangenberg Crisis as defined by Kaiser et al. (2016) and Becker et al. (2016). The classic hemipelagic "Rhenish standard succession" of the Drewer section in the Rhenish Massif (Germany) exhibits the characteristic succession of the Hangenberg Crisis and has been used to correlate different D-C sections from epicontinental basins and continental margins elsewhere (see summary by Kaiser et al. 2016). Depending on the facies setting, equivalents of the regressive Hangenberg Sandstone Event can also be recognised as an unconformity and/or reworked sediments, as shown by Cole et al. (2015), Bábek et al. (2016) and Kaiser et al. (2016). Stratigraphic gaps and non-deposition related to this major regression are also known from Eastern Iran (Bahrami et al. 2011). This eustatic sea-level fall (HSS Event) might be associated with a glaciation on Gondwana. Evidence for this hypothesis is based on sedimentological as well as palaeontological criteria, and data have been published from South America, North Africa and the Appalachians (e.g. Isaacson et al. 1999, 2008; Streel et al. 2000, 2001; Caputo et al. 2008; Brezinski et al. 2010; Lakin et al. 2016). Based on these data, in combination with the magnitude of the sea-level change, glaciation seems to be the main cause of this major eustatic sea-level fall. However, the trigger mechanisms for anoxia and glaciation at the D-C boundary are controversial and require further research. Regarding the latter, it seems likely that widespread volcanism at that time has been underestimated, as large igneous provinces are scarcely preserved in the rock record. On the other hand, evidence of widespread volcanic activity (pyroclastic ash-flow deposits) around the D-C boundary is known from many countries, such as Germany, Spain, Uzbekistan, South China, Vietnam and Mongolia (Bai 2001; Liu et al. 2016; Komatsu et al. 2014; Lai et al. 2014; Racki et al. 2018a; Paschall et al. 2019; Stribrny et al. in press). (This article is a contribution to the special issue "Global review of the Devonian-Carboniferous Boundary".)
The D-C boundary was defined by conodont biostratigraphy at the Global Stratotype Section and Point (GSSP), the La Serre Trench E' section in the Montagne Noire, France, where it is based on the first appearance of the basal Carboniferous conodont Siphonodella sulcata (Flajs and Feist 1988; Paproth et al. 1991). However, given the state of taxonomic knowledge when the GSSP was established at La Serre, the definition of the D-C boundary, and the stratotype section itself, are considered problematic to maintain. The early siphonodellids are taxonomically difficult to distinguish, and therefore morphotype groups were established for the more precise discrimination of phylogenetically early and late faunas.
Therefore, the GSSP position has to be re-located at La Serre (Kaiser 2009), or the D-C boundary needs to be re-evaluated, either by new biostratigraphic indicators (Aretz 2013; Corradini et al. 2016) or by a combined biostratigraphic and sedimentological set of criteria (e.g. Becker et al. 2016). Another problem is caused by palaeogeographic sampling bias: the majority of investigated sections are from epicontinental basins and continental margins, whereas detailed descriptions of shallow-water successions are limited. In this report, we use the revised conodont biozonation published by Spalletta et al. (2017) up to the first occurrence of Siphonodella praesulcata; thereafter we apply the conodont biozonation published by Kaiser et al. (2009), i.e. the praesulcata Zone (old Lower praesulcata Zone), the ckI (extinction-based costatus-kockeli Interregnum), the kockeli Zone (old Upper praesulcata Zone) and the sulcata/kuehni Zone (old sulcata Zone), as summarised below. This is because Spalletta et al. (2017) deleted the praesulcata Zone, and their ultimus Zone includes both the praesulcata Zone and the ckI. Thus, the conodont biozone ranges of Kaiser et al. (2009) and Spalletta et al. (2017) are not congruent.
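To keep the two zonation schemes straight, the correspondence used in this report can be summarised as a small lookup table; a minimal sketch in Python, using only the zone names given above (a reading aid, not an authoritative correlation chart):

```python
# Zonation applied in this report across the D-C transition:
# Kaiser et al. (2009) zone -> equivalent in the old zonation, as stated above.
ZONES_KAISER_2009 = {
    "praesulcata Zone": "old Lower praesulcata Zone",
    "ckI": "extinction-based costatus-kockeli Interregnum (no old name given)",
    "kockeli Zone": "old Upper praesulcata Zone",
    "sulcata/kuehni Zone": "old sulcata Zone",
}

# Spalletta et al. (2017) delete the praesulcata Zone; their ultimus Zone spans
# both the praesulcata Zone and the ckI, so the two schemes are not congruent.
ZONES_SPALLETTA_2017 = {"ultimus Zone": ["praesulcata Zone", "ckI"]}
```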
In this report, we present the state of the art concerning the Devonian-Carboniferous boundary (DCB) in Iran. We do not, however, describe all sections in Iran exhibiting Devonian and Carboniferous rocks across the D-C boundary, because of large hiatuses close to or around the D-C boundary and/or limited stratigraphic coverage; in those cases, we refer to the relevant references. Most D-C sections in Iran are composed of shallow-water sediments, but some have the potential to constrain the D-C boundary in palaeoenvironmental settings for which information is currently lacking. Herein, we summarise the published data of Bahrami et al. (2011) and provide new data on sections under study (Mighan, Chelcheli and Khoshyeilagh) in shallow-water facies.
Geological background
During the Palaeozoic, Iran was located at the northern margin of Gondwana (Berberian and King 1981; Scotese 2001). A small area in the north-east, the Kope-Dagh, was part of Laurussia (Berberian and King 1981). According to Golonka et al. (1994), most of Iran in the mid-Palaeozoic was located about 20-25° south of the palaeoequator. Upper Devonian and Carboniferous rocks are widespread in Iran but often belong to different structural units, and correlation is therefore difficult. Iran can be subdivided into several structural units, some of which are separated by suture zones (Alavi 1991; Davoudzadeh 1997; Stöcklin 1968) (Fig. 1).
A number of sections across the D-C boundary in Iran were studied in the last decades, providing highly variable information on the quality and quantity of different fossil groups; only limited information on geochemical proxies and sedimentology/facies is available. The sections summarised herein concentrate on the successions across the D-C boundary, even if some sections have a much larger stratigraphic range, as shown herein. The sections described are already published and/or under study, and some can be used to determine the D-C boundary based on Siphonodella sulcata. However, much biostratigraphic work is still necessary to obtain a well-constrained DCB level and, thus, a sound overview of the Hangenberg Crisis in Iran. Our publication does not consider short reports and/or comments providing less detailed information on the D-C boundary, nor some unpublished master's theses and reports which were only partly accessible. We summarise sections from Central Iran (Shotori Range) and the Alborz Mountains, and we provide an improved correlation chart of the most important D-C sections in Iran.
Shotori Range
The Shotori Range belongs to the east-Central Iran microplate, which was situated approximately 33° S of the palaeoequator in the Carboniferous (Golonka et al. 1994). The area was part of the western Palaeo-Tethys, covered by a large shelf sea, and sediments were mainly formed in shallow neritic palaeoenvironments (e.g. Wendt et al. 2002, 2005). Several D-C localities were studied in the Shotori Range in the last decades with a special focus on different fossil groups (e.g. Ashouri 1990, 1997a, b, 1998, 2002, 2004, 2006; Ghavidel-Syooki and Moussavi 1996; Ashouri and Yamini 2006; Yazdi 1999; Yazdi and Turner 2000; Wendt et al. 2005; Bahrami et al. 2011). Due to the overall shallow-water palaeoenvironments of sections in the Shotori Range, a remarkable stratigraphical gap of variable extent around the D-C boundary was proposed by several authors (Ashouri 1995, 1997b; Yazdi 1999; Wendt et al. 2005). Based on more detailed sampling, Bahrami et al. (2011) described three sections in the southern Shotori Range (Ghale-Kalaghu section, base of the section 33°20′ 40.86″ N, 57°20′ 09.72″ E; Howz-e-Dorah 1 section, base of the section 33°22′ 21.07″ N, 57°20′ 22.85″ E; Howz-e-Dorah 2 section, base of the section 33°22′ 16.67″ N, 57°20′ 23.0″ E) with a focus on conodont stratigraphy. The authors describe conodont assemblages from each section (Fig. 2a-c), but the paper lacks a detailed sedimentological/facies description and information on other fauna. Furthermore, no information is provided on geochemical proxies.
The investigated sections of the Shotori Range exhibit a continuous sedimentological record except for a small hiatus close to the D-C boundary, which covers the upper part of the praesulcata Zone and, most probably, the lower part of the sulcata Zone (Bahrami et al. 2011). However, due to the differing terminologies concerning the praesulcata (old Lower praesulcata) and kockeli (old Upper praesulcata) zones, the stratigraphic range of this hiatus is currently not well constrained.
The stratigraphical gaps are consistent with the sedimentological record. Around the D-C boundary, oolites and gypsiferous shales occur. Due to the overall shallow-water palaeoenvironments, the conodont record is not abundant, and the faunas of the three sections are mainly composed of shallow-water genera (Bahrami et al. 2011), whereas pelagic index species are scarce (e.g. only one Siphonodella sulcata occurs in each section). However, previous biofacies models and concepts are currently under reconsideration (see discussions in Kaiser et al. 2017), and the occurrence of Siphonodella, previously regarded as a pelagic form, and of Protognathodus, previously considered a shallow-water form, is more likely due to biotic opportunism than a reliable sea-level indicator. In the study by Bahrami et al. (2011), the authors used species of Polygnathus and Pseudopolygnathus to identify the zonal boundaries. Macrofauna such as brachiopods, corals and ostracods occur in all sections but have not been studied systematically. The number of ostracod specimens is high, but in comparison to other shallow-water sections of late Famennian age from Mongolia (Nazik et al. in press), the ostracod fauna from Iran is less diverse.
Vertebrates were investigated in the Howz-e-Dorah and Kale Sardar sections of the Shotori Range by Yazdi and Turner (2000); this work helped to improve the understanding of linkages between western and eastern Gondwana but was not useful for detailed biostratigraphy.
In Eastern Iran, the Nias section (33°39.512′ N, 57°08.568′ E) (Yazdi 1999; Wendt et al. 1997, 2005) seems to represent a relatively undisturbed succession probably including the D-C boundary. Frasnian and Famennian sediments, including the Annulata Event, are reported by several authors from this section (Feist et al. 1999; Morzadec 2002; Becker et al. 2004). Wendt et al. (2005) determined conodonts from the last carbonate layer of this section (the conodonts were not figured) which may indicate a middle Tournaisian age, but the D-C boundary has not yet been constrained and further investigations are necessary. More sections have been studied, such as a section near Tabas (section A, 2 km NW of Tang-e-Abbassi; see Yazdi 1999), but there the D-C boundary interval is characterised by a sedimentological hiatus/erosional surface. Thus, in the Shotori Range, the sections published by Bahrami et al. (2011) are the best D-C sections known so far, even if hiatuses may occur. A more detailed stratigraphic framework is therefore necessary, as well as a more detailed sedimentological record, a systematic record of other fossil groups, and the study of geochemical proxies, in order to better understand the trigger mechanisms of the ecological collapse of the Hangenberg Crisis in shallow-water palaeoenvironments.
Alborz Mountains
The Alborz Mountains range is 60-130 km wide and extends from NW Iran (East Azerbaijan Province) along the Caspian Depression into the Kope-Dagh and northwestern Afghanistan. In the Kope-Dagh, which was part of Laurussia (Berberian and King 1981), Palaeozoic rocks occur in remote settings close to the Turkmen-Afghan border, and two sections were described by Wendt et al. (2005). The sedimentological record differs considerably from that of the other sections in Iran, and whether the successions contain Carboniferous rocks remains questionable.
Mid-Palaeozoic rocks in the central part of the Alborz Mountains have been known for a long time (e.g. Wendt et al. 2005, cum lit.). They are characterised by platform-type sediments such as limestones, dolostones, sandstones and shales. Some sections exhibit more hemipelagic successions, but hiatuses and unconformities also clearly occur in the Palaeozoic rocks of the central Alborz Mountains. Devonian and Carboniferous deposits in the eastern Alborz are much thicker and more fossiliferous than those of the central and western Alborz and have thus become of special interest for the study of D-C boundary sections (e.g. Weddige 1984; Wendt et al. 2005; Habibi et al. 2008; Mohammadi 2009). Whereas the Upper Devonian Jeirud Formation (Assereto 1963) is generally composed of intratidal to supratidal sediments, the Lower Carboniferous Mobarak Formation (Stepanov 1971) exhibits the most extensive carbonate cycle along the northern margin of Gondwana (Torsvik and Cocks 2004; Wendt et al. 2005).
The sedimentological record of the type section of the Jeirud Formation (e.g. Gaetani 1965; Djafarian 1973; Wendt et al. 2005) is not useful to define the D-C boundary, as the top of the Upper Famennian is covered by a 300-m-thick basalt flow (Sartenaer 1964; Gaetani 1965).
(Fig. 2 caption fragment: Howz-e-Dorah 1 section (Shotori Range), after Bahrami et al. 2011.)
The type section of the Mobarak Formation investigated by Assereto (1963) requires much more detailed sedimentological as well as stratigraphical studies. Furthermore, the underlying Jeirud Formation is strongly faulted.
In the central Alborz Mountains, the Shahmirzad section (base of the section 35°47.29′ N, 53°18.84′ E; Fig. 3) is the best-studied section in the region. The several-hundred-meter-thick Jeirud Formation, which lies unconformably upon the upper part of the Mila Formation (Ordovician) (Peng et al. 1999; Wendt et al. 2005), is mainly composed of greenish and reddish shales, quartzites, reddish sandstones, dolomitic and conglomeratic sandstones and sandy limestones, which point to a fluvial-deltaic palaeoenvironment with intervals of fully marine sediments (Ueno et al. 1997). Based on rare fossils, this formation has a stratigraphic range from the Mid-Devonian to the Lower Carboniferous (e.g. Gaetani 1965; Kimyai 1972; Ghavidel-Syooki 1995). The Jeirud Formation is continuously overlain by the Mobarak Formation, which is represented by fossiliferous limestones, black shales and marly limestones, suggesting a subtidal palaeoenvironment. The Carboniferous part of the Shahmirzad section contains a number of different fossil groups such as crinoids, brachiopods, bryozoans and gastropods (Ueno et al. 1997; Webster et al. 2007). The most detailed conodont-based stratigraphy was published by Habibi et al. (2008) (see Fig. 3).
(Fig. 2 caption fragment: Howz-e-Dorah 2 section (Shotori Range), after Bahrami et al. 2011.)
Due to the palaeoenvironmental setting, the conodont fauna is quite scarce and not well preserved. Nevertheless, these authors discriminated six conodont zones and placed the D-C boundary at the first occurrence of Siphonodella sulcata in sample 16 (Fig. 3). The disadvantage of this section lies in the very shallow-water sediments of the Jeirud Formation. Thus, only a very limited number of conodonts were found in the uppermost part of the succession (uppermost Famennian), and most conodonts were described from the overlying Carboniferous Mobarak Formation. On the other hand, the occurrence of a rich acritarch association together with the rare conodont findings allows the D-C boundary to be placed below sample number 16 (Habibi et al. 2008). The spore Retispora lepidophyta was found in sample P2 of this section. According to Streel and Loboziak (1996, p. 582), Retispora lepidophyta has its first occurrence within the Late expansa Zone (ultimus Zone of Spalletta et al. 2017) and became extinct just below the D-C boundary, while the other taxa have longer ranges. The first occurrence (FO) of Protognathodus kockeli in this section cannot be used for conodont zonation and cannot be applied to determine the kockeli Zone, as this species was first found after the study of Habibi et al. (2008), and much higher in the section (sample 30, L. typicus-anchoralis-latus interval). Therefore, the new zonation proposed by Spalletta et al. (2017) is not helpful here, since the DCB interval would remain undivided by conodonts. Since Protognathodus kockeli is not known from the typicus-anchoralis-latus Zone but, according to the current state of knowledge, became extinct already in the crenulata Zone, a re-evaluation of ranges in different environmental settings is necessary. Confusion with the homeomorphic Gnathodus fauna, which occurs in the typicus-anchoralis-latus Zone, could also be one possible explanation (Kaiser and Hubmann in prep.).
In the light of ongoing discussions, three sections in the Alborz Mountains are under study by an Iranian/German working group. These sections have the potential to increase knowledge of the Late Devonian biodiversity crises in different neritic palaeoenvironments, and of the Hangenberg Biocrisis in particular. The state of knowledge at the time of writing is given below for the three sections.
- Unit A (34.32 m thick) starts with a 3-m alternation of grey to dark medium-bedded limestones and shales with trilobites, brachiopods, corals, gastropods and crinoids. It is followed by a 10.20-m-thick succession of green to grey marls with rare fossils. The marls are overlain by an alternation (5.32 m thick) of grey medium-bedded limestones and marls with the same fauna as at the base of unit A. The topmost part of unit A is composed of green to grey marls with a thickness of 18.80 m.
- Unit B has a thickness of 30.4 m and is characterised by more calcareous sediments compared to the previous unit. The medium-bedded limestones exhibit a nodular fabric and are occasionally bioturbated.
- Unit C, which is 3 m thick, contains grey medium-bedded marly limestones. Rare macrofossils such as trilobites occur. This unit contains the conodont Siphonodella praesulcata and is conformably overlain by grey shales.
- Unit D is composed of mainly grey shales with rare brachiopods and trilobites and has a thickness of 1.25 m. This succession is most probably the equivalent of the HBS.
- Unit E (2.5 m thick) contains mainly white cross-bedded sandstones (reddish sandstones also occur in distinct layers) with some shell fragments and crinoids, representing a shallow-neritic palaeoenvironment. The sandstones of unit E can be considered an equivalent of the HSS.
- Unit F has a thickness of 16.53 m and starts with 30-cm-thick grey medium-bedded limestones which yielded Siphonodella sulcata. This Lower Carboniferous limestone is overlain by 5.23 m of green and dark grey shales, followed upwards by grey medium-bedded shaly limestones (11 m thick). In the uppermost part of this unit, an increase of bioclasts such as brachiopods, gastropods, ostracods, crinoid stems and corals occurs. The general sedimentological change from sandstones in unit E to carbonates in unit F indicates more distal palaeoenvironments.
The investigated part of the Mighan section ranges stratigraphically from the Bispathodus aculeatus aculeatus conodont Zone (Spalletta et al. 2017) to the Siphonodella sulcata/Protognathodus kuehni conodont Zone (Fig. 5, Table 1). Herein, we present an overview of the conodonts found in the section; more details will be published by Parvizi et al. (in prep.) as part of Parvizi's PhD thesis. As this research is work in progress, data on other fauna, a detailed sedimentology/facies description and geochemical data will be published later.
Unfortunately, most conodont samples of the Mighan section were barren, and only 28 samples out of about 45 yielded conodont elements. The abundance is quite low, with only a few elements/kg, except for sample M26, which yielded 41 elements/kg (Table 1); the preservation of the conodont elements is poor, since many specimens are broken and incomplete. We discriminated 23 species and subspecies belonging to 4 genera (Bispathodus, Polygnathus, Pseudopolygnathus, Siphonodella). The genus Bispathodus is by far the most abundant, representing 25.7% of the entire fauna. Due to the facies setting (most conodonts are broken), 62% of the conodont elements could not be assigned.
A disadvantage of the conodont samples of this section is the lack of significant conodonts such as Protognathodus kockeli and P. kuehni; the former was used as a marker species in the new conodont biozonation (Spalletta et al. 2017), and P. kuehni can be considered a reliable index fossil for the sulcata Zone (see Kaiser et al. 2019, cum lit.). However, we found two specimens of Siphonodella praesulcata (samples M32 and M36) and one specimen of Siphonodella sulcata (sample M36). Thus, it is possible to define the D-C boundary in this section based on the presently valid biostratigraphic criteria. Whether a combined biostratigraphic, sedimentological and future geochemical set of criteria confirms the position of the boundary is the subject of current research. Preliminary sedimentological criteria support the suggested position of the D-C boundary in the Mighan section as shown in Fig. 5.
The Chelcheli section (base of the section 36°36′ 15.54″ N, 54°32′ 55.57″ E) is likewise characterised by shallow-water facies. In distinct layers, limestones and shales exhibit a diverse fauna composed of corals, bryozoans, vertebrate remains, brachiopods and gastropods, among others. Work in progress concerns a detailed description of the fauna, the sedimentology and the study of geochemical proxies. Herein, we present a preliminary record of the conodont occurrences of this section (Fig. 6). It is noteworthy that specimens of Protognathodus are not completely absent and clearly co-occur with the early but rare siphonodellids. However, the Protognathodus and Siphonodella records from Iran have to be confirmed by more detailed taxonomic studies. In this respect, the specimens of Protognathodus are represented by morphotypes which can be regarded as atypical (Ghale-Kalaghu and Howz-e-Dorah sections (Bahrami et al. 2011), Plate 4, Figs. 14-17, and unpublished faunal record from Chelcheli) due to their affinity to the homeomorphic Gnathodus faunas.
The D-C transition of the Khoshyeilagh section (base of the section 36°55′ 11.03″ N, 55°26′ 53.95″ E) is also part of our joint research project. The entire section has a thickness of about 1300 m and was first described by Bozorgnia (1973). Several workers have studied this section, focussing on different fossil groups (e.g. Brice et al. 1974, 1978; Ahmadzadeh Heravi 1971; Blieck et al. 1980; Hamdi and Janvier 1981; Weddige 1984; Ghods 1982; Morzadec 2002; Wendt et al. 2005). Sediments around the 1.5-m-thick D-C transition are composed of an alternation of grey to black shales with hematitic nodules and siltstones with small brachiopods (? equivalent to the HBS) which are conformably overlain by sandstones and shales (? equivalent to the HSS). As conodont determination and microfacies analysis were not yet finished at the time of writing, we present a preliminary overview of our study (Fig. 7). The Jaban section (35°39′ 34.57″ N, 52°15′ 3.36″ E) in the central Alborz Mountains was described recently by Sardar Abadi et al. (2015). Lower Carboniferous sediments (Mobarak Formation) conformably overlie siliciclastic Late Devonian successions (Geirud Formation), but the authors of that report focused on Lower Carboniferous rocks, and older sediments were not described. Thus, there is no detailed information on whether this section contains rocks representing the Hangenberg Biocrisis. Moreover, the early Tournaisian interval is characterised by a hiatus, so that the D-C boundary cannot be determined (Sardar Abadi et al. 2015). Several other fossiliferous sections in the eastern Alborz were repeatedly investigated in the last decades (e.g. Blieck et al. 1980; Weddige 1984; Ashouri 1990; Wendt et al. 2005), such as the Deh Molla section (36°38′ 38.2″ N, 54°56′ 55.8″ E) (see Wendt et al. 2005), but this section exhibits a considerable reduction in thickness and an incomplete succession and is thus not useful for studying the D-C transition. Another section in the central Alborz was studied by Falahatgar et al. (2015), focusing on Tournaisian foraminifers. The Tournaisian there seems to be complete and continuous, which allows the discrimination of the MFZ1 to MFZ8 biozones, but the disadvantage is that the base of the Kahanag section is only characterised lithostratigraphically (Falahatgar et al. 2015).
Central Iran
One of the most fossiliferous D-C sections ranging from the Late Devonian to the Carboniferous (Late Mississippian) in Central Iran is the Anarak 1 section (base of the section 33°11.327′ N, 53°53.655′ E) (see Reyer and Mohafez 1970; Sharkovski et al. 1984; Wendt et al. 2005), but close to the D-C boundary this section exhibits minor faults and a major gap which comprises almost the entire Famennian and the Tournaisian (Wendt et al. 2005).
Late Devonian conodonts (Siphonodella praesulcata Zone sensu Kaiser et al. (2009), Shishtu Formation) from the Dalmeh section were described by Hairapetian and Yazdi (2003). Younger sediments have not been reported, and the conodont fauna is mainly composed of shallow-water species such as Icriodus forms. The youngest sediments of this section are composed of massive oolitic limestones. Hairapetian et al. (2000) mentioned late Famennian vertebrate remains from this section, but an overview of the entire faunal assemblage is not documented. The same section was sampled by Wendt et al. (2005), but they found conodonts neither in the well-bedded, laminated limestones and shales of the Bahram Formation nor in the Lower Carboniferous dolostones (Hutk Formation). The sediments were evidently slightly metamorphosed, and thus conodont samples from the base to the top were barren. Correlation with the Kuh-e-Bashi section, which is located approximately 25 km to the southeast, is mainly based on lithology. The Lower Carboniferous rocks of this section are composed of dolostones whose age is still unclear (Wendt et al. 2005). Due to the biostratigraphic uncertainties and unsuitable facies, more detailed studies of the D-C boundary in these two sections are not worthwhile.
Another two D-C sections (the Rahdar and Bakshi sections) further to the east are exposed in the Rahdar-Gachal Anticline (Wendt et al. 2005), in the "Kashmar-Kerman Tectonic Zone" (Ramezani and Tucker 2003) west of the Kalmard Fault, exhibiting reduced Late Devonian and more complete Early Carboniferous successions. According to Wendt et al. (2005), the conodont fauna is scarce in the Devonian as well as in the Carboniferous sediments as a result of very shallow-water palaeoenvironments. Thus, in both sections, it seems unlikely that the D-C boundary can be defined.
Concluding remarks
A great number of D-C sections have been described in Iran in the last decades, improving the knowledge of stratigraphy, facies and palaeoenvironmental settings. In many cases, however, the focus of those studies was either on a specific fossil group or on stratigraphy alone. Some reports and theses are only partly accessible, and conodont assemblages (as well as other fauna) reported in various publications have not been figured and/or are no longer accessible. Thus, a more comprehensive methodological approach to promising D-C sections is necessary.
We summarise and describe the most suitable D-C sections in Iran (Fig. 8) and present the state of knowledge on re-sampled sections in the Alborz Mountains. Most sections contain small hiatuses around the D-C transition, as shown above or in the publication by Bahrami et al. (2011), as a result of the overall shallow-water facies, but some sections may contain a more or less complete succession, as is the case in the Mighan section (Parvizi et al., in prep.).
Based on the current state of knowledge, none of the D-C sections investigated so far in Iran seems to be a GSSP candidate, owing to the overall facies setting and the thus often incomplete sedimentological and biostratigraphic record. However, some sections provide a number of new results on the Hangenberg Biocrisis in shallow-water facies with respect to conodont stratigraphy, faunal assemblages, sedimentology/facies and geochemistry. As shown in this paper, some Iranian successions could have a high correlative potential with neritic successions in Europe, Morocco or China (e.g. Kaiser et al. 2004; Brice et al. 2007; Qie et al. 2015) due to the co-occurrence of biostratigraphically significant pelagic/hemipelagic (conodonts) and neritic (ostracods, brachiopods, corals, trilobites, gastropods, etc.) organisms.
Whether the sections under study (Chelcheli, Mighan and Khoshyeilagh) presented herein are characterised by small hiatuses at the D-C transition has to be proven by further detailed research. The assumption of a gap at the D-C boundary, above the regressive HSS, is based mainly on the absence of P. kockeli and P. kuehni faunas (Bahrami et al. 2011). However, it has to be considered that event-related, often highly condensed successions can result in missing occurrences, or in diachronous first occurrences of fossils, especially of the facies-dependent early siphonodellids and early protognathodids. Therefore, the delayed entry or even absence of marker fossils as a consequence of major environmental changes during the Hangenberg Biocrisis at the D-C boundary, which is widely known from many other regions, could probably also apply to the Iranian shallow-water successions. This assumption is supported by the lithofacies: at Chelcheli, for example, the carbonate sediments lie conformably above the HSS, so a sedimentological hiatus above the HSS seems unlikely. Moreover, the Mighan section also seems to represent a continuous succession around the D-C boundary.
The Protognathodus fauna in Iran is represented by Protognathodus meischneri and Protognathodus collinsoni, previously reported by Bahrami et al. (2011), while the biostratigraphically significant P. kockeli (index marker for the kockeli Zone after Kaiser et al. (2009) = Upper praesulcata Zone) and P. kuehni (index marker for the joint sulcata/kuehni Zone after Kaiser et al. (2009) = sulcata Zone) have not been reported (except in the late Tournaisian by Habibi et al. 2008). However, more high-resolution biostratigraphic studies are needed to evaluate the occurrence of the early protognathodids. The absence of marker conodonts could be related to gaps, or to facies-dependent late or rare occurrences, as explained above. Although condensation of event beds is less likely because of the overall micritic facies in the studied regions, it should also be considered, since the DCB interval is globally characterised by a carbonate crisis due to a glaciation pulse (Kaiser et al. 2008). In that case, an apparent absence of marker conodonts caused by condensation may also reflect previous sampling biases.
The praesulcata Zone, the extinction-based costatus-kockeli Interregnum (HBS and HSS) of Kaiser et al. (2009) and the sulcata Zone can be recognised in the recently investigated Iranian successions, as shown in the Mighan section (Fig. 5), while the kockeli Zone as well as the bransoni (duplicata) Zone cannot be recognised by conodonts. Since the biozonation concept of Spalletta et al. (2017) does not include the praesulcata and sulcata zones or the ckI, the Iranian successions consequently cannot be subdivided by conodonts at the D-C transition when this biozonation concept is applied.
Based on the well-known morphological complexity of marker conodonts at the D-C boundary, especially of the siphonodellids, polygnathids and Siphonodella-like siphonodellids (Becker et al. 2013), the Iranian successions can provide important new data on conodont biostratigraphy.
(Figure caption fragment: correlation chart showing old and new conodont zonations (Ziegler and Sandberg 1990; Kaiser et al. 2009; Spalletta et al. 2017); note that work is in progress for three sections (Chelcheli, Mighan and Khoshyeilagh), so the data presented are preliminary.)
Thus, further detailed conodont studies are required, supported by other methodologies (e.g. geochemistry, facies and magnetic susceptibility, among others), to clearly define the D-C boundary in different facies. Nevertheless, research in shallow-water facies such as in Iran will help to reduce the serious palaeoenvironmental and palaeogeographic sampling bias which limits our knowledge of events in general, and particularly of one of the most severe extinction events in Earth's history. Finally, a specific problem in Iran arises from the formation names given to the described sections. Depending on the study area, different formation names are used for the same time slice even if the lithological record is similar. A much better sedimentological record combined with a more precise stratigraphic range of some formations is necessary (as shown by Bahrami et al. (2018) for the Bahram Formation) in order to provide a better correlation between different sections in different tectonic settings or areas of Iran. | 7,701.4 | 2020-09-30T00:00:00.000 | [
"Geology"
] |
Characterizing thermal-oxidation behaviors of nuclear graphite by combining O2 supply and micro surface area of graphite
The effects of different parameters on the oxidation rate are non-linear, interactive and diverse, and the adequacy of O2 supply is an important indicator of how they combine. As the adequacy of O2 supply decreases (e.g. with increasing temperature or decreasing gas flow rate), the influence of microstructure on the oxidation rate becomes stronger, worsening the linearity of the fits used to calculate the activation energy with the present method. Here, we propose a method to characterize the thermal-oxidation behaviors of nuclear graphite by combining the O2 supply and the micro surface area of graphite. The proposed method improved the linearity and reduced the standard error of the Arrhenius plots of oxidized graphite IG-110 (10 L/min reactant gas) and ET-10 (0.2 L/min reactant gas). The activation energy of graphite IG-110 oxidized under the ASTM D7542 condition is calculated as 220 kJ/mol by this method, echoing the results of previous studies with sufficient O2 supply. For conditions with less O2 supply, at low gas flow rate and/or high temperature, the change of microstructure of the oxidized graphite should be measured, as it is an important factor influencing the oxidation rate of graphite.
Nuclear-grade graphite, because of its radiation resistance and excellent mechanical properties, is widely used in High Temperature Gas-cooled Reactors (HTGRs) 1,2 and molten salt reactors 3,4 as the material of structures, moderators, reflectors and fuel elements. In addition, graphite is also applied in electronics, chemical engineering and other fields. When an HTGR runs under normal operation, impurities are introduced by graphite degassing and by small leakages of H2O from the secondary side to the primary side through the heat-exchanger tubes of the steam generator 5, which inevitably corrode the graphite components at temperatures higher than 400 °C 6. In addition, the oxidation/corrosion of the graphite will be accelerated during an air or water ingress accident 7,8. It has been found that the mechanical and thermal properties of graphite deteriorate with oxidation/corrosion, shortening the lifetime of the graphite components 9-11. In addition, the reaction of air and graphite during an air ingress accident can cause a temperature increase by heat generation and an accumulation of explosive CO gas in the reactor 12. Despite the above negative effects, thermal oxidation can serve as an option to treat large-scale nuclear graphite waste 13.
Consequently, the oxidation/corrosion of graphite is a crucial issue in assessing the economy and safety of an HTGR, and among the corrosion processes O2 oxidation is usually the fastest reaction. There are two main reactions of concern related to O2 oxidation of graphite at temperatures below 900 °C 14:

C + O2 → CO2 and 2C + O2 → 2CO (1)
2CO + O2 → 2CO2 (2)

Reaction (1) is considered the intrinsic oxidation reaction between O2 and graphite. Reaction (2) (CO combustion) may influence reaction (1) by varying the O2 supply and the energy balance of reaction (1). Studies on nuclear graphite have been widely carried out to investigate the oxidation behaviors of various graphites. They mainly fall into two categories according to their purposes and reactant gas flow rates: high gas flow rates, based on common practice in materials engineering, and low gas flow rates for reactor accident conditions usually driven by natural convection. The studies with high gas flow rates usually used oxidation conditions close to ASTM D7542 15 (originally approved in 2009), with relatively sufficient O2 supply (e.g. 10 L/min air flow) and a cylindrical specimen (e.g. D = H = 25.4 mm). By contrast, the studies with low gas flow rates usually used diverse conditions (gas flow rate, O2 concentration and specimen geometry) with relatively insufficient O2 supply, according to the accident analyses for different reactors. Fundamentally, the graphite oxidation rate relates to temperature, the difficulty of oxidation (activation energy) and the reactant supply, including O2 concentration, gas flow rate, and the microstructure and geometry of the graphite. The effects of these parameters on the oxidation rate are non-linear, interactive and diverse. Previous studies mainly focused on the relations between oxidation rate and temperature or O2 concentration, discussing the activation energy or the reaction order of graphite oxidation, respectively. The influences of gas flow rate and graphite microstructure on the oxidation rate are usually ignored.
For oxidation behaviors at high gas flow rates, the microstructure of graphite, such as its surface area, was reported to be nearly constant within a certain range of Mass Loss (ML), independent of temperature and O2 supply 16,17. Accordingly, the ASTM D7542 standard recommends calculating the activation energy of graphite oxidation using the average oxidation rates from 5% to 10% ML of the specimen. Contescu et al. indicated that the adequacy of O2 supply could be the indicator of whether ASTM D7542 is applicable 18: the adequacy of O2 supply, defined as the ratio of O2 supplied to O2 consumed, should be around 10 or higher to avoid departure of the oxidation mechanism from the chemical kinetic regime, as illustrated below.
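As a concrete illustration of this criterion, a minimal sketch in Python; all numbers, including the molar-volume conversion and the assumed carbon-loss rate, are illustrative assumptions and not values from the studies cited:

```python
# Minimal sketch of the adequacy-of-O2-supply criterion (ratio of O2 supplied
# to O2 consumed should be ~10 or higher). All numbers are illustrative.

AIR_FLOW_L_PER_MIN = 10.0     # ASTM D7542-style air flow
O2_FRACTION_IN_AIR = 0.21
MOLAR_VOLUME_L = 24.0         # L/mol, ideal gas near room temperature (approx.)

o2_supply_mol_per_min = AIR_FLOW_L_PER_MIN * O2_FRACTION_IN_AIR / MOLAR_VOLUME_L

# Hypothetical consumption: a specimen losing 0.10 g of carbon per minute,
# assuming one mole of O2 consumed per mole of carbon (C + O2 -> CO2;
# CO production would lower this figure).
carbon_loss_g_per_min = 0.10
o2_consumed_mol_per_min = carbon_loss_g_per_min / 12.011

adequacy = o2_supply_mol_per_min / o2_consumed_mol_per_min
print(f"adequacy of O2 supply ~ {adequacy:.1f}")  # ~10.5 with these numbers;
# >=10 suggests the oxidation stays in the chemical kinetic regime (ref. 18).
```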
However, several studies 12,19-23 on graphite IG-110 obtained quite different values of the activation energy, even though their conditions were close to those recommended by ASTM D7542. One study indicated that the conditions recommended by ASTM D7542 cannot guarantee sufficient O2 supply for the oxidation of graphite IG-110, and that the increased insufficiency of O2 supply resulted in decreased values of the activation energy 24. In addition, a recent study found non-linearity of the average reaction rate (from 5% to 10% ML) with increasing air flow rate 25, implying that other factors, such as microstructure, may play a non-negligible role in graphite oxidation.
On the other hand, at low gas flow rates, studies considering actual reactor accident conditions usually concern the oxidation behaviors of specimens of diverse geometries 11,24,26-30. Some of these studies characterized the oxidation behaviors according to the oxidation rate at a fixed oxidation time 11,24,26,27 rather than at a fixed range of ML, since accident analyses are mainly concerned with the situation at given times after an air ingress accident. All of the above studies with much lower O2 supply (e.g. 0.2 L/min gas flow) usually obtained considerably lower values of the activation energy than studies under conditions close to ASTM D7542 (10 L/min gas flow) for the same graphite.
Based on the above phenomena, we conclude that the adequacy of O2 supply may act as an indicator to characterize the oxidation behaviors. When the adequacy of O2 supply decreases, due to increasing temperature, decreasing gas flow rate, etc., the influence of microstructure on the oxidation rate becomes stronger, worsening the linearity of the fits used to calculate the activation energy with the present method. For a relatively high gas flow rate (around 10 L/min), the oxidation behaviors should be discussed in more detail in terms of the type of graphite and the actual O2 supply. Since the conditions recommended by ASTM D7542 were determined from experiments on graphites NBG-10, PGXW and H4650 31, the sufficiency of O2 supply may be compromised by the increased O2 consumption of some other graphites, such as IG-110. For a low gas flow rate (e.g. 0.2 L/min), relevant to accident conditions, the O2 supply quickly becomes insufficient as temperature increases, and therefore the linearity of the fitting calculation deteriorates markedly 24,26.
All of the above calls for a more adaptive method to characterize the oxidation behaviors of nuclear graphite over a wide range of reactant supply (gas flow rates of 10 L/min and 0.2 L/min, cylindrical and oblate rectangular specimens, etc.). After re-examining the relations among the various factors determining the oxidation rate, we propose a method to characterize the oxidation behaviors in terms of O2 concentration, gas flow rate and the surface area of the open pores of graphite. Two typical scenarios were used to validate the proposed method. The first is the oxidation of graphite IG-110 at a high gas flow rate (10 L/min), under conditions following ASTM D7542. The second is the oxidation of graphite ET-10 at a low gas flow rate (0.2 L/min), where oblate rectangular specimens were oxidized. For the high gas flow rate (10 L/min), we took the relevant data, including surface areas, from a previous study 32. The calculated activation energy of graphite IG-110, 220 kJ/mol, echoed the results of previous studies with more adequate O2 supply, 218 kJ/mol 12 and 222 kJ/mol 19, compared with 201 kJ/mol 21 or 205 kJ/mol 32 under the experimental conditions recommended by ASTM D7542. For the low gas flow rate (0.2 L/min), nuclear graphite ET-10, produced by IBIDEN Co. Ltd, was oxidized by a 0.2 L/min mixture gas (helium and O2, 10% or 20% O2 mole fraction) at 650-850 °C. The oxidation facility mainly consists of a gas chromatograph and a tube furnace, originally designed to provide basic oxidation data of graphite under HTGR accident conditions 24,26. A mercury porosimeter measured the microstructure of the pristine and oxidized specimens. The higher linearity and smaller standard errors of the Arrhenius plots also indicate the applicability and rationality of the proposed method. Future work is discussed concerning a wider range of gas flow rates and O2 concentrations and more types of graphite.
Results
Results of oxidized graphite at a relatively high gas flow rate. The Oxidation Rate (OR) and related pore area of graphite IG-110 were obtained from the previous work of Wang et al. 32. The specimens were oxidized under the experimental conditions (10 L/min air flow) recommended by ASTM D7542, and their pore areas were obtained by optical microscopy examination. The Arrhenius plots, i.e. the temperature dependence of OR, are shown in Fig. 1(a). The Arrhenius plot labeled ln(OR_10%) uses the average oxidation rates from 5% ML to 10% ML for calculating the activation energy according to ASTM D7542; the plot labeled ln(OR/(CνSA_O)) uses the revised rates combining the O2 supply and the surface area of open pores. The comparison shows that the experimental conditions of ASTM D7542 need adjustment to provide enough O2 for the oxidation of graphite IG-110 24.
The surface area of open pores correlates positively with the reaction area between graphite and O2, and the volume of open pores correlates positively with the O2 supply and the reaction volume of CO combustion. Graphite oxidation also correlates positively with CO combustion, since graphite oxidation is the source of the CO. A previous study proposed a method to distinguish the O2 consumed by the oxidation reaction from that consumed by CO combustion 24; a simplified stand-in is sketched below. Here, Fig. 2(a) shows the relations between various factors, such as the microstructure of graphite IG-110, the O2 supply (O2 in the exhaust (C(E)) and the ratio of O2 consumed by CO combustion (C(2)) to that consumed by the oxidation reaction (C(1))) and the OR. We include the average ORs from 5% ML to 10% ML and the revised ORs combining the O2 supply and the surface areas of open pores. The ln(OR/(CνSA_O)) plot had slightly higher linearity than the ln(OR_10%) plot. Since O2 was redundant (C(E) was high) at most temperatures (except at 750 °C), the ORs were mainly determined by the temperature effect.
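The partition method of ref. 24 is not reproduced here; as a simplified stand-in, one can assume that reaction (1) releases all carbon as CO at the graphite surface (2C + O2 → 2CO) and that reaction (2) burns part of that CO to CO2 in the gas phase, which gives a rough split of the consumed O2 from the exhaust mole fractions:

```python
# Simplified stand-in (not the method of ref. 24): assume all carbon first
# leaves the graphite as CO via 2C + O2 -> 2CO (reaction 1), and that the CO2
# in the exhaust comes from 2CO + O2 -> 2CO2 (reaction 2).

def o2_partition(f_co: float, f_co2: float) -> tuple[float, float]:
    """Return (C1, C2): O2 (in exhaust mole-fraction units) consumed by the
    oxidation reaction and by CO combustion, respectively."""
    c1 = (f_co + f_co2) / 2.0   # reaction 1: one O2 per two carbons released
    c2 = f_co2 / 2.0            # reaction 2: one O2 per two CO molecules burned
    return c1, c2

c1, c2 = o2_partition(f_co=0.002, f_co2=0.010)  # illustrative exhaust fractions
print(f"C(2)/C(1) = {c2 / c1:.2f}")
```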
Results of oxidized graphite at a low gas flow rate. The nuclear graphite ET-10 was oxidized by a 0.2 L/min mixture gas (O2 and helium), simulating the air ingress accident of an HTGR. The mole fraction of O2 was 10 mol% or 20 mol%. Table 1 includes the calculation results for graphite ET-10. The standard errors of the proposed method are clearly smaller. In addition, it yields closer pre-exponential factors and activation energies at different O2 concentrations.
Here, Fig. 2(b) shows the relations between various factors, such as the microstructure of graphite ET-10, the O2 supply and the OR. The ln(OR_90min) plot had clearly worse linearity than the ln(OR/(CνSA_O)) plot. When O2 was redundant (C(E) was high), from 700 °C to 725 °C, the rates of the oxidation reaction were mainly determined by the temperature effect. The ML rates of the specimens are shown in Fig. 3. The ML rates increased with increasing oxidation temperature. In addition, the O2 concentration had a positive effect on the oxidation rate. Figure 4 shows the microstructure of the specimens oxidized at different temperatures for 90 minutes. In general, the surface area and volume of small open pores (diameter < 30 nm) decreased with increasing temperature. The surface area and volume of middle open pores (30 nm < diameter < 3000 nm) and big open pores (diameter > 3000 nm) increased with increasing temperature. In total, the surface area of open pores of graphite ET-10 decreased while the volume increased. An exception occurs at 850 °C, where the volume of big pores was smaller than that at 800 °C, resulting in a decrease of the open-pore volume from 800 °C to 850 °C.
Discussion
The present method for characterizing O2 oxidation behaviors mainly concerns the influence of temperature on the reaction rate based on the ML (from 5% to 10%) of graphite 15,18,31. The O2 supply is predicted to be sufficient if the conditions of ASTM D7542 are strictly obeyed 18. The microstructure of graphite, such as its surface area, can then be ignored or considered constant over the same ML range (5-10%), independent of temperature and O2 supply 16,17. In this way, the ORs are almost stable, especially over the same ML range (5-10%), and the average OR can be used to calculate the activation energy. However, O2 oxidation of graphite and CO combustion interact through temperature, graphite supply (microstructure) and O2 supply (O2 concentration and reactant gas flow rate), resulting in a rapid change of the adequacy of O2 supply (the ratio of O2 supplied to O2 consumed) for some graphites. The microstructure of some graphites oxidized under the ASTM D7542 conditions, such as graphite IG-110, may be quite different even over the same ML range (5-10%, as required by ASTM D7542) when the oxidation temperature is different 32 (Fig. 5(a)). According to the experimental results of Contescu et al. 21, the increase of OR of graphite IG-110 was apparently higher than those of other graphites, such as PCEA and NBG-18, under the same ASTM D7542 conditions. This indicates that the adequacy of O2 supply (the ratio of O2 supplied to O2 consumed) for the oxidation of graphite IG-110 was obviously lower than those of graphites PCEA and NBG-18. The situation of graphite PCEA can be regarded as sufficient O2 supply, since its surface areas are almost independent of ML, temperature and O2 supply within 5-10% ML, as mentioned by El-Genk and Tournier 16,17. Of course, absolute independence is impossible for any actual graphite. By contrast, the situation of graphite IG-110 is far from sufficient O2 supply: it is only sufficient at 600 °C within 5-10% ML and nearly sufficient at 650 and 700 °C at 10% ML. Because of these complexities of the O2 adequacy of graphite IG-110, changes of the oxidation conditions close to ASTM D7542 will appreciably change the adequacy of O2 supply. The interaction between the changes of the adequacy of O2 supply and the graphite microstructure then results in variations of the behavior of the ORs with temperature, and finally in different values of the activation energy. This is the main reason why the calculated activation energies were so different 12,19-22,32 for graphite IG-110. Among them, the more adequate O2 supply increased the surface area of the oxidized graphite and the OR at high temperature, finally resulting in higher calculated values of the activation energy (218 kJ/mol 12 and 222 kJ/mol 19).
Our proposed method, namely calculating the activation energy considering the microstructure of graphite, can mediate the influences of O2 supply on the microstructure, the OR and the activation energy, and therefore it yields results close to those obtained with more adequate O2 supply. Figure 1(a) and Table 1 show the higher linearity and the lower standard error of the fits based on our method.
When observing the microstructure of graphite ET-10 oxidized with a much lower O2 supply (0.2 L/min reactant gas), the micro surface areas of the oxidized graphite were usually smaller than that of the pristine graphite (Fig. 5(b)). In general, the surface area of open pores decreased with increasing ML and temperature. The Arrhenius plot combining the surface area of open pores and the O2 supply was improved, with a smaller standard error (Table 1) and higher linearity (Fig. 5(b)). In addition, the calculation results became more reasonable, with closer pre-exponential factors and activation energies at different O2 concentrations.
Our proposed method is applicable not only to high gas flows (10 L/min) but also to relatively low gas flows (e.g. 0.2 L/min), such as in an air ingress accident. It can easily be applied by obtaining the OR at the end of the experiment and measuring the microstructure of the oxidized graphite. At present, a mercury porosimeter is recommended for measuring the microstructure, since it covers a suitable range of open-pore diameters (from 3 nm to 400,000 nm). The experimental facilities used in other studies can easily be adapted to our proposed method. The duration of the oxidation experiment can be independent of the ML of the graphite, but it should be longer than 40 minutes (60 minutes is better) to avoid the beginning stage of graphite oxidation, during which the rate may change rapidly.
For characterizing graphite oxidation at high gas flow rates, the experimental conditions recommended by ASTM D7542, such as the air flow rate and/or the geometry of the specimen, should be adjusted to provide more adequate O2 for oxidizing some popular graphites, such as IG-110, especially at relatively high temperatures. In this way, the activation energy can be calculated from ML, OR and temperature, because the influence of the graphite microstructure is predicted to be small.
For characterizing graphite oxidation at low gas flow rates, the experiment and the calculation method should take into account both the O2 supply and the microstructure of the oxidized graphite. The microstructure of the graphite, especially the surface area of open pores, should be reported together with the activation energy.
The activation energy of nuclear graphite ET-10 obtained in our study, 357 kJ/mol (20 mol% O2) or 396 kJ/mol (10 mol% O2), is much higher than those of other graphites, usually around 200 kJ/mol. Although comparing them is not our purpose, some explanation may be needed. One reason is that nuclear graphite ET-10 for HTGRs is a newly developed graphite which is not yet produced at a commercial scale. The main properties and main impurities of graphites ET-10 and IG-110 are shown in Tables 2 and 3, respectively. The impurity content of the obtained specimens of graphite ET-10 is expected to be lower than the values provided by the manufacturer (Table 3), which are predicted to be the upper limits for future commercial-scale products. Metallic impurities such as V, K, Fe, Ca, Al and Mg usually have a catalytic effect on graphite oxidation by reducing the activation energy 33. The contents of K and V in graphite ET-10 are apparently lower than those in graphite IG-110. Among the main metallic impurities in Table 3, K and V are the strongest accelerators of the OR of graphite 34. The second reason may lie in the surface of the test specimens. The SEM pictures of the surfaces of pristine graphites IG-110 and ET-10 (Fig. 6) show fewer powder particles and defects on specimen ET-10. The test specimens of nuclear graphite ET-10 were provided piece by piece by the manufacturer. By contrast, other graphites were usually provided as a large block by the manufacturer and then machined into small specimens by the experimenters. Powder or defects may accelerate the oxidation of graphite, especially at relatively low temperatures. (Table 3 caption: Main impurities of graphites ET-10 and IG-110; "-" = not detected.) A recent study indicated that the ignition temperature of a specimen of graphite IG-110 machined by the experimenters is around 400 °C with a 0.2 L/min reactant gas flow, while that of graphite ET-10 machined by the manufacturer is around 700 °C under the same oxidation conditions 24. The third reason lies in the calculation method: our proposed method usually yields higher values of the activation energy because of the decreased surface area of oxidized graphite with increasing temperature. The fourth reason is the different temperature ranges, 700-800 °C for graphite ET-10 and 600-750 °C for graphite IG-110; the activation energy sometimes depends on the temperature range over which the graphite is oxidized. The influences of CO combustion and of the open-pore volume on the oxidation rate increase with increasing temperature and decreasing gas flow rate. When characterizing graphite oxidation, further consideration of CO combustion and of the open-pore volume may therefore be needed, especially at relatively high temperatures and low gas flow rates. CO combustion can change not only the contents of the exhaust gas and the actual O2 supply to graphite oxidation but also the energy balance of graphite oxidation because of its relatively high reaction heat.
We are now also planning oxidation experiments on graphite ET-10 under conditions close to ASTM D7542 and oxidation experiments on graphite IG-110 at low gas flow rates. The microstructure of the oxidized graphite will be measured by a mercury porosimeter and other means. Further studies also include related experiments on other graphites with different grain sizes and porosities, such as PCEA, NBG-18 and NBG-25.
Methods
Method for calculating activation energy. Fundamentally, the reaction rate of O2 oxidation of graphite at a time point (t) relates to the oxidation temperature (T), the activation energy of graphite oxidation (E_a) and the reactant supply, including the gas flow rate (ν), the O2 concentration (C) and the graphite microstructure (MS):

OR(t, T) = f(T, E_a, ν, C, MS) (3)

According to the definition of the activation energy, we get:

OR(t, T) = f(ν(t, ML(t)), C(t, ML(t)), MS(t, T, C, ν)) · e^(−E_a/(R·T)) (4)

Usually, the surface area of open pores is the main factor of the microstructure related to graphite oxidation. If the reaction temperature and the O2 supply (gas flow rate and mole fraction of O2) are stable at ordinary conditions, then we get:

OR(t, T) = f(ASA(t)) · e^(−E_a/(R·T)) (5)

In the case of oxidation using air at the same volume flow rate, such as 10 L/min, the mole fraction of O2 is around 21% and the values of C at different temperatures in the kinetic regime are close. In addition, the values of ASA (Active Surface Area) at different temperatures in the kinetic regime were usually assumed to be determined by the ML of the graphite, independent of oxidation conditions such as temperature, O2 concentration and gas flow rate 16,17:

ASA(t) = g(ML(t)) (6)

If the experiments at different oxidation temperatures measure the average oxidation rate over the same Mass Loss Range (MLR), the average value of ASA will be nearly constant. In other words, the oxidation rate will be nearly constant during a period in which the change of ML is small. Experimental studies on some graphites (PGXW, NBG-10 and R4-650) confirmed this situation, where the oxidation rate became nearly constant when the ML was between 5% and 10% 31. In this way, we get:

OR_MLR = Z · e^(−E_a/(R·T)) (7)

where Z is a constant pre-exponential factor. Finally, the activation energy of graphite in the kinetic regime can be calculated from the slope of the Arrhenius plot based on the condition recommended by ASTM D7542 15:

ln(OR_MLR) = ln Z − E_a/(R·T) (8)
However, several studies 12,19-22 found that the activation energies of graphite IG-110 depended strongly on the O2 supply even when the experimental conditions were close to those recommended by ASTM D7542. The change of the pore areas of oxidized graphite IG-110 is determined not only by ML but also by the oxidation temperature and possibly other factors, such as the oxidant flow rate and the O2 concentration. Graphite IG-110 showed an obvious decrease of surface area with increasing oxidation temperature (600, 650, 700 and 750 °C) at the same ML (5% and 10%) 32, which differs from the predicted constancy of the surface areas 16,17.
In addition, regarding the actual situation of an HTGR, some other studies 19,26-28 had to consider the oxidation behaviors of non-standard-shaped graphite at a much lower gas flow. The rationality of the results of these studies has been questioned, since the calculation method and experimental conditions recommended by ASTM D7542 are required to be strictly obeyed 18.
Furthermore, even at the same MLR of 5-10%, some recent studies revealed complexities of the ORs when the oxidant flow rate was increased 25,26. These phenomena suggest that the influence of the flow rate on the micro surface area cannot be ignored for some graphites. In summary, the influences of gas flow rate, O2 concentration and microstructure on the OR of some graphites at different temperatures cannot be combined into a constant pre-exponential factor:

OR(t, T) = f(ν(t, ML(t)), C(t, ML(t)), MS(t, T, C, ν)) · e^(−E_a/(R·T)) ≠ Z · e^(−E_a/(R·T)) (9)
Consequently, when characterizing the kinetic parameters of graphite, we need to consider the changes of microstructure, O2 concentration and oxidant flow rate. If the contributions of the graphite microstructure, the O2 concentration and the oxidant gas flow rate are considered equally, and the surface area of open pores (SA_O) is used to represent the microstructure, then:

OR = B · C · ν · SA_O · e^(−E_a/(R·T)) (10)

where B is a pre-exponential factor. We can calculate the apparent activation energy of graphite from the linearized form of this equation:

ln(OR/(C · ν · SA_O)) = ln B − E_a/(R·T) (11)

where the units of OR, ν, C and SA_O are g/(g·s), m/s, g/m³ and m²/g, respectively, for the actual calculation.
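A minimal sketch of this fit in Python; all numerical values below are placeholders, not data from this study, and the fit is an ordinary least-squares line through ln(OR/(C·ν·SA_O)) versus 1/T:

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

# Placeholder measurements: temperatures, oxidation rates, O2 concentration,
# gas velocity and open-pore surface areas (units as stated in the text).
T = np.array([898.15, 923.15, 948.15, 973.15, 998.15])   # K
OR = np.array([7.0e-7, 2.0e-6, 6.5e-6, 2.1e-5, 6.0e-5])  # g/(g s)
C = 130.0                                                 # g/m^3
v = 0.03                                                  # m/s
SA_O = np.array([2.0, 1.9, 1.7, 1.5, 1.3])                # m^2/g

x = 1.0 / T
y = np.log(OR / (C * v * SA_O))

# Least-squares straight line; cov=True also returns the covariance matrix,
# whose diagonal gives the variances of slope and intercept.
coeffs, cov = np.polyfit(x, y, 1, cov=True)
slope, intercept = coeffs
E_a = -slope * R                    # apparent activation energy, J/mol
E_a_err = np.sqrt(cov[0, 0]) * R    # standard error of E_a, J/mol
print(f"E_a = {E_a/1000:.0f} +/- {E_a_err/1000:.0f} kJ/mol, ln B = {intercept:.2f}")
```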
Here, the OR can be expressed as:

OR = −(1/m) · (dm/dt) (12)

where m is the residual mass of the specimen. The ML rate of the graphite is calculated from the contents of CO2 and CO in the exhaust gas:

dm/dt = −ν_v · ρ_a · (M_c/M_a) · (f(CO2) + f(CO)) (13)

where t is the oxidation time in s, m is the residual mass of the specimen in g, ν_v is the volume flow rate in m³/s, f(CO2) and f(CO) are the mole fractions of CO2 and CO in the exhaust gas, ρ_a is the density of air at ordinary temperature and pressure with a value of 1293 g/m³, M_c is the atomic weight of carbon and M_a is the average molecular weight of air. For studies of graphite oxidation under reactor accident conditions, the time after the air ingress accident is usually of concern, and therefore these studies usually adopted the OR at fixed time points after the beginning of oxidation, such as 1 or 3 hours 11, 60 or 80 minutes 24,26 and 4 hours 27. Here, we also include the calculation result based on the OR at the same time point, 90 minutes after the beginning of oxidation. The activation energy can then be calculated by:

ln(OR_t/(C · ν · SA_O)) = ln B − E_a/(R·T) (14)

Test specimen and conditions. The test specimen of nuclear graphite ET-10 was provided by IBIDEN Co.
Ltd., Japan. The main properties of graphite ET-10 are shown in Table 2, and its main impurities, as provided by the manufacturer, are shown in Table 3. The dimensions of the oblate rectangular specimen are 30.0 mm × 29.5 mm × 1.95 mm. The graphite ET-10 was oxidized by 0.2 L/min of oxidant gas (O 2 at 10 or 20 mol% in helium) at 650-850 °C. The test specimen of nuclear graphite IG-110 was provided by Toyo Tanso Co. Ltd., Japan. The main properties of graphite IG-110 are also shown in Table 2, and its main impurities in Table 3. The experimental conditions 32 are the same as those recommended by ASTM D7542 15 . Because the calculation of activation energy is determined by the change of the surface area with temperature, not by its absolute values, this conversion did not change the calculated activation energy; it only made the situations more comparable, e.g. that in Fig. 1.
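A minimal sketch of the exhaust-gas mass-loss-rate calculation referenced above follows. Note that the formula, as written in the text, uses air properties (ρ a , M a ); the gas-chromatography mole fractions and the residual mass below are assumed placeholder values:

```python
# Minimal sketch: dm/dt = -nu_v * rho_a * (M_c / M_a) * (f_CO2 + f_CO),
# as written in the text above. Mole fractions are hypothetical.
nu_v = 0.2e-3 / 60.0      # volume flow rate: 0.2 L/min in m^3/s
rho_a = 1293.0            # air density in g/m^3 (i.e., 1.293 kg/m^3)
M_c, M_a = 12.011, 28.97  # g/mol: carbon atomic weight, mean air molar mass

f_CO2, f_CO = 2.4e-4, 0.6e-4  # exhaust mole fractions (assumed)

dm_dt = -nu_v * rho_a * (M_c / M_a) * (f_CO2 + f_CO)  # mass-loss rate, g/s
m = 3.2                   # residual specimen mass, g (assumed)
OR = -dm_dt / m           # oxidation rate, g/(g s)

print(f"dm/dt = {dm_dt:.3e} g/s, OR = {OR:.3e} g/(g s)")
```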
Test facility and procedure. The test facility for oxidizing the graphite ET-10 and measuring the components of the exhaust gas is the same as in the previous study 26 . O 2 and helium flowing through mass flow meters are mixed in a mixer, and the pressure of the mixed gas is reduced by a pressure reducing valve before entering a quartz reaction tube heated by three electric heaters. The quartz tube has a 40 mm diameter and a 1200 mm length. The heating area is divided into three zones heated separately by the three electric heaters, each monitored by a Pt-Rh thermocouple. The specimen, lying on a ceramic crucible, is located in the middle heating zone. In the quartz tube, another thermocouple is inserted beside the specimen to measure the specimen temperature.
The components of the exhaust gas produced by the oxidation reaction are measured by an on-line gas chromatograph (GC-1100, Beijing PERSEE General Instrument, INC.) after flowing through a counterbalance valve. The surface areas and volumes of the open pores of the pristine and oxidized specimens are measured by a mercury porosimeter (AutoPore IV 9500, Micromeritics Instrument Corp.).
Before the oxidation reaction, pure helium (99.995%) was injected into the quartz tube. The test specimen was then heated to the target temperature in the inert atmosphere, a process that took around 90 minutes. After that, pure O 2 (99.999%) and pure helium were mixed and injected into the quartz tube to oxidize the graphite for 90 minutes. At the same time, the contents of the exhaust gas were measured by gas chromatography. Finally, the test specimen was cooled to room temperature in an inert atmosphere.
Data Availability
The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request. | 7,046.2 | 2018-09-07T00:00:00.000 | [
"Materials Science",
"Physics"
] |
Processing, Carbonization, and Characterization of Lignin Based Electrospun Carbon Fibers: A Review
Greenhouse gas emissions and the environmental impacts of petroleum-based fuels and materials have necessitated the development of renewable resource-based alternatives. In the process of extracting cellulose for conversion to bioethanol (bio-based gasoline), large quantities of lignin are produced as the main byproduct, making it useful for further processing into sustainable materials. Lignin is the second most abundant source of renewable carbon, and its aromatic structure makes it a potential candidate for carbon fiber production. Since lignin can be dissolved in a variety of organic and inorganic solvents, electrospinning has been used to produce precursor fibers for carbon nanofiber production. These carbon nanofibers have been tested as a potentially sustainable alternative to the current non-renewable electrodes in energy storage and conversion devices such as supercapacitors. Using lignin by-products from the fuel energy sector to make devices for the electrical energy sector provides a great opportunity for promoting a circular economy based on sustainable materials, while also contributing to research on alternative sustainable materials in light of a global pandemic. This review presents a summary of the processing conditions for electrospinning different varieties of lignin, the characterization of the electrospun fibers, and the carbonization conditions for converting the fibers. Techniques for characterizing the structural properties of the precursor fibers, the characteristics of the carbon nanofibers, and their performance in energy storage devices are discussed. Compared to other published reviews in this field, this review aims to present the current knowledge on the materials-processing-property relationships of lignin-based carbon fibers. Graphical Abstract Electrospun lignin fibers are sustainable and renewable resource-based alternatives to current petroleum-based precursors for carbonized fibers. Production, carbonization, and characterization of lignin fibers are essential for their effective utilization.
INTRODUCTION
Environmental institutes have set new targets for vehicle engine emissions. Two approaches to reach the targeted values are reducing vehicle weight by using high-performance, lightweight materials and substituting a fraction of gasoline with bioethanol or other alternative energy sources (Mainka et al., 2015b; Mohanty et al., 2018).
Addition of bioethanol to gasoline improves engine performance and reduces emissions (Zhao and Wang, 2020). The economic viability of bioethanol production from sustainable lignocellulosic resources depends strongly on the utilization of the process byproducts for high-value applications. The major byproduct of bioethanol production from lignocellulosic materials is lignin, one of the most abundant resources of renewable carbon (Solomon et al., 2007; Dessureault, 2014). Compared to cellulose and other sources of renewable carbon, lignin is a good candidate for the production of carbon fibers owing to its aromatic structure, presence of phenolic and aliphatic hydroxyl groups, high carbon content, thermal stability, and availability as a waste product from biomass (Fang et al., 2017a).
Advances in the production of efficient energy storage and conversion devices, such as electrodes, have led to the emerging utilization of carbon nanofibers. Global carbon fiber production was around 58,000 tons in 2015 (Figure 1; Fang et al., 2017a), and it is estimated that carbon fiber polymer composites will reach 197,000 tons in 2023 (Sauer, 2019). For an economic comparison, current petroleum-based carbon fibers have a high manufacturing cost of ∼USD $10.20 per lb (based on an annual production of 1,500 tons/year) (Mainka et al., 2015b; Yoo et al., 2017), whereas lignin costs ∼USD $0.50 per lb; including spinning, the production cost of lignin-based carbon fibers can be reduced to ∼USD $2.86 per lb (Fang et al., 2017a). The lower cost of lignin-based carbon fibers provides an opportunity for a wider range of industries to benefit from the advantages of these renewable, sustainable fibers. Carbon nanofibers have been used to replace inorganic components and improve the efficiency and performance of renewable energy storage/conversion devices (Zhang et al., 2016). Several review papers have summarized the research on electrospinning various types of lignin, their carbonization, and the properties of devices fabricated using the carbon nanofibers (Li et al., 2016; Fang et al., 2017a; García-Mateos et al., 2019; Kumar et al., 2019). There are a limited number of comprehensive studies that determine the relationship between the type and properties of lignin, the electrospinning process parameters, the electrospun fiber properties, the carbonization parameters, the carbon nanofiber properties, and the final performance of the energy storage and conversion devices. The goal of this review is to present a summary of the research reports which describe this materials-processing-performance relationship.
Lignin Structure, Sources, and Applications
Lignin is a heterogeneous, polyaromatic biopolymer comprising ∼30% of the organic carbon on Earth (Suhas et al., 2007; Achyuthan et al., 2010; Saito et al., 2012). The lignin content of biomass varies by type, and lignin is present in the cell walls of lignocellulosic materials (Suhas et al., 2007; Kumar et al., 2009; Laurichesse and Avérous, 2014; Zeng et al., 2014). The monomers and the proposed structure of lignin are represented in Figure 2. The lignin monomers are connected by different types of bonds, and about 48-60% of the interunit linkages are β-O-4 linkages (aryl-glycerol-β-O-4 aryl ether) (Braun et al., 2005).
Softwoods are primarily composed of guaiacyl (G) units, with traces of syringyl (S) and p-hydroxyphenyl (H) units present as well. Softwood lignins only have coniferyl alcohol units. Hardwood lignins have both G and S units. Grass or annual plant lignins have all three units (G, S, and H), which likely increases economic efficiency in mass production of lignin (Suhas et al., 2007).
FIGURE 2 | Proposed structure of lignin with the main functional groups and the monomers of lignin (Fang et al., 2017a), with permission from The Royal Society of Chemistry (Order license ID: 1029450-1).
Cellulosic ethanol and paper/pulp production are two leading industries that produce lignin as a coproduct (Hu and Hsieh, 2013; Poursorkhabi et al., 2013; Chen et al., 2014; Abdelwahab et al., 2015, 2019; Adams et al., 2018). Alkaline pulping (soda and kraft) is the most common method to extract lignin from cellulose in paper/pulp manufacturing (Suhas et al., 2007). Lignin from paper processes often has a high salt and ash content as well as contamination with sulfur-based groups (Lora and Glasser, 2002; Lallave et al., 2007; Hatakeyama and Hatakeyama, 2010).
Biomass conversion technologies use sulfur-free processes. Lignin obtained from these processes has a wide molecular weight distribution and displays different characteristics compared to sulfite and kraft lignin (Lora and Glasser, 2002). The properties of lignins, the extent of hydrolytic degradation, the chemical functionalities, and the molecular weight depend on the processing conditions and the nature of the biomass (Braun et al., 2005; Hu and Hsieh, 2013). Both softwood (Aslanzadeh et al., 2017; Cho et al., 2017, 2018; Roman et al., 2019) and hardwood lignin (Teng et al., 2013; Schreiber et al., 2015; Culebras et al., 2019; Schlee et al., 2019b; Yun et al., 2019) have been used for producing carbon nanofibers. However, most of the literature has studied hardwood lignin. More research is needed to compare the processing and properties of carbon nanofibers obtained from softwood and hardwood lignin under similar processing conditions. Since lignin is produced as a by-product of other industries, i.e., paper and bioethanol, the availability of any kind of lignin depends on the economy of these industries. Therefore, the best option is to optimize the production of carbon nanofibers based on the most highly available types resulting from the most conventional paper or bioethanol processes.
Electrospinning
Electrospinning is a convenient and low-cost method to produce continuous fibers (1D structures) at ambient temperature with micro- to nanometer diameters (Bhardwaj and Kundu, 2010; Inagaki et al., 2012). Nanoparticles (e.g., CuO, NiO, ZnO, etc.) dispersed or embedded in the solution can be incorporated into the polymer to enhance the mechanical and physical characteristics of the matrix, providing new advantages for carbon nanofibers in chemical sensing, energy, and catalysis applications (Huang et al., 2003; Tan et al., 2007; Kumar et al., 2012; Xu et al., 2013). In the electrospinning process (Figure 3), a polymer solution is placed into a reservoir (syringe), which is attached to a needle positioned above (vertical set-up) or to the side of (horizontal set-up) a conductive surface (collector). Spinning begins by charging the needle and pumping the solution to produce drops at the tip of the needle. The droplet becomes charged and deforms into a Taylor cone in the electrical field between the needle and the grounded collector. A thin electrified solution jet is drawn from the droplet when the electrostatic force on the droplet becomes greater than the surface tension. The jet undergoes a rapid and unstable whipping motion between the tip and the collector, during which the solvent evaporates, leaving a fiber material on the collector (Reneker et al., 2000; Ramakrishna et al., 2005; Reneker and Yarin, 2008).
Parameters that affect the electrospinning process and the quality of the fibers are material type, processing conditions, environmental conditions, and set-up design. Material properties determine the ability to create fibers. The most important properties in this group are the solution viscosity or concentration and the molecular weight of the polymer. Higher molecular weight polymers are more suitable for spinning (Deitzel et al., 2001). Different processing parameters lead to different geometries and fiber diameters (Thompson et al., 2007; Alghoraibi and Alomari, 2018; Prabu and Dhurai, 2020). The main processing parameters which can be modified are the voltage and feeding rate (Teo et al., 2011; Kumar et al., 2012). Increased voltage and decreased feeding rate reduce the fiber diameter (Beachley and Wen, 2009; García-Mateos et al., 2019). Environmental conditions such as temperature and humidity can affect the morphology and diameter of the fiber.
Various designs of electrospinning set-up have been used to increase the throughput of the process or to create different fiber geometries. In a needleless setup, the polymer solution is kept in a charged bath, and the collector is usually a drum rotating above the solution bath (Wei L. et al., 2019). During the process, jets of solution emerge from the bath and travel toward the collector. Another method is co-axial electrospinning, where a hollow or core-shell geometry is generated depending on the geometry of the needle (Teo et al., 2011; Persano et al., 2013; Han and Steckl, 2019).
Electrospinning of Lignin
Electrospinning solutions of lignin alone results in electrospraying of particles, because lignin cannot create enough chain entanglements within the solution (Lai et al., 2014b). Multiple approaches used to electrospin lignin include: blending with a second polymer (binder polymer), co-electrospinning, and solvent fractionation to remove the low molecular weight fractions (Table 1).
Poly(ethylene oxide) (PEO) is a particularly well-suited polymer to combine with lignin for electrospinning. PEO is miscible with lignin and is water-soluble. A solution of lignin with high molecular weight (M w ) PEO in either alkaline water or organic solvents enhances the spinnability and solution elasticity (Kadla and Kubo, 2003; Kubo and Kadla, 2006; Dallmeyer et al., 2010; Schreiber et al., 2012; Poursorkhabi et al., 2015). A higher concentration of alkali hydroxides in aqueous solutions of PEO and lignin further reduces the fiber diameter due to increased charge density and charge dissipation (Hu and Hsieh, 2013).
Solutions of polyacrylonitrile (PAN) and lignin in DMF (N,N-dimethylformamide) have been electrospun to replace part of the PAN in the fibers with lignin (Seo et al., 2011; Choi et al., 2013). The conductivity and viscosity of PAN-lignin solutions decreased with increasing lignin concentration (Seo et al., 2011). Thinner fibers were spun from solutions with lower viscosities (Choi et al., 2013). Core-shell fibers were formed by electrospinning cellulose nanofibers as the core and PAN/lignin solution as the shell.
Electrospinning of lignin blends with biobased polymers such as cellulose (Ahn et al., 2014), soy protein (Salas et al., 2014), cellulose acetate (Schreiber et al., 2015) and chitosan (Schreiber et al., 2014) has also been reported. Ahn et al. (2014) investigated the electrospinning of lignin and cellulose by direct blending and by partial delignification of hemp fibers to obtain natural blends of cellulose with different amounts of lignin. Salas et al. (2014) investigated the electrospinning of kraft lignin and soy protein in the presence of coadjutant PEO and found that the fiber diameter increased with increasing lignin content and that lignin and soy protein interact through hydrogen bonding. Schreiber et al. studied the electrospinning of lignin with cellulose acetate (Schreiber et al., 2015) and with chitosan (Schreiber et al., 2014) to improve the carbonization process. They found a good interaction between lignin and cellulose acetate. A lignin/chitosan ratio of 4:3 was found to be the best for producing good fibers, due to the balanced charge ratio of lignin and chitosan functional groups.
Blending lignin with other polymers enhances its processing characteristics; however, it increases the cost of the final product. Hence, keeping the proportion of the binder polymer as low as possible is preferred. Without the addition of an extra polymer (binder-free), Lallave et al. (2007) developed a co-electrospinning method for spinning lignin (organosolv lignin with low M w ). In the co-electrospinning process, a tri-axial configuration was used, in which ethanol, lignin solution, and glycerin solution were the outer layer, middle layer, and inner layer, respectively. Hollow carbon fibers have been produced from such co-electrospun fibers.
Production of Carbon Fiber From Lignin-Based Fibers
For producing carbon fibers, a thermostabilization process precedes carbonization. Thermostabilization involves slow air oxidation and cross-linking of the material to raise or remove its thermal transition points. This prevents melting of the fibers during carbonization (Braun et al., 2005; Ruiz-Rosas et al., 2010b). Lignin has no melting point; however, the glass transition temperature (T g ) is an important parameter that governs lignin's mobility and thermal flow. The T g of lignin and of most carbon fiber precursors is lower than their thermal decomposition temperature. Therefore, the first processing step is to thermally stabilize the material and prevent fiber softening and fusion prior to carbonization (Ruiz-Rosas et al., 2010b). The second thermal treatment, known as carbonization, yields the final carbonized fibers (Ruiz-Rosas et al., 2010b). Stabilization time is a critical consideration for industrial carbon fiber production. Depending on the scale of the process, stabilization of Alcell lignin can take several days (Mainka et al., 2015a). This is due to the devolatilization of lignin, which increases the stabilization time. Studies showed that the thermal stabilization of softwood lignin was achieved in a shorter time than for hardwood lignin (Norberg et al., 2013). However, softwood lignin has lower spinnability than hardwood lignin due to its higher melting point and cross-linked structure (Zhang and Ogale, 2014; Cho et al., 2019a). Moreover, studies have shown that lignin fibers containing certain amounts of NaOH or KOH do not require this thermostabilization step: they can be directly carbonized at a low heating rate without fiber fusion (Hu and Hsieh, 2013; Schlee et al., 2019a).
Several temperatures and heating rates have been applied for thermostabilization of lignin, and rapid variations in the total oxygen, hydrogen, carbon and mass content have been observed above 190 °C (Braun et al., 2005). It is proposed that thermal decomposition of lignin starts with homolytic dissociation of the weakest bond (the β-O-4 linkage) (Britt et al., 1995, 2000a; Fang et al., 2017a). Decomposition continues through a variety of oxidation, rearrangement and elimination reactions initiated by the radicals. Homolysis of −OCH 3 (methoxyl) groups proceeds more slowly and requires higher temperatures. The extent and rate of decomposition depend on factors like temperature, heating rate, and oxygen content. Besides homolysis, autooxidation is an alternative reaction that occurs in the presence of air and forms carbonyl and carboxyl groups on the lignin structure (Fenner and Lephardt, 1981; Braun et al., 2005). At temperatures up to 200-250 °C, the formation of carboxyl and carbonyl groups increases the oxygen content of the fiber. At higher temperatures, these functional groups lose oxygen and form anhydrides, esters and crosslinks inside the lignin structure. At temperatures above 250 °C, carbon-carbon aromatic bonds are produced, which is suited for stronger materials (Hu and Hsieh, 2013). There is an inverse relation between the glass transition temperature (T g ) and the hydrogen content of lignin; for example, materials with lower hydrogen content have higher T g (Braun et al., 2005).
Analysis of the gases evolved during carbonization of lignin showed that the non-carbon atoms leave the sample as CO 2 , CO and CH 4 , as well as H 2 O (Yang et al., 2007; Ruiz-Rosas et al., 2010b). During carbonization, water molecules are released between 100 and 600 °C. The hydroxyl (-OH) groups break at high temperatures, whereas the moisture in the samples evolves at lower temperatures, ∼100 °C. CO 2 is evolved in a temperature range similar to that of water desorption. CO is produced in the range from 200 to 800 °C, resulting from breakage of carbonyl or carboxyl groups or from char-forming reactions, depending on the type of lignin. Chatterjee and Saito (2015) showed that the lignin content controlled the char yield and consequently the activated carbon yield. Additionally, the microstructure of the char depends on the source of biomass and the amounts of cellulose and lignin. The authors showed a reduction in the char yield with an increasing fraction of low molecular weight lignin. Release of H 2 starts at temperatures above 500 °C and continues to around 700 °C. These volatile gases are produced by reactions of hydrocarbons or depolymerization of phenyl groups (Blanco López et al., 2002; Yang et al., 2007; Ruiz-Rosas et al., 2010b; Foston et al., 2013).
The sample structure is composed primarily of condensed aryl structures at 1000 °C. At around 1000 °C, carbonization can sometimes degrade the aryl structures produced at lower temperatures (e.g., 800 °C) (Foston et al., 2013). Thus, the structure of the formed carbon fiber depends on the heating conditions. The carbonization conditions reported in the literature have heating rates of 5 or 10 °C/min and maximum temperatures ranging from 600 to 2200 °C, with residence times at the maximum temperature between 60 and 150 min (Ruiz-Rosas et al., 2010b; Seo et al., 2011; Choi et al., 2013; Dallmeyer et al., 2013; Hu and Hsieh, 2013; Wang et al., 2013; Xu et al., 2013, 2014; Dallmeyer et al., 2014; Hu et al., 2014; Lai et al., 2014a,b; Guo et al., 2015; Schreiber et al., 2015). A recent study showed that the carbonization time of lignin carbon fiber could be decreased from 708 to 24 min without losing mechanical properties (Bengtsson et al., 2020). Moreover, increasing the carbonization temperature from 600 to 1600 °C improved the modulus from 18 to 77 GPa due to the formation of nanocrystalline graphite (Bengtsson et al., 2020). Most of the literature involves an inert atmosphere (nitrogen or argon gas). During carbonization, the fibers are free or clamped (Teng et al., 2013) to induce stretching. Stretching the fibers during thermal treatment by clamping improves the fiber orientation and decreases the fiber diameter, consequently enhancing the mechanical properties (tensile modulus < 100 GPa) of the fiber (Reneker and Yarin, 2008).
The yield is calculated from the fiber weight and depends on the heating conditions along with the type of lignin. The carbonization conditions applied to electrospun lignin fibers are shown in Table 2. Carbonization of lignin fibers is a less energy- and time-intensive process (less than 2 h) than carbonization of PAN fibers (Liu and Kumar, 2012; Baker and Rials, 2013). Bengtsson et al. (2019) showed a reduction in the time required to stabilize lignin fiber from 16 h to less than 2 h at 250 °C due to the high carbon content of lignin (60-65%). In the literature, the influence of carbonization temperature on the characteristics of carbonized electrospun lignin fibers has been extensively studied (Ruiz-Rosas et al., 2010b; Choi et al., 2013; Dallmeyer et al., 2014; Schreiber et al., 2015). The carbonization temperature has a direct effect on the tensile characteristics of the carbon fiber (Bengtsson et al., 2020). Increasing the carbonization temperature enhanced the tensile modulus, but decreased the elongation at break and reduced the fiber diameter. Interestingly, the tensile strength increased up to 1000 °C and started to decrease above this temperature due to defects formed in the fiber at elevated temperatures (Bengtsson et al., 2020).
Hollow nano-carbon fibers were formed by carbonization of Alcell lignin fibers collected from a tri-axial configuration in three-layer (co-)electrospinning (Lallave et al., 2007). Alcell lignin in ethanol solution was the middle layer, glycerine was the core, and ethanol was the outer sheath. The hollow nanocarbon fibers produced had superior mechanical properties and lower weight (Köhler et al., 2017). The activation of carbon fiber (porous carbon fiber) enhanced fabrication in different forms and facilitated robust fiber formation. Activated carbon fibers were produced by carbonization of alkaline solutions of PEO/lignin electrospun fibers at 850 °C, which significantly decreased the impregnation ratio (Hu and Hsieh, 2013). The presence of alkali elements enhanced shape retention and fiber thermal stability, demonstrating that the fibers could be thermostabilized and carbonized in one step under a nitrogen atmosphere.
Multi-walled carbon nanotubes (MWNTs) (Teng et al., 2013) and metal particles such as Platinum (Pt), (Ruiz-Rosas et al., 2010b;Gao et al., 2015). Palladium (Pd) and Gold (Au) (Gao et al., 2015) were also added to the fibers, and their effect on the characteristics of carbonized fibers was investigated. Incorporation of the nanoparticles to lignin carbon fibers improved the mechanical properties, crystallinity of the material and provided further functionality (active phase) as well as value to the material (Beisl et al., 2017;García-Mateos et al., 2019).
CHARACTERIZATION OF THE CARBONIZED FIBERS
Electron Microscopy
Scanning electron microscopy (SEM) is the most widely used technique to evaluate the morphology and diameter of fibers. SEM allows one to assess the quality of fibers by characterizing the smoothness, roughness or porosity of the fiber surface, along with discerning whether the surface is non-uniform, beaded, or interconnected (Tagawa and Miyata, 1997; Tanaka et al., 1999; Dallmeyer et al., 2010). Prior to imaging, the electrospun lignin fibers were sputter-coated with gold for 10-20 s and examined using accelerating voltages of 5-20 kV. Conventionally spun lignin-carbon fibers had diameters of 30-80 µm; the challenge in decreasing the diameter lies in the preparation conditions, for example, clogging of the melt-extrusion die (Kadla et al., 2002; Kadla and Kubo, 2004; Kubo and Kadla, 2004, 2005).
Electrospinning can produce lignin fibers ranging from sub-micron to a few micrometers in diameter. The diameter of the lignin fibers depends on multiple variables, including the type of lignin, solvent, and binder polymer; the viscosity, concentration, conductivity, and surface tension of the solution; the electrospinning voltage and feed rate; and the nozzle-to-collector distance (Huang et al., 2003; Ramakrishna et al., 2005; Dallmeyer et al., 2010). A relatively broad range of diameters has been reported in the literature (Dallmeyer et al., 2010; Teng et al., 2013), as shown in Figure 4. Figure 4 compares SEM images of electrospun softwood kraft lignin (MeadWestvaco Indulin-AT, United States) fibers and carbonized fibers. Figures 4a,b show electrospun blend lignin/PEO fibers from aqueous NaOH and DMF solutions, respectively. Figures 4c-f show electrospun fibers from PEO and fractionated lignin and their carbonized fibers. It was observed that increasing the low-molecular-weight content of lignin increased the chance of fibers fusing together.
Transmission electron microscopy (TEM) is used to study the fiber surface, the strength-structure relationship of the carbon fiber, and the dispersion of additives such as metal particles (Ruiz-Rosas et al., 2010b; Lai et al., 2014a) or carbon nanotubes (Teng et al., 2013) in the carbonized fibers. This method is also used to study the carbon structure of the fibers (Hu and Hsieh, 2013; Lai et al., 2014b). For TEM, carbonized samples (powder) were dispersed in ethanol or distilled water by ultrasonic treatment; a droplet was then placed on a copper grid support and examined by field emission TEM at an accelerating voltage of ∼100-200 kV. Electrospun PAN-lignin fibers displayed a homogeneous circular cross-section with 300 nm diameter at a concentration of 50 wt.% lignin. At 60 wt.% lignin, beaded fibers formed, and at 80 wt.%, electrospraying took place (Seo et al., 2011). Ruiz-Rosas et al. (2010b) obtained Alcell lignin composite fibers with platinum by co-electrospinning. The fiber diameters were between ∼800 nm and 3 µm. Thermostabilized fibers were not fused, and their diameters remained in the same range. After carbonization at the highest temperatures, the diameters decreased to between 400 nm and 1 µm. TEM displayed a smooth fiber surface with a Pt dispersion of 10.5%.
As noted above, hollow carbon fibers have been produced by co-electrospinning with a tri-axial configuration of ethanol (outer layer), lignin solution (middle layer), and glycerin solution (inner layer) (Lallave et al., 2007). The fiber diameters were between 400 nm and 2 µm and were reduced to 200 nm after carbonization due to the removal of non-carbon compounds, i.e., oxygen, hydrogen and sulfur in the case of kraft lignin (Lai et al., 2014b).
Atomic Force Microscopy (AFM)
AFM is used to study the roughness and porosity of carbon fiber surfaces (Hu and Hsieh, 2013). The images obtained from AFM depend on the shape, size and functionality of the probe tips (Furuno et al., 1998). AFM offers different imaging modes, such as height (surface smoothness), phase (sample viscosity and hardness) and amplitude (probe vibration), depending on the working environment. The electrospun and carbonized fibers were prepared by fixing them on the surface of cleaved mica, and the tip scans over the surface in continuous tapping mode. Figure 5 shows AFM images of carbonized (alkali lignin/PEO: 9/1 wt/wt) fibers (Hu and Hsieh, 2013). The fibers were produced from aqueous solutions of NaOH or KOH and carbonized at 850 °C for 30 min under a nitrogen blanket. Both images in Figure 5 show a porous fiber surface.
Elemental Analysis
CHNS/O analysis is used to measure the degree of carbonization and the percentage of elements in thermally treated lignin fibers (Lallave et al., 2007). The existence of C-O groups has an influence on the formation of the solid electrolyte interphase layer during the reaction (Choi et al., 2013). Lallave et al. (2007) characterized the percentage of elements in lignin fibers and thermally treated fibers by elemental analysis. The authors found that thermostabilized fibers have a higher oxygen percentage, depending on the temperature and heating rate. Fibers carbonized at 900 °C and above have a carbon content of more than ∼90%.
X-Ray Photoelectron Spectroscopy (XPS and EDX)
XPS and EDX are used as alternative methods for elemental analysis of the fibers (Ruiz-Rosas et al., 2010b; Hu and Hsieh, 2013; Wang et al., 2013). These methods have been used to investigate the surface chemistry of the samples (Ruiz-Rosas et al., 2010b; Wang et al., 2013).
The surface concentration of oxygen in carbonized fibers (carboxyl, carbonyl, ester, and anhydride groups) increased, and the percentage of hydrogen decreased (released in the form of CO or CO 2 ) after thermostabilization, which resulted in an increase in the T g of the material (Lallave et al., 2007; Ruiz-Rosas et al., 2010b). Wang et al. (2013) analyzed the percentages of carbon, nitrogen and oxygen in carbonized lignin fibers by XPS. The (lignin/PEO: 90/10) fibers carbonized at 900 °C had 89.11% carbon and 10.89% oxygen. N-doped fibers were produced by a second heating of urea-impregnated carbon fibers to 900 °C. These fibers had 76.85% carbon, 9.94% oxygen, and 13.21% nitrogen.
Fourier Transform Infrared Spectroscopy (FTIR)
FTIR is used to detect changes in chemical functionalities after thermostabilization (Hu and Hsieh, 2013; Cho et al., 2019b). It is also used to confirm the complete removal of chemical groups other than carbon in the carbonized fibers (Choi et al., 2013). Choi et al. (2013) reported producing carbon fibers from lignin/PAN blend fibers. FTIR of the carbonized fibers made from lignin/PAN blends showed peaks at around 1000 cm −1 , indicating the presence of C-O groups, while the fibers produced only from PAN did not show any significant peak, indicating complete carbonization.
Thermal Analysis
The thermal stability of electrospun fibers and carbonized fibers up to 1000 °C has been investigated by thermogravimetric analysis (TGA) in both air and nitrogen atmospheres (Ruiz-Rosas et al., 2010b; Seo et al., 2011). Non-isothermal oxidation profiles of thermostabilized lignin fibers displayed a significant oxidation rate at 250 °C. The oxidation resistance of carbonized fibers is enhanced at higher carbonization temperatures. A lack of superficial defects, as well as ordered carbon, were presented as the causes of higher oxidation onset temperatures (Ruiz-Rosas et al., 2010b). TGA of fibers in a N 2 atmosphere is used to calculate the carbonization yield as an alternative to direct weight measurements (Lallave et al., 2007; Seo et al., 2011).
Differential scanning calorimetry (DSC) studies are used to determine the transition points of the fibers. DSC results for lignin/PEO fibers showed the disappearance of the melting point of PEO (Hu and Hsieh, 2013). This is due to good dispersion of PEO in the lignin matrix, higher chain entanglement between the two polymers, and the resulting inability of the PEO to crystallize. Another application of DSC is detecting the softening point of the fiber after thermostabilization.
Raman Spectroscopy
Raman spectroscopy has been used to study the carbon structure of fibers. The Raman spectra of carbonized lignin fibers display two broad overlapping peaks resulting from disordered carbon with semi-structural organization. The first peak (D band) at ∼1300 cm −1 is assigned to sp 3 carbon hybridization in polycrystalline graphite. It represents the disordered structure, called the "turbostratic carbon structure", and reflects imperfections in the graphitized structure. The other peak (G band) at ∼1576 cm −1 arises from sp 2 carbon hybridization and the stretching mode in the graphite plane (Ferrari and Robertson, 2000). Usually, deconvolution of the Raman spectral peaks is used to measure the intensity and position of each band. Deconvolution of the spectra into two (Choi et al., 2013; Teng et al., 2013; Dallmeyer et al., 2014; Youe et al., 2015), three (Jawhari et al., 1995), or four peaks (Ruiz-Rosas et al., 2010b; Hu et al., 2014; Berenguer et al., 2015) has been reported in the literature. After deconvolution of the spectrum into 4 peaks, two smaller bands at ∼1515 and 1170 cm −1 appear. These bands are attributed to impurities such as ions or superficial oxygen groups (Ruiz-Rosas et al., 2010b).
The ratio of the intensities of the D and G bands (R = I D /I G ) is used to assess the extent of ordered graphitic structure. An increasing R-value indicates a more disordered structure and a decreased size of the graphite sheets. The crystallite size of graphite (L a ) (nm) and the graphitic mole fraction (x G ) can be calculated from Eqs (1) and (2) (Cançado et al., 2006); Eq. (1) takes the form

L a = (2.4 × 10 −10 ) λ 4 (I D /I G ) −1 , (1)

where λ is the incident laser wavelength (nm).
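As an illustration, the sketch below computes the disorder ratio and crystallite size from assumed band intensities; the 514.5 nm laser wavelength is an assumption, since the excitation wavelength is not specified here:

```python
# Minimal sketch (assumed peak intensities): disorder ratio R = I_D/I_G
# and graphite crystallite size L_a from the Cancado et al. (2006)
# relation quoted above.
lam = 514.5           # incident laser wavelength, nm (assumed)
I_D, I_G = 1.35, 1.0  # deconvoluted D- and G-band intensities (assumed)

R = I_D / I_G
L_a = 2.4e-10 * lam**4 / R  # crystallite size, nm

print(f"R = {R:.2f}, L_a = {L_a:.1f} nm")
```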
Proximity of the center of the deconvoluted G band to the value for graphite (1582 cm −1 ) was attributed to the onset of structural organization, although some disordered carbon was still present (Lallave et al., 2007).
The effects of carbonization temperature between 600 and 1400 °C were studied with Raman spectroscopy to determine the degree of graphitization (Ruiz-Rosas et al., 2010b; Dallmeyer et al., 2014; Liu H.C. et al., 2015; Schreiber et al., 2015; Youe et al., 2015). The results showed that at higher temperatures the G and D bands became narrower and the intensity of the G band signal was enhanced. Increasing the carbonization temperature leads to a decrease in the contribution of the disordered structure of carbonized fibers (Rodríguez-Mirasol et al., 1996; Ruiz-Rosas et al., 2010b). The disorder of carbonized fibers containing dispersed metal particles such as platinum, or of nitrogen-doped fibers, increased due to the steric hindrance effect of the particles or nitrogen-induced defects (Keskar et al., 2005; Ruiz-Rosas et al., 2010b; Wang et al., 2013). The structure of carbonized lignin fibers blended with a crystalline polymer such as cellulose acetate or PVA became more disordered with increasing content of lignin, which has a more amorphous structure (Choi et al., 2013; Lai et al., 2014b; Schreiber et al., 2015).
X-Ray Diffraction (XRD)
The peaks of lignin/PAN fibers at 2θ of 28.8° and 16.8° are due to the (110) and (100) crystallographic planes of PAN. These peak intensities decrease after thermostabilization due to changes in the structure of PAN (Seo et al., 2011).
The carbonized lignin shows a broad diffraction peak centered at 2θ ≈ 25-26°, which is due to the (002) crystallographic planes of graphite crystallites (Seo et al., 2011; Teng et al., 2013). For some samples, peaks were detected at around 63.3° and 50.4°, assigned to the (004) and (100) planes of the graphitic structure. The interplanar spacing, d 002 , of the carbonized fibers increased with increasing lignin content and decreasing density. Thus, higher lignin content in the fiber leads to the formation of more porous carbon structures (Lai et al., 2014b). All the lignin-based carbon fibers had a broad band from 15 to 33° due to an amorphous carbon structure.
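For illustration, the interplanar spacing corresponding to the quoted 2θ range can be computed from Bragg's law; the Cu Kα wavelength used below is an assumption, since the anode material of the diffractometers is not stated here:

```python
# Minimal sketch: interplanar spacing d_002 of the graphitic (002)
# reflection from Bragg's law, lambda = 2 * d * sin(theta).
import math

lam = 0.15406  # Cu K-alpha X-ray wavelength, nm (assumed)

for two_theta in (25.0, 26.0):
    theta = math.radians(two_theta / 2.0)
    d_002 = lam / (2.0 * math.sin(theta))
    print(f"2theta = {two_theta:.1f} deg -> d_002 = {d_002:.4f} nm")
```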
Surface Area Analysis
Porosity and surface area of the fibers were calculated by N 2 adsorption-desorption at −196 °C and application of the Brunauer-Emmett-Teller (BET) equation (Lallave et al., 2007; Ruiz-Rosas et al., 2010b; Wang et al., 2013).
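A minimal sketch of the BET evaluation is given below; the isotherm points are hypothetical placeholders, not data from the cited studies:

```python
# Minimal sketch (assumed isotherm): specific surface area from the
# linearized BET equation,
#   1/(v((p0/p)-1)) = (c-1)/(v_m c) * (p/p0) + 1/(v_m c),
# for N2 adsorption at -196 C.
import numpy as np

p_rel = np.array([0.05, 0.10, 0.15, 0.20, 0.25, 0.30])       # p/p0 (assumed)
v_ads = np.array([118.0, 131.0, 140.0, 148.0, 155.0, 162.0])  # cm^3(STP)/g (assumed)

y = 1.0 / (v_ads * (1.0 / p_rel - 1.0))
slope, intercept = np.polyfit(p_rel, y, 1)

v_m = 1.0 / (slope + intercept)  # monolayer capacity, cm^3(STP)/g
# 4.353 m^2 per cm^3(STP) of N2 (sigma = 0.162 nm^2 per molecule)
S_BET = 4.353 * v_m              # specific surface area, m^2/g

print(f"v_m = {v_m:.1f} cm^3/g, S_BET = {S_BET:.0f} m^2/g")
```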
The porosity of the lignin fibers increased during the carbonization process through the removal of volatile material. It was observed that the surface area of carbonized lignin increased from 524 to 1195 m 2 /g when the temperature was increased from 600 to 900 °C. Beyond a certain point, further increasing the temperature (to 1000 °C) had the opposite effect: it destroyed the porous structure and reduced the surface area (821 m 2 /g) due to reorganization of the solid at 1000 °C (Ruiz-Rosas et al., 2010b). Wang et al. (2013) showed a decrease in the surface area from 473 to 381 m 2 ·g −1 for lignin-carbonized fibers doped with nitrogen, which resulted in the formation of a meso/macroporous structure.
The pore volume of lignin-carbonized fibers increased with increasing temperature due to activation by oxygen (the activating agent) during carbonization (Ruiz-Rosas et al., 2010b).
The surface area of PAN/lignin-carbonized fibers decreased from 12.9 to 6.3 m 2 ·g −1 with increasing lignin content, in spite of the reduction in diameter, due to fiber fusion within the fiber network (Choi et al., 2013).
Addition of alkali metal hydroxides such as NaOH or KOH to the lignin solutions traps these materials inside the fibers. Activated carbon fibers were produced from carbonized lignin fibers containing alkali metal compounds (Hu and Hsieh, 2013).
The carbonized lignin-based fibers showed different types of isotherms for porous materials such as types I, II, or IV depending on the fiber composition and carbonization conditions (Ruiz-Rosas et al., 2010b;Hu and Hsieh, 2013;Wang et al., 2013;Lai et al., 2014b). Reversible adsorption was monitored by comparing the desorption and adsorption curves (Ruiz-Rosas et al., 2010b).
Mechanical Properties
To achieve high modulus and strength, lignin carbon fibers require highly oriented, anisotropic graphitic structures along the fiber axis (Davé et al., 1993). Thus, the polymer chains have to be oriented and aligned in one direction before fiber solidification. Fiber diameter also affects the strength: fibers with a smaller diameter have fewer defects and are more molecularly oriented along the fiber axis (Tagawa and Miyata, 1997; Tanaka et al., 1999; Dallmeyer et al., 2010).
Lignin is an amorphous aromatic biopolymer with a 3D structure. Because of this, molecular orientation, which is limited, is the key to high mechanical properties (modulus and strength) (Davé et al., 1993). The mechanical properties of lignin-based carbon fibers were tested using tensile testing machines according to ASTM D638 (Seo et al., 2011; Teng et al., 2013). Seo et al. (2011) showed an enhancement of the tensile strength of PAN-lignin (50/50) fibers from around ∼100 to more than 800 MPa (around 480%) for fibers irradiated with a 2000 kGy dose compared to non-irradiated PAN-lignin fibers. This is due to the stabilization of PAN during irradiation. Teng et al. (2013) measured the tensile strength of MWNTs dispersed in PEO-lignin carbonized fiber using a micro-tensile tester at a rate of 0.02 cm s −1 . The tensile strength was reduced by the inclusion of 1 wt.% MWNTs but stayed comparable as the concentration increased. Moreover, the tensile strength decreased with increasing fiber diameter. The strength of PEO-lignin fibers without MWNTs increased from 5.13 to 45.03 MPa after carbonization. The inclusion of 4 wt.% MWNTs decreased the strength to 2.46 MPa due to poor dispersion. The modulus of PEO-lignin fibers increased from 5.1 to 6.2 GPa upon carbonization. Incorporation of MWNTs enhanced the modulus of the as-spun fibers, but it decreased from 6.2 to 2.4 GPa after carbonization due to the change in fiber morphology. The elongation and toughness decreased after incorporation of MWNTs, but the elongation increased after carbonization with no effect on toughness. For carbon fibers, no enhancement was observed upon inclusion of MWNTs. This is opposite to the trend observed for PAN, in which the addition of MWNTs improved fiber strength; here, the random orientation of MWNTs in the carbon fibers, along with poor adhesion and dispersion, led to a decrease in the mechanical properties.
APPLICATION OF THE CARBONIZED FIBERS
In recent years, carbon fiber produced from by-product lignin has become a key material directly linked to the automotive, construction, aerospace and energy industries (Liu W.-J. et al., 2015; Kai et al., 2016). Energy manufacturing is the fastest-growing industry seeking renewable resource-based alternatives to traditional petroleum-based materials. In particular, supercapacitors, energy storage devices and dye-sensitized solar cells (DSSCs) are increasingly in demand as solutions providing low-cost, lightweight, and more energy-efficient devices (Fang et al., 2017a).
Polyacrylonitrile (PAN) is an important precursor for the production of carbon fiber. However, PAN is non-renewable, requires intensive processing conditions, and has a high cost, which adds to the product's final cost and limits its application for anode production in batteries (Choi et al., 2013; Wang et al., 2013). Besides lignin, several types of materials have been tested as potential sources of biomass-based carbon, such as rice straw (Zhang et al., 2009), bacterial cellulose (Wang et al., 2015), egg protein, sugar (Xing et al., 1996), olive and cherry stones (Caballero et al., 2011), and peanut shells (Fey et al., 2003). However, the disadvantage of these resources is complicated processing.
Graphene and non-graphene carbons have been applied as major components to produce electrodes for lithium-ion battery (LIB) anodes. This is due to their porous structure, processability, availability, chemical stability and low cost. One of the most important requirements for an anode material is high rate capability and high capacity (Choi et al., 2013).
The structural and chemical stability of these materials during Li-ion insertion/de-insertion is the essential property that allows for a reversible charge/discharge process (Choi et al., 2013). Limited theoretical capacity (372 mA·h·g −1 ) and long charging times are limitations of commercial graphite anodes (Choi et al., 2013). Nano-carbon materials that enhance the rate capability of anodes have high electrical conductivity and large specific surface area (Choi et al., 2013; Wang et al., 2013). Examples of these nano-carbon materials are carbon nanofibers, carbon nanotubes, and graphene. Smaller carbon fiber diameters lead to higher surface areas as well as better rate capability. Choi et al. (2013) showed that the Li-ion diffusion length in carbon nanofibers is shorter than in micro-sized graphite materials.
The electrochemical properties of carbonized fibers as lithium-ion battery anodes are summarized in Table 3. For fabricating electrodes, carbon nanofibers are blended with other materials, as shown in Table 3. Choi et al. (2013) made electrodes from lignin carbon nanofibers. They obtained an electrode loading level of around 1-2 mg/cm 2 using lithium metal as the anode and a polypropylene separator. The fibers from blended lignin/PAN precursors had a smaller surface area; however, the plateau was longer for these fibers due to oxygen groups on the fiber surface. Their results showed that the carbon fibers from lignin/PAN (30/70 and 50/50 wt.%) had similar rate capabilities except for the initial irreversible capacity. The fibers showed good cycle performance and high rate capability compared to traditional PAN fibers. Wang et al. (2013) fabricated Li-ion batteries using a carbon fiber mat made from lignin with different percentages of PEO as the anode and a lithium foil as the counter electrode. The electrical conductivity of interconnected fiber mats was measured at 10.53 S/cm, which fared better than separated fiber mats (7.34 S/cm). The charge capacity and electrical conductivity of interconnected fiber mats were enhanced by the incorporation of 12.6 wt% nitrogen at the surface (Table 3). The initial discharge curves of the carbon fibers showed a plateau corresponding to SEI formation around 1.2 V, which disappeared for N-doped carbon fibers. Therefore, SEI formation was hindered by N-doping and the initial capacity loss was reduced. Dalton et al. (2019) reported that the electrical conductivity of lignin-electrospun CNFs produced at 900 and 1100 °C with 70% lignin is 9.65 and 24.47 S/cm, respectively. This enhancement in electrical conductivity resulted from an increased degree of graphitization. These new CNF materials, with both n-type and p-type semiconducting behaviors, can be used in thermoelectric generators. Zhao et al. (2018) fabricated a new sustainable carbon fiber from lignin/PVA as a highly efficient, binder-free counter electrode for DSSCs. The new DSSC electrode has a conversion efficiency of 7.60%, which makes it a possible substitute for expensive commercial Pt electrodes (conversion efficiency of 7.67%). García-Mateos et al. (2017) also assembled new Pt-containing carbon fibers which were applied as electrodes with no conductivity promoter or binder.
Various types of carbon materials, such as graphene, carbon nanotubes and activated carbon, have been reported for manufacturing supercapacitor electrodes. Lignin-carbon fiber has been studied in several publications for manufacturing capacitor/supercapacitor electrodes (Fang et al., 2017b; Lei et al., 2017; Ma et al., 2018; Yu et al., 2018; Perera Jayawickramage et al., 2019; Roman et al., 2019; Schlee et al., 2019a,b; Yun et al., 2019). Lai et al. (2014b) applied lignin-carbon fibers as electrodes to prepare electrochemical supercapacitors, as summarized in Table 4. The highest gravimetric capacitance was obtained with a high alkali lignin content (70%) due to the increase in surface area and decrease in pore size of the fibers. During testing, the electrodes were electrochemically stable and durable. Figure 6 displays the cyclic voltammetry of lignin/PVA fibers prepared with different surfactants; these presented the largest loop areas and superior electrical double-layer capacitive behavior, except for the carbon fiber with the nonionic surfactant Triton X-100 (CNF-TX) (Fang et al., 2017b). The CNF with 1.0% anionic surfactant sodium dodecyl sulfate displayed the longest discharge time, which translates into better material capacitance. Due to their high surface area, high gas permeability and high porosity, lignin-carbon fibers are applied as adsorbents for water purification and for the adsorption of volatile organic compounds (Beck et al., 2017; Song et al., 2017, 2019; Zhang et al., 2019). Zhang et al. (2019) prepared a nanocarbon fiber membrane from lignin/PVA to adsorb a cationic dye (Safranine T). The adsorbent membrane showed superior desorption behavior, with the ability to be recycled with constant adsorption performance.
These accomplishments suggest the possible utilization of lignin-carbon fibers in a wide variety of applications, such as supercapacitors, separation, electrodes, catalysis and dye-sensitized solar cells (DSSCs) (Fang et al., 2017a; Zhao et al., 2018; Cao et al., 2020). However, these sustainable materials need further improvement, especially in their mechanical properties and in scale-up for industrial commercialization. Recent reviews described different technologies for scaling up the electrospinning process for biomedical applications (Vass et al., 2020). Co-axial and multi-axial systems offer an opportunity for the industrial scale-up of electrospinning.
CONCLUSION
Electrospun fibers have been used as precursors for the production of carbon nanofibers. Electrospinning is a convenient method for the spinning of thermally sensitive biopolymers. This method provides an easy way for the incorporation of metal particles and fillers in the fibers. Additionally, nanofibers have exceptional properties like high surface area and porosity, which is valuable for multiple applications. Due to increasing environmental awareness, renewable resource-based and sustainable materials have been studied as an alternative to the petroleum-based materials. Electrospinning of biopolymer lignin fibers and conversion to carbonized fibers have been studied to develop a convenient process for the production of lignin fibers and carbon fibers. Carbon fibers have been characterized to determine their morphology, physical properties and degree of graphitization. Energy storage devices mainly batteries and supercapacitors have been fabricated by using lignin-based carbon fibers and tested. Promising results have shown these materials have the potential for the next generation of renewable electronics and energy storage devices.
A limitation of lignin-based carbon fiber is the heterogeneity and diversity of lignin, which results in different characteristics of the fibers produced. By lowering the cost of organosolv lignin, it can become an excellent candidate to overcome this shortcoming. Another limitation is the necessity of scaling up electrospinning for industrial commercialization, which is one of the main challenges for the electrospinning of polymers. Co-axial and multi-axial systems offer an opportunity for the industrial scale-up of electrospinning.
The challenge for lignin-based carbon fiber is to apply these fibers in composite applications, especially in the automotive industry and in biomedical applications such as face masks and shields, which are in short supply during global pandemics. New studies are proceeding to investigate further ways to improve the processing efficiency and to determine process-property relationships, such as the effects of electrospun fiber orientation on the conductivity of the carbonized fibers. These detailed studies will help to further tailor the fiber properties for targeted applications. | 10,689 | 2020-09-09T00:00:00.000 | [
"Materials Science",
"Environmental Science",
"Engineering"
] |
Double insertions of SMEFT operators in gluon fusion Higgs boson production
Deviations from the Standard Model (SM) can be parameterized in terms of the SM effective field theory (SMEFT), which is typically truncated at dimension-6. Including higher dimension operators -- as well as considering simultaneous insertions of multiple dimension-6 operators -- may be necessary in some processes, in order to correctly capture the properties of the underlying UV theory. As a step towards clarifying this in the Higgs boson production in gluon fusion process, we study double insertions of dimension-6 operators in the 1-loop virtual amplitude. We present needed Feynman rules up to $\mathcal{O}(1/\Lambda^4)$ and we numerically study the impact of various approximations to the $\mathcal{O}(1/\Lambda^4)$ expansion.
I. INTRODUCTION
Current measurements of LHC experiments are in excellent agreement with theoretical predictions, but with uncertainties at the $\mathcal{O}(5-20\,\%)$ level [1]. As a result, the High Luminosity LHC program will be focussed on high precision measurements. It is expected that the experimental uncertainties will be reduced to $\mathcal{O}(1\,\%)$ for many observables [2]. This requires precise theoretical Standard Model (SM) predictions, but also precise computations in specific Beyond the Standard Model (BSM) scenarios to describe potentially emerging small non-SM signatures. A more general approach is also possible; BSM physics which contains no new light particles and which respects the SM gauge symmetries can be parameterized using the Standard Model effective field theory (SMEFT) [3]. This consists of an expansion around the SM Lagrangian $\mathcal{L}_{SM}$ in terms of an infinite tower of higher dimension operators,

$\mathcal{L}_{SMEFT} = \mathcal{L}_{SM} + \sum_{d>4} \sum_i \frac{C_i^d}{\Lambda^{d-4}}\, O_i^d \,,$ (1)

where $\Lambda$ is chosen to be the scale of new physics, $O_i^d$ are operators of dimension $d$, and $C_i^d$ the corresponding dimensionless SMEFT Wilson coefficients (WC). Fits to the latter have been made using Higgs, di-boson, electroweak precision, and top data [4][5][6][7]. Such analyses are usually done by truncating the series in Eq. (1) after dimension-6 operators. Yet, the need for precision calls for an investigation beyond $\mathcal{O}(1/\Lambda^2)$. At the next non-trivial order, this includes studying the impact of dimension-8 SMEFT operators, but also double insertions of dimension-6 operators [8][9][10][11][12][13][14][15][16]. An amplitude, $\mathcal{A}$, for a lepton number conserving process can be parameterized in the SMEFT as a power series in $1/\Lambda^2$,

$\mathcal{A} = \mathcal{A}_{SM} + \sum_j \frac{\alpha_j^{(6)} C_j^6}{\Lambda^2} + \sum_{j,k} \frac{\alpha_{jk}^{(6,6)} C_j^6 C_k^6}{\Lambda^4} + \sum_j \frac{\alpha_j^{(8)} C_j^8}{\Lambda^4} + \dots \,,$ (2)

where the $\alpha$ coefficients are process dependent. The terms proportional to $C_j^6 C_k^6/\Lambda^4$ are the double insertions of interest here. The amplitude-squared corresponding to a cross section is then expanded generically as

$|\mathcal{A}|^2 = |\mathcal{A}_{SM}|^2 + \sum_j \frac{2\,\mathrm{Re}\big[\alpha_j^{(6)} C_j^6\, \mathcal{A}_{SM}^*\big]}{\Lambda^2}$
$\qquad + \sum_{j,k} \frac{\mathrm{Re}\big[\alpha_j^{(6)} \alpha_k^{(6)*} C_j^6 C_k^6\big]}{\Lambda^4} + \sum_{j,k} \frac{2\,\mathrm{Re}\big[\alpha_{jk}^{(6,6)} C_j^6 C_k^6\, \mathcal{A}_{SM}^*\big]}{\Lambda^4}$
$\qquad + \sum_j \frac{2\,\mathrm{Re}\big[\alpha_j^{(8)} C_j^8\, \mathcal{A}_{SM}^*\big]}{\Lambda^4} + \dots \,.$ (3)

If a coefficient is well constrained by data, it may be sufficient to retain only the $\mathcal{O}(1/\Lambda^2)$ contributions to observables. This is typically the case in fits to electroweak precision observables [17][18][19]. However, for most of the SMEFT coefficients contributing to predictions for LHC observables, the $\mathcal{O}(1/\Lambda^4)$ terms play an important role.
Global fits [4][5][6][7] include the first term on the second line of Eq. (3) (required to make the cross sections positive-definite), but the other terms of $\mathcal{O}(1/\Lambda^4)$ are more subtle.
For tree-level processes, the second term on the second line of Eq. (3) (which corresponds to a double insertion) is easily included [20,21] and can have important numerical effects [22]. The dimension-8 contributions (first term on the third line of Eq. (3)) have been studied in only a few special cases, and the numerical importance of these terms is not known in general [8][9][10][23]. In the case where the new physics that generates the SMEFT coefficients corresponds to a strongly interacting theory, it has been argued that the dimension-8 contributions are small [24].
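To make the role of the different truncations concrete, the following sketch compares the $\mathcal{O}(1/\Lambda^2)$ truncation, the addition of the squared dimension-6 term, and the addition of a double-insertion interference term for a single Wilson coefficient. All amplitude values are illustrative placeholders, not results of this paper:

```python
# Minimal sketch (illustrative numbers only): truncations of the SMEFT
# cross-section expansion of Eq. (3) for one coefficient C,
#   sigma ~ |A_SM + a6*C/L2 + a66*C^2/L2^2|^2 ,
# where a6 and a66 are hypothetical process-dependent amplitudes.
A_SM = 1.0
a6, a66 = 0.8, 0.3   # single- and double-insertion amplitudes (assumed)
C = 1.0              # dimension-6 Wilson coefficient (assumed)
L2 = 1.0             # Lambda^2 in TeV^2 (assumed)

lin = A_SM**2 + 2 * A_SM * a6 * C / L2        # O(1/Lambda^2) truncation
quad = lin + (a6 * C / L2)**2                 # + |dim-6|^2 term
full = quad + 2 * A_SM * a66 * C**2 / L2**2   # + double-insertion term

for name, val in [("O(1/L^2)", lin), ("+ squared", quad), ("+ double ins.", full)]:
    print(f"{name:14s} sigma/sigma_SM = {val / A_SM**2:.3f}")
```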
In the following, we present a preliminary investigation of the impact of double insertions on the inclusive gluon fusion Higgs boson production process. This production channel has recently been calculated in the SM to N³LO QCD [25-27]. In the SMEFT, the NLO result with single insertions of dimension-6 operators is well known [28-32]. Gluon fusion Higgs production has also been calculated to all orders in v²/Λ² using the GeoSMEFT approach [33,34]. Here, we present a study of the 1-loop contributions to the gg → h amplitude including all terms of O(1/(16π²Λ⁴)), and we investigate the numerical effects of double insertions of a consistent subset of dimension-6 SMEFT operators.
The paper is organized as follows. Section II contains a brief description of the SMEFT to O(1/Λ⁴). The 1-loop calculation of gg → h to O(1/(16π²Λ⁴)) is presented in Section III, including the insertion of two dimension-6 operators in the 1-loop amplitude and the required counterterm for the gg → h process corresponding to the dimension-8 (φ†φ)² G^A_{μν} G^{A,μν} operator. Numerical effects of the double insertions are investigated in Section IV, along with a discussion of the potential effects of neglected contributions. Finally, we conclude in Section V with a discussion of the path forward to a more complete study of the impact of O(1/Λ⁴) effects.
II. SMEFT TO O(Λ⁻⁴)
We start by presenting the pieces of the dimension-6 SMEFT Lagrangian (in the Warsaw basis [35]) which are relevant for the calculation of the virtual 1-loop gg → h diagrams containing double insertions. All the remaining necessary terms of the Lagrangian can be found in Ref. [36]. At the end of this section, we present the relationships, valid up to O(1/Λ⁴), between the original parameters of the Lagrangian and our input parameters [23].
We neglect finite contributions from dimension-8 terms. Although such contributions enter the cross section at the same order as double insertions of dimension-6 operators, they can be treated separately, as they are not required to obtain a gauge-independent result. Yet, dimension-8 operators are in general required to absorb ultraviolet (UV) divergences of O(1/Λ⁴). There is a single dimension-8 operator that can be used to this end [37,38],

    O_{G²φ⁴} = (φ†φ)² G^A_{μν} G^{A,μν} .   (4)

When renormalizing the theory, the counterterm δC_{G²φ⁴} for the WC of the operator in Eq. (4) is generated. We work in minimal subtraction, which amounts to dropping all poles; the result for δC_{G²φ⁴} is presented below. A complete understanding of dimension-8 renormalization in the SMEFT, including fermionic operators, does not yet exist, although significant progress has been made in understanding the bosonic operators [39-43].
A. Lagrangian and field redefinitions
The relevant pieces of the dimension-6 SMEFT Lagrangian can be grouped into three terms,

    L = L_Higgs + L_QCD + L_fermions .   (5)

The first one is the Higgs Lagrangian, in which φ represents the Higgs doublet, parametrized as

    φ = ( φ⁺ , (v_T + h + i φ⁰)/√2 )ᵀ .

Here, v_T is the vacuum expectation value (vev) that minimizes the Higgs potential in the presence of the SMEFT operators, and h, φ⁰, and φ⁺ represent the Higgs, the neutral Goldstone, and the charged Goldstone boson fields, respectively. The second term in Eq. (5) is the QCD Lagrangian, written in terms of the gluon field g^A_μ. Finally, L_fermions is the fermionic Lagrangian,
and we retain only the top quark contributions.
To ensure that all fields have canonical kinetic terms, we need to perform shifts of the Higgs and gluon fields, with the quantity X_h, a combination of Wilson coefficients and v_T entering the Higgs-field shift, defined in Eq. (12a).
B. Input Parameters
We choose as independent parameters the set {G_F, α_s, M_Z, M_W, M_h, m_t}, where G_F is the Fermi constant, α_s is the strong coupling constant, and M_Z (M_W), M_h, and m_t are the gauge-boson, Higgs, and top masses.
The expression for v_T can be determined from the amplitude for muon decay, including double insertions of dimension-6 operators. Assuming flavor universality of the WCs, this yields a relation between G_F, v_T, and the coefficients C^(3)_φl and C_ll, which can be inverted to express v_T in terms of the input parameters up to O(1/Λ⁴). The parameters μ² and λ are fixed by the requirement that the coefficient of the Higgs tadpole contribution vanishes (i.e., that v_T is the true vev) and that the mass of the Higgs field in the Lagrangian is given by M_h. Using also Eq. (16), we obtain the corresponding expressions. The top-quark Yukawa coupling is determined by requiring that the mass of the top-quark field in Eq. (10) is given by m_t. Finally, g_s² can be related to 4πα_s through the inverse transformation of Eq. (11c).
III. CALCULATION
We now describe the 1-loop calculation of the gg → h amplitude to O(1/(16π²Λ⁴)). The Feynman rules accurate to O(1/Λ⁴) that are relevant for our calculation are given in Appendix B. Lorentz and gauge invariance imply that, at any order, the amplitude for g^A(p₁^μ) g^B(p₂^ν) → h must have the form

    A^{AB,μν} = δ^{AB} ( p₂^μ p₁^ν − (p₁ · p₂) g^{μν} ) F ,   (22)

where, up to 1-loop,

    F = F₀ + F_V + F_CT ,   (23)

with F₀ representing the tree-level SMEFT contribution, F_V the virtual 1-loop amplitude, and F_CT the total counterterm.
The tree-level contribution F₀ is proportional to C_φG, with O(1/Λ⁴) corrections involving C^(3)_φl and C_ll from the G_F relation. F_V is computed from the diagrams shown in Fig. 1, using the software FeynMaster [44-47]. We use the true vev up to 1-loop order [48] and we work in the Parameter-Renormalized tadpole scheme [49]. Analytic results for F_V can be found in the auxiliary file submitted with this paper. Finally, F_CT is determined by identifying the original parameters and fields in Eqs. (4, 5) as bare parameters (with index "(0)") and by expanding them into renormalized quantities (e.g., C_X^(0) = C_X + δC_X, where C_X represents a generic WC). The expression for F_CT is given in Appendix A. This allows us to determine δC_{G²φ⁴} by requiring Eq. (23) to be free from divergences. We work in dimensional regularization, using D = 4 − 2ε for the spacetime dimension, and fix the counterterms of the WCs in the minimal subtraction scheme [50]. We perform the calculation in two independent ways: i) we subtract the known infrared (IR) poles using the results of Ref. [51]; and ii) we use Package-X [52] and consider only UV poles.
It is sufficient to compute the counterterms in Eq. (A2) to O(1/Λ²), since Eq. (A2) is already of O(1/Λ²). δZ_h and δZ_g can be computed from the Higgs and gluon self-energies at 1-loop, respectively; explicit expressions can be found in Appendix A. δG_F is expressed in terms of ∆r_SM and ∆r_EFT, whose expressions can be found in Appendix D of Ref. [53]. The contributions from δC^(3)_φl and δC_ll cancel when Eq. (26) is used in Eq. (A2). The contribution to δC_φG of O(1/Λ²) can be obtained from Ref. [54]; we confirmed this result by requiring that Eq. (22) be finite to O(1/Λ²), and we present it in Eq. (A1). Combining these elements, we find the expression for δC_{G²φ⁴} given in Eq. (A5).
IV. IMPACT OF DOUBLE INSERTIONS
To study the impact of double insertions on the 1-loop amplitude of the gluon fusion process, we compute the amplitude squared in two ways: i) we truncate the amplitude at O(1/Λ²) and then compute the amplitude squared; ii) we compute the amplitude to O(1/Λ⁴) and then truncate the amplitude squared at O(1/Λ⁴). The first truncation is not sensitive to the double insertions of the dimension-6 operators, and we label it as "single". The second truncation is sensitive to the double insertions of SMEFT operators, and we label it as "double". As discussed in Section II, we ignore finite effects from dimension-8 operators (i.e., we set the renormalized WC C_{G²φ⁴} to zero). We note that the latter is in fact a complete computation of the virtual amplitude up to O(1/Λ⁴) at 1-loop, neglecting finite contributions from dimension-8 operators. Since the WC C_φG contributes at tree-level, the double insertions proportional to C_φG would require the computation of 2-loop virtual graphs with single insertions of dimension-6 operators, along with 1-loop virtual graphs proportional to C_φG, to obtain an IR-finite result.
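The two truncations can be made explicit with a toy amplitude. A minimal sketch (the coefficients f0, f2, f4 are illustrative stand-ins for the SM, single-insertion, and double-insertion pieces of F; all numbers are placeholders):

```python
# Toy illustration of the "single" vs "double" truncations for an
# amplitude F = f0 + f2/lam^2 + f4/lam^4, where f4 stands in for the
# double-insertion (and counterterm) piece.

def mu_single(f0, f2, lam):
    """'single': truncate F at O(1/lam^2), then square exactly
    (this keeps the |f2|^2/lam^4 term that makes cross sections positive)."""
    F = f0 + f2 / lam**2
    return abs(F)**2 / abs(f0)**2

def mu_double(f0, f2, f4, lam):
    """'double': keep F to O(1/lam^4), square, then drop terms beyond O(1/lam^4)."""
    m2 = (abs(f0)**2
          + 2.0 * (f0.conjugate() * f2).real / lam**2
          + (abs(f2)**2 + 2.0 * (f0.conjugate() * f4).real) / lam**4)
    return m2 / abs(f0)**2

print(mu_single(1.0 + 0j, 0.3 + 0j, 2.0))          # 1.155625
print(mu_double(1.0 + 0j, 0.3 + 0j, -0.1 + 0j, 2.0))  # 1.143125
```

The two schemes differ only through the interference of f0 with the 1/lam⁴ piece of the amplitude, which is exactly the effect studied below.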
As a first step in understanding the relevance of double insertions, we consider a scenario where C_φG is generated at loop level and thus can be consistently set to zero after renormalization. This is a realistic scenario from a model-building point of view: at tree level, scalars, vector-like quarks, and vector particles in arbitrary representations that contribute to the dimension-6 SMEFT Lagrangian do not generate C_φG contributions [55]. It is interesting to note that vector-like quarks generate C_φG at 1-loop, consistent with our assumption. When we set C_φG = 0, there are no real corrections and we can study the numerical effects of the double insertions from the remaining operators, using our finite results for the renormalized amplitude to construct a cross section normalized to the SM result. (We have explicitly checked the gauge independence of our results.) For the numerical results reported below, we use M_h = 125 GeV, M_W = 80.377 GeV, M_Z = 91.1876 GeV, m_t = 172 GeV, G_F = 1.166 × 10⁻⁵ GeV⁻², and α_s = 0.1179.
The renormalization scale μ is chosen to be equal to the Higgs mass M_h. Finally, we write the virtual amplitude squared, normalized to the SM, as

    μ_ggh = 1 + Σ_i a_i C_i/Λ² + Σ_{i≤j} b_ij C_i C_j/Λ⁴ .   (27)

In the C_φG = 0 limit that we are working in, numerical results for the a_i and b_ij in the two expansions at O(1/Λ⁴) are presented in Table I.
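Given tabulated coefficients, Eq. (27) can be evaluated directly. A sketch (the coefficient values below are placeholders for illustration, not the Table I results, and powers of 1/Λ are taken to be absorbed into the tabulated a_i and b_ij):

```python
def mu_ggh(C, a, b):
    """Eq. (27): mu = 1 + sum_i a_i C_i + sum_{i<=j} b_ij C_i C_j,
    with the 1/Lambda powers absorbed into the tabulated coefficients."""
    mu = 1.0
    for i, ai in a.items():
        mu += ai * C[i]
    for (i, j), bij in b.items():
        mu += bij * C[i] * C[j]
    return mu

# Placeholder numbers: compare the "single" and "double" coefficient sets.
C = {"CtG": 0.2, "Ctphi": -0.5}
a = {"CtG": 0.8, "Ctphi": -0.1}
b_single = {("CtG", "Ctphi"): 0.05}
b_double = {("CtG", "Ctphi"): 0.08}
print(mu_ggh(C, a, b_single), mu_ggh(C, a, b_double))
```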
We first note that some contributions containing C_ll or C^(3)_φl are present in the single but vanish in the double setup. From the Feynman diagrams shown in Fig. 1 it can be seen that these contributions are proportional to 1/(R_φ² v_T²), which vanishes in the double expansion. Consequently, the functional dependence of the amplitude on these WCs in the two expansions is quite different; for example, we show this for the combination of C^(3)_φl and C_tG in the upper plot of Fig. 2. In this figure we show the regions where |μ_ggh − 1| is less than 5%. For a given value of C^(3)_φl and C_tG, the remaining coefficients C_ll, C_φ□, C_φD, and C_tφ are varied over the region allowed by the 95% CL individual fits of Ref. [5]. It is clear that the difference between the single and double insertion expansions has no phenomenological relevance, since the values of the parameters plotted are excluded by fits to Higgs data [4,5,7]. We do not show it explicitly, but we have checked that the same conclusion holds for all other combinations that include C_ll and/or C^(3)_φl. We also observe a non-trivial change in the coefficient of C_tG, and we show a fit in combination with C_φ□ to the value of the SM amplitude squared in Fig. 2 (bottom). Also in this case, significant differences between single and double expansions only occur for values of the WCs far beyond current single-parameter limits [5].
The biggest change is in the coefficient of C_tG C_tφ. For this combination of WCs, the allowed parameter space is available in Ref. [5] from 2-parameter fits to Higgs and Higgs-plus-top data at 95% CL. In Fig. 3, we show these regions together with a fit to |μ_ggh − 1| < 5%. The difference in the results for the single and double expansions is small, and the comparison demonstrates the power of including top data in the fits: while fits to Higgs data alone show a small sensitivity to the expansion, when top data is included with the Higgs data there is again no difference between the two expansions in the region allowed by global fits.
V. CONCLUSIONS
We computed the 1-loop amplitude for the gluon fusion process gg → h including all contributions of dimension-6 operators up to O(1/(16π²Λ⁴)). This includes double insertions of dimension-6 operators and the relationships between the parameters in the SMEFT Lagrangian and physical observables to this order. We derived the necessary Feynman rules valid up to O(1/Λ⁴) and determined the required counterterm to obtain a UV-finite result at this order. For our numerical studies, we considered the limit C_φG = 0, which ensures that there are no infrared singularities. We note that this is a well-motivated scenario, since in many BSM models C_φG is only generated at the 1-loop level. We then compared the gluon fusion cross section in the different expansions up to O(1/Λ⁴) and found that the impact of the double insertions is negligible for values of the WCs allowed by global fits, neglecting the unknown dimension-8 contributions.
An extension of this study including the effects of C_φG and its double insertions would require 2-loop virtual amplitudes with up to two insertions of dimension-6 SMEFT operators, as well as real-virtual and double-real emission contributions. We leave this exercise for future investigations.
Digital data associated with this research is contained in the auxiliary file attached to this paper.
Appendix A collects the counterterm expressions: the O(1/Λ²) contribution to δC_φG, denoted δC⁶_φG (Eq. A1); the quantity F_CT defined in Eq. (23) (Eq. A2); the pole parts of δZ_h and δZ_g (Eqs. A3, A4); and the dimension-8 counterterm δC⁸_φG ≡ δC_{G²φ⁴} (Eq. A5).
Figure 1. Virtual 1-loop contributions to the gluon-fusion-to-Higgs amplitude, including contributions from both single and double insertions of dimension-6 SMEFT operators. Conventions used throughout the paper concerning 4-momenta, Lorentz indices, and colour indices are shown in diagram (1). Note that diagrams (1), (2), and (6) also contribute with crossed initial states (not shown for compactness).
Figure 2. Regions where |μ_ggh − 1| < 5% are shown for single insertions (squared blue) and double insertions (orange). The limits from global fits to individual operators at 95% CL are denoted by the black cross [4,5,7]. The WCs not shown are varied over values allowed by the 95% CL fits to individual coefficients of Ref. [4].
Figure 3. Allowed parameter space from a 2-parameter fit to C_tφ and C_tG. Yellow (hashed) and green (fine hashed) ellipses show constraints from linear fits at 95% CL to Higgs data and Higgs-plus-top data, respectively [5]. Regions where |μ_ggh − 1| < 5% are shown for single insertions (squared blue) and double insertions (orange). The WCs not shown are varied over values allowed by the 95% CL fits to individual coefficients of Ref. [4].
Table I. Numerical results for the linear coefficients a_i and the coefficients b_ij of pairs of SMEFT WCs, c.f. Eq. (27). Results are shown with (third column) or without (second column) double insertions. In the fourth column we show the ratio of single coefficients over double coefficients. Ratios given as rational numbers are exact. Numerical values for the physical parameters are reported in Section IV. See text for further details.
"Physics"
] |
An Effective Framework for Weakly-Supervised Phrase Grounding
Phrase localization is a task that studies the mapping from textual phrases to regions of an image. Given difficulties in annotating phrase-to-object datasets at scale, we develop a Multimodal Alignment Framework (MAF) to leverage more widely-available caption-image datasets, which can then be used as a form of weak supervision. We first present algorithms to model phrase-object relevance by leveraging fine-grained visual representations and visually-aware language representations. By adopting a contrastive objective, our method uses information in caption-image pairs to boost the performance in weakly-supervised scenarios. Experiments conducted on the widely-adopted Flickr30k dataset show a significant improvement over existing weakly-supervised methods. With the help of the visually-aware language representations, we can also improve the previous best unsupervised result by 5.56%. We conduct ablation studies to show that both our novel model and our weakly-supervised strategies significantly contribute to our strong results.
Introduction
Language grounding involves mapping language to real objects or data. Among language grounding tasks, phrase localization, which maps phrases to regions of an image, is a fundamental building block for other tasks. In the phrase localization task, each data point consists of one image and its corresponding caption, i.e., d = (I, S), where I denotes an image and S denotes a caption. Typically, the caption S contains several query phrases P = {p_n}_{n=1}^N, where each phrase is grounded to a particular object in the image. The goal is to find the correct relationship between (query) phrases in the caption and particular objects in the image. Existing work (Rohrbach et al., 2016; Kim et al., 2018; Li et al., 2019; Yu et al., 2018; Liu et al., 2020) mainly focuses on the supervised phrase localization setting. This requires a large-scale annotated dataset of phrase-object pairs for model training. However, given difficulties associated with the manual annotation of objects, the size of grounding datasets is often limited. For example, the widely-adopted Flickr30k (Plummer et al., 2015) dataset has 31k images, while the caption dataset MS COCO (Lin et al., 2014) contains 330k images.
To address this limited-data challenge, two different approaches have been proposed. First, a weakly-supervised setting, which requires only caption-image annotations (i.e., no phrase-object annotations), was proposed by Rohrbach et al. (2016). This is illustrated in Figure 1. Second, an unsupervised setting, which does not need any training data (i.e., neither caption-image nor phrase-object annotations), was proposed by Wang and Specia (2019). To bring more semantic information into such a setting, previous work (Yeh et al., 2018; Wang and Specia, 2019) used the detected object labels from an off-the-shelf object detector (which we will generically denote by PreDet) and achieved promising results. In more detail, for a given image, the PreDet detects a set of objects O together with their labels. Afterward, all the query phrases P and the detected objects O are fed into an alignment model to predict the final phrase-object pairs. However, purely relying on the object labels causes ambiguity. For example, in Figure 2 (whose caption reads "An older gentleman is standing next to the man with a red accordion over his shoulder"), the grounded objects of the phrases "an older man" and "the man with a red accordion" are both labeled as "man," and thus they are hard to differentiate.
Given these observations, we propose a Multimodal Alignment Framework (MAF), which is illustrated in Figure 3. Instead of using only the label features from the PreDet (in our case, a Faster R-CNN (Ren et al., 2015; Anderson et al., 2018a)), we also enhance the visual representations by integrating visual features from the Faster R-CNN into the object labels. (This is shown in Figure 2.) Next, we build visually-aware language representations for phrases, which can thus be better aligned with the visual representations. Based on these representations, we develop a multimodal similarity function to measure the caption-image relevance with phrase-object matching scores. Furthermore, we use a training objective to score relevant caption-image pairs higher than irrelevant caption-image pairs, which guides the alignment between visual and textual representations.
We evaluate MAF on the public phrase localization dataset, Flickr30k Entities (Plummer et al., 2015). Under the weakly-supervised setting (i.e., using only caption-image annotations without the more detailed phrase-object annotations), our method achieves an accuracy of 61.43%, outperforming the previous weakly-supervised results by 22.72%. In addition, in the unsupervised setting, our visually-aware phrase representation improves the performance from the previous 50.49% by 5.56%, up to 56.05%. Finally, we validate the effectiveness of our model components, learning methods, and training techniques by showing their contributions to our final results.
Related Work
With recent advances in research in computer vision and computational linguistics, multimodal learning, which aims to explore the explicit relationship between vision and language, has drawn significant attention. Multimodal learning involves diverse tasks such as Captioning (Vinyals et al., 2015; Xu et al., 2015; Karpathy and Fei-Fei, 2015; Venugopalan et al., 2015), Visual Question Answering (Anderson et al., 2018a; Kim et al., 2018; Tan and Bansal, 2019), and Vision-and-Language Navigation (Anderson et al., 2018b; Thomason et al., 2020). Most of these tasks would benefit from better phrase-to-object localization, a task which attempts to learn a mapping between phrases in the caption and objects in the image by measuring their similarity. Existing works consider the phrase-to-object localization problem under various training scenarios, including supervised learning (Rohrbach et al., 2016; Yu et al., 2018; Liu et al., 2020; Plummer et al., 2015; Li et al., 2019) and weakly-supervised learning (Rohrbach et al., 2016; Yeh et al., 2018; Chen et al., 2018). Besides the standard phrase-object matching setup, previous works (Xiao et al., 2017; Akbari et al., 2019; Datta et al., 2019) have also explored a pixel-level "pointing-game" setting, which is easier to model and evaluate but less realistic. Unsupervised learning was studied by Wang and Specia (2019), who directly use word similarities between object labels and query phrases to tackle phrase localization without paired examples. Similar to the phrase localization task, Hessel et al. (2019) leverage document-level supervision to discover image-sentence relationships over the web.

Figure 3. A dataset of images and their captions is the input to our model. PreDet predicts bounding boxes for objects in the image and their labels, attributes, and features, which are then integrated into visual feature representations. Attention is applied between word embeddings and visual representations to compute the visually-aware language representations for phrases. Finally, a multimodal similarity function is used to measure the caption-image relevance based on the phrase-object similarity matrix.
Visual Feature Representations. Existing work uses the final output feature of PreDet (denoted as f_m) as the VFR, and Wang and Specia (2019) use the label embedding (denoted as l_m) of the predicted label from PreDet as the VFR. Such a unitary VFR usually lacks the counter-side information. Hence, we exploit different aspects of the features extracted from PreDet for each object o_m in the image. In particular, we combine the output feature f_m, the label embedding l_m, and the attribute embedding t_m of the object o_m into the VFR v_m, where W_t and W_f are two projection matrices applied to t_m and f_m, respectively. Naively initializing W_t and W_f will lead the model to a sub-optimal solution. In Section 4, we discuss the effectiveness of different initializations.
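A minimal sketch of one way to implement this combination. The exact composition rule is not reproduced above; the elementwise sum below is our assumption, chosen so that zero-initializing W_f reduces v_m to the label-dominated representation discussed in Section 4:

```python
import torch.nn as nn

class VisualFeatureRepr(nn.Module):
    """Combine label, attribute, and detector features into one VFR per object."""
    def __init__(self, d_emb: int, d_feat: int):
        super().__init__()
        self.W_t = nn.Linear(d_emb, d_emb, bias=False)   # attribute projection
        self.W_f = nn.Linear(d_feat, d_emb, bias=False)  # visual-feature projection
        nn.init.zeros_(self.W_f.weight)  # zero init: start from label information

    def forward(self, l_m, t_m, f_m):
        # l_m, t_m: (M, d_emb) label/attribute embeddings; f_m: (M, d_feat)
        return l_m + self.W_t(t_m) + self.W_f(f_m)  # assumed additive fusion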
Textual Feature Representations. Existing works on the textual feature representation (TFR) (Kim et al., 2018; Yu et al., 2018; Wang and Specia, 2019) commonly treat it independently of the VFR. From a different angle, we use the attention between the textual features and the VFRs v_m to integrate visual information from the objects into the TFR. In more detail, we first use GloVe embeddings (Pennington et al., 2014) to encode the K_n words in the phrase p_n as {h_{n,k}}_{k=1}^{K_n}, where h_{n,k} ∈ R^d. Here, the dimension of h_{n,k} is the same as that of v_m. We then define a word-object matching score a^m_{n,k} between each word h_{n,k} in the phrase and each object feature v_m. In particular, for each word h_{n,k} in the phrase, we select the object with the highest matching score, a_{n,k} = max_m a^m_{n,k}. Finally, we normalize the attention weights over the words of the phrase p_n into weights β_{n,k} to obtain the final TFR, e_n, where W_p is a projection matrix. In Section 4, we study the (superior) performance of the weighting β_{n,k} over a simple average of the h_{n,k}, as well as the importance of the initialization of W_p.
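A sketch of this attention, under the assumption that the matching score a^m_{n,k} is a dot product and that the normalization over words is a softmax (both plausible readings, not stated explicitly above):

```python
import torch
import torch.nn.functional as F

def textual_feature_repr(h, v, W_p):
    """h: (K, d) word embeddings of one phrase; v: (M, d) object VFRs;
    W_p: (d, d) projection. Returns the phrase representation e_n of shape (d,)."""
    scores = h @ v.T                      # (K, M) word-object matching scores a^m_{n,k}
    a = scores.max(dim=1).values          # (K,) best-matching object score per word
    beta = F.softmax(a, dim=0)            # assumed normalization over phrase words
    return (beta.unsqueeze(1) * h).sum(0) @ W_p  # weighted word sum, then projection
```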
Training Objective and Learning Settings
Contrastive loss. For the weakly-supervised setting, we use a contrastive loss to train our model, due to the lack of phrase-object annotations. The contrastive objective L aims to learn the visual and textual features by maximizing the similarity score sim(I, S) between paired image-caption examples and minimizing it for negative samples (i.e., other, irrelevant images). Inspired by previous work in caption ranking (Fang et al., 2015), for each caption sentence we use all the images I in the current batch as candidate examples, where sim(I, S) is the multimodal similarity function defined below.
Multimodal Similarity Functions. Following the document-level dense correspondence function in Hessel et al. (2019), our multimodal similarity function is built from the phrase-object similarity matrix A ∈ R^{N×M}, whose component A_{n,m} is the matching score between the phrase representation e_n and the object representation v_m. The image-caption similarity sim(I, S) is calculated from the similarity score between each phrase in the caption and each object in the image, aggregating for each phrase its best-matching object score max_m A_{n,m}. Note that this maximum function directly connects our training objective and inference target, which alleviates the discrepancy between training and inference.
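A compact sketch of the similarity function and the contrastive objective. The dot-product form of A_{n,m}, the averaging over phrases, and the softmax-over-batch loss follow the caption-ranking style of Fang et al. (2015) and are our assumptions where the exact formulas are left implicit above:

```python
import torch
import torch.nn.functional as F

def caption_image_similarity(E, V):
    """E: (N, d) phrase representations e_n; V: (M, d) object VFRs v_m.
    sim(I, S): average, over phrases, of the best phrase-object score."""
    A = E @ V.T                         # (N, M) phrase-object similarity matrix
    return A.max(dim=1).values.mean()   # max_m A_{n,m}, then mean over n

def contrastive_loss(captions, images):
    """captions: list of E tensors; images: list of V tensors (aligned batch).
    Each caption must score its own image above the other batch images."""
    sims = torch.stack([torch.stack([caption_image_similarity(E, V)
                                     for V in images])
                        for E in captions])     # (B, B) matrix of sim(I_b', S_b)
    targets = torch.arange(sims.size(0))        # the diagonal holds the true pairs
    return F.cross_entropy(sims, targets)       # -log softmax over candidate images
```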
Weakly-supervised setting. During training, our PreDet model is frozen. The word embeddings, W_t, W_f, and W_p are trainable parameters. Here, the word embedding is initialized with GloVe (Pennington et al., 2014). We study different initialization methods for the rest in Section 4. During inference, for the n-th phrase p_n in an image-caption pair, we choose the localized object as the one maximizing the phrase-object score, argmax_m A_{n,m}. Unsupervised setting. In the unsupervised setting, the localized object is determined in the same way, except that we drop the parameters W_t, W_f, and W_p, because there is no training in the unsupervised setting, and β_{n,k} is calculated based only on l_m (instead of v_m).
Empirical Results
Dataset details. The Flickr30k Entities dataset contains 224k phrases and 31k images in total, where each image is associated with 5 captions and multiple localized bounding boxes. We use 30k images from the training set for training and 1k images for validation. The test set consists of 1k images with 14,481 phrases. Our evaluation metric is the same as in Plummer et al. (2015): we consider a prediction to be correct if the IoU (Intersection over Union) score between our predicted bounding box and the ground-truth box is larger than 0.5. Following Rohrbach et al. (2016), if there are multiple ground-truth boxes, we use their union regions as a single ground-truth bounding box for evaluation.
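The evaluation criterion can be made concrete with two small helpers (boxes as (x1, y1, x2, y2); treating the "union regions" rule as the enclosing box is one common reading):

```python
def iou(a, b):
    """Intersection over Union of two boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def union_box(boxes):
    """Merge multiple ground-truth boxes into a single enclosing box."""
    xs1, ys1, xs2, ys2 = zip(*boxes)
    return (min(xs1), min(ys1), max(xs2), max(ys2))

# A prediction `pred` counts as correct if iou(pred, union_box(gts)) > 0.5.
```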
Weakly-supervised Results. We report our weakly-supervised results on the test split in Table 1. We include here upper bounds (UB), which are determined by the correct objects detected by the object detectors (if available). Our MAF with a ResNet-101-based Faster R-CNN detector pretrained on Visual Genome (VG) (Krishna et al., 2017) achieves an accuracy of 61.43%. This outperforms previous weakly-supervised methods by 22.71%, and it narrows the gap between weakly-supervised and supervised methods to 15%. We also implement MAF with a VGG-based Faster R-CNN feature extractor pretrained on PASCAL VOC 2007 (Everingham et al., 2010), following the setting in KAC (Chen et al., 2018), and we use the same bounding-box proposals as for our ResNet-based detector. We achieve an accuracy of 44.39%, which is 5.68% higher than existing methods, showing a solid improvement under the same backbone model.

Unsupervised Results. We report our unsupervised results for the phrase localization method (described in Section 3.2) in Table 2; more unsupervised results are available in Appendix B. For a fair comparison, we re-implemented Wang and Specia (2019) with a Faster R-CNN model trained on Visual Genome (Krishna et al., 2017). This achieves 49.72% accuracy (similar to the 50.49% reported in their paper). Overall, our result (with the VG detector) significantly outperforms the previous best result by 5.56%, which demonstrates the effectiveness of our visually-aware language representations.

Table 2. Unsupervised phrase localization results on Flickr30k Entities. w2v-max refers to the similarity algorithm proposed in Wang and Specia (2019); Glove-att refers to our unsupervised inference strategy in Section 3.2; CC, OI, and PL stand for detectors trained on MS COCO (Lin et al., 2014), Open Images (Krasin et al., 2017), and Places (Zhou et al., 2017).

Ablation Experiments. In this section, we study the effectiveness of each component and learning strategy in MAF. The comparison of different feature representations is shown in Table 3. Replacing the visual-attention-based TFR with an average-pooling-based one decreases the result from 61.43% to below 60%. For the VFR, using only the object label l_m or only the visual feature f_m decreases the accuracy by 4.20% and 2.94%, respectively. One interesting finding here is that the performance with all visual features (last row) is worse than the model with only l_m and f_m. We can infer that attributes do not provide much information for localization (24.08% accuracy if used alone), partly because attributes are not frequently used to differentiate objects in Flickr30k captions. We then investigate the effects of different initialization methods for the two weight matrices, W_f and W_p. The results are presented in Table 4. Here, ZR means zero initialization, RD means random initialization with Xavier (Glorot and Bengio, 2010), and ID+RD means identity with small random-noise initialization. We run each experiment five times with different random seeds and compute the variance. According to Table 4, the best combination is zero initialization for W_f and identity+random initialization for W_p. A likely explanation: (i) the label embedding l_m carries most of the useful information (cf. Table 3), thus using RD to initialize W_f will disturb the feature from l_m; (ii) for W_p, an RD initialization will disrupt the information from the attention mechanism, while ID+RD can both ensure basic text/visual feature matching and introduce a small random noise for training.
Conclusions
We present a Multimodal Alignment Framework, a novel method with fine-grained visual and textual representations for phrase localization, and we train it under a weakly-supervised setting, using a contrastive objective to guide the alignment between visual and textual representations. We evaluate our model on Flickr30k Entities and achieve substantial improvements over the previous state-of-the-art methods with both weakly-supervised and unsupervised training strategies. Detailed analysis is also provided to help future works investigate other critical feature enrichment and alignment methods for this task.
B Baselines
In Table 5, we report the results of different unsupervised methods:
• Random: Randomly localize to a detected object.
• Center-obj: Localize to the object which is closest to the center of the image, where we use an L1 distance D = |x − x_center| + |y − y_center|.
• Max-obj: Localize to the object with the maximal area.
• Whole Image: Always localize to the whole image.
• Direct Match: Localize with the direct match between object labels and words in the phrase, e.g., localize "a red apple" to the object with the label "apple." If multiple labels are matched, we choose the one with the largest bounding box.
• Glove-max: Consider every word-label similarity independently and select the object label with the highest semantic similarity with any word.
• Glove-avg: Represent a phrase using average pooling over GloVe word embeddings and select the object label with the highest semantic similarity with the phrase representation.
• Glove-att: Use our visual-attention-based phrase representation, as described in Section 3.1.
Note that in all label-based methods (Direct Match (Wang and Specia, 2019), and our unsupervised method), if multiple bounding boxes share the same label, we choose the largest one as the predicted box.
C Qualitative Analysis
To analyze our model qualitatively, we show some visualization results in Figure 4 and Figure 5. Figure 4 shows examples with consistent predictions between the supervised and unsupervised models. In these cases, both methods successfully learn to localize various objects, including persons ("mother"), clothes ("shirt"), landscapes ("wave"), and numbers ("56"). Figure 5 shows examples where the supervised and unsupervised methods localize to different objects. In the first image, they both localize the phrase "entrance" incorrectly. In the remaining three images, the supervised method learns to predict a tight bounding box on the correct object, while the unsupervised method localizes to other, irrelevant objects. For example (bottom-left image of Figure 5), if the object detector fails to detect the "blanket," then the unsupervised method can never localize "green blanket" to the right object. Still, the supervised method can learn from negative examples and obtain more information.
"Computer Science"
] |
Impact of a companion and of chromospheric emission on the shape of chromosome maps for globular clusters
Context. Globular clusters (GCs) host multiple populations of stars that are well-separated in a photometric diagram - the chromosome map - built from specific Hubble Space Telescope (HST) filters. Stars from different populations feature at various locations on this diagram due to peculiar chemical compositions. Stars of the first population, with field star-like abundances, sometimes show an unexpected extended distribution in the chromosome map. Aims. We aim to investigate the role of binaries and chromospheric emission on HST photometry of globular clusters' stars. We quantify their respective effects on the position of stars in the chromosome map, especially among the first population. Methods. We computed atmosphere models and synthetic spectra for stars of different chemical compositions, based on isochrones produced by stellar evolution calculations with abundance variations representative of first and second populations in globular clusters. From this we built synthetic chromosome maps for a mixture of stars of different chemical compositions. We subsequently replaced a fraction of stars with binaries, or stars with chromospheric emission, using synthetic spectroscopy. We studied how the position of stars is affected in the chromosome map. Results. Binaries can, in principle, explain the extension of the first population in the chromosome map. However, we find that given the binary fraction reported for globular clusters, the density of stars in the extended part is too small. Another difficulty of the binary explanation is that the shape of the distribution of the first population in the chromosome map is different in clusters with similar binary fractions. Also, the decrease of the binary fraction with radius is not mirrored in the shape of the chromosome map. Additionally, we find that the contribution of chromospheric emission lines to the HST photometry is too small to have an observable impact on the shape of the chromosome map. Continuum chromospheric emission has an effect qualitatively similar to binaries. Conclusions. We conclude that binaries do have an impact on the morphology of the chromosome map of globular clusters, but they are unlikely to explain entirely the shape of the extended distribution of the first population stars. Uncertainties in the properties of continuum chromospheric emission of stars in GCs prevent any quantitative conclusion. Therefore, the origin of the extended first population remains unexplained.
Introduction
Globular clusters (GCs) host multiple stellar populations (MSPs) that are identified either photometrically or spectroscopically. In color-magnitude diagrams (CMDs) built with specific filters, they show multiple (almost parallel) sequences from the main sequence to the giant branches (red giant branch, RGB, and asymptotic giant branch, AGB; see e.g., Bedin et al. 2004; Piotto et al. 2007; Soto et al. 2017). Spectroscopy indicates that a fraction of stars have chemical compositions similar to field stars, while others show enrichment in nitrogen, sodium, and (sometimes) aluminum together with depletions in carbon, oxygen, and (sometimes) magnesium (e.g., Sneden et al. 1992; Gratton et al. 2007; Lind et al. 2009; Carretta et al. 2010; Marino et al. 2011; Carretta 2015). Stars with different chemical compositions are found on different sequences of the CMDs (for recent reviews see e.g., Gratton et al. 2019).
A powerful diagram to separate MSPs through multiband photometry was introduced by Milone et al. (2015). Called the "chromosome map" thanks to its morphology, it is based on two indices built from a specific combination of Hubble Space Telescope (HST) filters. The first one is the color (m275W − m814W), where the numbers refer to the HST filters. The second index is the color difference (m275W − m336W) − (m336W − m438W), which we refer to as C438W in this paper. In the (pseudo-)CMDs showing, respectively, m814W versus (m275W − m814W) and m814W versus C438W, two lines are defined to bracket the giant branch. The relative position of stars with respect to these two lines is quantified by Δ(m275W − m814W) and ΔC438W in each CMD (see Eqs. 1 and 2 of Milone et al. 2017). The chromosome map shows the latter as a function of the former (see e.g., right panel of Fig. 2).
Several papers (Milone et al. 2017, 2018; Zennaro et al. 2019; Saracino et al. 2019) present a collection of chromosome maps for Galactic and extra-Galactic globular clusters observed by the HST Survey of GCs (Nardiello et al. 2018) and follow-up studies. The general shape is a cloud of points stretching from the bottom right to the upper left of the diagram. Stars of the first population (i.e., with field-like surface abundances) lie at the origin, coordinate (0,0), while stars of the second population, with peculiar abundances, are located on the rest of the observed sequence. In this paper, we refer to these populations and sequences as P1 and P2, respectively, as is commonly done (e.g., Lardo et al. 2018). Spectroscopic analysis of stars along the chromosome maps of several GCs confirms that P1 corresponds to stars that have halo-like abundances, while P2 stars align along the well-known C-N and O-Na anticorrelations (Cabrera-Ziri et al. 2019; Marino et al. 2019a).
In the chromosome map, the Y axis is sensitive to the shape of the spectral energy distribution (SED) in the UV and blue optical. This wavelength region contains molecular lines from CN, CH, NH, SiO, and OH and thus depends on the abundance of carbon, nitrogen, and oxygen (Sbordone et al. 2011; Dotter et al. 2015). In fact, given the relative changes of these three elements among the various subpopulations of GCs, it is the nitrogen content that dominates ΔC438W (see Fig. 8 of Milone et al. 2018). The X axis, Δ(m275W − m814W), probes the SED between the UV and the near-infrared. At first order, it may be an indicator of effective temperature. In that case, its variation can be attributed to a modification of the helium content, as suggested by Milone et al. (2018) and Lardo et al. (2018). Indeed, a higher helium mass fraction implies a lower opacity, which in turn makes a star of a given mass, age, and metallicity hotter (e.g., Chantereau et al. 2016). Metallicity can also affect the SED, with more metal-rich stars being cooler (Marino et al. 2019b). The dependence of Δ(m275W − m814W) and ΔC438W on light-element abundances naturally explains the shape of the chromosome map from the origin through the P2 sequence, since second-population stars are expected to be polluted by products of the CNO cycle, meaning they should be N- and He-rich (e.g., Prantzos et al. 2017, and references therein).
A more surprising feature is the extension of the P1 sequence, mainly along the X axis but with a small tilt compared to it. This elongation is observed in some GCs (see e.g., Fig. 1 of Marino et al. 2019a). In view of the dependence on surface abundances shown by Milone et al. (2018), these authors, as well as Lardo et al. (2018), attributed the existence of an extended P1 to a variation of the helium content among first-population stars. However, this variation must not be accompanied by any nitrogen enrichment, to prevent a too large increase of ΔC438W. Such a trend (helium enrichment, but with primordial nitrogen) is at odds with the current paradigm, according to which hydrogen burning through the CNO cycle (and the NeNa chain) is responsible for the observed chemical patterns (e.g., Charbonnel 2016; Gratton et al. 2019). Additionally, Tailo et al. (2019) argue that a different helium content among the first population leads to inconsistent properties of horizontal-branch and RR Lyrae stars.
The shape of the SED, probed by the (m275W − m814W) color, may change under effects other than variations of effective temperature. In the UV range, the stars on the RGB and AGB which are used to build the chromosome map emit relatively little flux. Any additional source of photons at those wavelengths may thus alter the magnitude in the bluest filters. In the present paper, we investigate how the presence of a companion and chromospheric emission impact the shape of the chromosome map, and especially the extension of the P1 sequence. In Sect. 2, we describe how we build synthetic GCs with multiple populations. We then study the effects of binaries (Sect. 3) and of chromospheric emission (Sect. 4). We summarize our results in Sect. 5.
Synthetic cluster construction
We built synthetic CMDs from which we have extracted the chromosome map. To do so, we started from the stellar evolution models and isochrones presented by Chantereau et al. (2015) for [Fe/H] = −1.53, typical of the GCs NGC 5272, NGC 5986, NGC 6205, NGC 6254, NGC 6584, NGC 6752, and NGC 6981 (see their chromosome maps in Fig. 5 of Milone et al. 2017). The initial compositions assumed for the second-population models and isochrones rely on predictions of the fast-rotating massive star scenario (FRMS; Decressin et al. 2007a,b; Krause et al. 2013). In particular, they extend up to very high values of the initial He mass fraction (Y varies from 0.248, the value assumed for P1, up to 0.8, the value corresponding to pollution by FRMS at the end of the main sequence). We note, however, that in these models the abundance variations of the light elements relevant for our study already occur for very limited He variations. In particular, N increases by 0.5 dex for Y=0.260, and for Y=0.4 the nitrogen enhancement already reaches about 1 dex. These enrichments (N for a given Y) are nonetheless not as fast as in other scenarios (e.g., in the case of pollution by supermassive stars, nitrogen enrichment reaches ∼1.3 dex for Y=0.38; Gieles et al. 2018). In Fig. 1, the black isochrone is the non-polluted (P1) case, while the other colored lines correspond to different degrees of chemical pollution. For simplicity, we label them according to their initial helium content, but other elements are also changed: the higher the helium content, the higher (lower) the nitrogen (carbon and oxygen) abundances, among others. We refer to Chantereau et al. (2015, 2016) for further details on the evolutionary and isochrone computations. In view of the determinations of the helium mass fraction in GCs by Milone et al. (2018), we only considered the isochrones with Y up to 0.4 for this paper.
To move from the HRD to the CMDs, we computed atmosphere models and synthetic spectra along the isochrones of Fig. 1. We selected points along these isochrones (symbols in Fig. 1). At these locations, we adopted the stellar parameters of the evolutionary calculations (T_eff, surface gravity, surface abundances) and used them as input for atmosphere models. We used the codes ATLAS (Kurucz 2014) and SYNTHE (Kurucz 2005) for the atmosphere models and synthetic spectra computations, respectively. The resulting SEDs thus have parameters fully consistent with the evolutionary calculations, in particular the same abundances.
Once obtained, the SEDs were used to compute synthetic photometry in the HST WFC3 F275W, F336W, and F438W and ACS F814W filters. We could thus build the CMDs m814W versus (m275W − m814W) and m814W versus C438W. In each of these diagrams, we could place stars selected by their positions in the HRD (i.e., by their effective temperature and luminosity) according to their corresponding magnitudes and colors. For each chemical composition, we subsequently interpolated between the points in the two CMDs to build a synthetic isochrone. We used an extinction E(B−V) = 0.60 and a distance modulus of 13.15 to set the magnitude scale. These choices are tailored to reproduce the parameters of the cluster NGC 6752, but we stress that the color differences used to build the chromosome map are independent of them (assuming homogeneous extinction among cluster stars).
The next step consisted of the simulation of a synthetic cluster with different populations, that is, with stars of different chemical compositions. To do so, we first drew points at random along each isochrone, in the two CMDs, assuming the same distribution of m_F814W magnitudes as that of the cluster NGC 6752. For each randomly selected m_F814W, the color (either (m275W − m814W) or C438W) was read from the synthetic isochrone. For a realistic study, we introduced in each color a dispersion drawn randomly from a Gaussian distribution centered on the theoretical color, with a dispersion equal to one third of the color dispersion computed by Martins (2018). This choice was made to obtain a dispersion in the chromosome map qualitatively consistent with observations. The final step consisted of the selection of populations with different chemical compositions. For this, we adopted the following population fractions: 34% with Y=0.248, 11% with Y=0.260, 11% with Y=0.270, 11% with Y=0.300, 11% with Y=0.330, 11% with Y=0.370, and 11% with Y=0.400. As such, the cluster is made of 1/3 of stars from the first population and 2/3 from the second one, a classic ratio among GCs and the actual value for NGC 6752 (Milone et al. 2017). We created a synthetic cluster with a total of 17,000 stars. This number is tailored to have about the same number of stars on the RGB+AGB as in NGC 6752. The final synthetic cluster in the two CMDs is shown in the left and middle panels of Fig. 2. The different populations are still visible (especially in the m814W versus C438W diagram) in spite of the dispersion introduced along each isochrone.
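A schematic numpy version of this sampling (the magnitude sampler, the isochrone color interpolators, and the dispersion value are placeholders for the quantities described above):

```python
import numpy as np

rng = np.random.default_rng(0)

# Population helium fractions and number fractions (1/3 P1, 2/3 P2).
Y_VALUES = [0.248, 0.260, 0.270, 0.300, 0.330, 0.370, 0.400]
FRACTIONS = [0.34, 0.11, 0.11, 0.11, 0.11, 0.11, 0.11]

def sample_cluster(n_stars, m814_sampler, color_on_isochrone, sigma_color):
    """m814_sampler(n): draws n magnitudes following the observed NGC 6752
    distribution; color_on_isochrone(Y, m814): interpolated synthetic color;
    sigma_color: one third of the color dispersion of Martins (2018)."""
    ys = rng.choice(Y_VALUES, size=n_stars, p=FRACTIONS)   # assign populations
    m814 = m814_sampler(n_stars)
    color = np.array([color_on_isochrone(y, m) for y, m in zip(ys, m814)])
    color += rng.normal(0.0, sigma_color, size=n_stars)    # Gaussian scatter
    return ys, m814, color
```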
The chromosome map was subsequently built from the two CMDs following the method described by Milone et al. (2017). The only difference is that we selected the so-called fiducial lines visually rather than using number counts in different magnitude bins. The fiducial lines are shown by the red and blue solid lines in Fig. 2. The resulting chromosome map clearly shows the groups of stars with different chemical compositions. The red and black populations, which correspond to the least and non-chemically processed ones, respectively, are separated by a region almost devoid of stars, and are located almost on top of each other. This is explained by the rapid increase in nitrogen (which mainly dominates the C438W index) and the slower helium enrichment in the early stages of the CNO cycle. The material that polluted the red population is made of such nitrogen-rich and (quasi-)normal-helium material.
Inclusion of binaries
We now proceed with the estimate of the impact of binaries on the distribution of stars in the chromosome map. We first describe our method and subsequently discuss our results.
Method
To investigate the role of binaries in the chromosome map, we first studied the impact of a companion on the magnitude m814W and the colors (m275W − m814W) and C438W. We proceeded as follows. Firstly, we selected three representative spectra of models along the Y=0.248 isochrone: the most luminous one, the middle one, and the one at the bottom of the RGB (see the orange arrows in Fig. 1). To each of these models, we added the spectra of stars on the main sequence, from the turn-off to the least luminous one in Fig. 1 (purple arrows). We thus assumed that each system is made of two stars of the same population. Fig. 3 illustrates the process: the red line is the RGB spectrum, to which we added the main-sequence blue spectrum to finally obtain the total black spectrum. The latter spectrum is used to compute synthetic photometry, from which we calculated the magnitude and color differences compared to the initial RGB model. For each RGB model, we repeated the process with the three main-sequence companions, ending up with nine combinations of spectra and the associated color differences. We performed the same exercise with the RGB and main-sequence stars on the Y=0.300 isochrone to check the effects of chemical composition on binary colors, but found little difference compared to the Y=0.248 case.
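At the level of the photometry, adding a companion reduces to summing fluxes; a small helper with the standard magnitude arithmetic (the example magnitudes are placeholders):

```python
import numpy as np

def combined_magnitude(m1, m2):
    """Magnitude of an unresolved pair: fluxes add, so
    m_tot = -2.5 log10(10^(-0.4 m1) + 10^(-0.4 m2))."""
    return -2.5 * np.log10(10**(-0.4 * m1) + 10**(-0.4 * m2))

# Example: shift of a giant's F275W magnitude due to a companion that is
# 1.5 mag fainter in that band; the pair is brighter than the giant alone.
dm_275 = combined_magnitude(18.0, 19.5) - 18.0
```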
We then started from the synthetic cluster built in Sect. 2. We divided the RGB into three magnitude bins: m814W > 14.5, 13.0 < m814W < 14.5, and m814W < 13.0. We randomly selected points from the synthetic cluster. For each selected star, its m814W magnitude falls in one of the three bins defined above. We estimated a color correction Δ(m275W − m814W) by drawing a value from a Gaussian distribution with a dispersion equal to 0.42 (see Sect. 3.3 for a discussion of that choice) and multiplying it by Δ(m275W − m814W)_max. The latter is the maximum correction possible in each of the three m814W bins; it corresponds to the abscissa of the leftmost point of each line in Fig. 4. We thus obtained a color correction Δ(m275W − m814W) for the selected star. We subsequently read the corresponding corrections to ΔC438W and Δm814W from Fig. 4. For the bin with the brightest stars (m814W < 13.0), we used the relations shown in black. For the 13.0 < m814W < 14.5 bin (respectively the m814W > 14.5 bin), we used the relations shown in red (blue). Once obtained, the color corrections were added to the photometry of the initially single giant star. We repeated the process until 10% of the stars were replaced by binaries.
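A sketch of this injection step (the absolute value on the Gaussian draw and the blueward sign of the correction are our assumptions, since only the dispersion is specified above; the per-star maximum corrections stand in for the Fig. 4 relations):

```python
import numpy as np

rng = np.random.default_rng(1)

def inject_binaries(colors_x, max_shift_per_star, frac=0.10, sigma=0.42):
    """Replace a fraction `frac` of stars by binaries: shift their
    (m275W - m814W) color by |N(0, sigma)| times the bin's maximum correction.
    colors_x, max_shift_per_star: numpy arrays of equal length."""
    n = len(colors_x)
    idx = rng.choice(n, size=int(frac * n), replace=False)
    draw = np.abs(rng.normal(0.0, sigma, size=idx.size))
    out = colors_x.copy()
    out[idx] -= draw * max_shift_per_star[idx]  # companions move stars blueward
    return out
```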
The final step consisted of building the CMDs and the chromosome map from the resulting cluster, which now contains 10% of stars in binary systems. The results are shown in Fig. 5. The color corrections of the Y=0.248, 0.260, and 0.270 populations are assumed to be those of the Y=0.248 binary combinations. For the other populations (Y ≥ 0.300), we used the corrections obtained for the binaries of the Y=0.300 population. In practice, the corrections resulting from binarity depend little on the chemical composition. We also considered a binary fraction of 30%, which corresponds to the highest values reported for GCs (e.g., Milone et al. 2016). The results are shown in Fig. 6.
Do binaries explain the P1 extension?
Inspection of the synthetic chromosome maps in Figs. 2, 5, and 6 reveals that binaries impact the shape of P1. Initially centered on the (0,0) point, this population is elongated towards the left and upper part of the diagram because of binaries. The direction of the main axis is tilted by ∼14° compared to the X axis. This is qualitatively in agreement with the 18° measured by Milone et al. (2017) for the cluster NGC 6723. Binaries are therefore a possible explanation for the extension of the P1 sequence, as was also recently noted by Marino et al. (2019b). However, our simulations indicate that the density in the elongated part of P1 depends on the binary fraction. What are the observational values? Sollima et al. (2007) determined the minimum binary fraction in 13 GCs from photometry on the main sequence. They obtained values between 6 and 10%, except for three clusters where the numbers can reach 10 to 20%. Using various assumptions on the mass-ratio distribution, they estimated a total binary fraction in the range 10-20% (40-65% for the extreme cases). They also showed that the binary fraction is larger in the core than in the outer parts of the clusters. This result was confirmed by Dalessandro et al. (2011) for NGC 6254: the binary fraction decreases from 14% in the cluster's core to 1.5% in a region located between one and two times the half-light radius. According to Sollima et al. (2007), the clusters with the largest binary fractions are the youngest, suggesting an evolution with time (but see Milone et al. 2016). Using the same method, Milone et al. (2012) extended this study to 59 GCs and found results consistent with those of Sollima et al. (2007). In addition, they showed that the binary fraction in GCs was, on average, lower than in the field, and that the binary fraction was anticorrelated with the cluster's mass. Ji & Bregman (2015) determined binary fractions mostly in the range of 3-10% for the 35 GCs they analyzed. This range is similar to those of Sollima et al. (2007) and Milone et al. (2012), although for a given cluster the binary fraction estimated by the three groups may differ significantly. Ji & Bregman (2015) confirmed the radial variation of the binary fraction. Using a different method based on the identification of radial-velocity variations, Lucatello et al. (2015) reported an average binary fraction of ∼2%, with a difference between the first and second populations: 4.9% in the former, 1.2% in the latter. (The lower binary fraction compared to photometric studies is explained by the more demanding nature of the observations needed to obtain spectroscopic data: longer exposure times and the need for multi-epoch observations.) This trend, a higher binary fraction among first-population stars compared to second-population stars, was also reported by D'Orazi et al. (2010) and Dalessandro et al. (2018), with even larger differences (nearly an order of magnitude more binaries in the first population). Finally, using ESO/VLT/MUSE spectroscopy, Giesers et al. (2019) report a binary fraction of 6.75±0.72% in NGC 3201.
In view of the binary fractions listed above, a typical chromosome map including binaries should be relatively close to that of Fig. 5 (i.e., 10% binaries), at least regarding the first population. The extension of P1 should be present, but relatively unpopulated. NGC 5272 (M3) is a cluster with a metallicity and age close to those of our synthetic clusters. Sollima et al. (2007) determined a binary fraction within NGC 5272 between 5 and 9%, while Milone et al. (2012) reported 3-5% of binaries. These values are relatively close to the 10% we adopted in Fig. 5. Milone et al. (2017) determined a fraction of stars in the first population of 30.5% for NGC 5272, close to the value used to build the synthetic chromosome map (see Sect. 2). Figure 7 shows the density of stars across the chromosome map in NGC 5272 and in the synthetic clusters with 10% and 30% of binaries. In the latter, the peak density is around the (0,0) point, and the binaries contribute an elongated structure with a lower density (which increases with the binary fraction). In NGC 5272, the P1 sequence is relatively homogeneously populated, with barely a very small overdensity near the origin. The same trend is observed for NGC 6254 (not shown), from which we conclude that either the binary fractions in GCs are underestimated, or binaries contribute only part of the extension of P1 and another process is at play.
A further test of the effects of binaries on the chromosome map is shown in Fig. 8. We selected the cluster NGC 6254 since: 1) it shows an extended P1 (Milone et al. 2017); 2) its metallicity corresponds to that of our models; and 3) the binary fraction decreases from the core to the outer parts of the cluster (Dalessandro et al. 2011; Milone et al. 2012). To investigate whether or not this variation of the binary fraction is reflected in the shape of P1, we selected two regions of the cluster: the core and an annulus around it, as displayed in the left panel of Fig. 8. We subsequently selected the stars brighter than the bottom of the RGB in each region and constructed their chromosome map. The results, shown in the right panel of Fig. 8, do not show any significant variation of the extent of the chromosome map. This result does not depend on the choice of the core and annulus regions. If binaries governed the elongation of P1, one would expect a smaller extension in the outer regions of the cluster, which is not observed.
If binarity were the main reason for the extension of P1, one would also expect a correlation between binary fraction and P1 extension. A simple comparison of two GCs with similar properties, NGC 5272 (M3) and NGC 6205 (M13), shows some qualitative trends. Both clusters are of about the same age, metallicity, and mass (Harris 1996; Marín-Franch et al. 2009), but M3 shows an extended P1 while M13 does not (see Fig. 5 of Milone et al. 2017). According to Milone et al. (2012), the total binary fraction in M3 (M13) is between 3 and 6% (2 and 10%). Thus, M3 does not have a higher binary fraction that would explain its more extended P1.
Effect of the binary mass ratio distribution
In Sect. 3.1, we used a Gaussian distribution with a dispersion of 0.42 to select the color corrections due to binaries. This choice is motivated by the shape of the distribution of the masses of companions to stars in the solar neighborhood according to Duquennoy & Mayor (1991), which is best reproduced by a Gaussian function with this dispersion. In the combinations of spectra made to estimate the effects of a companion on the photometry of giant stars (Sect. 3.1), we used the models identified by the arrows in Fig. 1. These combinations produce systems with mass ratios between 0.89 and 0.97, since we only include main-sequence stars relatively close to the turn-off. The color corrections caused by a low-mass companion are negligible because: (1) the companion is much fainter, and (2) it is cool. Consequently, the spectrum of the companion barely affects the SED of the giant star. The color corrections resulting from companions with masses lower than 0.89 times the mass of the giant star are thus extrapolated from the combinations of spectra with the most massive main-sequence stars; they are assumed to follow the linear fits shown in Fig. 4. When selecting color corrections with a Gaussian distribution, we thus assume that such corrections follow the same distribution as the mass ratio distribution in binaries.
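The following sketch illustrates the sampling logic described above. It is not the published pipeline: the Gaussian peak location and the linear coefficients standing in for the fits of Fig. 4 are placeholders, and only the dispersion of 0.42 and the vanishing of corrections for low-mass companions follow the text.

```python
# Sketch, not the published pipeline: drawing binary color corrections for
# giants. The linear coefficients are placeholders for the fits of Fig. 4;
# the Gaussian peak at q ~ 0.23 is an assumption in the spirit of
# Duquennoy & Mayor (1991), and a flat alternative mimics Marino et al. (2019b).
import numpy as np

rng = np.random.default_rng(1)

def sample_mass_ratios(n, dist="gaussian"):
    if dist == "gaussian":
        q = rng.normal(0.23, 0.42, n)      # dispersion 0.42 as in the text
    else:
        q = rng.uniform(0.0, 1.0, n)       # flat mass-ratio distribution
    return np.clip(q, 0.0, 1.0)

def color_correction(q, slope=-0.25, intercept=0.0):
    """Hypothetical linear extrapolation of Delta(m275W - m814W) with mass
    ratio q; corrections vanish for low-mass (faint, cool) companions."""
    return np.where(q < 0.5, 0.0, slope * (q - 0.5) + intercept)

q = sample_mass_ratios(10000, dist="gaussian")
dcol = color_correction(q)
print(f"fraction with non-zero correction: {(dcol != 0).mean():.2f}")
```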
Different assumptions can be made. For instance, Marino et al. (2019b) studied the chemical composition of P1 stars in NGC 3201. They found that two stars located at the extreme left of the sequence showed radial velocity variations, and thus concluded that they were binaries. This prompted them to study the effect of binarity on the shape of the chromosome map in a way similar to that presented here. They concluded that binaries with mass ratios larger than 0.8 could explain the shape of P1. They also suggested that binaries may be present mainly in the first population of stars, since no extension of P2 is observed. We qualitatively reach the same conclusion regarding P1: binaries may explain part of its elongated shape. However, we stress that, given the current knowledge of the binary fraction in GCs, binaries are probably not numerous enough to sufficiently populate the extended P1 sequence. Marino et al. (2019b) used a flat distribution of mass ratios among binaries, while we rely on the results of Duquennoy & Mayor (1991) for the solar neighborhood, i.e., a distribution that favors low-mass companions. To test the effect of this assumption on the shape of the chromosome map, in Fig. 9 we present the same cluster simulation as in Fig. 6 (i.e., 30% of binaries), but using a flat distribution of the color corrections, and thus implicitly of the mass ratios of binaries. The effect is a more uniformly populated extension of P1, together with a larger number of stars with extreme Δ(m275W − m814W). The explanation is as follows. With a flat mass-ratio distribution, the probability of having a companion of almost equal mass is higher, so main-sequence companions close to the turn-off are more likely. These companions produce the largest changes in photometry, because they are brighter (hence affecting m814W) and hotter (hence affecting m275W − m814W) than lower-mass main-sequence stars. Consequently, they lead to the largest changes in the chromosome map. The distribution of mass ratios among binaries made of solar-type stars in GCs remains largely unknown. Giesers et al. (2019) provide the first empirical determination of that distribution in NGC 3201; its shape is qualitatively consistent with that of Duquennoy & Mayor (1991). In the solar neighborhood, building on the study of Duquennoy & Mayor (1991), Halbwachs et al. (2003) reported a distribution with a broad peak for mass ratios between 0.2 and 0.7 and a second peak for equal-mass binaries in short-period systems (with no second peak for long-period binaries). Raghavan et al. (2010) argued for an almost flat distribution, and confirmed the trend that equal-mass binaries are more frequent among short-period systems.
Recent surveys (Badenes et al. 2018; Moe et al. 2019) report an increase of the overall binary fraction of solar-type stars with decreasing metallicity. According to Moe et al. (2019), the binary fraction is 40 ± 6% at [Fe/H] = −1.0 and reaches 53 ± 12% at [Fe/H] = −3.0. This metallicity range corresponds to that of GCs, for which the estimated binary fraction is much lower (Milone et al. 2012; Ji & Bregman 2015). Dynamical effects in the dense environments of GCs most likely affect the properties of the binaries they host (Heggie 1975). It is thus not unlikely that the mass ratio distribution is also affected, and thus differs from that of field stars.
Fig. 9. Same as Fig. 6, but assuming a flat distribution of mass ratios among binaries.
Clearly, the shape of the mass ratio distribution among GC binaries remains a degree of freedom in the construction of synthetic GCs with binaries. However, whatever the choice, the conclusion is the same: the extended part of P1 remains less populated than the initial sequence at (0,0) in the chromosome map.
In view of the arguments presented and discussed above, we conclude that binaries may contribute to the extension of P1, but they are likely not its main driver.
Chromospheric emission
In this section we describe the impact of chromospheric emission on the SED of stars, and thus on the shape of the chromosome map.
Line emission
The atmospheres of solar-type and giant stars are the locus of complex phenomena involving magnetic fields, chromospheres, coronae, and winds. These give rise to stellar activity, the level of which depends on the type of star. Among the observed phenomena, emission in specific lines is caused by the presence of a chromosphere (Schrijver 1995). The magnesium and calcium HK lines in the UV, at 2796-2803 Å and 3934-3968 Å, respectively, are notably concerned. The calcium doublet is a classical indicator of stellar activity (Linsky et al. 1979; Noyes et al. 1984; Baliunas et al. 1995; Wright et al. 2004; Marsden et al. 2014). The Mg II lines are located close to the center of the F275W filter, while the Ca II lines are on the blue side of the F438W filter. These lines may thus affect the photometry in both filters, and consequently the position of stars in CMDs and in the chromosome map.
Pérez Martínez et al. (2011) measured the flux in the Mg II HK lines of Galactic giant stars. They concluded that all stars show chromospheric emission with at least a minimal (basal) flux; the emission can, however, be higher by about one order of magnitude (see their Fig. 2, as well as Wood et al. 2016). We relied on these studies to estimate the effect of the Mg II lines on the photometry. To this end, we added to our synthetic spectra two emission lines with Gaussian shapes and total integrated fluxes corresponding to the range of values of Pérez Martínez et al. (2011). We then computed the photometry and repeated the process for different models along the RGB. We found that the maximum contribution of the Mg II lines reaches 0.1 magnitude in the F275W filter. It is reached for the most luminous stars, because they are also the coolest and thus the ones for which the Mg II emission is strongest relative to the photospheric flux. At the bottom of the RGB, the effect of the Mg II lines is close to zero. This magnitude change implies a color change of the same amount in both (m275W − m814W) and C438W, since only the F275W filter is affected.
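The line-injection experiment can be sketched as follows, with a flat toy SED and a boxcar stand-in for the F275W throughput. The line fluxes below are tuned only to illustrate a ∼0.1 mag effect; they are not the values of Pérez Martínez et al. (2011), and the passband limits are crude assumptions.

```python
# Sketch of the line-injection experiment (toy SED, boxcar F275W stand-in).
import numpy as np

wl = np.linspace(2000.0, 3500.0, 3001)          # wavelength grid [A]
flux = np.full_like(wl, 1.0e-14)                # flat toy photospheric SED

def add_gaussian_line(flux, wl, center, total_flux, sigma=1.0):
    line = total_flux * np.exp(-0.5 * ((wl - center) / sigma) ** 2)
    line /= sigma * np.sqrt(2.0 * np.pi)        # normalize to total_flux
    return flux + line

def boxcar_mag(flux, wl, lo=2480.0, hi=3070.0):
    band = (wl >= lo) & (wl <= hi)              # crude F275W passband
    return -2.5 * np.log10(np.trapz(flux[band], wl[band]))

m0 = boxcar_mag(flux, wl)
f = add_gaussian_line(flux, wl, 2796.35, 3.0e-13)   # Mg II k
f = add_gaussian_line(f, wl, 2803.53, 3.0e-13)      # Mg II h
m1 = boxcar_mag(f, wl)
print(f"Delta m_F275W ~ {m1 - m0:+.3f} mag")    # brighter => negative
```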
To study the effect of Mg II emission on the chromosome map, we first randomly selected a magnitude correction between 0.0 and −0.1. We then scaled this correction according to m814W, to ensure that the hottest stars receive a correction close to zero and the coolest ones potentially⁴ the maximum correction. Finally, we added this contribution to the photometry of the synthetic cluster shown in Fig. 2. The results are shown in Fig. 10. The chromosome map and CMDs are almost indistinguishable from those of Fig. 2, because the color corrections caused by the Mg II lines remain small. Although in the most extreme cases the (m275W − m814W) and C438W colors could change by 0.1 mag for the coolest/brightest stars, this barely happens in our synthetic cluster because: (1) there are few stars at the top of the AGB, and (2) owing to random sampling, the corrections even for these stars remain below 0.1 mag.
We also tested the effect of the Ca II lines on the F438W photometry. Assuming the same emission fluxes as for the Mg II lines (Pérez Martínez et al. 2014), we obtained very small changes (< 0.01 mag). This is because the photospheric flux is much larger in the F438W filter than in the F275W filter.
⁴ Cool stars may also have a negligible correction if their activity level is low.
In addition, the Ca II lines fall on the very blue side of the filter, where the throughput is small. These lines therefore do not affect the photometry of GC stars.
Continuum emission and variability
UV emission is observed not only in lines but also in the continuum (Montez et al. 2017). The nature of this emission is debated: it may be due to a hot companion (Ortiz & Guerrero 2016) or to heating of the chromosphere, perhaps by dissipation of acoustic waves or by magnetic reconnection in stars with convective surfaces (Schrijver 1995). Ortiz & Guerrero (2016) measured the UV flux with GALEX in a sample of 58 AGB stars and concluded that 34 of them (i.e., 59%) show excess emission below ∼2800 Å compared to theoretical SEDs of AGB stars. They argue that this can be a sign of the presence of a companion with Teff higher than 5500-6000 K.
However, the UV flux is usually variable, and some studies report a correlation between UV flux and visual magnitude in AGB stars, favoring a chromospheric origin (Smith & Redenbaugh 2010; Montez et al. 2017). Ortiz et al. (2019) analyzed the relations between line and continuum UV emission in AGB stars from GALEX observations. Continuum emission was evaluated in two bands centered at 3000 and 3200 Å, while the Mg II HK lines probed line emission. They showed that the latter varied by a factor of ∼2 to ∼10 with time. The ratio of the fluxes at 3200 and 3000 Å varied by the same amount and was anticorrelated with the line variability: the stronger the Mg II HK emission, the smaller the flux ratio F(3200)/F(3000); that is, the UV flux is harder when the Mg II HK lines are stronger. Finally, they reported that the UV emission is dominated by the continuum, with lines contributing less than ∼36% of the total flux. Whether these findings also apply to RGB stars is so far unknown. The effects may be weaker, since RGB stars are on average hotter than AGB stars and thus have a stronger photospheric flux in the UV. But one can expect AGB stars in GCs to show UV variability as reported by Ortiz et al. (2019), and, in addition to line emission, one should expect significant continuum emission resulting from activity.
Whatever the origin of the continuum UV emission of AGB (and potentially RGB) stars, its effect is similar to that of a companion (see Sect. 3). A hardening of the UV emission affects the F275W and F336W filters, and thus the CMDs involving these filters as well as the chromosome map. If binaries produce the UV emission, their fraction among giant stars should be larger than among main-sequence stars to reproduce the P1 sequence (Sect. 3). If the UV emission is caused by stellar activity, a large fraction of stars should be active. The main difference is that, if chromospheric activity produces the UV emission, the chromosome map would not be static, in the sense that stars would change position with time according to variability⁵. Whether the effect of chromospheric UV emission on the chromosome map is sufficient to quantitatively explain the shape of P1 is unclear, owing to the uncertain knowledge of the UV emission of giants. One may raise an argument against its significance: the clusters M3 and M13 contain stars with presumably very similar properties (metallicity, age; Marín-Franch et al. 2009), but the former shows an extended P1 while the latter does not. If UV variability is an intrinsic property of the stars themselves, and not of the cluster, one does not expect such different P1 sequences. Clearly, multiepoch observations are needed to investigate the effect of chromospheric activity on MSPs in GCs.
Summary and conclusion
We presented a study of the effect of binary stars and of chromospheric emission on the shape of the chromosome map of GCs. For that purpose, we first built synthetic clusters using isochrones computed by Chantereau et al. (2016). Along the isochrones, we computed atmosphere models and synthetic spectra. From the latter, we computed synthetic photometry in the HST filters F275W, F336W, F438W, and F814W; the isochrones in the HRD were thus transposed into CMDs. We then selected different combinations of stars taken from isochrones with various chemical compositions to create a synthetic cluster hosting MSPs. Finally, we built the chromosome map of this synthetic cluster.
To study the impact of binaries, we first combined the synthetic spectra of stars on the giant branches with those of main-sequence stars. We estimated the changes to the photometry of the giant star resulting from the addition of a main-sequence companion. We then used these corrections to replace a fraction of the stars in the synthetic cluster by binaries and rebuilt the chromosome map. We proceeded similarly to estimate the effect of chromospheric emission in the Mg II HK and Ca II HK lines: we added an emission component on top of the stellar spectrum and computed the modified photometry.
We find that binaries contribute to the extension of the P1 sequence, and the extension is qualitatively consistent with the observations. However, for the binary fractions of about 10% reported in GCs, the number of stars in the extended part of P1 is small. The relative fraction of stars in the extended part and in the original sequence is not consistent with observations of NGC 5272, a cluster with properties similar to our synthetic cluster. Even for a larger binary fraction (30%), the difference remains significant. NGC 5272 (M3) and NGC 6205 (M13) are almost twins (same age, mass, metallicity). The former shows an extended P1 while the latter does not, in spite of similar binary fractions. We thus conclude that, while binaries can contribute to the extent of the P1 sequence, they are probably not the main driver, unless binary fractions in GCs are severely underestimated. The minor role of binaries in the extension of P1 is supported by the observations of NGC 6254: it hosts more binaries in its core than in its outer parts, but P1 has the same extension in both regions.
⁵ Variability may also be present in the case of eclipsing binaries, but these objects should represent only a fraction of the total number of binaries.
Regarding chromospheric emission, the intensities of the Mg II HK and Ca II HK lines reported in the solar neighborhood are too small to significantly impact the photometry of giant stars. Chromospheric emission in these lines therefore does not affect the shape of the chromosome map. Only variations in the UV continuum caused by chromospheric activity could have an effect similar to that of binaries. However, the fraction of stars with significant chromospheric continuum emission is unknown. If multiepoch observations revealed variations in the positions of stars along the P1 sequence, this would be an indication that stellar activity plays a role in shaping P1 in the chromosome map. At present, we thus conclude that the extension of the P1 sequence remains enigmatic.
"Physics"
] |
PPalign: optimal alignment of Potts models representing proteins with direct coupling information
Background To assign structural and functional annotations to the ever-increasing number of sequenced proteins, the main approach relies on sequence-based homology search methods, e.g. BLAST or the current state-of-the-art methods based on profile Hidden Markov Models, which rely on significant alignments of query sequences to annotated proteins or protein families. While powerful, these approaches do not take coevolution between residues into account. Taking advantage of recent advances in the field of contact prediction, we propose here to represent proteins by Potts models, which model direct couplings between positions in addition to positional composition, and to compare proteins by aligning these models. Due to non-local dependencies, the problem of aligning Potts models is hard and remains the main computational bottleneck for their use. Methods We introduce here an Integer Linear Programming formulation of the problem and PPalign, a program based on this formulation, to compute the optimal pairwise alignment of Potts models representing proteins in tractable time. The approach is assessed on a non-redundant set of reference pairwise sequence alignments from the SISYPHUS benchmark with low sequence identity (between 3% and 20%), for which reliable Potts models can be built for each sequence to be aligned. This experimentation confirms that Potts models can be aligned in reasonable time (1′37″ on average on these alignments). The contribution of couplings is evaluated in comparison with HHalign and independent-site PPalign. Although the Potts models were not fully optimized for alignment purposes and simple gap scores were used, PPalign yields a better mean F1 score and finds significantly better alignments than HHalign and PPalign without couplings in some cases. Conclusions These results show that pairwise couplings from protein Potts models can be used to improve the alignment of remotely related protein sequences in tractable time. Our experimentation suggests that new research on the inference of Potts models is now needed to make them more comparable and suitable for homology search. We think that PPalign's guaranteed optimality will be a powerful asset for performing unbiased investigations in this direction.
Background
Thanks to sequencing technologies, the number of available protein sequences has increased considerably in the past years, but their functional and structural annotation remains a bottleneck. This task is thus classically performed in silico by scoring the alignment of new sequences to well-annotated homologs. One of the best-known methods is BLAST [1], which performs pairwise sequence alignments. The main tools for homology search are now based on profile Hidden Markov Models (pHMMs), which model the position-specific composition, insertion and deletion probabilities of each family of homologous proteins. Two software packages using pHMMs are widely used today: HMMER [2], which aligns sequences to pHMMs, and HH-suite [3], which goes further by aligning pHMMs to pHMMs.
Despite their solid performance, pHMMs are inherently limited by their positional nature. Yet it is well known that residues that are distant in the sequence can interact and co-evolve, e.g. due to their spatial proximity, resulting in correlated positions. For instance, Ranganathan and colleagues showed, by experimentally testing libraries of artificial sequences of the WW domain, that coevolution information is necessary to reproduce the functional properties of native proteins [4].
There have been a few attempts to make use of long-distance sequence information. Menke, Berger and Cowen introduced a Markov Random Field (MRF) approach, SMURF [5], in which MRFs generalize pHMMs by allowing dependencies between paired residues in β-strands, in order to recognize proteins that fold into β-structural motifs. Their MRFs are trained on multiple structure alignments. A model simplification [6] and heuristics [7] have been proposed to speed up the process. While these methods outperform HMMER [2] in propeller fold prediction, they are limited to sequence-MRF alignment on β-strand motifs with available structures. Xu et al. [8] proposed a more general method, MRFalign, which performs MRF-MRF alignments using probabilities estimated by neural networks from amino acid frequencies and mutual information. Unlike SMURF, MRFalign handles dependencies between all positions, and its MRFs are built from multiple sequence alignments.
In addition to these inputs, MRFalign relies on complex scoring functions based on Conditional Neural Fields and Probabilistic Neural Networks, trained on reference alignments and structural information, to optimize the similarity measures between the positional and coupling potentials of the MRF models to be compared. In the reported results, MRFalign outperforms PSSM-PSSM and HMM-HMM alignment methods in terms of both alignment accuracy and remote homology detection accuracy, notably on mainly-beta proteins, showing the potential of using long-distance information in protein sequence alignment.
Meanwhile, a more interpretable type of MRF, grounded in the maximum entropy principle, led to a breakthrough in the field of contact prediction [9]: the Potts model. This model was brought forward by Direct Coupling Analysis [10], a statistical method to extract direct correlations from multiple sequence alignments. Once inferred from a multiple sequence alignment (MSA), a Potts model's nodes represent positional conservation and its edges represent direct couplings between positions in the MSA. Unlike mutual information, which also captures indirect correlations between positions, Potts models are global models capturing the collective effects of entire networks of correlations through their coupling parameters [11], thus handling indirect effects and making them a relevant means of predicting interactions between residues. Beyond contact prediction, the positional and direct-coupling information captured by a Potts model's parameters might also be valuable in the context of protein homology search. The idea of using Potts models for this purpose was proposed simultaneously last year at the 2019 Workshop on Co-evolutionary Methods for the Prediction and Design of Protein Structure and Interactions: by Muntoni and Weigt [12], who proposed to align sequences to Potts models, and by us [13], proposing to align Potts models to Potts models within ComPotts, our generic framework for the comparison of protein sequences using direct coupling information.
A method to align a sequence to a hybrid model between a Potts model and a profile Hidden Markov Model was concurrently proposed by Wilburn and Eddy with Hidden Potts Models [14].
The main computational bottleneck for such approaches is that, due to non-local dependencies, alignment problems involving Potts models are hard. Muntoni and Weigt [15] proposed an approximate message-passing algorithm to align a sequence to a Potts model, while Wilburn and Eddy [14] proposed a method based on importance sampling. We present here PPalign, the alignment method we introduced in ComPotts, which optimally aligns two Potts models representing proteins in tractable time with respect to our Integer Linear Programming (ILP) formulation of the problem. This work builds, with an adequate scoring function, on the ILP formulation by Wohlers, Andonov, Malod-Dognin and Klau [16,17] of the distance-matrix alignment problem initiated by DALI for protein structure alignment [18], and on their efficient solver, which extends to real-valued pairwise scores their solver for protein structure alignment by Contact Map Overlap (CMO) maximisation [19], a well-studied problem where Linear Programming strategies are known to be efficient [20,21]. In contrast to these methods, which use pairwise information from protein structures, our approach aligns proteins using pairwise information from protein sequences only. Our method also differs significantly in considering not only contact or coupling-strength information between position pairs but also their coupled amino acid composition.
This paper fully describes our approach of aligning sequences via Potts model to Potts model alignment and focuses on its performance in terms of alignment quality on remote homologs. In the following sections, we explain our choices for the inference of Potts models and describe PPalign, our method for aligning them. To assess the tractability and the quality of the alignments produced by this approach, we extracted 33 non-redundant pairwise reference alignments with particularly low identity from the manually curated structural alignment database SISYPHUS [22] and randomly split them into a training set of 11 pairs used to train our hyperparameters and a test set of 22 pairs on which we compared our results with HHalign's alignments of pHMMs built on the same input data. On this test set, our method yielded exact solutions, up to a chosen epsilon, in tractable time, and outperformed HHalign in terms of alignment quality, with an F1 score better on average and significantly better for 5 alignments, suggesting that direct couplings can improve the alignment quality of remote homologs.
Inference of Potts models
Potts models are discrete instances of pairwise Markov Random Fields originating from statistical physics. They generalize Ising models, describing interacting spins on a crystalline lattice with a finite alphabet rather than two states. In the paper introducing Direct Coupling Analysis, Weigt et al. came up with the idea of applying them to proteins: by building a multiple sequence alignment of a protein sequence and its close homologs and inferring a Potts model on it, one can predict contacts between residues by looking at its parameters [10]. The inference of a Potts model from a set of protein sequences can be formally defined as follows. Let $S = \{s^{(n)}\}_{n=1,\dots,N}$ be a set of $N$ protein sequences of lengths $l_1, \dots, l_N$. A multiple sequence alignment (MSA) of these sequences can be defined as a set of $N$ sequences $X = \{x^{(n)}\}_{n=1,\dots,N}$ on the alphabet of $S$ extended with a new gap character '−', which all have the same length $L$ and are such that removing all gaps from a sequence $x^{(n)}$ gives $s^{(n)}$. By extension, $L$ is called the length of the MSA. We denote by $q$ the size of the alphabet.
A Potts model with $q$ states for an MSA $X$ can be defined as the statistical model whose probability distribution $P$ over all sequences of length $L$ maximizes the Shannon entropy

$$H(P) = -\sum_{y \in \{1,\dots,q\}^L} P(y) \log P(y)$$

while generating the empirical single and double frequencies of the MSA as marginals:

$$P(y_i = a) = f_i(a), \qquad P(y_i = a,\, y_j = b) = f_{ij}(a,b), \qquad \forall i, j = 1,\dots,L,\ \forall a, b = 1,\dots,q.$$

This probability distribution has the following form:

$$P(y \mid v, w) = \frac{1}{Z} \exp\bigl(-H(y \mid v, w)\bigr),$$

where $Z$ is a normalization constant, $Z = \sum_{y \in \{1,\dots,q\}^L} \exp\bigl(-H(y \mid v, w)\bigr)$, and $H$ is an energy function defined as

$$H(y \mid v, w) = -\sum_{i=1}^{L} v_i(y_i) - \sum_{1 \le i < j \le L} w_{ij}(y_i, y_j),$$

where the parameters $(v, w)$ that define a Potts model are the ones that maximize the likelihood of the sequences in the MSA $X$. These parameters can be assigned a practical interpretation: each $v_i$ is a vector of $q$ positional ("field") parameters and each $w_{ij}$ is a $q \times q$ matrix of coupling parameters. An illustration of a Potts model is given in Fig. 1. These parameters are unique only up to a gauge invariance: the marginal constraints are not independent, implying that the probability distribution remains unchanged under transformations shifting the $w_{ij}$ by arbitrary values $K_{ij}$ and the $v_i$ by arbitrary values $C_i$ in a compensating way. In our case, this indeterminacy is fixed with the widely used zero-sum gauge ($\sum_a v_i(a) = 0$ and $\sum_a w_{ij}(a,b) = \sum_b w_{ij}(a,b) = 0$ for all $i, j, a, b$). In practice, maximizing the likelihood would require the computation of the normalization constant $Z$ at each step, which is computationally intractable. Among the several approximate inference methods that have been proposed [11, 23-26], we opted here for pseudo-likelihood maximization, since it was proven to be a consistent estimator in the limit of infinite data [27,28] and runs in reasonable time. Furthermore, since our goal is to align Potts models, we need the inference to be geared towards similar models for similar MSAs, which is not what inference methods were initially designed for. In an effort towards inferring canonical Potts models, we have chosen here to use CCMpredPy [29], a recent Python-based version of CCMpred [30] which, instead of using the standard $L_2$ regularization prior $R(v, w) = \lambda_v \lVert v \rVert_2^2 + \lambda_w \lVert w \rVert_2^2$, allows us to use a smarter prior centered on a non-zero $v^*$, where $v^*$ is chosen to yield the correct probability model if no columns are coupled, i.e. $P(x \mid v, w) = \prod_{i=1}^{L} P(x_i)$. Our intuition is that positional parameters should explain the MSA as much as possible and only necessary couplings should be added.
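As a concrete illustration of these definitions, the toy sketch below evaluates the energy, partition function and probabilities of a deliberately tiny Potts model (L = 3, q = 2), where Z can be computed exactly. Real protein models (q = 21, L up to a few hundred) make Z intractable, which is why approximate schemes such as pseudo-likelihood maximization are used; the random parameters here are illustrative only.

```python
# Minimal sketch of the Potts model quantities defined above.
import itertools
import numpy as np

L, q = 3, 2
rng = np.random.default_rng(2)
v = rng.normal(0, 0.5, (L, q))                # fields v_i(a)
w = rng.normal(0, 0.2, (L, L, q, q))          # couplings w_ij(a,b), i < j used

def energy(y, v, w):
    """H(y | v, w) = -sum_i v_i(y_i) - sum_{i<j} w_ij(y_i, y_j)."""
    h = -sum(v[i, y[i]] for i in range(L))
    h -= sum(w[i, j, y[i], y[j]] for i in range(L) for j in range(i + 1, L))
    return h

Z = sum(np.exp(-energy(y, v, w)) for y in itertools.product(range(q), repeat=L))

def prob(y):
    return np.exp(-energy(y, v, w)) / Z

total = sum(prob(y) for y in itertools.product(range(q), repeat=L))
print(f"P(0,0,0) = {prob((0, 0, 0)):.4f}, sum over all y = {total:.4f}")
```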
From a protein sequence to a Potts model
Unlike homology detection methods based on simple pairwise sequence alignment, such as BLAST, our method, like HHalign, explicitly considers sequence conservation and variability around the sequences to be compared by modeling each sequence together with its retrieved close homologs, with the addition of coupling information in our case. This implies that the quality of the alignment will depend on the quality of the MSAs of close homologs built for each sequence. In this paper, based on CCMpred's recommendations [31], for each sequence we run HHblits [3] v3.03 with the following parameters: -maxfilt 100000 -realign_max 100000 -all -B 100000 -Z 100000 -n 3 -e 0.001 on Uniclust30 [32] (08/2018 release), and then process the output by:
• filtering at 80% identity using HHfilter
• taking the first 1000 sequences
• removing all columns with > 50% gaps using trimal [33]
The resulting MSA is input to CCMpredPy [29] with default parameters to infer a Potts model, and trimmed positions i (those with > 50% gaps in the input MSA) are re-inserted in the model, with the positional parameters at position i set to background fields defined using the frequencies f0 given by [34] and the pairwise coupling parameters involving position i set to zero.
Parameter rescaling strategy
Since existing Potts model inference methods were specifically designed for the prediction of co-evolving position pairs, inferred parameters might not be ideally suited for Potts model comparison. This section describes two strategies implemented to compensate for these shortcomings.
Lessening the effect of small sample variations on the positional parameters
Since field parameters $v$ are linked to single frequencies through a logarithmic relation [see Eq. (9)], any noise in the presence of small probabilities can have a great impact on the model parameters. This has a dramatic effect on the scoring function we use for pairwise Potts model alignment, since the sign of each parameter directly determines the sign of its contribution to the similarity score (see next section). To lessen the effects of sampling variations, we apply additive smoothing to the softmax probability distribution $p_i$ associated with each $v_i$. More formally, a standard softmax probability distribution $p_i$ is extracted for each positional parameter $v_i$:

$$p_i(a) = \frac{e^{v_i(a)}}{\sum_{b=1}^{q} e^{v_i(b)}}.$$

It is then smoothed towards a uniform distribution so that very low probabilities are more homogenized:

$$\tilde{p}_i(a) = (1 - \tau_v)\, p_i(a) + \frac{\tau_v}{q},$$

where $\tau_v$ is a parameter controlling the amount of additive smoothing used. Final smoothed parameters $\tilde{v}_i(a)$ are retrieved by inverting the softmax function, using the fact that the softmax is invariant to an additive constant. Summing up in one formula, each parameter $v_i(a)$ of the inferred Potts model is smoothed using the following function:

$$\tilde{v}_i(a) = \log\!\left((1 - \tau_v)\, \frac{e^{v_i(a)}}{\sum_{b=1}^{q} e^{v_i(b)}} + \frac{\tau_v}{q}\right).$$
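A minimal sketch of this smoothing step (our own code, assuming the formulas above; the example field vector is arbitrary):

```python
# Sketch of the field-smoothing step: softmax, additive smoothing towards
# the uniform distribution (weight tau_v), and inversion back to fields.
import numpy as np

def smooth_fields(v_i, tau_v=0.4):
    """Return smoothed fields tilde{v}_i from one positional vector v_i."""
    q = len(v_i)
    p = np.exp(v_i - v_i.max())           # shift for numerical stability
    p /= p.sum()                          # softmax distribution p_i
    p_s = (1.0 - tau_v) * p + tau_v / q   # additive smoothing
    return np.log(p_s)                    # defined up to an additive constant

v_i = np.array([2.0, 0.0, -6.0, 0.5])     # a noisy field with one tiny prob.
print(np.round(smooth_fields(v_i), 3))
```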
Diminishing contributions of anti-correlations
In theory, the coupling values inside a $w_{ij}$ matrix are supposed to deviate positively or negatively from 0 to reflect a (direct) correlation or anti-correlation. In practice, however, while the input data can be sufficient to assert that two letters a and b are likely to be found together at positions i and j, deducing that they should not be found together at these positions requires more examples, in order to have sufficient counts for all pairs of a and b.
Considering that our data set is limited, a large number of spurious anti-correlations can arise from a mere lack of data. Since positive correlations are more likely to be supported by available training sample than negative ones, our approach here is to skew the coupling value distribution inside each w ij matrix to favor higher, positive values.
To do this, we extract a probability distribution from each coupling matrix as for the fields, only with a different softmax base $\beta_w$, chosen so that the extracted distribution is skewed towards higher probabilities; as for the fields, we then smooth it towards a uniform distribution to lessen noise, which gives, by analogy with the field-smoothing formula:

$$\tilde{w}_{ij}(a,b) = \log_{\beta_w}\!\left((1 - \tau_w)\, \frac{\beta_w^{\,w_{ij}(a,b)}}{\sum_{c,d} \beta_w^{\,w_{ij}(c,d)}} + \frac{\tau_w}{q^2}\right).$$

Using this smoothing scheme on each input Potts model makes them more comparable, since the most significant information stands out while sampling variations are tuned down.
This strategy was implemented to compensate for the impossibility of adding pseudocounts when inferring models with methods based on pseudo-likelihood, such as CCMpredPy, which we selected for its smart prior on the field parameters. For future experiments, we hope to find an inference method with the same prior that also allows us to add pseudocounts on the single and double frequencies. This smoothing strategy would then probably no longer be needed.
Alignment of Potts models
This section introduces our method for aligning two Potts models. The function we designed to score a given alignment is described and constraints ensuring that the alignment is proper are added as in Wohlers et al. [17], resulting in an Integer Linear Programming formulation that can be optimized using their efficient solver.
Scoring function
Basically, the best alignment between two Potts models $A = (v^A, w^A)$ and $B = (v^B, w^B)$ of lengths $L_A$ and $L_B$ is defined as the alignment which maximizes the similarity between aligned fields and aligned couplings. Formally, this means finding the values of the binary variables $x_{ik}$, where $x_{ik} = 1$ iff position $i$ in Potts model $A$ is aligned with position $k$ in Potts model $B$, so as to maximize

$$\sum_{i=1}^{L_A} \sum_{k=1}^{L_B} s_v\!\left(v^A_i, v^B_k\right) x_{ik} \;+\; \alpha_w \sum_{i<j} \sum_{k<l} s_w\!\left(w^A_{ij}, w^B_{kl}\right) x_{ik}\, x_{jl},$$

where $s_v$ and $s_w$ are similarity scores, respectively between positional parameters $v^A_i$ and $v^B_k$ and coupling parameters $w^A_{ij}$ and $w^B_{kl}$, and $\alpha_w$ is a coefficient ensuring a proper balance between positional and coupling scores.
To measure the similarity between vectors, the scalar product is a natural candidate. We thus propose to measure the similarity $s_v(v^A_i, v^B_k)$ between field parameters using the scalar product

$$\left\langle v^A_i, v^B_k \right\rangle = \sum_{a=1}^{q} v^A_i(a)\, v^B_k(a),$$

and to measure the similarity $s_w(w^A_{ij}, w^B_{kl})$ between coupling parameters by the extension of the scalar product to matrices, the Frobenius inner product:

$$s_w\!\left(w^A_{ij}, w^B_{kl}\right) = \sum_{a=1}^{q} \sum_{b=1}^{q} w^A_{ij}(a,b)\, w^B_{kl}(a,b).$$

Note that this scoring function for two Potts models naturally generalizes the score of a sequence $x$ for a given Potts model, since the energy of $x$ can be computed as such a scalar product against a degenerate Potts model representing the single sequence $x$. Inspired by sequence alignment methods that use log-odds ratios to compute their scores with respect to a background model, we subtract the background field $v^0$ defined in Eq. (10) from each field vector before computing the scalar product. The actual similarity score between two positional parameters $v^A_i$ and $v^B_k$ used in this paper is thus

$$s_v\!\left(v^A_i, v^B_k\right) = \left\langle v^A_i - v^0,\; v^B_k - v^0 \right\rangle,$$

while the similarity score between two coupling parameters $w^A_{ij}$ and $w^B_{kl}$ remains the Frobenius inner product above.
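The two similarity scores are simple inner products and can be sketched as follows (toy random parameters; in practice v0 would be the background field of Eq. (10)):

```python
# Sketch of the two similarity scores: scalar product between field vectors
# (after subtracting the background field v0) and Frobenius inner product
# between coupling matrices.
import numpy as np

def s_v(vA_i, vB_k, v0):
    return np.dot(vA_i - v0, vB_k - v0)

def s_w(wA_ij, wB_kl):
    return np.sum(wA_ij * wB_kl)          # Frobenius inner product

q = 21
rng = np.random.default_rng(3)
v0 = rng.normal(0, 0.1, q)                # stand-in for the background field
vA, vB = rng.normal(0, 1, q), rng.normal(0, 1, q)
wA, wB = rng.normal(0, 0.1, (q, q)), rng.normal(0, 0.1, (q, q))
print(f"s_v = {s_v(vA, vB, v0):+.3f}, s_w = {s_w(wA, wB):+.3f}")
```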
Optimizing score with respect to constraints
Naturally, the scoring function should be maximized with respect to constraints ensuring that the alignment is proper. In that perspective, we build on the work of Wohlers et al. [17], initially dedicated to protein structure alignment, to propose an Integer Linear Programming formulation for the Potts model alignment problem.
Let us first introduce necessary definitions and notations following [17] to define a proper alignment.
The alignment graph of two Potts models A and B of lengths $L_A$ and $L_B$ is an $L_A \times L_B$ grid graph where rows (from bottom to top) represent positions in A and columns (from left to right) represent positions in B. A node $i.k$ in the alignment graph represents the alignment of position $i$ from Potts model A with position $k$ from Potts model B. Directed edges $(i.k, j.l)$ are drawn for $i < j$ and $k < l$. In this framework, an alignment of $n$ positions in the two Potts models is represented by a set of nodes $\{i_1.k_1, \dots, i_n.k_n\}$ with $i_1 < \dots < i_n$ and $k_1 < \dots < k_n$, termed an increasing path.
In order to properly set constraints on the alignment, two additional node sets are defined: $row_{i.k}(j)$ (resp. $col_{i.k}(l)$) is the maximal set of nodes in the alignment graph that are tails of edges with head at $i.k$ or heads of edges with tail at $i.k$, that contain at least one node in row $j$ (resp. column $l$), and that mutually contradict, i.e. no two of them lie on an increasing path.
To cast the alignment problem into an ILP, a binary variable $x_{ik}$ is assigned to each node $i.k$ in the alignment graph, with $x_{ik} = 1$ if position $i$ in Potts model A and position $k$ in Potts model B are aligned; similarly, a binary variable $y_{ikjl}$ is assigned to each edge $(i.k, j.l)$ in the alignment graph, with $y_{ikjl} = 1$ if the edge is activated.
Given the notations above, the alignment of two Potts models A and B of lengths $L_A$ and $L_B$ with parameters $(v^A, w^A)$ and $(v^B, w^B)$ can be formulated as an Integer Linear Programming problem maximizing the scoring function, linearized with the edge variables $y_{ikjl}$, subject to constraints (24)-(30). Constraints (24) and (25) prevent edges from being activated if their tails are not activated and ensure that heads of edges with a common tail do not contradict, while constraints (26) and (27) express the reverse situation. Constraint (28) ensures that edges are activated if both their heads and tails are activated (this constraint is necessary since similarity scores can be negative). Constraint (29) ensures that the nodes lie on an increasing path, and constraint (30) requires the x and y variables to be binary.
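To make the formulation concrete, here is a deliberately simplified sketch using the open-source PuLP modeler and its default CBC solver. It links node and edge variables with a standard linearization and forbids crossing pairs directly; it does not reproduce the row/col clique constraints or the efficient dedicated solver of Wohlers et al. [17], and the toy scores are arbitrary.

```python
# Simplified ILP sketch of Potts model alignment (toy sizes and scores).
import itertools
import pulp

LA, LB = 3, 3
sv = {(i, k): 1.0 if i == k else -0.5 for i in range(LA) for k in range(LB)}
sw = {e: 0.3 for e in itertools.product(range(LA), range(LB), range(LA), range(LB))}
alpha_w = 1.0

prob = pulp.LpProblem("potts_align", pulp.LpMaximize)
x = pulp.LpVariable.dicts("x", (range(LA), range(LB)), cat="Binary")
edges = [(i, k, j, l) for i in range(LA) for j in range(i + 1, LA)
         for k in range(LB) for l in range(k + 1, LB)]
y = pulp.LpVariable.dicts("y", edges, cat="Binary")

prob += (pulp.lpSum(sv[i, k] * x[i][k] for i in range(LA) for k in range(LB))
         + alpha_w * pulp.lpSum(sw[e] * y[e] for e in edges))

for (i, k, j, l) in edges:                 # edge active iff both nodes active
    prob += y[(i, k, j, l)] <= x[i][k]
    prob += y[(i, k, j, l)] <= x[j][l]
    prob += y[(i, k, j, l)] >= x[i][k] + x[j][l] - 1
for i, k in itertools.product(range(LA), range(LB)):   # no crossing pairs
    for j, l in itertools.product(range(i + 1, LA), range(k + 1)):
        prob += x[i][k] + x[j][l] <= 1
for i in range(LA):                        # each position aligned at most once
    prob += pulp.lpSum(x[i][k] for k in range(LB)) <= 1
for k in range(LB):
    prob += pulp.lpSum(x[i][k] for i in range(LA)) <= 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([(i, k) for i in range(LA) for k in range(LB) if x[i][k].value() > 0.5])
```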
A major asset of the solver is that it can yield the exact solution of this ILP, or a solution within a chosen epsilon range of the exact one, in tractable time. The desired precision of the optimization can be set by the parameter $\epsilon$, ensuring that

$$\frac{2\,(UB - LB)}{s(A,A) + s(B,B)} \le \epsilon,$$

where $UB$ and $LB$ are the upper and lower bounds guaranteed by the solver for the solution. This avoids unnecessary optimization steps (the reached precision can be sufficient for the task) and speeds up the search (often the last optimization steps only tighten the bounds while the optimal solution has already been found).
Gap cost and offset
As in [17], an affine gap cost function can be added to the score function to account for insertions and deletions in the sequences, with an appropriate choice of gap-open and gap-extend penalties.
Furthermore, as in most profile-profile methods [35], in order to prevent our method from greedily aligning every position, we penalize each aligned pair with a fixed negative offset hyperparameter.
Data
To evaluate PPalign and the contribution of distant dependencies, we focused on reference alignments based on structures with low sequence identity. We opted for the SISYPHUS database [22], since it provides manually curated structural alignments for proteins with non-trivial relationships. Our data set was built as follows:
• From each multiple sequence alignment in SISYPHUS, every possible pairwise sequence alignment with a sequence identity lower than 20% was extracted (we set a low sequence identity threshold to focus on harder targets).
• For each sequence in each of these extracted pairwise reference alignments, we attempted to build a Potts model with the workflow previously described. Sequences with fewer than 1000 homologs after redundancy filtering at 80% identity were discarded, to focus on sequences with sufficient co-evolution signal. Due to CCMpredPy memory consumption, trimmed MSAs longer than 200 positions also had to be discarded.
• Finally, for each reference multiple sequence alignment in SISYPHUS with more than two such eligible sequences, a reference sequence pair was randomly selected. This last step discards many alignment pairs but ensures that no multiple sequence alignment biases the results.
This resulted in a set of 33 non-redundant reference pairwise alignments, which was randomly split into a training set of 11 alignments on which our hyperparameters were trained (see Table 1) and a test set of 22 target alignments (see Table 2). The overall workflow to align two protein sequences with our method and the evaluation procedure for each reference MSA are summarized in Figs. 2 and 3, respectively.
Fig. 2 Potts model to Potts model alignment workflow for two sequences. Coevolution information is obtained for each sequence by retrieving close homologs, and Potts models are inferred on the corresponding multiple sequence alignments. PPalign computes the optimal alignment of the two Potts models, thereby providing an alignment of the two initial sequences.
Fig. 3 Overview of the evaluation procedure for a reference MSA. A reference pairwise sequence alignment is (randomly) extracted from a reference MSA in SISYPHUS. The two sequences are then aligned by PPalign with the workflow previously introduced, and the alignment is compared with the reference alignment in terms of precision, recall and F1 score.
Alignment evaluation metrics
Alignment quality with respect to SISYPHUS' reference alignments is assessed, using Edgar's qscore program [36] v2.1, by computing the alignment precision,

$$\text{precision} = \frac{\text{number of aligned residue pairs shared with the reference alignment}}{\text{number of residue pairs in the predicted alignment}},$$

the recall,

$$\text{recall} = \frac{\text{number of aligned residue pairs shared with the reference alignment}}{\text{number of residue pairs in the reference alignment}},$$

and the $F_1$ score, their harmonic mean:

$$F_1 = \frac{2 \times \text{precision} \times \text{recall}}{\text{precision} + \text{recall}}.$$
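Equivalently, given the two alignments as sets of aligned position pairs, these metrics can be computed as in the short sketch below (our own illustration, not the qscore implementation):

```python
# Sketch: precision, recall and F1 of a predicted alignment against a
# reference, both given as sets of aligned position pairs (i, k).
def alignment_scores(predicted, reference):
    tp = len(predicted & reference)              # correctly aligned pairs
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(reference) if reference else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

ref = {(1, 1), (2, 2), (3, 3), (4, 5)}
pred = {(1, 1), (2, 2), (4, 4)}
print(alignment_scores(pred, ref))   # ~ (0.667, 0.5, 0.571)
```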
PPalign's hyperparameters
PPalign's hyperparameters were optimized in a supervised fashion on the 11 alignments of the training set, using the Hyperopt library [37] to maximize the F1 score. This process proved excessively time-consuming, Hyperopt being unable to converge on a choice of parameters after one month. In order to reduce the hyperparameter search space and speed up convergence, we had to set some parameters arbitrarily after a few trials on the training set: the precision ε was set to 0.02, τv and τw from Eqs. (15) and (16) were both set to 0.4, and the gap-extend penalty was set to 0. In accordance with the expected NP-hardness of the pairwise Potts model alignment problem, the time needed to find the optimal alignment could be very long for some sets of parameters and even exceed the 6-hour time-out we set. We observed, however, that good alignments were usually found in less than 1 minute, and we set the per-alignment time-out to this value to further speed up Hyperopt's optimization of the remaining parameters, which yielded the following values:
• Gap-open penalty: 13
• Coupling contribution coefficient αw: 6
• Softmax base βw: 8.0
• Offset γ: 1.0
Other methods to be compared
In this experiment, we compared the results of PPalign with those of HHalign, the core alignment method of HHsearch, the state-of-the-art remote homology detection method.
We ran HHalign v3.0.3 with default options to align pHMMs built with HHmake (default options) from the same MSAs as those used to infer the Potts models (except that positions with > 50% gaps were not trimmed, since pHMMs handle insertions and deletions well).
To assess the contribution of direct couplings in sequence alignment, we also used PPalign to compute alignments of independent-site Potts models (i.e. Potts models where positions are assumed to be independent, thus without coupling parameters) with the same hyperparameters (termed "independent-site PPalign").
We also ran BLASTp v2.9.0+ without an E-value cutoff on the sequences, truncated as in our training MSAs, to provide an indication of the sequences' similarity.
Tractable computation time
We examined the computation times of PPalign, independent-site PPalign and HHalign, considering the time they take to align the models (and not the steps needed to build them, which can be done offline) for the sequence pairs of the test set. Experiments were run on a Debian 9 virtual machine with 4 vCPUs (2.3 GHz) and 8 GB RAM. The time-out for each alignment was set to 6 hours. The first result is that all the alignments could be computed by PPalign in running times ranging from 5 seconds to 6 minutes, with an average of 1 min 36 s. Figure 4a plots the running times with respect to the lengths of the models to align. It shows that most problems (17/22) are easily solved and that the running time for these problems increases gently with the lengths of the models, while a few (5/22) other problems stand out from this majority trend but are still solved in a few minutes.
When couplings are not considered, the problem is fundamentally easier, and the running times of HHalign and independent-site PPalign are significantly shorter than PPalign's: both programs were able to compute each optimal positional alignment in less than 1 second. The running times of HHalign and independent-site PPalign are plotted in Fig. 4b, c. The two plots are not fully comparable, since the time needed to load the models is included for HHalign but not for independent-site PPalign, but they illustrate the difference between the dynamic programming approach of HHalign, whose running time increases steadily with the length of the models, and the Integer Linear Programming approach of independent-site PPalign, which shows 2 outliers with respect to the general tendency.
Alignment quality
Alignment quality was assessed by comparing the alignments obtained by the different methods for the 22 sequence pairs in the test set to their reference alignments.
Overall, PPalign achieves a better F1 score than HHalign (0.600 versus 0.578), with a better recall (0.613 vs 0.533) but a lower precision (0.587 vs 0.661), outperforming it in 12 out of the 22 alignments. BLAST aligned only 4 of the 22 pairs, yielding an average F1 score of 0.113.
Results for each sequence pair of the test set are displayed in Fig. 5, and one example where couplings were particularly helpful in the alignment is discussed in Fig. 6.
In most cases, PPalign and HHalign yield similar F1 scores (less than 0.1 difference), except for 8 sequence pairs. Five of them, marked by blue dots in Fig. 5a, are significantly better aligned by PPalign: AL00050475, AL00050692, AL10050875, AL00050715 and AL00050799, which are among the 7 alignments with the smallest percentage of sequence identity, with respectively 3.61%, 5.04%, 5.19%, 5.22% and 6.02%. AL10050875 and AL00050715 are, together with AL10063410, the three sequence pairs that HHalign completely fails to align, yielding small and incorrect alignments with an F1 score of 0. On AL10063410 PPalign also failed, but on AL10050875 and AL00050715 it did somewhat better than HHalign, correctly aligning in each case roughly a fifth of the target alignment while still being wrong on the other four fifths. On AL00050475 and AL00050692, PPalign successfully retrieves about half of the target alignments, where HHalign retrieved only a fifth and a third of them, respectively. The contribution of the coupling parameters is particularly noticeable for AL00050799, where PPalign correctly retrieves almost 70% of the alignment while HHalign retrieves only 20% of it (see the detailed analysis in Fig. 6).
PPalign is significantly outperformed by HHalign on 3 pairs, marked by yellow dots in Fig. 5b. On AL00053335 (7.43% sequence identity), PPalign suffers from its tendency to align too many positions: like HHalign, it correctly aligns half of the target alignment, but it proposes a longer alignment than HHalign, making its precision drop to around 40% while HHalign stays around 60%. The two other pairs are AL00050021 and AL00052441, with respectively 14.61% and 15.38% sequence identity, allowing HHalign to correctly align 60% of the target alignment. On AL00052441, PPalign correctly aligns more than 50% of the target alignment, but the main difference again comes from the precision (0.58 vs 0.81). Results on AL00050021 are clearly in favour of HHalign, with an F1 score of 0.6 compared to 0.4 for PPalign, and can be explained by the extremely gappy MSAs used to build the models (more than 1/3 of the positions in the reference alignment were trimmed). Interestingly, PPalign without coupling score (independent-site PPalign) achieves an F1 score comparable to HHalign's (0.580 vs 0.578), despite Potts models handling gaps poorly compared to pHMMs. Besides, while PPalign's alignments are most of the time better with the coupling score, 2 sequence pairs were significantly better aligned by independent-site PPalign than by PPalign with couplings: the already discussed AL10050875, where it somewhat improves the poor quality of PPalign's alignment, and AL00089447 (12.93% sequence identity), where it outperforms HHalign, which itself outperforms PPalign with couplings.
Discussion
Although the problem is very likely to be NP-hard, since the threading problem is NP-hard [38], these experiments demonstrate that PPalign yields optimal Potts model to Potts model alignments, up to a precision ε, in tractable time. These results have to be confirmed on bigger instances. For now, experimentation is limited by memory handling in CCMpredPy, which is currently the only inference method offering the features we require to infer comparable Potts models, but the current implementation of CCMpred [30] shows that this type of inference can be optimized to handle significantly larger models. This should enable us to test larger alignments in the future. Based on our experimentation, we expect these alignments to also be tractable. This is surprising with respect to the NP-complete nature of the problem, but it seems that alignments of Potts models are not the hardest instances when the models properly represent homologous proteins. We think, however, that this depends on the choice of the parameters shaping the inference of Potts models and on the similarity of the models to align: these questions deserve further study to better understand the application scope of this method.
Regarding alignment quality, our results for the alignment of Potts models inferred with a pseudo-likelihood method designed for co-evolution prediction purposes are overall better than for the alignment of pHMMs by HHalign, with significant examples demonstrating how taking couplings into account can improve the alignment of remote homologous proteins, especially for the lowest-similarity alignments. There is still room for improvement in our method. We have noticed a tendency to align too many positions, which can be corrected, and our worst score with respect to HHalign is associated with very gappy training MSAs, indicating that augmenting Potts models with an appropriate gap-handling strategy would undoubtedly improve our results. Above all, it is worth noting that independent-site PPalign sometimes finds a better alignment than PPalign, the coupling matrices bringing more noise than assistance in these cases. To get better alignments, the priority now is to work on a more robust inference of Potts models, to make them more comparable and informative for homology search despite the relatively small size of training samples. We proposed here some ideas towards the inference of more canonical Potts models, with only the necessary couplings, as well as some post-processing steps, notably to smooth the weights by simulated uniform pseudocounts. This latter step allowed us to raise the average F1 score from 0.48 to 0.60, but we think that a more direct procedure would still be preferable. We are now searching for an efficient Potts model inference method that can be geared towards canonicity, that provides the possibility of adding pseudocounts on the single and double amino acid counts (thus excluding methods based on pseudo-likelihood maximization), and that is able to infer extended Potts models with an appropriate gap-handling strategy. Besides, though the focus of this paper was the alignment of models inferred on pre-built multiple sequence alignments, it should be noted that the quality of the sequence alignments we provide strongly depends on the quality of their associated MSAs. The use of a suitable inference method will allow us to properly test this dependency on the method used to retrieve close homologs and on the chosen alignment depth in future experiments.
Conclusion
While Potts models have been successfully used for contact prediction and other tasks on protein sequences, using coevolutionary information captured by direct coupling analysis to improve homology search by sequence alignment seems promising, but challenging. The main computational bottleneck is the hardness of alignments involving Potts models.
We presented here PPalign, our method for Potts model to Potts model alignment, based on an Integer Linear Programming formulation of the problem and an implementation relying on an efficient solver able to yield the optimal solution in tractable time. This initiates a new approach to remote homology search by aligning Potts models inferred from close homologs, similarly to HHalign with the alignment of pHMMs but with the addition of long-distance sequence correlations reflecting the 3D structure of proteins. In this approach, Potts models need to be comparable. As a basic principle for building canonical Potts models, we proposed to infer models with as much weight as possible on the positional parameters, adding only the necessary weight on pairwise couplings. We also proposed a scheme for lessening the effects of small sample variations on the Potts models' parameters.
To experimentally assess the feasibility and interest of the approach, we carefully selected a set of non-redundant reference pairwise alignments with low sequence identity and with enough close homologs for each aligned sequence to infer a Potts model. We carried out rigorous experimentation with a strict separation of the data used to train the hyperparameters of the method and the data used to test its performance. Results on test alignments confirm that Potts models can be aligned in reasonable time (1′37″ on average) and that taking into account direct coupling information can improve sequence alignments, especially for remote homologs with the lowest sequence identity.
Our experiments suggest that new research on the inference of Potts models could improve their usefulness for homology search. The approach would undoubtedly benefit from extending to Potts models the insertion/deletion modeling capacities as well as the efficient pseudocount schemes of pHMMs. A perhaps more difficult issue is to obtain guarantees on a canonical form, or at least some robustness, of inferred Potts models to make them more comparable. We hope that PPalign's efficiency and optimality will help to perform unbiased investigations in these directions.
"Computer Science",
"Biology"
] |
The Role of Tetrahydrobiopterin in the Regulation of Neuronal Nitric-oxide Synthase-generated Superoxide*
Tetrahydrobiopterin (H4B) is a critical element in the nitric-oxide synthase (NOS) metabolism of L-arginine to L-citrulline and NO•. It has been hypothesized that, in the absence of or under nonsaturating levels of L-arginine, where O2 reduction is the primary outcome of NOS activation, H4B promotes the generation of H2O2 at the expense of O2•−. To test this hypothesis, two different enzyme preparations were used: H4B-bound NOS I and H4B-free NOS I. Initial rates of NADPH turnover and O2 utilization were found to be considerably greater in the H4B-bound NOS I preparation than in the H4B-free NOS I preparation. In contrast, the initial generation of O2•− by the H4B-free NOS I preparation was substantially greater than that measured with the H4B-bound NOS I preparation. Finally, by spin trapping nearly all of the O2•− produced by NOS I, we found that the initial rate of H2O2 production by H4B-bound NOS I was considerably greater than that by H4B-free NOS I.
In 1992, we discovered that NOS I generates O2•− in the absence of L-arginine (13). More recently, NOS II and NOS III, like NOS I, have been found to generate O2•− during enzymic cycling (14-16). In the presence of L-arginine, NOS I generates both NO• and O2•−; the ratio of these free radicals depends on the concentration of L-arginine (17, 18). Thus, L-arginine is one of the controlling factors that dictate the selectivity of the free radicals produced by NOS. However, in the absence of substrate, NOS uses O2 as the terminal electron acceptor, generating O2•− and H2O2 by sequential one-electron reductive steps (see Fig. 1). Under these conditions, there is undoubtedly an alternative mechanism by which NOS regulates the formation of each of these cell-signaling products of O2 reduction. One possibility is that H4B controls the production of O2•− by increasing the reduction rate of the NOS-Fe2+O2 species (Refs. 19-22 and Fig. 1). Evidence supporting this theory comes from experiments in which the addition of H4B to purified NOS I diminished the spin trapping of O2•− (17, 18, 23). However, these findings must be viewed with caution, because H4B in aqueous solution has been reported to scavenge O2•− (24-26) with a rate constant of 3.9 × 10⁵ M⁻¹ s⁻¹ (26). Thus, the conclusion drawn from the earlier studies (17, 18, 23) […]
Purification of NOS I-NOS I was expressed and purified essentially as described by Roman et al. (30), with a modification to the culture conditions.

* This work was supported in part by National Institutes of Health Grants RR-12257 (to G. M. R.), T32-ES07263 (to P. T.), R25-GM-55036 (to J. W.), NS-34152 (to G. F.), GM-52419, and Robert A. Welch Foundation Grant AQ1192.
NADPH Consumption-Oxidation of NADPH was performed in a reaction using potassium phosphate buffer (50 mM, pH 7.4, 1 mM DTPA, 1 mM EGTA), CaCl2 (2 mM), calmodulin (100 units/ml), and NADPH (150 μM) at room temperature. The reaction was initiated by the addition of NOS I (0.40 μM). A UV-visible spectrophotometer (Uvikon, model 940, Research Instruments International, San Diego, CA) was used to monitor the reaction spectrophotometrically at 340 nm. The initial rate of NADPH oxidation was estimated using the extinction coefficient of NADPH at 340 nm.

Oxygen Consumption-Oxygen consumption was measured with a commercial oxygen monitoring system (Hansatech). The system was composed of a membrane-coated Clark-type electrode fitted in a glass body reaction chamber and equipped with a Teflon-coated stirring bar and an air-tight stopper. Data acquisition was performed with proprietary hardware and software (Hansatech) (32).

Ferricytochrome c (0-23 μM) was used as a competitive inhibitor (33). The reaction mixtures were immediately transferred to an EPR flat quartz cell and introduced into the cavity of the EPR spectrometer (model E-109; Varian Medical Systems, Inc.). EPR spectra were recorded at room temperature 3 min after the reaction was initiated by the addition of xanthine oxidase. Instrument settings were: microwave power, 20 mW; modulation frequency, 100 kHz; modulation amplitude, 0.5 G; sweep time, 12.5 G/min; and response time, 0.5 s.
Estimation of the Half-life of BMPO-OOH-
The half-life of BMPO-OOH was determined by monitoring the decrease in the first line of the EPR spectrum of BMPO-OOH as a function of time. The reaction mixture contained BMPO (50 mM) and hypoxanthine (400 μM) in potassium phosphate buffer (chelexed, 50 mM, pH 7.4, 1 mM DTPA) for 10 min, and then SOD (30 units/ml, as defined in Ref. 34) was added. The reaction mixture was immediately transferred to an EPR flat quartz cell and introduced into the cavity of the EPR spectrometer (model E-109; Varian Medical Systems, Inc.). EPR spectra were recorded at various time intervals for 60 min.
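As a side note on the analysis implied here: treating the decay of the first EPR line as a (pseudo-)first-order process, the half-life follows from a log-linear fit. The sketch below illustrates this; the peak-height values and time points are hypothetical, standing in for the EPR time course described above.

```python
import numpy as np

# Hypothetical BMPO-OOH peak heights (arbitrary units) read from EPR
# spectra at times t; real values would come from the 60-min time course.
t = np.array([0, 5, 10, 20, 30, 45, 60], dtype=float)        # min
y = np.array([100, 79, 63, 40, 25, 12.5, 6.3], dtype=float)  # peak height

# First-order decay: y = y0 * exp(-k t), so ln(y) is linear in t.
slope, intercept = np.polyfit(t, np.log(y), 1)
k = -slope                     # decay rate constant, 1/min
t_half = np.log(2) / k         # half-life, min
print(f"k = {k:.3f} min^-1, t1/2 = {t_half:.1f} min")
```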
Rate of Hydrogen Peroxide Formation-Estimation of H2O2 production was obtained by fluorometric analyses (fluorometer, Hitachi model F2500, High Technologies America, Inc., San Jose, CA). A modified method utilizing the dye Amplex Red was adopted (35-37). The incubation medium was supplemented with Amplex Red (1 μM) and horseradish peroxidase (5 units/ml) in sodium phosphate buffer (50 mM, 1 mM EGTA, pH 7.4). The reaction mixture contained NADPH (160 μM), CaCl2 (0.5 mM), calmodulin (100 units/ml), BMPO (100 mM), and SOD (0.04-80 units/ml). SOD (0.04 unit/ml) was added to each reaction to suppress the initial fluorescence seen upon inclusion of NADPH, and SOD (0.14-80 units/ml) was used in control experiments described under "Results and Discussion." The reaction was initiated by the addition of purified H4B-free NOS I (4 nM) or H4B-bound NOS I (4 nM) into the reaction mixture. The initial rate of H2O2 generation was recorded as an increase in fluorescence of the dye at 585 nm with the excitation set at 550 nm. The fluorescence was calibrated by generating a standard curve with known concentrations of H2O2. The concentration of the commercial 30% H2O2 solution was calculated from light absorbance at 240 nm employing an extinction coefficient of 0.0436 mM⁻¹ cm⁻¹; the stock solution was diluted to 50 μM with water and used for calibration immediately. The specificity of horseradish peroxidase/Amplex Red toward H2O2 was confirmed, because tert-butyl hydroperoxide was not found to be a substrate.
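The stock-concentration step above is a direct Beer-Lambert calculation. As a worked illustration (the absorbance reading and the 1-cm path length are assumed, not taken from the text):

```python
# Beer-Lambert: A = eps * c * l, so c = A / (eps * l)
eps = 0.0436   # mM^-1 cm^-1 for H2O2 at 240 nm (value from the text)
l = 1.0        # cm, assumed cuvette path length
A240 = 0.50    # hypothetical absorbance of the diluted stock

c_mM = A240 / (eps * l)
print(f"H2O2 concentration ~ {c_mM:.1f} mM")  # ~11.5 mM for this reading
```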
NOS I Activity by [14C]L-Citrulline Formation Assay-The enzymatic activity of purified NOS I was determined by its ability to catalyze the formation of L-citrulline from L-arginine as previously reported (17).

Several properties make BMPO well suited to these experiments (42). Second, the half-life of BMPO-OOH was considerably greater than that of DMPO-OOH, at 53 s (43), and in the same range as that of DEPMPO-OOH, at 18 min (44). Third, the EPR spectrum of BMPO-OOH exhibited a greater signal-to-noise ratio than that found for DEPMPO-OOH. The small signal-to-noise ratio of DEPMPO-OOH is the result of additional hyperfine splitting associated with the phosphorus atom located at the α-carbon of the pyrroline ring.
Figure 2 shows the BMPO-OOH spectral peak height as a function of time (see inset in Fig. 2). H2O2 can arise from the one-electron reduction of the NOS-Fe²⁺O2 species (Fig. 1). Alternatively, this peroxide can arise from the dismutation of O2·−, for which the rate constant at pH 7.4 is 3.0 × 10⁵ M⁻¹ s⁻¹ (45). It is therefore by no means a trivial task to separate these disparate pathways. After considering several options, we settled on an approach that required increasing the concentration of BMPO to a level at which this nitrone would spin trap most, if not all, of the O2·− produced (41). Thereupon, the only source of NOS-derived H2O2 would be the one-electron reduction of the NOS-Fe²⁺O2 species. We then had to find a method that would meet the following criteria. First, the assay had to detect H2O2 in real time, not at some arbitrary time after the reaction had commenced. Second, the method must not interfere with the spin trapping of O2·−.
Third, given that NOS can easily transfer electrons to a wide variety of one-electron acceptors, such as ferricytochrome c (13), the assay had to be an oxidative process. Fourth, the method had to be sensitive and selective for H2O2. Given these limitations, we settled on a fluorometric assay developed by Zhou et al. (36,37). The overall mechanism involves three distinct reactions, as presented by Chance (46). Although the rate constant for the reaction of H2O2 with horseradish peroxidase to form Compound I is 10 × 10⁶ M⁻¹ s⁻¹ (46), the rate constants for the sequential one-electron reduction of Compounds I and II to the fluorescent resorufin from Amplex Red are unknown. However, based on similar reactions reported in the literature (46) involving Compounds I and II with other donor molecules, we estimate that the rate constant would not be above 1 × 10⁵ M⁻¹ s⁻¹, and most likely it is close to 1 × 10³ M⁻¹ s⁻¹, too slow to allow quantitative estimates of the initial rate of H2O2 production. Therefore, the initial rate of NADPH consumption measured does not have a simple quantitative relation to the fluorescence signal. In control experiments, the reaction mixture contained the components described above, with SOD ranging from 0.14 to 80 units/ml. We found that the rate of H2O2 production, as measured by an increase in fluorescence, was constant over this range of SOD used in the experiment (data not shown). Thus, where appropriate, SOD (0.14 units/ml) was included in each reaction. When BMPO (100 mM) was added to the above reaction mixture, without and with SOD (0.14 units/ml), we found that BMPO spin-trapped ∼90% of the O2·− generated by NOS.
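The ~90% trapping efficiency can be rationalized as a kinetic competition between trapping by BMPO and dismutation. A minimal sketch of that competition follows; the BMPO trapping rate constant and the steady-state superoxide concentration are assumed values for illustration only, while the dismutation constant is the one quoted above.

```python
k_trap = 75.0      # M^-1 s^-1, hypothetical BMPO + O2.- trapping constant
k_dis = 3.0e5      # M^-1 s^-1, O2.- dismutation at pH 7.4 (from the text)
bmpo = 0.100       # M (100 mM BMPO, as used above)
o2_minus = 1.0e-6  # M, assumed steady-state superoxide level

v_trap = k_trap * bmpo        # pseudo-first-order trapping rate, s^-1
v_dis = 2 * k_dis * o2_minus  # per-radical pseudo-first-order dismutation loss, s^-1
print(v_trap / (v_trap + v_dis))  # fraction spin-trapped (~0.93 with these values)
```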
Based on these control experiments, we were confident that, by inclusion of BMPO in the reaction mixture, most of the NOS-produced O2·− would be trapped. The initial O2·− release from NOS was considerably less when H4B was bound to the enzyme than in the absence of this pterin (Fig. 2). After several minutes, however, the EPR spectral peak height of BMPO-OOH (Scheme 1) from H4B-bound NOS I out-paced that observed with H4B-free NOS I (Fig. 2). Although it may be premature to speculate as to the physiologic significance of our findings, we offer one possible scenario. In the absence of or under low levels of L-arginine, where O2 reduction is the primary end product of NOS activation, H4B will undoubtedly play a critical role in regulating the generation of H2O2 and O2·−. Because each of these reduction products of O2 activates a different cell-signaling pathway (47,48), the importance of H4B in the regulation of NOS-derived O2·− and H2O2, with its diversity of physiological functions, should not be underestimated. | 2,629.2 | 2002-10-25T00:00:00.000 | [
"Biology",
"Chemistry"
] |
Spontaneous emission in confined space according to stochastic electrodynamics
Modeling an atomic excited state as a simple charged dipole oscillator immersed in random (zero-point) radiation, we discuss the effects of two metallic plates on the properties of a microscopic system. The spectral distribution of the zero-point electromagnetic field, characteristic of stochastic electrodynamics, and the rate of emission of the oscillator are modified by the boundaries of the cavity. As a result, the lifetimes of the oscillator excited states differ from their free-space values. A comparison with recent experimental results [W. Jhe et al., Phys. Rev. Lett. 58, 666 (1987)] exhibiting suppression of spontaneous decay of excited Cs atoms shows good agreement with our simplified model calculation.
I. INTRODUCTION

The role of zero-point electromagnetic radiation in many physical phenomena is becoming clearer since its first appearance in physics with Planck's second blackbody theory [1]. A few years after Planck's discovery of these vacuum zero-temperature electromagnetic fluctuations, Nernst [2] proposed that zero-point radiation might be responsible for the stability of atomic systems.
With the development of quantum mechanics in the 1920s, zero-point radiation reappeared as a straightforward consequence of field quantization. However, it lost the status of a real field to become a "virtual" field, that is, one that could not be observed directly. Nevertheless, important effects of the zero-point radiation on physical systems were discovered [3]: radiative corrections to the atomic energy levels, the anomalous magnetic moment of the electron, the Casimir effect [4] (forces between atoms and macroscopic objects), etc. Later on, the zero-point fields regained attention with the works by Welton [5] and Sokolov and Tumanov [6], where these quantum electromagnetic fluctuations were considered as the source of the microscopic fluctuations presented by the electron inside an atom.
Experimental confirmations of these theoretical predictions, like, for instance, the measurements of the radiative correction to the H-atom fine structure (Lamb shift [7]) and the Casimir attraction between conducting plates observed by Sparnaay [8], were considered spectacular successes of quantum electrodynamics.
More recently, with the use of lasers, it has become possible to study environmental changes of the vacuum fluctuations, for instance, using highly excited states of atoms which have large polarizabilities and therefore are strongly coupled with the electromagnetic field. If the atom is inside a cavity, its radiative properties are modified because the electromagnetic field surrounding the atom can be drastically altered by the presence of the cavity walls. These effects have been observed experimentally and have generated a new field called "cavity quantum electrodynamics" [9]. A very simple system in which one can understand the important role of zero-point electromagnetic fields is the charged harmonic oscillator. Moreover, by Ehrenfest's theorem, the expectation values of any quantum-mechanical quantity that has linear Heisenberg equations of motion will be identical to the corresponding classical ones and, therefore, quite easy to interpret. Within the realm of nonrelativistic quantum electrodynamics (QED) the Heisenberg equation of motion for this system takes the form [6,10]

$$\ddot{x} = -\omega_0^2 x + \frac{2e^2}{3mc^3}\,\dddot{x} + \frac{e}{m}E_x(t), \qquad (1.1)$$

where $\omega_0$ is the natural frequency of the oscillator, $E_x(t) \equiv -(1/c)\,\partial A_x/\partial t$ is the quantized electric field, and $2e\dddot{x}/3c^3$ is the radiation reaction or self-field. These electromagnetic fields represent, respectively, the fluctuation and the dissipation that govern the dynamics of such a microscopic system, because even at zero temperature there are vacuum fluctuations associated with the field operator $E_x(t)$. Therefore it is possible to consider these fluctuations as the source of quantum fluctuations in the position x of the oscillating charge. In fact, it is not difficult to derive the quantum commutation relation between the position and momentum of the oscillator, as is well known [10-12]. From the stationary solution of (1.1) one can show that the commutation relation between position and momentum follows from the commutation relations associated with the zero-point electromagnetic fields (observe that this is so only if the dissipative force is precisely $2e^2\dddot{x}/3c^3$). It is also easy to show that the ground-state energy of the three-dimensional oscillator is $3\hbar\omega_0/2$ (see Ref. [10] for instance). This happens because, in equilibrium, the emitted radiation is balanced by the energy absorbed from the quantum zero-point electromagnetic fields. In other words, the rate of exchange of energy between the charge and the total radiation field $E = E_x + 2e\dddot{x}/3c^3$ is such that [13,14]

$$\langle n|\tfrac{1}{2}(\dot{x}E + E\dot{x})|n\rangle \;\propto\; -\sum_{n'}\langle n|x|n'\rangle\langle n'|x|n\rangle\,(\cdots). \qquad (1.3)$$

This interesting result means that [13] (1) the charge in the vacuum can only lose energy by cascading downwards to lower energy levels; (2) the ground state cannot be stable in the absence of the vacuum free-space fluctuations which exactly balance the energy loss due to self-reaction. One can easily obtain from (1.3) the value of the Einstein A coefficient of "spontaneous" emission and the lifetime of the excited state in the free-space vacuum. It was also shown in Refs. [13] and [14] that conclusions (1) and (2) are valid for real atoms, which are much more complicated than simple oscillators. As far as we know this was the first confirmation, within the realm of QED, of the proposal made by Nernst [2] more than 70 years ago. However, within Nernst's original scheme, the vacuum zero-point fluctuations were associated with classical electromagnetic fields.
If the radiating system is inside a cavity (near conducting plates for instance) it is expected that some of the properties of this microscopic system will be modified [9].
We address this problem here because of its fundamental relevance to our understanding of the dynamics of the microscopic world. Due to recent advances in experimental techniques, physicists are learning how to modify the behavior of atoms [9]. The zero-point electromagnetic field has a fundamental role in these attempts, as we shall see.
In order to discuss these points we prefer to work within the realm of stochastic electrodynamics (SED) [15-22] instead of using the standard QED formalism.
The reason for this choice is that within SED the zero-point radiation is a real classical (stochastic) electromagnetic field, whereas in QED these vacuum fields are very often considered "virtual," a not-well-defined concept in our opinion [23].
Stochastic electrodynamics is just classical electrodynamics with the hypothesis of a random background radiation in the whole space (zero-point radiation). The reasoning which leads to this idea can be summarized as follows. In space there are systems of charged particles (atoms or molecules) that move according to the classical laws. These atoms will be continuously radiating, and therefore some amount of radiation will always be present in space. The radiation will act on the atoms, and these will arrive at a state of dynamical equilibrium such that the rate of emission equals the rate of absorption. This may explain, in a simple way, the stability of atoms without departing from classical theories. Once the existence of some amount of radiation in space is assumed, its spectral density is fixed by very general principles. In fact, the only spectrum which is Lorentz invariant has a density $\rho_0(\omega)$ such that

$$\rho_0(\omega) = \mathrm{const}\times\omega^3 = \frac{\hbar\omega^3}{2\pi^2 c^3}. \qquad (1.4)$$

This background radiation is such that we have a mean energy of $\hbar\omega/2$ associated with each normal mode of the electromagnetic fields. If we postulate the existence of classical electromagnetic (fluctuating) fields, persisting even at zero temperature, classical electrodynamics is provided with a new boundary condition. In this way some of the quantum behavior of microscopic matter can be predicted entirely on classical grounds [2]. Some examples may be found in the reviews by Boyer [18,21], Milonni [19], Claverie and Diner [20], and de la Peña [22], where the microscopic properties of the harmonic oscillator, blackbody radiation, the diamagnetic behavior of free and harmonically bound charges, the Casimir forces between macroscopic objects, and other phenomena are discussed. More recently it has been shown that the paramagnetic behavior of a rigid magnetic dipole [24], the specific heat of solids [25], and also other fundamental electromagnetic processes [26] may be understood classically.
II. DIPOLE BETWEEN CONDUCTING PLATES
As a simple example that can illustrate how the environment can change the behavior of a microscopic system, we shall consider a dipole oscillator between two large conducting plates separated by a distance a.
Since, very often, one can consider an atomic excited state (with large polarizability) as an oscillating electric dipole emitting radiation [27], the theoretical implications of this example can be checked experimentally, as we shall see.
With the purpose of a clear presentation of another example of the above-mentioned ideas, we propose to discuss in this paper a simple model which describes the emission of radiation by an excited atomic state when the atom is between two mirrors. In our simplified description we shall assume that the rate of energy emitted during the transition is the same as the rate of emission by an oscillator. With this goal in mind, our paper is organized as follows. For the reader's convenience we present in Sec. II a summary of previous results as far as the behavior of an oscillator between conducting plates is concerned. We want to draw the reader's attention to the oscillator properties which are modified, and also to those which are not modified, by the changes in the zero-point modes of the electromagnetic field due to the presence of the metallic plates. In Sec. III we discuss how the spontaneous emission of the oscillator excited states is modified in confined space. We also present, in Sec. III, a comparison between our model calculation and some recent experimental results. In Sec. IV we address ourselves to a qualitative discussion of atomic stability [13,18]. Our goal is to stress the role of zero-point radiation as a source of energy which maintains the ground-state stability (the emitted radiation is compensated by the radiation absorbed from the zero-point electromagnetic fields), which is the essential idea of SED. Finally, we present our conclusions in Sec. IV.
The phenomenon that we are going to study in Sec. III is the spontaneous emission by an excited state. In quantum language, the emission can be suppressed if the emitted photon (wavelength λ) corresponds to a mode which cannot propagate between the plates (λ > 2a for photon polarization parallel to the mirror plates). Since the criteria for propagation are derived from classical Maxwell theory, and since QED makes a distinction between real photons (emitted by the atoms) and the virtual photons from the quantum vacuum, we prefer to work within the realm of classical stochastic electrodynamics, as we have mentioned in the Introduction. Within SED all modes of the electromagnetic fields are treated on the same footing, that is, they correspond to real modes that interact with the atomic dipole. This problem was studied by Marshall [16], who obtained the main results we are going to use in Sec. III.
The same questions were discussed later by Milonni and Knight [28] using the quantum formalism. More recently Cetto and de la Peña [29] considered the problem again within the realm of SED. In all these papers [16,28,29] the authors reach essentially the same conclusions. For the reader's convenience we summarize here the results which we shall use in Sec. III.
If we have an oscillating dipole (frequency ω) between two parallel perfectly conducting plates, the emission rate is different from the free-space emission rate because the fields reflected by the mirror plates interfere with the emitted wave. If the interference is constructive we have enhanced emission or, alternatively, one can obtain inhibited emission in the case of destructive interference.
The phenomenon is typically undulatory in character.
Therefore, if we have an oscillating electric dipole located at the point (0,0,b) between two mirrors separated by the distance a (a > b), the emitted power must be calculated taking into account the whole set of image dipoles (we are assuming that the plates are perpendicular to the z direction and are located at z = 0 and z = a). Since the image dipoles are located at the points z = b ± 2a, b ± 4a, … and at z = −b, −b ± 2a, −b ± 4a, …, one can calculate the total electric field at the position (0,0,b) of the real dipole.
The retarded fields generated by an oscillating dipole are well known [30]. However, we only need to take into account those fields which contribute to the emission and absorption of radiation by the real dipole [16]. Of course there are terms that give rise to a van der Waals force between the real dipole and the conducting plates [32]. This force will contribute to displace the dipole as a whole and will not be considered here.
The above discussion shows that the spontaneous emission by the oscillator excited states is modified by the conducting plates. The equations of motion, which are isotropic in free space, are changed to [16]

$$\ddot{x} + \gamma_x\dot{x} + \omega^2 x = \frac{e}{m}E_x, \qquad
\ddot{y} + \gamma_y\dot{y} + \omega^2 y = \frac{e}{m}E_y, \qquad
\ddot{z} + \gamma_z\dot{z} + \omega^2 z = \frac{e}{m}E_z, \qquad (2.1)$$

if the oscillator is between two conducting plates. In this case we must take $\gamma_x = \gamma_y = \gamma_\parallel$ and $\gamma_z = \gamma_\perp$. Here $E_x$, $E_y$, and $E_z$ are the components of the electric field associated with the zero-point radiation. These fluctuating fields are also modified by the metallic boundaries [16,29].
In free space the damping constants are such that $\gamma_x = \gamma_y = \gamma_z = \gamma_{FS}$, with

$$\gamma_{FS} = \frac{2e^2\omega^2}{3mc^3}, \qquad (2.2)$$

which is the well-known expression for the damping constant [31]. However, for an oscillator at the point (0,0,b) between two metallic plates (see Fig. 1), the damping constants $\gamma_\parallel$ and $\gamma_\perp$ take the modified forms given by (2.3) and (2.4), where $[\omega a/\pi c]$ is the integral part of the ratio $\omega a/\pi c$. As far as we know these expressions were obtained for the first time by Marshall [16]. More recently Cetto and de la Peña [29] rederived the expressions without using the image method. This method, however, was used by Milonni and Knight [28] within the realm of the quantum theory. The quantum result can be obtained from (2.3) and (2.4) by replacing $\gamma_{FS}$ by the Einstein A coefficient of spontaneous emission. We can understand this (within the realm of SED) if we recall a previous result by França and Marshall [33]. By studying transitions between oscillator states with energies $\hbar\omega(n+\tfrac{1}{2})$, these authors obtained for the Einstein A coefficient (corresponding to a downward transition) a free-space result, (2.5), in which one can easily see that the Einstein coefficient of spontaneous emission has contributions from the radiation reaction field and also from the zero-point electromagnetic fields [see also (1.3) and (1.4)].
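For orientation, the free-space damping constant (2.2) can be evaluated numerically in Gaussian units. A minimal sketch, assuming the reconstructed form γ_FS = 2e²ω²/3mc³ and using the 3.49-μm wavelength of the Sec. III experiment; as the text notes, this classical oscillator value should not be expected to equal the actual decay rate of the real atomic state.

```python
import math

# Gaussian (CGS) constants
e = 4.80320e-10    # electron charge, statC
m = 9.10938e-28    # electron mass, g
c = 2.99792e10     # speed of light, cm/s

lam = 3.49e-4                  # wavelength, cm (3.49 um)
omega = 2 * math.pi * c / lam  # angular frequency, rad/s

gamma_fs = 2 * e**2 * omega**2 / (3 * m * c**3)  # damping constant, 1/s
print(f"gamma_FS ~ {gamma_fs:.2e} s^-1 "
      f"(classical lifetime ~ {1/gamma_fs:.2e} s)")
```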
We have already mentioned that the fluctuating fields of the zero-point electromagnetic radiation are also modified by the metallic boundaries. Marshall [16] was able to calculate the spectral density of modes by requiring that each normal mode have an average energy $\hbar\omega/2$. This is the usual hypothesis in SED, the same as in QED (in free space this corresponds to the requirement of a Lorentz-invariant spectral distribution for the zero-point electromagnetic radiation). Using the modified spectral distribution and also (2.3) and (2.4), Marshall was able to show that the statistical properties associated with the ground state of the oscillator are preserved, that is, they are the same as those valid for free space. In other words, the ground-state average energy is precisely $3\hbar\omega/2$ and the probability distribution is a Gaussian with the variances given in (2.6). Here the frequency $\omega$ must be greater than $\pi c/a$ if we have perfectly conducting plates [see (2.4)]. Also the oscillator phase-space distribution is the same as in free space. If the oscillator is forced by an additional deterministic external force (with arbitrary time dependence), it is easy to show that coherent states are generated in the same way as we have seen before [33,34].
There are also other properties of the oscillator which are affected by the environmental modifications of the electromagnetic vacuum. They are mass corrections and Lamb-shift corrections to the energy. However, these are very small, as one can see from the more recent work by Cetto and de la Peña [29]. Hence the environmental effects due to the presence of the plates show up mainly through the modifications displayed by $\gamma_\parallel$ and $\gamma_\perp$. This affects the lifetime of excited states and can be observed experimentally, as we shall see in Sec. III.
III. SPONTANEOUS EMISSION IN CONFINED SPACE
Let us consider an atom inside a metallic cavity. The structure of the spectral distribution of zero-point radiation is dramatically altered at wavelengths comparable to the dimensions of the cavity [16,29]. In the case of two perfect mirrors, for instance, $\gamma_\parallel$ will be zero for wavelengths larger than 2a, as one can see from (2.4).
Very recently, interesting experiments were carried out using the parallel-mirror geometry we discussed in the preceding section. One experiment demonstrated the inhibition of spontaneous emission from Rydberg states of cesium atoms [35]. A beam was passed through a tunnel between two mirrors separated by a distance a such that a < λ/2, where λ is the wavelength of the emitted radiation. The atoms surviving in the initial quantum state were detected at the tunnel exit by ionizing them in a small electric field. The lifetime obtained is at least 20 times larger than it is in free space.
A similar experiment was performed by Jhe et al. [36].
In this experiment the inhibited transition was 5D₅/₂ → 6P₃/₂, at a wavelength of 3.49 μm. The excited atoms propagate through the tunnel (between two metallic mirrors separated by a 1.1-μm gap) for about 13 natural lifetimes without appreciable decay. The experimenters applied a small magnetic field (2.4 G) in order to demonstrate the anisotropy of spontaneous emission between the mirrors. The magnetic dipole $\mu = (e/2c)\,\mathbf{r}\times\dot{\mathbf{r}}$ associated with the excited state will precess around the applied magnetic field B. The electric dipole $\mathbf{p} = e\mathbf{r}$, which is always perpendicular to $\mu$, will change its orientation in space, and the spontaneous-emission rate will be different for different magnetic-field orientations. Modeling an atomic excited state as a simple dipole oscillator immersed in the zero-point radiation, we want to see the effects of the two metallic plates on the properties of this microscopic system.
Let us assume that the magnetic field B is oriented in such a way that it makes an angle θ with the z direction, which we have taken as perpendicular to the mirrors.
The magnetic moment $\mu$ will precess around the B direction with the Larmor frequency $\omega_L = eB/2mc$. It is easy to show that the orientation of the vector $\mu$ will change in time according to

$$\mu_x/\mu = \cos\theta\,\sin\varphi\,\cos(\omega_L t) + \sin\theta\,\cos\varphi,$$
$$\mu_y/\mu = \sin\varphi\,\sin(\omega_L t),$$
$$\mu_z/\mu = -\sin\theta\,\sin\varphi\,\cos(\omega_L t) + \cos\theta\,\cos\varphi, \qquad (3.1)$$

where $\mu \equiv |\mu|$ is assumed to be constant and the angle $\varphi$ is defined by the initial orientation of $\mu$ with respect to the direction of the magnetic field B. This is illustrated in Fig. 1. We will also assume for simplicity that the electron motion in the excited state corresponds to a circular orbit with a frequency $\omega$. Therefore the electric dipole $\mathbf{p}$, which is always perpendicular to $\mu$, will have an orientation in space that is easy to relate to the orientation of the vector $\mu$. If the instantaneous orientation of $\mu$ has a polar angle $\psi(t)$ and an azimuthal angle $\phi(t)$, one can show that

$$p_x/p = \cos\psi\,\cos\phi\,\sin(\omega t) - \sin\phi\,\cos(\omega t),$$
$$p_y/p = \cos\psi\,\sin\phi\,\sin(\omega t) + \cos\phi\,\cos(\omega t),$$
$$p_z/p = -\sin\psi\,\sin(\omega t), \qquad (3.2)$$

where $p = |\mathbf{p}|$ is a constant for circular orbits. Within this model one can consider that the excited state emits radiation like a dipole oscillator with a frequency $\omega$; the emitted radiation will therefore have a wavelength $\lambda = 2\pi c/\omega$. In order to apply the results of Sec. II we must obtain the effective damping constant ($\Gamma_{\rm eff}$) associated with the dipole motion described by (3.1) and (3.2). Since one can define the effective damping constant as the ratio between the average emitted power and the average oscillator energy, we get

$$\Gamma_{\rm eff}\,p^2 = \gamma_x\,\overline{p_x^2} + \gamma_y\,\overline{p_y^2} + \gamma_z\,\overline{p_z^2}, \qquad (3.3)$$

where the bar denotes time average. The damping constants $\gamma_i$ were obtained previously, that is, $\gamma_x = \gamma_y = \gamma_\parallel$ and $\gamma_z = \gamma_\perp$. Of course $\Gamma_{\rm eff}$, besides depending on the magnetic-field orientation $\theta$, depends on the position occupied by the atom and on the distance between the mirrors, as one can see from (2.3) and (2.4).
Using (3.2) one can express $\Gamma_{\rm eff}$ as

$$2\,\Gamma_{\rm eff}(\theta,\varphi) = \gamma_\parallel\,\bigl(1 + \overline{\cos^2\psi}\bigr) + \gamma_\perp\,\bigl(1 - \overline{\cos^2\psi}\bigr), \qquad (3.4)$$

and since $\cos\psi = \mu_z(t)/\mu$, we get

$$\overline{\cos^2\psi} = \cos^2\theta\,\cos^2\varphi + \tfrac{1}{2}\sin^2\theta\,\sin^2\varphi, \qquad (3.5)$$

where the angle $\varphi$ defines the initial orientation of the excited-state magnetic dipole $\mu$ with respect to the magnetic field B (see Fig. 1). Since, in the atomic beam which enters the tunnel between the mirrors, the excited atoms have a random distribution in the orientation angle $\varphi$, we must take this into account.
If t is the tunnel crossing time, the fraction $f(\theta)$ of atoms which survive after passing through the gap between the mirrors will be

$$f(\theta) = \tfrac{1}{2}\int_0^{\pi} d\varphi\,\sin\varphi\,\exp[-t\,\Gamma_{\rm eff}(\theta,\varphi)], \qquad (3.6)$$

where the integral over $\varphi$ means that we are averaging over the initial orientation of the atomic dipole. The fraction $f(\theta)$ can be compared with the experimental data, as we shall see.
Another way of expressing $f(\theta)$ leads to expressions (3.7) and (3.8), involving a quantity $A \propto (\gamma_\parallel - \gamma_\perp)(1 - \tfrac{1}{2}\sin^2\theta)$ (3.9). If $A < 0$ the integral (3.8) corresponds to the well-known error function. If, however, $A > 0$, (3.8) corresponds to Dawson's integral [37]. In both cases it must be evaluated numerically. The latter experiment above [36] is characterized by a value of t (the average tunnel crossing time) such that $t\gamma_{FS} = 12.8$, where, as before, $\gamma_{FS}$ is the damping constant of the excited state 5D₅/₂ in free space. In our calculation we consider $t\gamma_{FS}$ as given, since we are not able to obtain $\gamma_{FS}$ for this state within the framework of SED [see (2.2)]. Since in this experiment λ = 3.49 μm and the mirror separation is a = 1.1 μm, we have $\omega a/\pi c = 2a/\lambda = 0.63$. Therefore, according to the previous results (2.3) and (2.4), we get $\gamma_\parallel = 0$ and $\gamma_\perp/\gamma_{FS} = 3\lambda/4a = 2.38$.
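The survival fraction (3.6) is straightforward to evaluate numerically with the parameters just quoted. A minimal sketch, assuming the reconstructed forms of (3.4)-(3.6) above; it is meant to illustrate the calculation behind the solid line of Fig. 2, not to reproduce the published curve exactly.

```python
import numpy as np

t_gamma_fs = 12.8   # t * gamma_FS (from the text)
g_par = 0.0         # gamma_parallel / gamma_FS, since lambda > 2a
g_perp = 2.38       # gamma_perp / gamma_FS = 3*lambda/(4a)

def survival_fraction(theta, n=2001):
    """f(theta) = (1/2) * integral_0^pi sin(phi) exp(-t Gamma_eff) dphi."""
    phi = np.linspace(0.0, np.pi, n)
    cos2psi = (np.cos(theta) ** 2 * np.cos(phi) ** 2
               + 0.5 * np.sin(theta) ** 2 * np.sin(phi) ** 2)
    gamma_eff = 0.5 * (g_par * (1 + cos2psi) + g_perp * (1 - cos2psi))
    w = np.sin(phi) * np.exp(-t_gamma_fs * gamma_eff)
    # trapezoidal rule over phi
    return 0.5 * float(np.sum((w[:-1] + w[1:]) * np.diff(phi)) / 2.0)

for deg in (0, 30, 60, 90):
    print(deg, survival_fraction(np.radians(deg)))
```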
The comparison between our model calculation, which leads to (3.7), and the experimental data is shown in Fig. 2. The agreement between our classical calculation based on SED and the experimental observations is very good. The corresponding QED calculation based on the Lamb-Bethe theory also presents very good agreement with the experimental data reported in Ref. [36]. However, as far as we know, these quantum calculations have not been published [9,36].

FIG. 2. Excited-state transmission between the mirrors as a function of the angle θ between the magnetic field and the normal to the plates. The solid line corresponds to (3.7), normalized to the maximum counting rate (30 counts per second [36]).
IV. DISCUSSION
The phenomena discussed in the preceding section are very stimulating.
One can conclude that physicists are learning to control atomic behavior. If one has an atom (or molecule) between mirrors or inside resonant cavities [9], one can modify the behavior (in a probabilistic sense) of the microscopic system.
In the case of two parallel mirrors, for instance, we know that atomic excited states which emit radiation (and also absorb energy from the zero-point electromagnetic fields) at a wavelength whose normal modes are affected by the cavity boundaries have different properties from the same atomic states in free space. Not only is the lifetime affected. If the atom stays for a long time inside the cavity, some states are "eliminated" because they become more unstable, while other states acquire the status of "stable states." From the simple example we have discussed one can infer that some states with large z components of the angular momentum become more stable, because the electron trajectory is parallel to the plates and has inhibited emission of radiation ($\gamma_\parallel = 0$).
This is interesting because one can modify the chemical properties of atoms and molecules by putting them inside appropriate cavities. The technological implications of this kind of "cavity chemical dynamics" are obviously attractive. However, we want to draw the reader's attention to other fundamental implications. We have just seen that an unstable excited state can become stable (that is, one can increase its lifetime by a factor of 20, as has been observed in some experiments [35]) if we put the atom inside an appropriate environment. We understand this in terms of a balance between the emitted radiation and the energy absorbed from the environmental radiation, that is, the zero-point electromagnetic fields which correspond to the allowed normal modes associated with the environment.
This may be considered a typical classical (stochastic) mechanism for atomic stability. Therefore the natural questions which appear immediately are the following: What occurs with an atom in free space? Is it possible to understand the stability of the atomic ground state in the same manner?
In our opinion the answers to these questions may be simple, as we have already said in the Introduction. It is obvious that isolated systems do not exist. Therefore the environmental radiation in free space is simply the emitted radiation which comes from distant matter in the universe. Such a proposal, as an explanation for the observed zero-point electromagnetic fields, has been discussed more than once in the past [38,39]. As far as the second question is concerned, we also believe that the answer may be affirmative [20,40-42].
In this article we have tried to show how two ideas originating within the entirely classical concepts of stochastic electrodynamics have paved the way towards a deeper understanding of the interaction between atoms and the radiation field. The rates of such processes may be calculated with extraordinary accuracy in quantum electrodynamics, but their explanation remains obscure.
The first of these ideas is the understanding of the stationary states of nonrelativistic quantum mechanics, and the second is the related question of the status of the canonical commutation relations between position and momentum.
Both of these have to be understood in a dynamical sense; the commutation relations can be maintained for all times only by introducing both the radiation reaction field and the zero-point field. This was recognized first within quantum electrodynamics by the group of Ackerhalt, Knight, and Eberly [43] and by Milonni [12], but had been well recognized previously by the school of stochastic electrodynamics. The more recent recognition, by Dalibard, Dupont-Roc, and Cohen-Tannoudji [13] and others [14], that a full understanding is possible only by working with the symmetrically ordered product of field and atom operators (leading to a full recognition that the radiation reaction and the zero-point fields have a separate real existence) has led to an even closer convergence between QED and SED. One can now recognize not only why the ground state is the only truly stationary state, but also that the stability of the ground state itself results from the existence of the zero-point field.
We remark here that the fashionable area of research now known as cavity quantum electrodynamics was also foreseen in stochastic electrodynamics, since Marshall showed that, if we assume that a harmonic oscillator keeps the same ground state in a cavity as it has in free space, then this necessarily means replacing the zero-point field of free space by one in which each cavity mode of oscillation is occupied by radiation of average intensity equivalent to half a "photon." We also remark that, in a subsequent article, Henry and Marshall [44] extended this idea to calculate the zero-point field in a cavity of finite quality factor Q, something that, so far as we know, has not been done in quantum electrodynamics. However, it is already seen from the article of Henry and Marshall that the statement [36], made frequently in cavity electrodynamics, that modes with wavelength greater than the dimension of the cavity (λ > 2a in our example) are completely extinguished, has to be modified. In a cavity of finite Q there is always some residual noise at all frequencies. But, by the fluctuation-dissipation theorem, the corresponding relaxation time for an oscillator with such frequencies will be very much greater inside the cavity than in free space.
We have to recognize, however, that, perhaps because its objective is to understand (rather than merely calculate) physical phenomena, stochastic electrodynamics has not made any advances in the calculation of atomic energy levels. Indeed all the evidence is that the simple Braffort-Marshall equation of stochastic electrodynamics is not adequate for the electron in a Coulomb field [41,42]. Nevertheless, we can claim that there is some virtue in restricting attention to models based on the system which stochastic electrodynamics understands really well: the harmonic oscillator. In this context the article by Cray, Shih, and Milonni [27], together with the references quoted therein, is very illuminating.
It is essentially in accordance with such ideas that we have introduced our simple model with circular orbits in Sec. III.
The proposition that the intuitive insights of stochastic electrodynamics have something to offer, even in helping to understand such a monumental computational achievement as the QED calculation of the gyromagnetic factor of the electron (g − 2), is well illustrated by Cohen-Tannoudji [14]. There it is shown that the Lamb shift is mainly due to vacuum fluctuations, but the spin anomaly g − 2 is mainly due to radiation reaction. In other words, the leading term of g − 2 is given, in both sign and magnitude, by an interaction of the spin with the vacuum fluctuations and the radiation reaction field; that is, both are necessary.
ACKNOWLEDGMENTS
This work was partially supported by DGICYT (Spain) Grant No. PB 87-0014. One of us (H.M.F.) acknowledges the hospitality of the Universidad de Cantabria and also the financial support from Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP), Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq), and the programa Banco Interamericano de Desenvolvimento - Universidade de São Paulo.
FIG. 1. Initial orientation of the magnetic moment μ with respect to the magnetic field B.

See P. W. Milonni and M. L. Shih, Am. J. Phys. 59, 684 (1991), for interesting comments concerning Planck's papers and the appearance of the zero-point energy in quantum theory. | 6,979 | 1992-05-01T00:00:00.000 | [
"Physics"
] |
Optical antenna design for fluorescence enhancement in the ultraviolet
Through rational design, we compare the performance of three plasmonic antenna structures for UV fluorescence enhancement. Among the antenna performance metrics considered are the local increase in excitation intensity and the increase in quantum efficiency, the product of which represents the net fluorescence enhancement. With realistic structures in aluminum, we predict that greater than 100× net enhancement can be obtained. © 2012 Optical Society of America OCIS codes: (240.6680) Surface plasmons; (300.6540) Spectroscopy, ultraviolet. References and links 1. R. F. Chen “Fluorescence quantum yields of tryptophan and tyrosine,” Anal. Lett. 1, 35–42 (1967). 2. C. R. Johnson, M. Ludwig, S. O’Donnell, and S. A. Asher “UV resonance Raman spectroscopy of the aromatic amino acids and myoglobin,” J. Am. Chem. Soc. 106, 5008–5010 (1984). 3. G. D. Fasman, ed. Practical Handbook of Biochemistry and Molecular Biology. CRC Press 1989. 4. K. Ray, M. H. Chowdhury, and J. R. Lakowicz “Aluminum nanostructured films as substrates for enhanced fluorescence in the ultraviolet-blue spectral region,” Anal. Chem. 79, 6480–6487 (2007). 5. H. Szmacinski, K. Ray, and J. R. Lakowicz “Metal-enhanced fluorescence of tryptophan residues in proteins: Application towards label-free bioassays,” Anal. Biochem. 385, 358–364 (2008). 6. J. R. Lakowicz, B. Shen, Z. Gryczynski, S. D’Auria, and I. Gryczynski “Intrinsic fluorescence from DNA can be enhanced by metallic particles,” Biochem. Biophys. Res. Commun. 286, 875–879 (2001). 7. J. R. Lakowicz, J. Malicka, I. Gryczynski, Z. Gryczynski, and C. D. Geddes “Radiative decay engineering: the role of photonic mode density in biotechnology,” J. Phys. D: Appl. Phys. 36, R240–R249 (2003). 8. K. Aslan, M. J. R. Previte, Y. Zhang, and C. D. Geddes “Surface plasmon coupled fluorescence in the ultraviolet and visible spectral regions using zinc thin films,” Anal. Chem. 80, 7304–7312 (2008). 9. A. Kinkhabwala, Z. Yu, S. Fan, Y. Avlasevich, K. Müllen, and W. E. Moerner “Large single-molecule fluorescence enhancements produced by a bowtie nanoantenna,” Nat. Photonics 3, 654–657 (2009). 10. A. Taguchi, N. Hayazawa, K. Furusawa, H. Ishitobi, and S. Kawata “Deep-UV tip-enhanced raman scattering,” J. Raman Spectrosc. 40, 1324–1330 (2009). 11. C. C. Davis “Fluorescence: Molecules in a tight spot,” Nat. Photonics 3, 608–609 (2009). 12. S. Attavar, M. Diwekar, and S. Blair “Photoactivated capture molecule immobilization in plasmonic nanoapertures in the ultraviolet,” Lab Chip 11, 841–844 (2011). 13. M. T. Neves-Petersen, T. Snabe, S. Klitgaard, M. Duroux, and S. B. Petersen “Photonic activation of disulfide bridges achieves oriented protein immobilization on biosensor surfaces,” Protein Sci. 15, 343–351 (2006). 14. K. Aslan and C. D. Geddes “Directional surface plasmon coupled luminescence for analytical sensing applications: Which metal, what wavelength, what observation angle?,” Anal. Chem. 81, 6913–6922 (2009). 15. P. R. West, S. Ishii, G. V. Naik, N. K. Emani, V. M. Shalaev, and A. Boltasseva “Searching for better plasmonic materials,” Laser & Photon. Rev. 4, 795–808 (2010). 16. S. Blair and J. Wenger “Enhancing fluorescence with sub-wavelength metallic apertures,” in The Role of Plasmonic Engineering in Surface-Enhanced Fluorescence (C. D. Geddes, ed.) ch. 17 John Wiley & Sons 2008. 17. E. M. Purcell “Spontaneous emission probabilities at radio frequencies,” Phys. Rev. 69, 681 (1946). 18. H. Aouani, O. Mahboub, N. Bonod, E. Devaux, E. Popov, H. Rigneault, T. W. Ebbesen, and J. 
Wenger “Bright unidirectional fluorescence emission of molecules in a nanoaperture with plasmonic corrugations,” Nano Lett. 11, 637–644 (2011). #175459 $15.00 USD Received 6 Sep 2012; revised 30 Nov 2012; accepted 1 Dec 2012; published 21 Dec 2012 (C) 2012 OSA 31 December 2012 / Vol. 20, No. 28 / OPTICS EXPRESS 29909 19. F. Mahdavi and S. Blair “Nanoaperture fluorescence enhancement in the ultraviolet,” Plasmonics 5, 169–174 (2010). 20. E. D. Palik, “Handbook of Optical Constants of Solids,” Academic Press, London (1985) 21. L. Novotny and B. Hecht, “Principles of Nano-Optics,” Cambridge University Press, Cambridge, (2006). 22. H. Fischer and O. J. F. Martin “Engineering the optical response of plasmonic nanoantennas,” Opt. Express 16, 9144–9154 (2008). 23. O. Mahboub, S. C. Palacios, C. Genet, F. J. Garcia-Vidal, S. G. Rodrigo, L. Martin-Moreno, and T. W. Ebbesen “Optimization of bull’s eye structures for transmission enhancement,” Opt. Express 18, 11292–11299 (2010). 24. M. Kuttge, F. J. G. de Abajo, and A. Polman “How grooves reflect and confine surface plasmon polaritons,” Opt. Express 17, 10385–10392 (2009). 25. S. Carretero-Palacios, O. Mahboub, F. J. Garcia-Vidal, L. Martin-Moreno, S. G. Rodrigo, C. Genet, and T. W. Ebbesen “Mechanisms for extraordinary optical transmission through bull’s eye structures,” Opt. Express 19, 10429–10442 (2011). 26. H. J. Lezec, A. Degiron, E. Devaux, R. A. Linke, L. Martin-Moreno, F. J. Garvia-Vidal, and T. W. Ebbesen “Beaming light from a subwavelength aperture,” Science 297, 820–822 (2002).
Introduction
The field of plasmonics has been primarily focused on the visible-NIR range, with comparatively little effort devoted to the UV (defined here as λ < 400 nm). Motivating factors in the study of UV plasmonics are the direct access to biomolecular resonances and native fluorescence, resonant Raman scattering interactions, and the potential for exerting control over photochemical reactions, including photocatalysis.
Organic molecules have electronic resonances in the UV part of the spectrum. The advantages of UV-resonant molecular spectroscopy have been recognized for decades [1,2], such as the use of UV resonant Raman scattering for structural, conformational, and kinetics studies. Biomolecules such as peptides and proteins contain residues that absorb in the UV (220-280 nm); the aromatic amino acids tryptophan, tyrosine, and phenylalanine are fluorescence and Raman active. However, aromatic residues have relatively low fluorescence quantum efficiencies and molar extinction coefficients [1,3], as do the nucleic acid bases, so achieving significant enhancement via plasmonic structures [4] could be a key enabling factor in the label-free detection of proteins [5] or DNA molecules [6,7]. Label-free detection methods are highly desirable for the measurement of the kinetics of molecular interactions, which, when implemented in a highly parallel manner, enable mapping of the interactome of biological systems. Nevertheless, there are numerous organic dye labels in use that absorb/fluoresce in the UV [8]. For native fluorescence, "brightness" (the product of absorption cross-section and quantum efficiency) is about 100× lower than for common fluorescent dyes in the visible. Therefore, a target metric for UV optical antenna design is ∼100× fluorescence enhancement, which is clearly possible in the visible [9] but has not been demonstrated in the UV. For comparison, UV resonant Raman cross-sections of many biomolecules are comparable to resonant cross-sections of organic dye molecules; UV resonance results in approximately a 10⁵ increase in cross-section compared to non-resonant excitation conditions [2]. Indeed, tip-enhanced UV resonance Raman scattering has been demonstrated [10].
Photochemical reactions can be exploited in the UV, where plasmonic enhancement can be used to drive localized chemical reactions on a scale commensurate with the molecules themselves [11] and with increased reaction rates [12]. For example, when aromatic residues in proteins are in proximity to disulphide bonds, ∼280 nm irradiation of the residue can induce breakage of the disulphide bond, creating free thiol groups that link with thiol-reactive surfaces such as Au [13]. Other reactive groups can be used for photocrosslinking at wavelengths typically near 365 nm, such as aryl azides, benzophenone, diazirine rings, and anthraquinones.
One of the limiting factors for UV plasmonics is the material response. Conventional "plasmonic" metals such as Ag and Au suffer from the influence of interband transitions near the blue part of the spectrum, whereas the interband transition of Al lies in the near infrared, opening up a near-Drude-like response in the UV. Other metals are also suitable for UV plasmonic applications [14]. Fig. 1 plots the SPP and LSPR "quality factors" [15], or figures of merit, for some common metals. Clearly, of the metals considered, Al has the highest quality factor in the UV, and it is the metal chosen for this study. This paper focuses on native fluorescence enhancement as an exemplary application of UV plasmonics, but the results apply to other applications as well. We therefore compare UV fluorescence enhancement using three canonical plasmonic structures: dipole antenna, bull's eye nanoaperture, and nanoaperture array.
Simulation model
Most fluorescent molecules can be treated as a system of three energy levels: the singlet ground state S0, the first excited singlet state S1, and the dark, non-fluorescing first excited triplet state T1. The fluorescence count rate per molecule (CRM) in steady state is given by [16]

CRM = κ φ σ I_e / (1 + I_e/I_s),     (1)

where κ is the light collection efficiency (a combination of the optical system and the radiation profile), φ = k_rad/k_tot the quantum efficiency (QE), k_rad and k_nr the rate constants for radiative emission and non-radiative de-excitation from S1 to S0, k_tot = k_rad + k_nr the inverse of the excited-state lifetime τ, σ I_e the net excitation rate, σ the absorption cross-section, and I_s the saturation intensity defined in Eq. 2. According to Eq. 2, CRM enhancement by plasmonic structures (e.g., nanoantennas) consists of three contributions: a local increase in the excitation intensity I_e, a local increase in the radiative emission k_rad or quantum efficiency φ of enclosed fluorophores, and modification of the collection efficiency κ.
The enhancement of quantum efficiency (QE) can be expressed as [9]

f_φ = φ′/φ_o = f_rad / [(1 − φ_o) + φ_o f_Purcell],     (3)

where φ_o is the native QE and f_rad is the ratio of the k_rad values calculated with (denoted by a prime) and without the antenna. The corresponding ratio of total decay rates is known as the Purcell factor, f_Purcell, which represents the change in the spontaneous emission rate of a perfect dipole [17]. According to Eq. 3, the modified QE varies dramatically with the native QE of the fluorophore. Here, we use tryptophan (Trp) as the model fluorophore, whose native quantum efficiency is φ_o = 13% [1]. Trp has maximum absorption near 266 nm and peak emission near 340 nm. It is noted that only radiation into the substrate is used in our calculations, which corresponds to a typical epifluorescence setup through a glass substrate [18], resulting in the calculation of an effective quantum efficiency. Thus, φ_o of tryptophan becomes 8% [19]. The net enhancement (NE) of fluorescence can be described as

NE = f_κ f_φ f_I,     (4a)

where f_I is the excitation enhancement and f_κ is the enhancement in collection efficiency, which is assumed equal to 1. The impacts of different antenna structures on κ will be considered by comparing the far-field radiation profiles. In summary, five important parameters have been introduced. Excitation enhancement (f_I) is often used as the primary metric in the design and analysis of plasmonic structures. The Purcell factor (f_Purcell) is the increase in the total energy emitted by an ideal dipole, which is inversely related to its increase in lifetime. The radiative enhancement (f_rad) represents the fluorescence enhancement under saturated conditions. QE enhancement (f_φ) is a parameter that relates to the radiation efficiency of the fluorophore. NE quantifies the net fluorescence enhancement, including excitation and emission components. In the following sections, these parameters will be used as figures of merit in order to analyze the influence of nanoantenna design on UV fluorescence.
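A small numerical sketch of Eqs. 3 and 4a as reconstructed above. The specific f_rad value below is hypothetical; f_I and f_Purcell are of the order reported later for the dipole antenna, and φ_o = 8% is the effective tryptophan value from the text.

```python
def qe_enhancement(f_rad, f_purcell, phi0):
    """Eq. 3: f_phi = phi'/phi0, in the form consistent with Ref. [9]."""
    return f_rad / ((1.0 - phi0) + phi0 * f_purcell)

def net_enhancement(f_i, f_rad, f_purcell, phi0, f_kappa=1.0):
    """Eq. 4a: NE = f_kappa * f_phi * f_I (f_kappa assumed equal to 1)."""
    return f_kappa * f_i * qe_enhancement(f_rad, f_purcell, phi0)

phi0 = 0.08                   # effective tryptophan QE into the substrate
f_i, f_purcell = 17.0, 11.0   # order of magnitude from the dipole-antenna maps
f_rad = 5.0                   # hypothetical radiative enhancement
print(net_enhancement(f_i, f_rad, f_purcell, phi0))
```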
The three nanoantenna structures considered in this paper are depicted in Fig. 2, including the plan views and cross sections of (a) the dipole antenna, (b) the bull's eye aperture, and (c) the aperture array. The structures are assumed to be supported by a semi-infinite glass (SiO2) substrate and covered by water. The active region, where the enhanced local field interacts with the fluorophore, is shown in the zoom-in image in Fig. 2 and extends just 10 nm above the glass substrate. The dielectric constants of aluminum, water, and glass are taken from handbook data [20].
Three-dimensional electromagnetic simulations are performed using Lumerical FDTD Solutions. Antisymmetric and symmetric boundaries are used along the x and y directions according to the symmetry of the structure and the source, which reduces the calculation and memory overhead without sacrificing resolution. Perfectly matched layers (PML) are used on the other boundaries. The grid size is 1×1×1 nm³ for the dipole antenna and 2×2×2 nm³ for the bull's eye and hole array. In order to calculate the excitation enhancement factor f_I, a plane wave with unit amplitude (1 V/m) is introduced inside the substrate, normally illuminating the structures from the bottom. The average enhancement is calculated by integrating the total intensity within a 10-nm-thick monitor covering the active region, and dividing by the integrated intensity within the same volume but in the absence of the metallic structures. For the emission calculations, the analysis of the FDTD results relies on the fact that, for an atomic dipole transition that can only occur through radiation, the quantum-mechanical decay rate in an inhomogeneous environment can be related to the classical power radiated by the dipole in the same environment [21]. Specifically, we can relate every rate constant to the corresponding power, such as

k_rad / (k_rad + k_nr) = P_rad / P_o,

where P_rad and P_o are the radiative and total emission powers of a dipole. Therefore, an electric dipole with unit amplitude (1 V/m) (at 340 nm) is positioned at the center of the active region. The radiative emission is calculated as the transmission through monitors around the structure, while the total emission is calculated as the transmission through monitors around the dipole. The radiative enhancement (f_rad) and Purcell factor (f_Purcell) can then be obtained by dividing the corresponding emissions by those without the antenna structure. Calculations are performed for x, y, and z dipole orientations, and the reported enhancements are averages across these orientations. QE enhancement and NE can then be calculated according to Eqs. 3 and 4a.
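The post-processing just described reduces to a few power ratios. The sketch below shows the bookkeeping, with hypothetical monitor powers standing in for actual Lumerical outputs (names and values are illustrative, not the solver's API):

```python
from statistics import mean

def emission_factors(p_rad, p_tot, p_rad0, p_tot0):
    """f_rad and f_Purcell from classical dipole powers.

    p_rad,  p_tot  : power through far-field monitors / around the dipole,
                     with the antenna present
    p_rad0, p_tot0 : the same quantities without the antenna
    """
    return p_rad / p_rad0, p_tot / p_tot0

# hypothetical powers (W) for x-, y-, z-oriented dipoles, with antenna
runs = [(2.1e-15, 4.0e-15), (2.0e-15, 3.8e-15), (0.9e-15, 2.5e-15)]
p_rad0, p_tot0 = 4.0e-16, 5.0e-16  # hypothetical free-space reference powers

factors = [emission_factors(pr, pt, p_rad0, p_tot0) for pr, pt in runs]
f_rad = mean(f for f, _ in factors)       # orientation-averaged f_rad
f_purcell = mean(f for _, f in factors)   # orientation-averaged Purcell factor
print(f_rad, f_purcell)
```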
Dipole antenna design
There are four geometrical parameters determining the response of the dipole antenna, as shown in Fig. 2. The arm length L defines the antenna resonance wavelength, while the gap distance G affects the coupling between the two arms. In our studies, we fix both the thickness T and the width W of each arm at 30 nm for simplicity, and vary the gap size (20 ≤ G ≤ 50 nm) and arm length (20 ≤ L ≤ 180 nm). Figure 3 shows the 2-D enhancement maps of six antenna performance metrics versus L and G. Peaks in both excitation and emission enhancement occur under resonance conditions determined by the arm length, whereas the gap size controls the level of enhancement. As expected, a smaller gap size generates higher enhancement due to the stronger coupling between the arms. The map of excitation enhancement in Fig. 3(a) shows three resonances at the excitation wavelength, for arm lengths L = 20 nm, 80 nm, and 130 nm. The field intensity distributions (not shown) verify that these correspond to different resonance orders. Furthermore, the peak enhancement increases with the arm length, which agrees with previous research [22], but it decreases at the fourth resonance (not shown here) due to the increase of material absorption. The highest excitation enhancement is ∼17, at the third resonance (when G = 20 nm).
From the maps of emission enhancement in Figs. 3(b), 3(c), 3(d), and 3(e), the first and second peaks occur at L = 40 nm and 120 nm, shifted to longer arm lengths due to the longer emission wavelength of the dipole (340 nm compared to 266 nm). Comparing the peak enhancements at the two emission resonances, the Purcell factor is relatively unaffected by the arm length (the maximum f_Purcell is ∼11 when G = 20 nm and L = 40 nm), but the radiative enhancement factor (f_rad) has a lower peak value at the longer arm length. This behavior implies that the non-radiative emission increases with the volume of the metallic structure, which in turn gives the lower peak enhancement of QE at the longer arm, as shown in Fig. 3(e) (the maximum f_φ is ∼4.5 when G = 20 nm, L = 30 nm). The net enhancement (NE) in Fig. 3(f) is the product of f_I and f_φ, and reaches maximum values of ∼27 at both the first (L = 20 nm, G = 20 nm) and second (L = 120 nm, G = 20 nm) resonances, where the QE enhancement is greater for the shorter antenna. It should be noted again that only the radiative enhancement into the substrate is used to calculate f_φ and NE, but due to the finite thickness of the antenna, some radiation escapes into the upper half-space.
In the above analysis, the radiation pattern was not considered (i.e., f_κ = 1). The far-field radiation patterns of structures corresponding to the first (L = 20 nm, G = 20 nm) and second (L = 120 nm, G = 20 nm) peak NE, calculated for an x-polarized electric dipole in the active region, are shown in Fig. 4(a). The patterns are indicative of dipole and quadrupole resonances, because the two structures are close to the corresponding orders of the emission resonance. The radiation of the first resonance has a prominent main lobe along the z direction (270°) with a divergence angle of ±55°, while radiation from the second resonance has two strong side lobes around ±50° with respect to the z direction, each with a divergence angle of about ±15°. The spatial cross-section distributions of |E|² for the two resonance modes are shown in Figs. 4(b) and 4(c), respectively. A logarithmic scale is used to allow a greater dynamic range of field intensity to be displayed. The two distinct resonance modes are clearly seen by inspecting the number of nodes in the antenna arms.
Bull's eye antenna design
The bull's eye antenna has a more complicated structure than the dipole antenna, involving the six geometrical parameters shown in Fig. 2. Fortunately, most of the parameters can be approximately related to the groove pitch P through design criteria [23]. In addition, the hole diameter (D) defines the environment around the fluorophore, and has an influence on emission enhancement somewhat independent from the other parameters. The depth (S) and width (W) of the grooves can further modify the optical response through groove modes [24], but these do not change the working mechanism of the bull's eye structure, which relies on constructive interference at the central hole of standing waves emitted by the individual grooves [25]. Therefore, we fix the depth and width at S=20 nm and W=60 nm to remove the effect of the groove mode for simplicity. The thickness of the structure (T) is set to 100 nm. The number of grooves is set to 3 to reduce memory and computational time requirements. The far-field radiation patterns are also considered, as shown in Fig. 6(a). Three patterns, corresponding to the three peak values of NE, are plotted. The pattern of the first peak (P140D50) has two comparatively small side lobes, each with a divergence angle of ±15°, because the pitch corresponds to the excitation resonance rather than the emission resonance. By contrast, the other two patterns are from structures that are close to emission resonance, and show the features of the first-order resonance (a main lobe along z with a divergence angle of ±5°) and second-order resonance (two strong side lobes with divergence angles of ±5°), respectively. The spatial distribution of |E|² for the three cases, in cross-section at the glass interface, is shown in Figs. 6(b), 6(c) and 6(d), respectively. The resonance and off-resonance features can be clearly seen from the corresponding images.
It is worth comparing the bull's eye and dipole antennas. The bull's eye is an extended planar structure with a much greater interaction cross-section, so the excitation enhancement f_I is much higher for roughly the same active area. The round aperture in the bull's eye has an optimal size for the excitation and emission processes (about 50 nm for excitation and 70 nm for emission, in agreement with previous studies [19]), whereas enhancement generally increases for the dipole antenna with decreasing gap. For the bull's eye, the peak values of f_rad into the substrate (Fig. 5(d)) are much larger than the total f_rad (Fig. 5(c)), because the thicker aperture strongly attenuates radiation into the upper half-space. Furthermore, the bull's eye exhibits more directionality in emission due to constructive interference with scattering by the concentric grooves, an effect sometimes called "beaming" [26].
Aperture array design
If a fluorophore is placed inside a specific aperture, an aperture array can be treated as a variation of the bull's eye, with the central aperture surrounded by a square lattice of apertures instead of concentric grooves. Therefore, similar performance should be expected in terms of fluorescence excitation and emission. Again, our analysis focuses on the period P and aperture size D as parameters. The thickness of the structure (T) is fixed at 100 nm. The total number of periods is set to 6 due to memory and speed limitations of the FDTD simulation.
Maps of the figures of merit are generated by changing the aperture size (40 ≤ D ≤ 100 nm) and period (100 ≤ P ≤ 320 nm), and are shown in Fig. 7. Excitation enhancement shows resonance peaks near the same regions as for the bull's eye, but with much smaller enhancement values (maximum f_I ∼23). A more obvious difference can be seen in the emission enhancement in Figs. 7(b), 7(c), 7(d) and 7(e). The influence of the period is less distinct, which implies that the interaction between the central and nearby apertures is weaker than that between a central aperture and concentric grooves. Therefore, the emission enhancement of the aperture array is closer to that of a single aperture, where the aperture size D is the dominant factor. The emission figures of merit f_Purcell, f_rad and f_φ have similar features, with maximum enhancements of ∼4 for D ∼80 nm. These results are very close to those reported before for a single aperture [19] and obtained with a different simulation method, which further validates our analysis. The map of NE under unsaturated conditions exhibits three peaks, with values of ∼41, 37 and 40 at P=120 nm, 200 nm and 300 nm, respectively, for D=50 nm, which follow the dominant excitation resonance. The stronger excitation enhancement due to the collective SPP resonance makes the NE of an aperture array stronger than that of a single aperture. The far-field angular radiation patterns from the aperture array are shown in Fig. 8(a). The patterns for the near-resonant cases (P200D50 and P300D50) have distinct directional peaks on broad non-directional backgrounds, implying some interaction between apertures, while the off-resonance case (P120D50) exhibits a broad non-directional pattern, suggesting reduced inter-aperture interaction. The spatial distributions of |E|² at the glass interface for the three cases are shown in Figs. 8(b), 8(c) and 8(d), respectively, showing that there is no interaction between apertures in Fig. 8(b), while weak interaction can be found in Figs. 8(c) and 8(d).
Performance comparison
After the analysis of the three antenna structures, it is helpful to compare their performance. The maximum enhancement values are listed in Table 1. For excitation enhancement (f_I) under plane-wave illumination, the bull's eye gives the best performance (∼61) due to its large concentrating structure. While comparing the structures based upon plane-wave illumination might be appropriate for nanoantenna arrays, if single structures are to be compared, then focused illumination needs to be considered. For example, comparing the dipole and bull's eye structures under the conditions of maximal f_I, one might use focused illumination of diameters 280 nm and 1.8 μm, respectively. Assuming the same power in each beam, the intensity in the gap of the dipole antenna would then be about 700× the intensity incident on the bull's eye, for which the intensity within the central aperture remains 60.5× the incident value. This comparison is simply a statement that, within the diffraction limit, focusing via conventional imaging is more efficient than plasmonic focusing. The dipole antenna produces the highest emission enhancement, with f_Purcell ∼11, f_rad ∼7, and f_φ ∼4.5, due to its favorable gap structure. The far-field angular radiation patterns of the first order of emission resonance from the three structures are shown in Fig. 9. The patterns are normalized for comparison. The bull's eye antenna has the most directionality due to its extended structure with strongly interacting concentric grooves.
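The ∼700× figure follows from diffraction bookkeeping alone: equal power delivered into spots of 280 nm and 1.8 μm diameter differs in intensity by the ratio of spot areas, which then multiplies the dipole's excitation enhancement. A quick check in Python:

import math

P_beam = 1.0          # same power in both beams (arbitrary units)
d_dipole = 0.28e-6    # focused spot used for the dipole antenna, m
d_bullseye = 1.8e-6   # focused spot used for the bull's eye, m

def intensity(power, diameter):
    # Mean intensity over a circular spot of the given diameter.
    return power / (math.pi * (diameter / 2.0) ** 2)

f_I_dipole = 17.0     # peak excitation enhancement of the dipole antenna
ratio = f_I_dipole * intensity(P_beam, d_dipole) / intensity(P_beam, d_bullseye)
print(f"gap intensity / bull's-eye incident intensity ~ {ratio:.0f}x")  # ~700x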
Conclusion
In conclusion, three plasmonic antenna structures for UV fluorescence enhancement are numerically studied by comparing five performance metrics: excitation enhancement (f_I), Purcell factor (f_Purcell), radiative enhancement (f_rad), QE enhancement (f_φ) and NE. The 2-D maps of the performance metrics versus the geometrical parameters are generated in order to clarify the influence of the structure parameters. The far-field radiation patterns are also considered. All three structures share the feature that peak enhancement of the excitation and emission processes occurs under resonant conditions, determined by the arm length for the dipole antenna and by the pitch for the other two structures. Furthermore, distinct differences are observed across the structures. The bull's eye aperture and the aperture array produce higher enhancements due to their extended planar structures with much greater physical interaction cross-section with the incident light. Decreasing the gap size of the dipole antenna increases the enhancement of excitation and emission, while the round apertures in the bull's eye and aperture array have an optimal size for the excitation (∼50 nm) and emission (∼70 nm) processes. Due to its favorable gap structure, the dipole antenna produces the highest Purcell factor (f_Purcell ∼11), radiative enhancement (f_rad ∼7) and QE enhancement (f_φ ∼4.5). The thicker structures of the bull's eye and aperture array effectively suppress radiation in the direction away from the substrate, which is preferable for an epifluorescence setup. The far-field radiation of the bull's eye aperture has the most directionality due to constructive interference with scattering by the concentric grooves. The aperture array has the least directionality due to the weak interaction between the central and neighboring apertures.
"Physics"
] |
Effect of interface orientation on the adhesion strength and fracture toughness of Ni/CrN interfaces: a first-principles study
The brittleness and relatively poor adhesion properties of CrN materials have been extensively addressed by developing Ni/CrN composites with a separate Ni phase. However, conditions at the Ni/CrN interfaces, which are the key features leading to the enhanced toughness, remain poorly understood. The present work addresses this issue by investigating the effect of interface orientation on the adhesion strength and fracture toughness of Ni/CrN interfaces using first-principles calculations. To this end, we build seven Ni/CrN interface models, including Ni(100)/CrN(100), Ni(110)/CrN(110), Ni(110)/CrN(111), and Ni(111)/CrN(111), with different interface orientations and stacking orders. The results demonstrate that the interface orientation plays a predominant role in determining the mechanical properties of the Ni/CrN interfaces, while the effect of stacking order can be neglected. The Ni(111)/CrN(111) interface is demonstrated to provide the greatest adhesion strength, interfacial stability, and fracture toughness among the Ni/CrN interfaces considered, and is therefore the preferred orientation for Ni/CrN composite applications.
Introduction
Chromium nitride (CrN) has been widely used as a protective layer material owing to its high hardness and good resistance to wear and corrosion. However, the brittleness and relatively poor adhesion of CrN materials greatly restrict their use in industrial applications. Studies have demonstrated that the incorporation of ductile metallic phases within CrN materials is a promising approach for enhancing their mechanical properties [1][2][3]. To this end, Ni has been most commonly employed as a separate phase in transition metal nitrides (TMNs) due to its low permeability to N atoms [4][5][6][7]. Therefore, the development of Ni/CrN composites has received considerable attention [8][9][10].
The incorporation of Ni as a separate phase in TMNs dictates that the strength and failure mechanisms of the interface between the two different phases represent important factors when studying the mechanical properties of these composites. In this regard, studies have demonstrated that the enhanced toughness of multilayer TMN composites is facilitated via various phenomena occurring at the interface, such as crack deflection, stress relaxation, and energy dissipation [11,12]. For example, Daniel et al [3] experimentally demonstrated that the fracture resistance of TiN/SiOx and CrN/Cr multilayer coatings was enhanced by the deflection of cracks propagating at the interfaces. Sui et al [13] experimentally studied the effects of interfacial properties on the increased fracture toughness and adhesion strength of TiAlN/CrN multilayer coatings. Moreover, the crystal orientations at the interfaces of the composites have been demonstrated to play a key role in their mechanical properties. For example, Wiecinski et al [14] experimentally demonstrated that the optimum crystal orientation at the interfaces of Ti/TiN multilayer coatings resulted in a decreased interfacial energy and a corresponding improvement in the mechanical properties of the coatings. A similar effect was observed for Cr/CrN multilayer coatings [15]. Therefore, obtaining detailed information regarding the interfacial characteristics of Ni/CrN materials with different interface orientations is imperative for developing advanced Ni/CrN composites with optimal mechanical properties.
According to the above discussion, experimental methods have empirically demonstrated the important effect of interface orientation on the enhanced toughness of multiphase materials. However, experimental results lack important details regarding the atomic-scale mechanisms leading to this phenomenon, and therefore tend to be limited. This can be addressed by employing first-principles calculations based on density functional theory (DFT) [16][17][18][19]. DFT calculations have been demonstrated to be a powerful method for revealing detailed information regarding atomic and electronic structures at the interfaces between two phases, thereby facilitating predictions regarding the stability, adhesion strength, and fracture toughness of interfaces. Previous theoretical studies have extensively investigated the interfacial properties of cermet composites as influenced by atomic termination, stacking order, or interfacial doping elements [20][21][22][23][24][25][26][27][28]. For example, Zhang et al [23] investigated the atomic structure and electronic properties of Ag(111)/TiC(111) interfaces with two atomic terminations and eight stacking sites. Li et al [21] studied the interfacial bonding mechanism of Al(111)/Al2MgC2(0001) interface models with five different terminations of Al2MgC2(0001) and four different stacking sites of Al(111). Guo et al [20] investigated the effect of the active element Ti on the bonding characteristics of the Ag(111)/α-Al2O3(0001) interface. However, despite this large body of interfacial research based on first-principles calculations, the interfacial properties of cermet composites as influenced by interface orientation have remained relatively unexplored [16]. As a result, the effect of interface orientation on the adhesion strength and fracture toughness at the interfaces of multiphase Ni/CrN materials remains poorly understood.
The present work addresses this issue by systematically investigating the effect of interface orientation on the adhesion strength and fracture toughness of Ni/CrN interfaces using first-principles calculations based on DFT.
To this end, we construct seven Ni/CrN interface models, including Ni(100)/CrN(100), Ni(110)/CrN(110), Ni(110)/CrN(111), and Ni(111)/CrN(111), with different interface orientations, stacking orders, and interface misfit values less than 5%. We compare the mechanical properties of the interface models according to the calculated values of the work of adhesion, interfacial energy, and fracture toughness. However, the mechanical properties of the interfaces are particularly related to the nature of atomic bonding at the interfaces, which in turn depends on their electronic structures and bonding characteristics. Therefore, we also investigate the electronic properties of the Ni/CrN interface models. The results demonstrate that the interface orientation has a significant effect on the mechanical properties of Ni/CrN interfaces, while the effect of stacking order is negligible. The Ni(111)/CrN(111) interface is demonstrated to provide the greatest adhesion strength, interfacial stability, and fracture toughness among the Ni/CrN interfaces considered. In addition, the electronic properties of the Ni(111)/CrN(111) interface demonstrate that the high interfacial adhesion strength and fracture toughness of that interface are determined by its large number of N-Ni bonds with ionic and covalent features.
Methodology
First-principles calculations were performed using the Cambridge Serial Total Energy Package (CASTEP), which employs the plane-wave ultrasoft pseudopotential method based on DFT. The exchange-correlation energy was obtained using the generalized gradient approximation (GGA) with the Perdew-Burke-Ernzerhof (PBE) functional. The GGA-PBE functional may underestimate the band gap of some complex materials [29,30]; however, it reliably yields accurate lattice constants and mechanical properties for transition metals and TMNs [26,[31][32][33][34]. Moreover, many studies have demonstrated that the GGA-PBE functional is reliable for DFT calculations on cermet composite systems [20,21,23,35]. The ground state was determined via electronic minimization conducted by solving the Kohn-Sham equations under a self-consistent field (SCF). Meanwhile, the atomic structure was relaxed using the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm. The electronic configurations considered were Ni 3d8 4s2, Cr 3s2 3p6 3d5 4s1, and N 2s2 2p3. A plane-wave cutoff energy of 400 eV was used in all calculations. The convergence tolerances were set to 5.0×10⁻⁵ eV/atom for the energy, 0.1 eV/Å for the maximum force, and 0.005 Å for the maximum displacement.
Bulk properties
Both CrN and Ni have face-centered cubic (FCC) structures with the Fm-3m space group. Specifically, the stable configuration of CrN at room temperature is a paramagnetic (PM) NaCl structure [36][37][38]. The magnetic properties of CrN are induced by the asymmetric distribution of the Cr 3d spin-up and spin-down states in this compound [32,39]. Therefore, the calculations for CrN should be conducted with spin polarization. The k-point mesh of the Monkhorst-Pack grid was 12×12×12. The values of the lattice constants (a), bulk modulus (B), elastic constants (C_ij), and elastic compliances (S_ij) calculated for Ni and CrN are listed in table 1, along with corresponding data obtained from other first-principles studies. The good agreement between the past and present calculation results indicates that the calculation parameters employed in the present study are reasonable.
Interface models
The lattice constant of CrN (4.176 Å) is much greater than that of Ni (3.548 Å). Therefore, a supercell approach with periodic boundary conditions was employed to obtain a near-perfect lattice match between Ni and CrN. As illustrated in figure 1, the interface models were built based on four types of interface orientations: Ni(100)/CrN(100), Ni(110)/CrN(110), Ni(110)/CrN(111), and Ni(111)/CrN(111). All of the crystal orientations at the interfaces are the most common growth orientations of Ni and CrN materials according to the standard XRD reflections (PDF#04-0850 for Ni, PDF#11-0065 for CrN), and these orientations are often detected in experiments [5,6,15,43]. It is noted that the Ni(100)/CrN(100), Ni(110)/CrN(110), and Ni(110)/CrN(111) interface models each have two different stacking orders, OT and HCP, where OT indicates that the interfacial N atoms sit directly above the interfacial Ni atoms, while HCP indicates that the N atoms sit above the Ni atoms of the second layer. Therefore, seven Ni/CrN interface models in total were constructed. As shown in figures 1(i)-(k), we intentionally terminated the CrN(111) surfaces with N atoms rather than Cr atoms to facilitate covalent or ionic bonding between Ni and N atoms at the interface rather than metallic bonding between Ni and Cr atoms. Our rationale for doing so is that this will increase the adhesion strength and fracture toughness of the interfaces, because covalent or ionic bonds appear to be much stronger than metallic bonds [27,31]. The interface models were composed of Ni and CrN slabs with a sufficient number of layers to ensure a bulk-like structure in the interior, and vacuum spaces of 15 Å thickness were applied in the z direction above the Ni layer and below the CrN layer to avoid interactions between the periodic cells in that direction.
Interfacial mismatch rates δ_U and δ_V can be calculated along the U and V directions shown in figures 1(e)-(h) and (l)-(n) as

δ_U = |d_U,CrN − d_U,Ni| / d_U,CrN × 100%,  δ_V = |d_V,CrN − d_V,Ni| / d_V,CrN × 100%,

where d_U,CrN and d_U,Ni are the lengths of the CrN and Ni surfaces along U, respectively, and d_V,CrN and d_V,Ni are the lengths of the CrN and Ni surfaces along V, respectively. In addition, we calculate an angle difference dθ (°) as

dθ = |θ_CrN − θ_Ni|,

where θ_CrN is the angle between U_CrN and V_CrN, and θ_Ni is the angle between U_Ni and V_Ni. The mismatches between the Ni and CrN surfaces obtained for the interface models are listed in table 2. It can be seen that the values of δ_U and δ_V are both less than 5%, and the values of dθ are all less than 2°. Therefore, the interface models constructed in this work represent valid structures.
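With these definitions, the mismatch check reduces to a few lines. The sketch below takes the CrN surface length as the reference in the percentage, which is one common convention (an assumption); the example cell sizes are illustrative, not the values of table 2.

import math

def mismatch_percent(d_crn, d_ni):
    # delta = |d_CrN - d_Ni| / d_CrN * 100, with CrN taken as reference.
    return abs(d_crn - d_ni) / d_crn * 100.0

def angle_difference(theta_crn, theta_ni):
    # d_theta = |theta_CrN - theta_Ni| in degrees.
    return abs(theta_crn - theta_ni)

# Hypothetical example: matching six CrN <110> repeats against seven Ni
# <110> repeats along U, using the calculated lattice constants
# (a/sqrt(2) is the nearest-neighbor spacing along <110>).
d_u_crn = 6 * 4.176 / math.sqrt(2.0)  # Angstrom
d_u_ni = 7 * 3.548 / math.sqrt(2.0)   # Angstrom
print(f"delta_U = {mismatch_percent(d_u_crn, d_u_ni):.2f} %")  # < 5 %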
Surface convergence testing
We conducted convergence tests based on the surface energy to determine the appropriate numbers of layers to apply in the interface models, ensuring that they were sufficiently thick to exhibit bulk-like structures in their interiors. The surface energy γ_s for the non-polar Ni(100), Ni(110), Ni(111), CrN(100), and CrN(110) surfaces can be calculated as [31]

γ_s = (E_slab − n·E_bulk) / (2A),   (4)

where E_slab is the total energy of the material slab, n is the total number of atoms (or chemical units) in the slab, E_bulk is the total energy per atom (or chemical unit) in the bulk material, A is the surface area, and the factor 2 represents the two identical surfaces of the slab. However, this issue is more complicated for the N-terminated CrN(111) surfaces. Here, CrN(111) slab models with an even number of layers are stoichiometric, and the surface energy can be calculated according to equation (4). However, CrN(111) slab models with odd numbers of layers are non-stoichiometric. Therefore, we calculate the surface energy γ_c of these non-stoichiometric models as [44,45]

γ_c = (E_Nslab + E_Crslab − n·E_bulk) / (4A),   (5)

where E_Nslab and E_Crslab are the total energies of the CrN(111) slabs with complementary N- and Cr-terminations, n is the total number of chemical units in the two slabs, and the factor 4 comes from the fact that four slab surfaces are created. In fact, the surface energy calculated for the CrN(111) slab model in this way is exactly the average value of the slabs with the two terminations, namely γ_c = (γ_N + γ_Cr)/2. The results of convergence testing demonstrate that the surface energies of the Ni and CrN slab models converge to stable values for slabs composed of 7 layers. Therefore, all interface models employed seven Ni layers and seven CrN layers. The surface energies obtained for the various 7-layer Ni and CrN slab models are listed in table 3. It is worth noting that the Ni(111) plane has the lowest surface energy of all Ni slab models considered, while the CrN(100) plane has the lowest surface energy of all CrN slab models considered. In fact, these results are consistent with corresponding experimental results [46]. In addition, the surface energies of these slab models are in good agreement with similarly calculated surface energy data in the literature, which are also listed in table 3. These results further verify the correctness of the calculation parameters employed in the present work. Finally, we note that the sum of the surface energies obtained for the combined Ni and CrN slab models follows the order: Ni

Table 1. Calculated lattice constant (a), bulk modulus (B), elastic constants (C_ij), and elastic compliances (S_ij) of Ni and CrN.
Work of adhesion
The adhesion strength of the interface models was assessed according to the work of adhesion W_ad, which can be calculated as [22]

W_ad = (E_slab,Ni + E_slab,CrN − E_Ni/CrN) / A,

where E_slab,Ni and E_slab,CrN are the total energies of the fully relaxed surface slabs, E_Ni/CrN is the total energy of the interface model, and A is the area of the interface. The k-point meshes employed for the calculations involving the Ni(100)/CrN(100), Ni(110)/CrN(110), Ni(110)/CrN(111), and Ni(111)/CrN(111) interface models were 6×6×1, 5×7×1, 5×8×1, and 11×11×1, respectively. The values of W_ad can be calculated by two different methods. The first method is the universal binding energy relation (UBER) method [26,31]. The values of W_ad obtained by this method versus the interfacial distance d_0 are shown in figure 2. The peaks of the curves represent the optimal values of d_0 and W_ad. The second method adopts the optimal structures obtained from the UBER method and relaxes the interface models fully. Four atomic layers at the top of the Ni slabs and three atomic layers at the bottom of the CrN slabs were fixed in their bulk positions. The optimal values of d_0 and W_ad are then obtained from the relaxed models. The values of d_0 and W_ad obtained for the different interface models using this second method are listed in table 4. We note from figure 2 and table 4 that the optimal W_ad values obtained for the Ni(110)/CrN(111) and Ni(111)/CrN(111) interface models are much greater than those obtained for the Ni(100)/CrN(100) and Ni(110)/CrN(110) interface models. This may be because the interfaces between the Ni slabs and the N-terminated CrN(111) slabs generate more polar covalent bonds with Ni, which, as discussed above, appear to be stronger than metallic bonds. In addition, we note that the Ni(111)/CrN(111) interface has the highest value of W_ad (3.54 J m⁻²) among all the interfaces considered. Finally, the above results demonstrate that the interface orientation plays a predominant role in determining the value of W_ad obtained at Ni/CrN interfaces, while the stacking orders of the Ni or CrN surfaces have a relatively small influence on these values.
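Given the three total energies from the relaxed calculations, evaluating W_ad is a one-line conversion. A minimal sketch; the energy and area values in the example are placeholders for illustration, not results from this work.

def work_of_adhesion(e_slab_ni, e_slab_crn, e_interface, area):
    # W_ad = (E_slab,Ni + E_slab,CrN - E_Ni/CrN) / A, converted from
    # eV/Angstrom^2 to J/m^2 (1 eV/A^2 = 16.0218 J/m^2).
    EV_PER_A2_TO_J_PER_M2 = 16.0218
    return (e_slab_ni + e_slab_crn - e_interface) / area * EV_PER_A2_TO_J_PER_M2

# Placeholder energies (eV) and interface area (A^2):
print(f"W_ad = {work_of_adhesion(-5000.0, -9000.0, -14002.0, 90.0):.2f} J/m^2")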
Interfacial energy
The interfacial energy E_int is used to evaluate the thermodynamic stability of the interface. In general, the stability of the interface increases with decreasing E_int. Here, E_int is calculated as [50][51][52][53]

E_int = γ_Ni + γ_CrN − W_ad,

where γ_Ni and γ_CrN are the surface energies of the Ni and CrN surfaces, respectively, which are calculated according to equations (4) or (5). Accordingly, the relatively large sum of γ_Ni + γ_CrN obtained for the Ni(111)/CrN(111) interface model was compensated by its very large value of W_ad to provide the lowest value of E_int (0.44 J m⁻²), indicating that this interface is more thermodynamically stable than the other interfaces.
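The stability comparison then follows directly from quantities already computed. A sketch reproducing the trade-off described above; the surface-energy sum below is back-computed from E_int = 0.44 J/m² and W_ad = 3.54 J/m² and is a placeholder, not the table-3 data.

def interfacial_energy(gamma_ni, gamma_crn, w_ad):
    # E_int = gamma_Ni + gamma_CrN - W_ad; lower values indicate a more
    # thermodynamically stable interface.
    return gamma_ni + gamma_crn - w_ad

# Ni(111)/CrN(111): a comparatively large surface-energy sum is offset by
# the very large W_ad, giving the lowest E_int of all models considered.
print(interfacial_energy(gamma_ni=2.00, gamma_crn=1.98, w_ad=3.54))  # 0.44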
Interfacial fracture toughness
The fracture toughness of an interface represents its ability to resist the development of a fracture along the interface. According to Griffith crack theory, the interfacial fracture toughness K_IC^hkl along a specific direction [hkl] can be calculated as [54]

K_IC^hkl = (W_ad · E_[hkl])^(1/2),

where E_[hkl] is the elastic (Young's) modulus along [hkl], obtained from the calculated elastic compliances in table 1.

Figure 2. Relationship between the work of adhesion W_ad and the interfacial distance d_0 obtained using the UBER method.

Table 4. Interfacial distance (d_0), work of adhesion (W_ad), interfacial energy (E_int), and fracture toughness (K_IC^hkl) of different interface models after full relaxation.
Electronic structure
The electronic structure and bonding characteristics of the Ni/CrN interface models were evaluated according to the charge density, charge density difference, Mulliken population analysis, and partial density of states (PDOS).
These electronic-structure results are consistent with the values of W_ad and K_IC^hkl listed in table 4. Therefore, they illustrate the bonding mechanism responsible for the observed trends in W_ad and K_IC^hkl obtained for the seven Ni/CrN interfaces.
Mulliken population analysis provides a semi-quantitative evaluation of charge transfer and ionicity, and was therefore applied to the interface models. As established above, the Ni(111)/CrN(111) interface model provided the strongest adhesion, largest thermodynamic stability, and largest interfacial fracture toughness among the seven interface models considered. Therefore, we evaluated the nature of the bonding interactions in the Ni(111)/CrN(111) interface further by calculating the PDOS for selected N, Cr, and Ni atoms in the different layers (as labeled in figure 4(b)) of the Ni(111)/CrN(111) interface model, which are presented in figure 5, where the Fermi level is marked by a dashed line at an energy of zero. We note that states are observed at the Fermi level for all atomic layers, implying that both Ni and CrN have a metallic nature. In addition, the electron density values of the Ni 3d orbital at the Fermi level for the Ni4, Ni16, and Ni2 atoms are 2.8, 2.8, and 1.6 e eV⁻¹, respectively. The large decrease in the electron density of the Ni 3d state at the Fermi level for the Ni2 atom indicates that electrons in the Ni 3d orbital of the Ni2 atom take part in covalent bonding between the Ni2 and N8 atoms or in electron transfer from Ni2 to N8 atoms. Furthermore, a comparison of the PDOS obtained for the Ni2 atom with those of the Ni4 and Ni16 atoms indicates that the d orbital in the PDOS of the Ni2 atom shifts toward more negative energy levels. Meanwhile, the p orbital in the PDOS of the N8 atom shifts toward more positive energy levels compared with that orbital in the PDOS of the N2 atom. These results indicate that the 3d orbital of the Ni2 atom interacts and hybridizes with the 2p orbital of the N8 atom at an energy of −3.9 eV, as shown by the arrows in figure 5. Therefore, we can conclude that a covalent bond is formed between the Ni2 and N8 atoms in the Ni(111)/CrN(111) interface model. Finally, the PDOS obtained for the Ni2 atom is lower in height compared with those obtained for the Ni4 and Ni16 atoms, indicating that electrons transfer from Ni2 to N8 atoms. The above analysis demonstrates that the high interfacial adhesion strength and fracture toughness of the Ni(111)/CrN(111) interface model are determined by its large number of N-Ni bonds with ionic and covalent features.
Conclusions
Conditions at Ni/CrN interfaces, which are the key features leading to the enhanced toughness of Ni/CrN composites, remain poorly understood. In this regard, the crystal orientation at the interface is a critical factor that should be considered. Therefore, the present work systematically investigated the effect of interface orientation on the adhesion strength and fracture toughness of Ni/CrN interfaces using first-principles calculations based on DFT. We constructed seven Ni/CrN interface models in total by considering both interface orientations and stacking orders. The work of adhesion, interfacial energy, fracture toughness, and electronic properties of the seven Ni/CrN interfaces were evaluated and compared. The results demonstrated that the interface orientation plays a predominant role in determining the mechanical properties of the Ni/CrN interfaces, while the effect of the stacking order is negligible. In addition, the Ni(111)/CrN(111) interface model was demonstrated to provide the greatest adhesion strength, thermodynamic stability, and fracture toughness of all models considered. Accordingly, the development of Ni(111)/CrN(111) interfaces is preferred in actual Ni/CrN coating applications. Furthermore, the electronic properties of the Ni(111)/CrN(111) interface demonstrate that the high interfacial adhesion strength and fracture toughness of this interface are determined by its large number of N-Ni bonds with ionic and covalent features. Accordingly, we can conclude that the present study provides a practical perspective for tailoring the interfaces in Ni/CrN materials to obtain improved mechanical properties.
"Materials Science"
] |
Imaging the coupling of terahertz radiation to a high electron mobility transistor in the near-field
We used AlGaN/GaN high electron mobility transistors as room-temperature direct detectors of radiation at 0.15 THz from a free electron laser, i.e., 5 times higher than their cutoff frequency of 30 GHz. By near-field active mapping we investigated the antenna-like coupling of the radiation to the transistor channel. We formulate a model for the detection based on self-mixing in the transistor channel. The noise equivalent power is found to be in the range of 10⁻⁷ W/Hz^0.5 without any optimization of the device responsivity. Present-day AlGaN/GaN fabrication technology may provide operation at higher frequency, integration of amplifiers for improved responsivity, and fast switches for multiplexing, which make the detector described here the basic element of a monolithic terahertz focal plane array. [DOI: 10.2971/jeos.2009.09006]
INTRODUCTION
The rapid development of compact terahertz sources and detectors suggests a large number of practical imaging applications for non-destructive material testing, medical diagnostics and security. Nowadays, commercial instruments are starting to appear in the millimeter-wave range (0.1-0.3 THz). The acquisition of images with a large field of view relies either on raster-scanning with highly sensitive, low-temperature single-pixel detectors, or on massive arrays of modular detectors based on passive rectifying elements like Schottky diodes or semiconductor-metal-semiconductor junctions. Therefore, a room-temperature integrated focal plane array (FPA) detector is desirable to achieve a compact video-rate terahertz imager. The most direct way of obtaining an integrated FPA is to look for devices sensitive to terahertz radiation which can be fabricated directly on a semiconductor wafer by fabrication steps compatible with present-day high-speed semiconductor technology [1]. In recent years, high electron mobility transistors (HEMTs) have been demonstrated to operate as detectors of terahertz radiation well beyond their cutoff frequency for amplification f_T [2], displaying gate-tunable resonant detection [3] and detecting radiation up to 3.1 THz when cooled to low temperatures [4]. The detection mechanism is based on the properties of the two-dimensional electron gas found at the interface of the semiconductor heterostructure on which the HEMT is fabricated, which provides operation either as a direct detector [2] or as a mixer [5]. From the noise and responsivity figures reported [1], at room temperature a quarter-micron gate AlGaAs/GaAs HEMT should be sensitive to a flux of tens of µW/mm² in the millimeter-wave range, which can be obtained from diode-based frequency multipliers. It therefore seems promising to design an integrated FPA to be fabricated on a semiconductor heterostructure wafer like AlGaAs/GaAs or AlGaN/GaN, together with low-noise amplifiers and fast switches for readout, which are already achievable on these materials. Extension of the FPA operation to higher frequency and/or lower flux is not precluded in principle, and may require only minor changes to the fabrication process.
The first step towards an integrated FPA is the study of the detection of terahertz radiation by single HEMTs, in order to optimize the radiation coupling and to fit the data to a phenomenological model, which then allows for further design and simulation. In this paper, we demonstrate the operation of an AlGaN/GaN HEMT at room temperature as a direct detector of radiation at 0.15 THz, i.e., at a frequency higher than its f_T = 30 GHz. The radiation coupling to the transistor channel was investigated by active near-field imaging with a lateral resolution of 200 µm (λ/10, where λ is the wavelength). The detection mechanism is attributed to the self-mixing of the signal provided by a single source (in this case a Free Electron Laser (FEL)), simultaneously coupled to the transistor channel and to the gate electrode. As a result, direct dc detection of the source power is obtained. The operation frequency of 0.15 THz was set by the source itself, and operation at higher frequency may be possible with the same HEMT or by implementing a shorter gate length and lithographic antennas in an advanced design.
EXPERIMENTAL SETUP
Double-channel HEMTs were fabricated on AlGaN/GaN heterostructures grown on SiC substrates. A photograph of the device is shown in Figure 1(a) and displays two source pads, one drain pad and one gate pad with two 0.25 µm long Schottky gate fingers. The channel width is 50 µm. The device fabrication technology is based on a mix-and-match approach using both stepper and electron beam lithography [6]. The schematic of the circuit used for the double-channel HEMT is shown in Figure 1(b) and includes dc bias power supplies (V_c = 10 V and variable V_g0). The two source pads (S1, S2), the drain (D) and gate (G) pads were wire-bonded through 1.5 mm long, 25 µm diameter aluminium wires to the copper lines of a PC board, and these were connected via coaxial cables to the voltage sources. The dc current i_d0 flowing through the channel and a series resistor R_d = 1.02 kΩ was measured with a digital meter for all V_g0, and the static drain voltage was obtained as V_d0 = V_c − i_d0·R_d. The measurements were performed with the near-field imaging system developed at the ENEA-FEL facility in Frascati [7]-[9].
ACTIVE IMAGING OF RADIATION COUPLING
The normal operation of the ENEA-FEL instrument for non-destructive material testing is the imaging of the power reflected by a sample, which is raster-scanned below the WG end while keeping the WG-to-sample distance z constant [8]. For each position, one FEL shot is fired and the emitted power is measured by the monitor diode. The z-dependent percentage of reflected power R_z is measured at every given position by a second Schottky diode mounted close to the WG end (passive imaging), and an R_z-map is obtained. If z < λ, the sample is in the near-field region and a lateral resolution down to 200 µm, i.e. better than λ, can be achieved. In the present experiment, we imaged the radiation coupling to the HEMT detector by recording the signal measured by the detector itself at every given position, while raster-scanning the detector board below the open WG end (active imaging [9]). Here also, since the distance between the HEMT and the open WG end is much smaller than λ, we are imaging the near-field coupling of the radiation to the detector at a lateral resolution better than λ [9]-[11]. The transient drain voltage during the FEL pulse is recorded by a digital oscilloscope as in Figure 2, and the detector signal V* is calculated as V* = ∫_0^Δt V_d(t) dt, where t = 0 is given by the FEL trigger and Δt = 3 µs. The passive (R_z) image of the detector was also obtained, an example of which is shown in Figure 3(a). The WG-to-board distance z was set to 1.0 mm (λ/2) so that the reflected power is maximum when the WG is above the copper lines on the PC board, while almost no power is reflected by the chip. The position of the copper lines was used to assign coordinates to the HEMT chip and the bonding wires. In this way, we could superimpose the active (V*) images obtained for different bias and polarization conditions on a map describing the position of the chip and the wires. The result of this procedure is shown in Figure 3(b).
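Numerically, V* amounts to a trapezoidal integration of the recorded transient over the 3 µs window. A sketch with a synthetic trace standing in for the oscilloscope record:

import numpy as np

delta_t = 3e-6                     # integration window, s
t = np.linspace(0.0, 10e-6, 2001)  # synthetic time base, s
v_d = -2.0 * np.exp(-t / 1e-6)     # synthetic detection transient, V
                                   # (stands in for the real trace)

mask = t <= delta_t
# Trapezoidal rule over the window starting at the FEL trigger (t = 0):
v_star = float(np.sum(0.5 * (v_d[mask][1:] + v_d[mask][:-1]) * np.diff(t[mask])))
print(f"V* = {v_star:.2e} V*s")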
In a first experiment, we connected the HEMT as depicted in Figure 1. The active map in Figure 3(b) displays a broad spot centered above the HEMT. A straightforward conclusion would be that, since the HEMT is acting as a detector, the maximum signal is obtained when the WG is on top of it. However, a closer inspection of Figure 3(b) indicates that the area of maximum signal is significantly larger than both the HEMT active area (0.3 × 0.3 mm², including contact pads) and the WG end (1.2 × 0.6 mm²), suggesting that the radiation is also coupled to the transistor channel through the bonding wires (red lines in Figure 3) by a high-frequency electromagnetic pick-up mechanism which makes the wires act as antennas. Recently, a similar effect has been indirectly observed when exposing an AlGaAs/GaAs HEMT to a linearly polarized beam at 100 GHz, and compared to finite element calculations [12]. According to the authors, the effect of bonding wires can be understood in terms of dynamic resistors and capacitors connected between themselves and to the transistor channel, whose values and connections strongly depend on the details of the bonding wire geometry. Another equivalent interpretation could be given in terms of unconventional dipole antennas. Indeed, our set-up is the high-frequency analogue of other experimental setups extensively used to extract antenna parameters [13,14] through measurements of the mutual coupling between two antennas in the near-field region. It is beyond the scope of this work to investigate the theory and to develop the techniques to calculate the mutual coupling in our experiment; however, we emphasize that our set-up can be used to extrapolate data of millimeter-wave antennas and to push the limits of near-field measurements to higher frequencies and smaller dimensions with respect to past experiments. In a second experiment, we mounted two identical HEMTs on another PC board at a lateral distance equal to λ = 2 mm in order to test cross-talking effects between two "pixels". In that case, only one channel of each HEMT was activated by connecting the D-pad to ground and the S1 pad to the drain circuit, while the S2 pad was left floating. Each HEMT was connected to a different drain circuit (Vd1 and Vd2 in Figure 4), while the ground and the gate circuit were in common. The configuration is shown by the passive image in Figure 4(a). Detection signals similar to those in Figure 2 were obtained by exposing either one or the other HEMT when the corresponding V_c was turned on, with a somewhat smaller intensity (the active channel width being half of that of the first experiment). The active images in Figures 4(b) and 4(c) were obtained with V_c = 10 V for the top transistor, V_c disconnected for the bottom transistor, and V_g0 = −4.5 V applied to both transistors. Interestingly, in Figure 4(b) the maximum signal is found when the WG is outside the HEMT chip, at a position which is just on top of a bonding wire connecting the drain circuit Vd1 to the S1 pad and running parallel to the direction of the electric field of the radiation E. After the acquisition of the active image in Figure 4(b), we rotated the PC board with respect to the WG end and recorded another image, shown in Figure 4(c).
In this image, the E direction is effectively rotated by 90° and one can see that the maximum signal is again obtained on the wire which is parallel to the E direction. To sum up, the active images confirm that the maximum radiation coupling is found on bonding wires running parallel to the E direction, irrespective of the orientation of the transistor channel. This means that the transistor channel responds more to an electrical (antenna-like) excitation than to a direct optical excitation. However, the signal V* at the center of the chip is nonzero and is only a factor of ten lower than the maximum. Radiation coupling maps might be different at higher frequency, where the electrical pick-up by bonding wires should be negligible [12] and direct detection by the two-dimensional electron gas in the HEMT is predicted [15].
Finally, it appears in Figure 4(b) that when the bottom transistor, which has V_g = −4.5 V, grounded source, and floating V_d, is exposed to the FEL pulses, a small signal is measured at the drain of the top transistor. This cross-talking effect is probably due to local oscillation of the common gate potential. Another possibility is that, beyond the near-field illumination, the electric field propagates along the PC board far from the open-end WG position. This shall be the subject of a subsequent paper. In any case, the data in Figures 4(b) and 4(c) indicate that cross-talking effects are present in the common gate configuration, but they do not represent a major concern.
DETECTION MECHANISM
We now propose a phenomenological model which accounts for the negative signal in V_d(t) seen in the saturation region of the transistor characteristics (V_g0 = −4.5 V, i_d0 = 4.91 mA) [16]. To sum up the experimental facts: i) from the data in Figure 2 we have that the maximum signal is found close to the transistor pinch-off; ii) from the images in Figures 3 and 4 we have that the detection signal is due to an electrical signal coupled to the channel by the gate, drain, or grounded source contact (the latter being less effective, see Figure 4). The detection mechanism in the saturation region then relies on the high-frequency (ω_hf = 2π·150 GHz) component of the drain voltage and current induced by the radiation, whose amplitudes are proportional to the time-varying radiation field E·e^{iω_hf·t}.
As a consequence, a voltage at ω_hf is produced in the channel, and this leads to a gate-to-channel voltage variation v_gc(t) = v_gc·sin(ω_hf·t) (in our configuration the gate contact is grounded with respect to a time-varying signal by the external capacitor C_g shown in Figure 1(b), so that the variation v_gc is indeed related to the relative variation of the potential between the channel and the gate contact). As a result of the mixing between v_gc(t) and the time-varying transistor parameters [17,18], we obtain a down-converted component of the drain current i_d(t) which adds to the dc bias current i_d0 during the FEL pulse illumination. If we are far from the linear region of the transistor characteristics, we can consider a series expansion of g_m to first order in v_gc and approximate i_d(t) = i_d0 + g_m × v_gc(t) by

i_d(t) ≈ i_d0 + g_m·v_gc·sin(ω_hf·t) + (∂g_m/∂v_gc)·v_gc²·sin²(ω_hf·t).   (1)

The first term beyond i_d0 is the high-frequency linear response; the second one is the square-law self-mixing term, displaying one down-converted component at low frequency and one at 2ω_hf:

⟨i_d(t)⟩ = i_d0 + (1/2)·(∂g_m/∂v_gc)·v_gc².   (2)

After low-pass filtering by parasitic capacitances in our device (whose unity gain is indeed at 30 GHz), the transient voltage drop V_d(t), measured at the drain connector and reported in Figure 2, can be approximated by V_d(t) ≈ −⟨i_d(t) − i_d0⟩·R_d, where ⟨·⟩ indicates the low-pass filtering of the high-frequency components, hence:

V_d(t) ≈ −(1/2)·R_d·(∂g_m/∂v_gc)·v_gc².   (3)

We can now compare the peak signal measured in the first experiment as a function of V_g0 with an estimate of the quantity ∂g_m/∂v_gc. The latter is obtained by double numerical differentiation of the i_d0 vs. V_g0 curve measured in dc (green line in Figure 5(a)). This is a rough approximation, as the transistor characteristics are expected to vary with frequency. Nevertheless, the comparison shown in Figure 5(b) indicates that the main features of the dependence on the gate voltage are captured by our phenomenological model. A region with a positive signal (negative ∂g_m/∂v_gc) is found for V_g0 > −3 V in both the detection signal (red dots in Figure 5(b)) and the calculated curve (grey line in Figure 5(b)). More importantly, the maximum absolute value of the detection signal is found at V_g0 = −4.5 V and corresponds to the maximum in −∂g_m/∂v_gc. Considering the variation of g_m with V_g0 in Figure 5(b), the self-mixing effect takes place when approaching the saturation region from small negative V_g0 down to the threshold voltage. For low gate bias the E-field may bring the Schottky gate contact into forward bias, producing a positive component of V_d(t) [9], so that the mixing action can be clearly observed only at large negative V_g0.
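The rectification expressed by Eqs. (1)-(3) is easy to verify numerically: a sinusoidal v_gc driving a transconductance with a first-order dependence on v_gc produces a dc current term of (1/2)(∂g_m/∂v_gc)·v_gc². The parameter values below are illustrative, not the device's measured characteristics; sign conventions follow the circuit of Fig. 1(b).

import numpy as np

g_m0 = 20e-3    # transconductance at the bias point, S (illustrative)
dgm_dv = -5e-3  # d(g_m)/d(v_gc) at the bias point, S/V (illustrative)
v_amp = 1.0     # amplitude of the 150 GHz gate-channel voltage, V
R_d = 1.02e3    # series drain resistor, Ohm

f_hf = 150e9
t = np.linspace(0.0, 100.0 / f_hf, 200001)  # 100 RF periods
v_gc = v_amp * np.sin(2.0 * np.pi * f_hf * t)

# Eq. (1): radiation-induced drain current (dc bias i_d0 omitted):
i_rf = (g_m0 + dgm_dv * v_gc) * v_gc

# Low-pass filtering ~ averaging over many RF periods, as in Eq. (2):
i_dc = i_rf.mean()
print(f"numeric dc term      : {i_dc:.3e} A")
print(f"Eq. (3) prediction   : {0.5 * dgm_dv * v_amp**2:.3e} A")
print(f"drain voltage offset : {-R_d * i_dc:.3f} V")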
A further test of the model is a linearity check as a function of the radiation power. To perform this test, we mounted a variable attenuator in the waveguide, before the monitor and the WG end. The pulse energy can then be reduced from its maximum value of 0.25 mJ over two orders of magnitude in a controlled way for different pulses. By monitoring the detector response to weaker fields, one can discriminate possible mechanisms displaying an electric-field threshold value, like dielectric breakdown or photoexcitation. In Figure 6 we show the dependence of the peak value in the saturation regime (V_g0 = −3.8 V) as a function of the FEL pulse energy, reduced for different pulses by the attenuator. The integrated signal V* scales linearly with the pulse energy over two orders of magnitude (Figure 6(a)). Moreover, the peak output signal scales linearly with the peak power measured by the monitor (Figure 6(b)). The same data can also be displayed as a function of the calculated peak value of the electric field in the waveguide, estimated from the peak power. This behaviour rules out the existence of a threshold and is compatible with either bolometric (temperature increase) detection or direct square-law electric-field amplitude detection. Indeed, Eq. (3) explains the square-law dependence of V_d(t) on v_gc, which is proportional to the transient E-field, the instantaneous power being proportional to |E|². However, we have seen (Figures 4(b) and 4(c)) that the maximum signal can be obtained when the wires are irradiated, and the transistor is not. This definitely rules out the bolometric mechanism. One may worry that in the configuration used for the first experiment (data in Figure 3) we have two devices connected in parallel, with the instantaneous radiation field directed in the opposite direction with respect to the flow of i_d0 (see Figure 1(b)). The square-law mixing term in Eq. (1) indeed explains why we get a finite V_d(t) anyway, since the latter quantity has the same negative polarity for both devices, regardless of the instantaneous sign of the E-field.
Concerning detector performance, the maximum of the detector response is found when the waveguide is within 1 mm of the transistor position. In this condition, with the PC board at z = 1.0 mm, the transistor surface and the bonding wires are at z < 0.5 mm ∼ λ/4, well inside the near-field region of the open waveguide end. We can therefore approximate the peak value of the modulus of the E-field at the transistor position with its value inside the waveguide. The peak value of E in the waveguide is 220 kV/m, resulting in an output peak voltage of about 3 V. If we consider an effective device length of 1 mm (including the pick-up wire), we obtain a conversion gain in the range of 10⁻², a factor of ten lower than that of the commercial Schottky diode video detectors that we use as monitors. This restricts the use of our device to high-power applications at the moment. It is clear that higher mixing signals can be obtained with device designs which increase ∂g_m/∂v_gc, like a large number of parallel channels in a small detector area (interdigitated schemes) and/or different heterostructure engineering. Furthermore, a better coupling of the radiation to the channel can be obtained by an optimized antenna design, which was beyond the scope of the present work.
The responsivity and noise equivalent power (NEP) of the present detector can be calculated in the special condition where one single bonding wire of length ∼λ/2 = 1 mm couples most of the 0.15 THz signal into the HEMT channel. We consider the case in Figure 4(c) when the open-end WG is on top of the center of the gate wire, giving at V_g = −4.5 V a peak signal |V_d| = 2 V. The peak power density p of the incident radiation can be taken as approximately constant along the wire and is p = |E|²/Z_0 = 128 W/mm². The effective area of a half-wave dipole antenna is given by the classical formula A_eff = (λ/2)(0.26λ) = 0.52 mm², so that the actual power coupled to the HEMT channel is P = p·A_eff = 67 W and the effective voltage responsivity is R_v,eff = |V_d|/P = 0.03 V/W. Note that this value can be very far from the ideal responsivity, which is obtained when the antenna impedance matches the channel impedance at 0.15 THz. No effort was made in the present work to optimize the responsivity of the detector. In order to calculate the NEP, we assume that the intrinsic equivalent input noise of our HEMT, e_n = 1.34 nV/Hz^0.5, is further amplified by the transistor circuit to a voltage noise at the drain of V_n = e_n × g_m × R_d = 27 nV/Hz^0.5. The calculated NEP is then given by V_n/R_v,eff = 0.9 µW/Hz^0.5. As one can see, this value is low enough to perform the imaging experiments presented in this work, where we optimized the output swing of our device in the range of 0.1-10 V. We would like to point out that the NEP could be made much lower by increasing the effective responsivity with an optimized antenna design and proper impedance matching, an issue that we did not consider in our experiment.
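These figures can be reproduced step by step. In the sketch below, g_m = 20 mS is back-computed so that V_n matches the quoted 27 nV/Hz^0.5 (the paper does not state g_m at this bias point explicitly); everything else follows the numbers given in the text.

E_peak = 220e3  # peak E-field in the waveguide, V/m
Z0 = 377.0      # free-space impedance, Ohm
lam = 2e-3      # wavelength at 0.15 THz, m
V_peak = 2.0    # peak detection signal |V_d|, V
e_n = 1.34e-9   # equivalent input noise, V/Hz^0.5
g_m = 20e-3     # transconductance, S (back-computed assumption)
R_d = 1.02e3    # series drain resistor, Ohm

p = E_peak**2 / Z0                # incident power density, W/m^2
A_eff = (lam / 2.0) * 0.26 * lam  # half-wave dipole effective area, m^2
P = p * A_eff                     # power coupled to the channel, W
R_v = V_peak / P                  # effective voltage responsivity, V/W
V_n = e_n * g_m * R_d             # output voltage noise, V/Hz^0.5
NEP = V_n / R_v                   # W/Hz^0.5

print(f"p = {p / 1e6:.0f} W/mm^2, A_eff = {A_eff * 1e6:.2f} mm^2, P = {P:.0f} W")
print(f"R_v = {R_v:.3f} V/W, NEP = {NEP * 1e6:.1f} uW/Hz^0.5")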
The use of transistors as radiation detectors in the microwave range is well known [19] and usually relies on the superheterodyne mixing of a weak radiofrequency (RF) signal with a more intense local oscillator drive (LO), capable of modulating the transistor parameters, in order to obtain a down-converted signal whose intensity is proportional to that of the RF signal. Using high electron mobility transistors (HEMTs), detection of weak RF signals up to the Q-band (40-50 GHz) was reported [18]. In contrast to this use, in the present work the propagating radiation is free-space coupled to the transistor, with no use of high-frequency ports and connectors, which allows the detection of radiation by the same HEMT at any frequency which is supported by the intrinsic detection mechanism. Indeed, the free-space coupling principle is also exploited in high-sensitivity mixers developed for space and astronomy applications in the 0.1-6 THz range, with LO and RF signals optically coupled to the mixer [20]. These devices include superconducting bolometers [22]-[24], Josephson tunnelling junctions [25], and GaAs Schottky diodes [26]. However, none of these technologies would allow direct integration of electronics for readout and/or room-temperature operation.
Numerous experiments [1]-[4] suggest that direct detection based on self-mixing may work in HEMTs up to frequencies as high as 10 f_T, with different responsivity values according to the material, the gate length, and the antenna coupling, making HEMT focal plane arrays a very promising technology for video-rate imaging in the terahertz range with much room for optimization.
CONCLUSIONS
In conclusion, we reported the use of an AlGaN/GaN high electron mobility transistor as a direct radiation power detector at 0.15 THz, well beyond its unity gain frequency of 30 GHz. We proposed a detection mechanism based on self-mixing due to the high-frequency transconductance change which takes place when antenna-like structures couple a 0.15 THz current into the active channel. We have also shown that the performance of this class of devices can be dramatically improved by using devices with higher transconductance variation with the gate bias and by optimizing the antenna coupling, which we studied here with our near-field active imaging setup. Our work contributes to the development of a new class of transistor-based integrable detectors of radiation up to the terahertz range.
FIG. 1 (a) Photograph of the double-channel HEMT through an optical microscope, with metal source (S) and drain (D) pads and gate fingers elongated from pad G over the two channel areas. (b) Circuit for the 150 GHz detection experiment.
FIG. 3 (a) Passive image of the PC board, where the copper lines are clearly visible. (b) Active near-field image of the integrated HEMT signal V* obtained by scanning the HEMT below the open-end waveguide. The HEMT chip, oriented as in Figure 1(a), is sketched as a square. Bonding wires are also sketched with their approximate length and direction. V_c = 10 V, i_d0 = −4.91 mA and V_g0 = −4.5 V. The arrow in (b) indicates the electric field direction of the FEL radiation.
FIG. 4 Passive (a) and active (b,c) near-field images of a two-transistor system. In (c), the FEL radiation electric field direction is effectively rotated by 90°, as indicated by the arrows. The HEMT chip is sketched as a square and the bonding wires as red lines. Once the symmetry in the transistor circuit is broken, the maximum signal is found when wires parallel to the field are irradiated. V_c = 10 V is applied to the top transistor only, while V_g0 = −4.5 V is applied to both.
FIG. 5 (a) dc current (green, left scale) and transconductance (blue, right scale) of the HEMT used as a detector, measured without an external load resistor R_d and with V_c = 100 mV. (b) Peak detection signal (dots, left scale) compared with the numerically calculated transconductance change (grey line, right scale). The two independently obtained quantities show a maximum in the same range of gate voltage values V_g0.
FIG. 6 Integrated signal intensity V* measured with V_g0 = −3.8 V as a function of the pulse energy, and peak signal as a function of the peak power, both of them reduced by a 40 dB variable attenuator. The data fit a square-law electric-field amplitude detection. An estimate of the responsivity can be derived from the bottom plots as a function of the electric field value obtained by a simple model (see text).
"Physics",
"Engineering"
] |
Common Core in Danger? Personalized Information and the Fragmentation of the Public Agenda
The diversification of information sources has reignited the controversy on media-induced fragmentation endangering social integration. The media's capability to set the public agenda and create issues as a common core is a pivotal part of the public sphere and contributes fundamentally to society's cohesion. Algorithm-driven sources like social media that personalize content to the preferences of individuals and their social networks are considered agents of fragmentation of the public sphere. Politically extreme individuals relying on them may be particularly vulnerable to losing touch with society's common core. We employ an innovative operationalization of fragmentation on the individual level: "issue horizons", comprising issue diversity, top issue focus, and issue overlap, to investigate how different information sources affect fragmentation. In a two-week daily diary study conducted in 2016 in Germany, 356 participants named the two most important political issues of each day and reported the issue-specific sources of information. Results show that social media reliance neither increases nor decreases the compatibility of individuals' issue horizons, but news media reliance significantly increases the compatibility of issue horizons among the politically more extreme. Not relying on news media (but rather on social media) means that politically extreme persons are at risk of losing touch with society's mainstream. This attests to the news media's ongoing, indispensable integration function. Using multiple sources of political information, including the news media, appears to be of paramount importance in ensuring that most citizens are aware of the most important issues facing the nation.
A common core is pivotal for citizens' shared perception of the current social reality (Echterhoff et al. 2009), the functioning of democracy, and the stability of society (Webster and Ksiazek 2012). The news media's agenda-setting function fundamentally contributes to this (Djerf-Pierre and Shehata 2017; Feezell 2018): the issues highlighted by the news media concern the citizenry as a whole, create a sense of belonging (Geiß 2015), and exert pressure on politics to deal with the issue (Protess et al. 1991). The news media's co-orientation and their joint orientation towards news factors lead to a relatively uniform media agenda (Donsbach 2004).
The changing technical conditions in today's high-choice information environment have reignited the controversial debates on fragmentation (Fletcher and Nielsen 2017). Such a breakup of society would challenge its stability (Katz 1996). Each individual (or each social group) has specific, relatively stable issue preferences that can differ strongly from one individual (or group) to another. The high-choice information environment allows for highly individualized selection and use of content. The downside of this abundance of choice is that the vast information supply renders it impossible for users to make all choices actively themselves. Content selection must be increasingly automated through algorithms to prevent information overload (Napoli 2014). These algorithms can cater to individual preferences by guessing what kinds of content an individual seeks, based on what the algorithm "learns" from the data users produce. Individuals' social networks are an important component in many of these algorithms, particularly in SNS. To what degree algorithms really reinforce fragmentation is unclear and is debated heatedly (Riles et al. 2018;Webster and Ksiazek 2012).
Issue Horizons: Conceptualizing and Measuring Fragmentation
This ambiguity partly stems from a lack of individual-level theorizing and measurement in fragmentation research (Djerf-Pierre and Shehata 2017; Porten-Cheé and Eilders 2019), which impedes understanding fragmentation. We propose an innovative, differentiated multidimensional conceptualization and operationalization of three indicators for individual-level processes that contribute to societal-level issue fragmentation. The complex construct we call "issue horizon" provides a conceptual link between an individual's issue set (based on which we measure individual agendas) and the aggregate issue set (based on which we measure the public agenda and the degree of fragmentation) (Figure 1) by focusing on how an individual's issue set relates to other individuals' issue sets. The horizon determines what is visible (or not) from one's point of view. If the issues others think and talk about are beyond one's horizon, one will not be able to follow and join the conversation and will be excluded from the common core (increasing fragmentation). If people have wide horizons and many individuals' horizons include the same issues, the common core grows wider (fragmentation grows less likely). Some issues may even be visible to almost everyone, creating a common focus nearly everyone shares. Narrow, incompatible issue horizons can be considered a mechanism that produces fragmentation and an expression of the current degree of fragmentation.

The Components of Issue Horizons. Issue horizons are the set of issues an individual views as relevant and how compatible this set is with other individuals' issue sets.
1. Issue diversity: The more different issues each individual mentions as relevant (horizon wideness), the greater the chance of having more points of connection with more other individuals. To capture this, we analyze how many different issues an individual has in his/her issue set. This individual-level agenda diversity has been conceptualized and measured before (e.g., Peter and de Vreese 2003). However, we view it as a component of issue horizons regarding the chance to overlap with others' issue sets, which changes the interpretative context and analytical focus.

2. Top issue focus: The ability of the public to focus on one top issue helps exert pressure on policymaking (Protess et al. 1991), helps to deal effectively with major crises and conflicts, and contributes to identity-building collective memory (Tenenboim-Weinblatt 2013).

In contrast to the trade-off between top issue focus and issue diversity, we expect a substantial positive correlation between issue diversity and issue overlap: the more different issues individuals mention, the greater the likelihood of "random" overlaps. The synchronizing and integrating power of the public sphere may strengthen the correlation between issue diversity and issue overlap further, producing not only "random" but also "coordinated" overlaps: agenda-setting effects can synchronize the issue sets of different individuals, and every additional mention has an even greater chance of being a "match" with many others' issue sets because all tend to mention those issues the media emphasized. However, if a homogeneous media agenda is absent and individuals use strongly differing sources, the link between diversity and overlap is weakened ("negative coordination") and (segmented) agenda-setting effects can even contribute to fragmentation (for the potential effects of social media on the public agenda, see also Cardenal et al. 2019; Feezell 2018).
Atomized and Segmented Fragmentation. It is useful to distinguish two ideal-typical forms of fragmentation: Atomized issue fragmentation means that each individual has a highly idiosyncratic issue set, as described by the filter bubble metaphor (Pariser 2011). Overlaps with other individuals are unlikely. Societally relevant issues that an individual dislikes can become invisible to him/her. People may lose touch with the societal mainstream, to a point where a common core is lost. Segmented issue fragmentation means that different societal subgroups' issue sets are largely incompatible, while the homogeneous issue sets within each subgroup strongly overlap. Political camps become alienated, society runs out of shared issues for discussion (Iyengar and Westwood 2015), and discussions across camps become difficult and prone to misunderstandings, potentially increasing political polarization (Stroud 2010). Both forms would decrease overlap and weaken the relationship between diversity and overlap.
Information Environments, Attitude Extremity, and Issue Horizons
One principal advantage of analyzing individual issue horizons is that this sheds light on the factors that affect the compatibility of issue horizons, and with it, societal (dis)integration. We consider two factors: the reliance on political information sources and the extremity of the individuals' political attitudes. Reliance on specific sources may affect issue horizon wideness and compatibility by itself. However, theoretical arguments and empirical findings suggest that individuals with extreme political attitudes will be much more vulnerable to such effects (Bruns 2019).
Our study focuses on broad differences between types of information sources and their typical way of selecting and curating information to their users rather than individual outlets within those categories. By selecting and ranking content (e.g., according to relevance or urgency), all information sources define the spectrum from which users can choose and predetermine to a considerable extent which information gets a realistic chance to reach and affect the users. With broad brush strokes, our hypotheses and research questions contrast traditional news media and social media in this respect.
News Media Reliance and Issue Horizons
General Widening Effect. News media (offline/online) supply a relatively consonant "media reality" across different outlets that has the power to transmit issues even through boundaries between "secluded" social groups. Of course, different outlets select somewhat differently, and (hyper-)partisan media may feature segment-specific agendas. However, (hyper-)partisan media are largely unimportant in Germany. The most popular Internet news sites, all tied to established news organizations, address general ("mainstream") audiences or broad center-left/center-right political strata rather than narrow ideological strata (Newman et al. 2020). Thus, those who heavily rely on German news media for political information have a great chance to come across many mainstream issues, increasing the chance to develop a wide, compatible issue horizon.

H1: The more an individual relies on news media for political information, the greater the (a) issue diversity, (b) top issue focus, and (c) issue overlap of that individual's issue horizon.
Conditional Widening Effect on People with Extreme Attitudes. People with moderate attitudes may already have a wide, compatible issue horizon to begin with. This leaves less potential for further widening their issue horizon through news media (ceiling effect). In contrast, people with extreme attitudes are at risk of losing touch with society's common core (Abelson 2014; Rodriguez et al. 2017) and more likely to have narrower, less compatible issue horizons. Being confronted with "mainstream" issues in the news media that they might overlook when receiving more personalized/group-specific information may widen their issue horizons. Even if they disagree with society's mainstream, their likelihood of remaining in touch with the mainstream would increase.

H2: The more extreme an individual's political attitude, the more strongly does relying on news media for political information increase the (a) issue diversity, (b) top issue focus, and (c) issue overlap of that individual's issue horizon.
Social Media Reliance and Issue Horizons
General Narrowing Effect? Social media use automated, personalized content curation. Therefore, heavy reliance on them for political information could contribute to narrower, less compatible issue horizons. Even if the algorithms and general curation concepts of different social media (e.g., Facebook, Twitter, YouTube) obviously differ, their logic of filtering, sorting, and personalizing content based on data collected about the users (Jürgens and Stark 2017) systematically diverges from news media's curation.
Two processes working in opposite directions are conceivable.
(1) Social media can increase users' connectedness with the common core, for example, by increasing the probability of incidental news exposure (Kümpel 2019), particularly if users have diverse interests and heterogeneous networks (Bodó et al. 2019). Also, extremely popular "viral" messages spread quickly and comprehensively (Bampo et al. 2008) across camps. (2) Social media's automated personalization can narrow their horizons by rendering contact with issues users "like" ("dislike") more (less) likely. An extreme version of this scenario (highly individualized, isolated information environments disconnected from the outside world) has been described with the popular filter bubble metaphor (Pariser 2011). Research has debunked this extreme scenario (Bruns 2019; Haim et al. 2018; Hindman 2012; Mahrt 2020; Möller et al. 2018; Zuiderveen Borgesius et al. 2016); if anything, filter bubbles must be regarded as a "fringe" phenomenon that is likely to occur only under specific conditions. Thus, algorithmic content curation will usually not lead to completely different, individualized issue sets. Even though these mechanisms might not lead to the extreme pathological case of completely isolated bubbles, they could still decrease individuals' contact with the topics that others care about.
Based on the current state of research, it is not possible to say which of these two opposing processes dominates on balance, preventing us from formulating a directed hypothesis. Rather, we will explore how heavy reliance on social media for political information affects the wideness and compatibility of issue horizons.

RQ1: How does the extent of an individual's reliance on social media for political information relate to the (a) issue diversity, (b) top issue focus, and (c) issue overlap of that individual's issue horizon?
Conditional Narrowing Effect on People with Extreme Attitudes? Research suggests that reliance on social media for political information will particularly narrow the information input of individuals with extreme political views (Bruns 2019). These are viewed as particularly vulnerable because they tend to have an ideologically more homogeneous social network (online and offline), a greater motivation to avoid ideologically inconsistent viewpoints and seek out consistent viewpoints, and more often experience cognitive dissonance when confronted with mainstream news (Abelson 2014; Rodriguez et al. 2017). The ideas of issue ownership (Petrocik et al. 2003) and instrumental actualization (Kepplinger et al. 1991) suggest that some issues tend to be instrumental for the political left (e.g., climate change) or for the political right (e.g., migration). Therefore, more ideologically extreme individuals who rely heavily on social media for political information may hold narrower and less compatible issue horizons, but this has not been sufficiently researched yet.

RQ2: How does the extremity of an individual's political attitude change the way relying on social media for political information affects the (a) issue diversity, (b) top issue focus, and (c) issue overlap of this individual's issue horizon?
A Probabilistic Source Categorization
Obviously, our hypotheses and research questions are highly probabilistic: most individuals use a great variety of information sources (Newman et al. 2020). Each user can use both social media and news media in different ways that modify the structure of issue exposure. The structure of social networks and their ideological range varies, as does users' openness to cross-cutting messages. Lastly, sources' content structures and modes of content curation develop over time. In effect, not everyone using social media will incessantly receive only (or predominantly) information matching their interests; some may use social media like a newsstand. And some news media users will sometimes actively avoid issues they "dislike." But given their mechanics, there is a higher probability of personalized issue curation in social media vis-à-vis news media. Only the accumulation, across many individuals, of the repeated small effects of this moderately higher probability of personalized issue curation in social media (of one-size-fits-all information curation in news media) would show up as an "effect" of reliance on social media (news media).
Other Information Sources
Although search engines also engage in algorithmic content curation, we have omitted them so far. Different from social media, in the case of search engines and other "search prompt-based" online services, users must actively express their interest with the search query. Algorithms here mostly serve to disambiguate what the users meant. Therefore, search engine reliance for political information is unlikely to result in an algorithmically amplified, partisan-biased issue selection (Magin et al. 2015; Unkel and Haim 2019). Our data show no specific relation between search media reliance and issue horizons (Table 1). However, including all major information sources is important because reliance on different sources tends to be highly (positively) correlated. The risk of misattributing certain outcomes to a certain information source (omission bias) increases if major information sources (e.g., search engines, personal conversations) are not controlled for (Geiß et al. 2021).
Study Design and Sample
The analysis utilizes data from a two-week panel survey (online daily diary; September 6-19, 2016, last possible responses on September 20, 2016), supplemented by a screening (August 19-September 4, 2016) and a completion survey (September 24-28, 2016). The population was defined as Internet users in Germany (14-69 years). Quotas were defined regarding age, sex, education, and Facebook use to match the population's demographics (reference for quotas: media planning study "Best for Planning" (2016)). The market research institute executed the fieldwork, drawing on an existing commercial access panel fulfilling the ISO 26362:2009 requirements. The panel members self-selected to participate, as long as they matched the quotas. Invitations were sent out on days 1 and 3. The daily surveys (computer-assisted self-interviews) could be filled in within 24 h. Participants received €3.50 in "bonus points" for participation in the screening and final survey, and €1.00 per participation in the daily surveys. The targeted sample size was 350. One thousand eight hundred and eighteen people were contacted, 459 agreed to participate, 359 completed all three parts of the survey, and 356 remained after data cleaning. Respondents were dropped during data cleaning if they completed less than 50 percent of the daily surveys; most respondents participated on 13 or 14 out of 14 days (probably because of the incentives). The American Association for Public Opinion Research (AAPOR) response rate (type 1) was 25 percent (recruitment) and 20 percent (ready to analyze).
Measurement
The following variables were measured on two levels (level 1: issue mentions; level 2: participants). For the analysis, all variables measured on level 1 (issue mentions, information sources) are aggregated on level 2. All analyses are run on level 2.
Issue Mentions (Dependent Variable). In a daily, open-ended question, we asked for the two most important political issues of the respective day for the participants personally (for the questionnaire, see the Supplemental Information file). This resulted in 8,930 political issue mentions by 356 respondents over 14 days (the maximum number of issue mentions would have been 356 × 14 × 2 = 9,968; nonpolitical issue mentions (e.g., sports, personal matters) were excluded). In open-ended questions, respondents typically mention around two issues they find important (Peter and de Vreese 2003). The participants had little problem mentioning two issues every day, also because they could mention the same issue(s) every day. We inductively developed a coding scheme, distinguishing fifty-seven different issues (Table A2 in the Supplemental Information file). Three student coders (intercoder reliability based on coding of forty randomly selected issue mentions by all three coders: Krippendorff's α = .740; 95 percent CI [0.616; 0.864]) coded all 8,930 issue mentions (i.e., assigned each of them to one of the fifty-seven different issues). Each issue mention was treated as a separate mention, even if the same issue was mentioned several times by the same respondent (which was often the case, e.g., multiple mentions of "refugee crisis") or several subissues of the same issue were mentioned (e.g., "refugee crisis" and "asylum policy").
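For illustration, such a reliability coefficient can be computed with the irr package in R; this is a minimal sketch, assuming `codes` is a hypothetical 3 × 40 matrix of the issue categories assigned to the forty double-coded mentions (the object name is ours, not the authors').

    # Krippendorff's alpha for the three coders' categorizations; `codes` is a
    # hypothetical 3 x 40 matrix: one row per coder, one column per issue mention.
    library(irr)
    kripp.alpha(codes, method = "nominal")  # issue categories are nominal data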
The so-called "refugee crisis" (the tremendously increased number of immigrants to Germany in 2015) clearly emerged as the top issue complex. By "issue complex," we designate a term/label that connects several distinct issues by identifying a similarity or common point of reference for all these issues. The issue complex "refugee crisis" bundles (1) migration and asylum policy (e.g., border control, limitation of immigration), (2) domestic security (e.g., crime, terrorism), and (3) political changes at least partly attributed to increased immigration (e.g., the increased popularity of the right-wing populist party Alternative for Germany (AfD), EU crisis summits). Thereby, the issue complex "refugee crisis" bundled twenty-one subissues, as many of the open-ended responses revealed, which accounted for 5,125 (57 percent) of all issue mentions. The issue mentions by the same respondent were aggregated to the respondent level in three different ways, leading to the three indicators of compatibility of individual issue horizons.
1. Issue diversity. We calculated how many different unique issues each respondent mentioned. This number is divided by the theoretical maximum of twenty-eight issue mentions per individual, thus standardizing it to a value range from 0 (no issue mentioned) to 1 (maximum diversity). Here, we treated the twenty-one subissues of the issue complex "refugee crisis" as separate issues (rather than treating all as one big issue) since we consider it relevant how far individuals grasp the different facets of the "refugee crisis." 1 For example, mentioning fourteen different issues results in a score of 14/28 = 0.500.

2. Top issue focus (complex). We summed the number of issue mentions per individual belonging to one of the twenty-one subissues of the top issue "refugee crisis." Here, we collapsed them because we consider a common focus on the top issue as a minimum requirement for a "common core." To that end, referring to the same issue complex (not necessarily the same subissue) suffices to create a sense of commonality. We divided this sum by the individual's total number of issue mentions, thus standardizing it to a value range from 0 (no mention of the top issue) to 1 (all issue mentions of the individual belong to the top issue). For example, an individual who cited the "refugee crisis" or its subissues twelve times and cited twenty-seven issues in total received a score of 12/27 = 0.444.

3. Issue overlap. We compared the issue mentions of all participants pairwise, treating the twenty-one subissues of the complex "refugee crisis" separately again (for the reasons explained under issue diversity). For each pair of participants (dyad), we counted how many unique subissues both participants shared (match) and how many issues only one of them mentioned (nonmatch) or were not unique matches (i.e., the same issue matches several times in the same dyad). The average number of matches for each respondent served as the indicator of issue overlap. For instance, a participant with 2,130 issue matches with the 355 other participants has an issue overlap score of 2,130/355 = 6.0 (six issue matches with any other participant on average).
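The three aggregation rules can be made concrete in a few lines of R. The following is a minimal sketch, not the authors' code: it assumes a data frame `mentions` with one row per issue mention, columns `id` (respondent) and `issue` (coded issue, with the twenty-one "refugee crisis" subissues kept separate), and a vector `top_issues` holding those subissue codes; all object names are illustrative.

    max_mentions <- 28  # 14 days x 2 mentions per day

    # 1. Issue diversity: unique issues per respondent / 28
    diversity <- tapply(mentions$issue, mentions$id,
                        function(x) length(unique(x)) / max_mentions)

    # 2. Top issue focus: share of mentions belonging to the top issue complex
    focus <- tapply(mentions$issue, mentions$id,
                    function(x) mean(x %in% top_issues))

    # 3. Issue overlap: mean number of unique issue matches with every other
    #    respondent (pairwise comparison of the unique issue sets)
    issue_sets <- tapply(mentions$issue, mentions$id, unique)
    ids <- names(issue_sets)
    overlap <- sapply(ids, function(i) {
      mean(sapply(ids[ids != i], function(j)
        length(intersect(issue_sets[[i]], issue_sets[[j]]))))
    })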
Issue overlap and issue diversity are very strongly correlated (R = .917; p < .001), sharing 84 percent of their variation. This does not mean that the measures are generally redundant, however. Rather, the high redundancy in the specific context we are studying reveals something important about the coordinating/synchronizing force of political information content that cannot be taken for granted in all contexts: if greater individual-level issue diversity translates into greater overlap, each additional issue an individual mentions to a large degree matches the issues other individuals mention. In segmented or atomized fragmentation, the pattern would be different: additional issue mentions either have a generally low likelihood of overlapping with the issues other persons mentioned (atomized), or they would systematically overlap only with the issue sets within one's own opinion camp (segmented). The correlation between diversity and overlap would be moderate, low, or even absent in those cases. Put differently, if issue diversity produces large issue overlap, the dangers of segmented or atomized fragmentation are low. If individual issue sets (we use the empirical marginal distribution of issue frequency we found in this study) were completely randomized, overlap and diversity would be correlated at R = .753 (56.3 percent of shared variance) in our study. Thus, the observed correlation (R = .917) is greater than expected if the issues were chosen uncoordinatedly (R = .753); we observe a "positive coordination." In case of atomization/segmentation, we would see a "negative coordination" that would lead to a correlation substantially below .753. Overlap and focus are weakly correlated (r = .184; p < .001); diversity and focus are mostly unrelated (r = −.072; p = .190).
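The benchmark correlation under uncoordinated issue choice can be approximated by permutation: shuffling the issue labels across all mentions preserves the marginal issue frequencies and each respondent's number of mentions while destroying any coordination. The authors do not describe their exact simulation, so the following sketch (reusing the objects from the block above) is only one plausible implementation.

    # Expected diversity-overlap correlation when issue sets are randomized
    set.seed(1)
    r_random <- replicate(200, {
      shuffled <- mentions
      shuffled$issue <- sample(shuffled$issue)   # permute issues across all mentions
      d <- tapply(shuffled$issue, shuffled$id,
                  function(x) length(unique(x)) / max_mentions)
      sets <- tapply(shuffled$issue, shuffled$id, unique)
      ids <- names(sets)
      o <- sapply(ids, function(i)
        mean(sapply(ids[ids != i], function(j)
          length(intersect(sets[[i]], sets[[j]])))))
      cor(d, o)
    })
    mean(r_random)   # benchmark against the observed R = .917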
Reliance on Political Information Sources (Independent Variable).
The reliance on information sources was measured separately for each issue mention. The participants indicated on a 5-point scale (recoded to 0 = not important at all; 1 = less important; 2 = somewhat important; 3 = important; 4 = very important) how important several sources had been for informing themselves about this issue on the respective day. Our measures cover all sources widely used in Germany for political information, with a greater resolution for online content, which was the focus of the study. A first question asked for the importance of (1) offline media, (2) personal conversations, and (3) the Internet. A follow-up question for those who rated the Internet as at least "somewhat important" asked them to specify the importance of eleven online sources: (a) Facebook, (b) Twitter, (c) other SNS, (d) daily newspapers online, (e) news magazines online, (f) broadcasters online, (g) YouTube, (h) other video platforms, (i) the search engine Google, (j) other search engines, and (k) Wikipedia. These information sources are grouped into four types of information sources (Table A1 in the Supplemental Information file), of which two (news media and social media) are immediately relevant for testing the hypotheses. The other two (search media, conversations) are included to avoid omission bias.
News media. Reliance on the news media (per issue mention) was the highest value measured for (1), (d), (e), or (f). If an individual, for example, considered broadcasters "very important" (=4) and the three other sources "somewhat important" (=2) for getting information about the issue, the news media reliance score is "very important" (=4) in our analysis. We use the maximum value rather than the average value per individual. Our argument for using the maximum value is that the reliance on a source category does not increase with the number of sources relied upon within the category. One can be highly news media reliant by relying, for example, on only a single newspaper; relying on several newspapers does not increase reliance on news media. Moreover, choosing the maximum value increases the compatibility of measures collected for broader ("offline media") and narrower categories ("Facebook"): individuals who rate the importance of offline media would not mentally average across all news outlets, but rate how important the most important offline medium was for them (maximum). The same logic is applied for "social media," "search media," and "conversations" as well. For each individual, we computed the average importance of news media across issue mentions.
Social media. Reliance on social media (per issue mention) results from the highest value measured for (a), (b), (c), (g), and (h). Again, one does not have to rate all social media as "very important" to obtain a high score; for example, rating "Facebook" as "very important" was sufficient to classify a person as strongly reliant on social media. For each individual, we computed the average importance of social media across issue mentions.
Personal conversations. Per issue mention, the reliance on (2) was measured directly. For each individual, we computed the average importance of personal conversations across issue mentions.
Search media. Per issue mention, the reliance on search media results from the highest value measured for (i), (j), and (k). Wikipedia is included here since it is often ranked first on search engine results pages (Steiner et al. 2020) and its users most often use a within-page keyword search to find relevant entries. For each individual, we computed the average importance of search media across issue mentions.
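This max-then-mean aggregation translates directly into code. The sketch below again assumes the hypothetical mention-level frame `mentions`, now with one importance column (scored 0-4) per source; the column names are invented for the example.

    # Per mention, a category score is the maximum over its sources; per
    # respondent, reliance is the mean of these scores across issue mentions.
    news_cols   <- c("offline_media", "newspaper_online", "magazine_online",
                     "broadcaster_online")
    social_cols <- c("facebook", "twitter", "other_sns", "youtube", "other_video")
    search_cols <- c("google", "other_search", "wikipedia")

    category_reliance <- function(cols) {
      per_mention <- do.call(pmax, mentions[cols])   # max within category
      tapply(per_mention, mentions$id, mean)         # mean across mentions
    }

    news_reliance   <- category_reliance(news_cols)
    social_reliance <- category_reliance(social_cols)
    search_reliance <- category_reliance(search_cols)
    conversations   <- tapply(mentions$conversations, mentions$id, mean)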
Extremity of Political Attitudes (Moderator). The participants indicated their political attitude (in general; not related to certain issues) on a 7-point scale from 1 = extremely left to 7 = extremely right. The answers were recoded to indicate how far the participant's answer was off the scale's center (=4). Thereby, 0 means moderate (4 on the original scale), 1 means slightly left/right (3 or 5, respectively), 2 means clearly left/right (2 or 6, respectively), 3 means strongly/extremely left/right (1 or 7, respectively).
Controls.
To control for other plausible influences on issue horizons, all analyses include sex (1 = female; 0 = male), age (years, centered), education (nine ranks, centered), per capita household income (in €100, centered), employment status (1 = full-time; 0 = not full-time), political interest (from −2 = low to +2 = high), duty to keep informed (four Guttman-type items; from 1 = do not agree at all to 5 = fully agree), personality strength (ten Likert-type items; from 1 = does not apply at all to 5 = fully applies), and need for orientation (NFO; nine Likert-type items; from 1 = fully disagree to 5 = fully agree). We factorized duty to keep informed, personality strength, and NFO using a principal components analysis with Varimax rotation, after checking for sufficient internal consistency (all Cronbach's α > .70). NFO had to be split into two main components, need for information and need for opinions, to achieve sufficient internal consistency.
Analysis
We analyze which factors increase or decrease the compatibility of issue horizons using linear regression models. The higher the issue diversity (top issue focus, issue overlap), the wider an individual's issue horizon and the more compatible it is with others' issue horizons. Per dependent variable, we compare three nested models with different sets of predictors: model (1) considers the reliance on information sources in isolation; model (2) adds control variables; model (3) additionally introduces interactions between participants' political attitude extremity and the importance of the information sources.
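For one outcome (here issue diversity), the three nested models could be specified as follows; this is a sketch assuming a respondent-level data frame `d` that holds the aggregated reliance scores, the raw left-right attitude, and the controls, with all variable names ours.

    # Fold the 7-point left-right scale at its center to get attitude extremity
    d$extremity <- abs(d$attitude_lr - 4)

    m1 <- lm(diversity ~ news + social + search + conversations, data = d)
    m2 <- update(m1, . ~ . + female + age + education + income + fulltime +
                       pol_interest + duty_informed + personality_strength +
                       need_info + need_opinion)
    m3 <- update(m2, . ~ . + extremity + news:extremity + social:extremity +
                       search:extremity + conversations:extremity)
    anova(m1, m2, m3)   # do the added blocks improve model fit?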
Results
Issue Diversity (H1a, H2a, RQ1a, RQ2a)

Table 1 shows a positive interaction between extreme attitudes and news media use, but no simple effect of news media use. A visual inspection of the interaction (Figure 2, top) indicates that among the politically more extreme participants, higher importance of news media leads to a strong increase in issue diversity (in line with H2a). Among the politically moderate, reliance on news media for political information does not affect issue diversity (contrary to H1a). The reliance on social media does not affect issue diversity (RQ1a), and there is no interaction with political attitude extremity (RQ2a) (Table 1).

Top Issue Focus (H1b, H2b, RQ1b, RQ2b)

News media reliance leads to a slightly higher top issue focus, supporting H1b. In contrast, the use of social media has no effect on top issue focus (RQ1b) (Table 1). We do not consider models (2) and (3), which add the interaction terms, since they do not add explanatory power (Supplemental Table A6). Since the interactions do not add explanatory power, H2b must be rejected, and the answer to RQ2b is that social media reliance does not interact with attitude extremity regarding top issue focus (see also Figure 2, center).
Issue Overlap Between Individuals (H1c, H2c, RQ1c, RQ2c)

Reliance on news media increases issue overlap significantly according to models (1) and (2), which seems to support H1c (Supplemental Table A7). When adding the interaction between news media reliance and political extremity into the equation (model (3)) (Table 1, Supplemental Table A4), however, it becomes apparent that this effect is limited to those with extreme political attitudes (as observed for diversity), while overlap is not affected by news media use among political moderates (Figure 2, bottom) (H2c confirmed). Reliance on social media neither affects overlap (RQ1c) nor interacts with political extremity (RQ2c).
Additional Findings
The more a person relies on personal conversations for obtaining political information, the smaller the issue overlap with others, independent of the individual's attitude extremity. Relying on personal conversations can thus limit issue horizons among both politically moderate and politically extreme individuals (Table 1).
Discussion
Healthy democracies need a common core, built around collectively relevant issues. A shrinking common core can endanger societal integration. News consumption in the high-choice information environment has reignited the controversy about this threat's actual extent: many users rely on algorithm-driven intermediaries like Facebook (Newman et al. 2020) that provide them with tailored information. Pundits have voiced concerns that this may lead to personalized news diets that dismiss relevant issues if the individual dislikes them. We investigated how reliance on different information sources and the extremity of political attitudes jointly affect the fragmentation of individual issue horizons, consisting of issue diversity (horizon wideness), top issue focus, and issue overlap (horizon compatibility). While the majority of previous studies investigated source diversity at the aggregate level (e.g., Bright 2018; Webster and Ksiazek 2012), our innovative operationalization reflects the complexity of fragmentation as a multilevel phenomenon. It links individual-level issue sets (agendas) to aggregate-level issue sets (agendas) via individuals' issue horizon wideness and compatibility. With their inherent link to aggregate-level fragmentation, these measures are particularly suited to investigating how fragmentation comes about at the individual level.
Impact of Source Reliance on Issue Horizons
In line with the rare previous research (e.g., Djerf-Pierre and Shehata 2017; Fletcher and Nielsen 2017), our results suggest that the concerns regarding a disintegrating effect of intermediaries may be overstated: relying on social media for political information does not decrease the compatibility of individuals' issue horizons. If anything, one can conceive of the absence of positive effects as a negative effect, since social media do not increase the compatibility of issue horizons either. That is because reliance on news media (online and offline) clearly makes issue horizons more compatible among those with extreme political attitudes. By informing a large, dispersed audience about a relatively manageable number of issues, news media can build bridges and help people with extreme views (re)connect with the common core. Those strongly relying on intermediaries, but not on the news media, miss the news media's reconnection effect. Interestingly, issue overlap was systematically lower among those strongly relying on personal conversations. They run a higher risk of ending up with an incompatible issue horizon and losing touch with the common core than persons relying on social media.
Trajectory of Information Repertoires
But in contrast to personal conversations (whose importance is most likely stable), social media have become, and probably will continue to become, more important over time. Currently, the news media are still the most important information source, while only very few people rely solely on intermediaries. Most people (in Germany) have broad information repertoires (Stark et al. 2017), preventing the widespread emergence of incompatible issue horizons. However, the news media have lost importance (Newman et al. 2020). If this tendency continues, individual issue horizons, and with them the common core, might shrink. This does not mean that recipients will not use news media anymore, as intermediaries heavily draw on content produced by news organizations (Fletcher and Nielsen 2017). But if intermediaries serve the users with personalized information from different providers, their issue horizons are no longer rendered more compatible, as happens through the regular, traditional use of "their" news outlet(s).
Methodological Limitations
Before discussing generalizability and context dependency, we point out some significant methodological limitations that should be considered: (1) The political camps were reconstructed solely based on self-classification on a left-right scale.
(2) The measurement of issue overlap would require a fully representative sample of the entire population for optimal results. Despite all efforts to create a sample representative of Internet users in Germany, sampling bias cannot be avoided (and Internet nonusers are not studied). (3) We traced solely which issues interviewees viewed as relevant and did not survey which beliefs or opinions they held regarding the issues; but stark contrasts in beliefs and opinions may contribute to social disintegration even if two individuals care about the same issue.
Other Countries
The above assessment of impact and trajectories describes the situation in Germany, which most likely resembles the situation in many less polarized Western and Northern European countries. Clearly, a highly partisan information environment can change the game. When important news media cater to specific camps rather than the public (as currently in the United States), the widening of issue horizons may apply only to mainstream media. Consequently, the analysis would need to distinguish reliance on partisan (or hyperpartisan) and on mainstream news media. Comparing the effects of partisan news media and social media would be interesting since it is unclear how they relate to the wideness and compatibility of issue horizons. Chances are that using hyperpartisan news media contributes to producing segmented issue fragmentation.
Algorithm Variability
Social media's algorithms not only vary between different services (Facebook, Twitter, YouTube), but also change substantially over time. Facebook, for example, has changed its algorithm so that preference is given to user-generated content at the expense of news. However, our categorization does not make strong assumptions regarding the exact nature of the algorithm curating the content and views it as an extension of the editorial, social, and user selectivity that are present anyway (Figure 1). Therefore, our arguments still apply as long as content curation follows these services' typical design features (DeVito 2017). We see no signs of a general paradigm change in how these algorithms operate. Still, longitudinal studies mapping fragmentation as a long-term process are highly desirable to clarify this.
Crisis and Routine Situations
The "refugee crisis" has been one of the most pervasive issues in Germany in recent years, with extreme media and public salience (Haller 2017). Our study was conducted one year after the decision to accept refugees in Germany. This was followed by debates about immigration policy, criminal acts by immigrants, the rise of the immigration-critical party AfD, and a "news wave" about migration numbers. Do our findings just reflect that idiosyncratic moment? Such major crises certainly do not occur every day. However, major "news waves" are not anomalies either but rather a real, relatively frequent phenomenon (Geiß 2018). Other recent examples are the financial crisis 2007/2008, the Euro currency crisis, and the Coronavirus pandemic. In case of severe fragmentation, however, the media might not even succeed in building a public for that one issue. Despite the value of our findings for this type of situation-which are highly significant for building and maintaining a common core-our findings do not immediately translate to "routine" situations. This calls for applying our concept of issue horizons in more diverse contexts to explore commonalities and differences. The basic mechanisms we identified can serve as working hypotheses.
Issue Dominance
The predominance of the "refugee crisis" in our study reminds us of the impact of the landscape of issues on policymaking and the party landscape: the "refugee crisis" is a popular issue of the political right. Its continued dominance clearly created beneficial conditions for the rise of the right-wing populist AfD (Augstein 2018), which was strongly associated with (and deemed competent for) the immigration issue. More generally put, a strong and long-lasting focus on one top issue can impact the long-term development of the party system in a country. Additionally, the long-term dominance of a single issue raises the question of how compatible and wide individual issue horizons must be to ensure a functional common core: if a society focuses too strongly on one top issue, other important matters are likely to be overlooked.
Knowledge Needs
Moreover, the tension between issue diversity and top issue focus further illustrates that fragmentation must be measured using several indicators. Our innovative three-part operationalization of issue horizons has proven useful for this purpose: it enables capturing the nature and size of fragmentation in a nuanced way, leading to a differentiated diagnosis and allowing for a targeted treatment. For instance, issue overlap would increase if the news ecosystem emphasized a limited set of recurring issues (issue oligopoly). Issue focus would increase if it prioritized only one major issue (issue monopoly). We need repeated investigations of issue horizons, however, to establish how wide and compatible issue horizons typically are and which individual, content, and contextual factors influence which dimension of issue horizons. Normative work providing standards and criteria for desirable levels of issue diversity, issue focus, and issue overlap would be a helpful guideline for future research.
The current study shows that the common assumptions about fragmentation are too simple and that investigating issue horizons can help us understand the link between information environments and fragmentation in a more nuanced way. The fragmenting consequences of the rise of algorithm-driven, personalized information sources are currently not to be found unconditionally, but rather in subpopulations (with extreme political attitudes emerging as the primary "risk factor") or under specific contextual conditions.
Supplemental Material
Supplemental material for this article is available online.
Note
1. If they are collapsed into a single issue category, the level of the diversity index decreases, but the results of the regression analyses hardly differ. Since the twenty-one issues would be counted as independent issues if the current bundling event creating the (temporary) issue complex were absent, we refrain from collapsing them.
"Political Science",
"Computer Science"
] |
Abnormal Functional Resting-State Networks in ADHD: Graph Theory and Pattern Recognition Analysis of fMRI Data
The framework of graph theory provides useful tools for investigating the neural substrates of neuropsychiatric disorders. Graph description measures may be useful as predictor variables in classification procedures. Here, we consider several centrality measures as predictor features in a classification algorithm to identify nodes of resting-state networks containing predictive information that can discriminate between typically developing children and patients with attention-deficit/hyperactivity disorder (ADHD). The prediction was based on a support vector machine classifier. The analyses were performed on a multisite, publicly available resting-state fMRI dataset of healthy children and ADHD patients: the ADHD-200 database. Network centrality measures contained little predictive information for the discrimination between ADHD patients and healthy subjects. However, the classification between inattentive and combined ADHD subtypes was more promising, achieving accuracies higher than 65% (balance between sensitivity and specificity) at some sites. Finally, brain regions were ranked according to the amount of discriminant information, and the most relevant were mapped. As hypothesized, we found that brain regions in motor, frontoparietal, and default mode networks contained the most predictive information. We concluded that the functional connectivity estimations are strongly dependent on the sample characteristics. Thus, different acquisition protocols and clinical heterogeneity decrease the predictive values of the graph descriptors.
Introduction
Attention-deficit/hyperactivity disorder (ADHD) is a neurodevelopmental disorder with a prevalence of around 5.3% in children and adolescents [1]. It is characterized by cognitive and behavioral impairments associated with inattention and/or hyperactivity and impulsivity symptoms [2]. The most frequent and most investigated ADHD phenotypes are one with a predominance of inattentive symptoms and one that combines inattention and hyperactivity/impulsivity. As for most mental disorders, the etiological bases and neural substrates of ADHD are far from being fully understood.
The search for structural or functional neural correlates of ADHD, and consequently for potential biomarkers of the disorder, is crucial in the pursuit of its prevention, early detection and more effective treatment [3,4]. For this purpose, the combination of machine-learning techniques for pattern recognition and resting-state functional neuroimaging data is a particularly promising approach [5].
Graph theoretical analysis is an emerging component in the field of connectomics and brain network analysis based on neuroimaging data [6,7]. Descriptors derived from graph theory are measurements quantifying different characteristics of the network organization. When applied to resting-state fMRI data, graph theoretical measures may be used to enhance the understanding of resting-state network (RSN) dynamics [8]. RSNs are characterized by consistent correlations of the spontaneous fluctuations of the BOLD signal among certain brain regions. Among the diffuse RSNs identified via fMRI analysis, the sensory-motor, frontoparietal, basal ganglia, and default mode networks specifically have been implicated in ADHD pathophysiology [9]. Currently, abnormal interactions within distinct RSNs have been identified as a key factor contributing to various neuropsychiatric disorders [10], in particular within the default mode network (DMN) [11,12].
Pattern recognition methods based on machine learning techniques have shown to be a promising approach to the analysis of neuroimaging data [13]. Support vector machines (SVMs) [14] are one of the most frequently used methods in this field, given their robust properties when dealing with high-dimensional multivariate data in addition to providing predictions for each individual case. In other words, given a set of features (e.g., brain measurements) and a label (e.g., healthy or patient), SVMs are used to learn a function which maps the set of features to their respective labels within a training dataset. Thus, given a new set of features produced from an unseen observation, SVMs are able to provide a predicted label for this novel observation.
Graph theory descriptors can be used as predictor variables (i.e., features) in a machine-learning framework. Merging graph theoretical approaches and machine learning techniques might provide a better-adjusted way to scrutinize the impairment of RSNs in ADHD as well as to map predictions to a single individual case. In this study, we investigated the use of network centrality measures as predictive features to discriminate between typically developing children and ADHD patients with both inattentive and combined presentations. In addition, we investigated possible differences between the inattentive and combined ADHD groups. The ADHD-200 dataset [15] formed the basis of our analysis. We aimed at evaluating three issues: (i) the mean classification score ([sensitivity + specificity]/2) across distinct acquisition sites; (ii) the comparison of the site-by-site classification score (i.e., only the data within each site are used to train and test the classifier) with a global classification (i.e., using the data of all sites in a joint analysis); (iii) the brain regions (i.e., network nodes) containing the greatest amount of predictive information to discriminate between the groups. We hypothesize that frontoparietal, sensory-motor, and default mode network nodes will have a more relevant predictive value in the classification. This hypothesis relies on the potential association between abnormalities in resting-state networks and the main symptoms of ADHD.

All research protocols from institutes contributing to the ADHD-200 Consortium received local approval by their respective IRB. All the data distributed via the International Neuroimaging Data-sharing Initiative (INDI) are fully anonymized in accordance with HIPAA Privacy Rules. Further details concerning the sample and scanning parameters can be obtained by request to the ADHD-200 Consortium.
Data and Image Preprocessing
Step-wise data preprocessing was previously conducted by the NeuroBureau community using the Athena pipeline and consisted of the systematic and homogeneous processing of all resting-state fMRI data. The following steps were carried out: exclusion of the first four EPI volumes; slice time correction; deobliquing of the dataset; head motion correction using the first volume as a reference; exclusion of voxels in non-brain regions by masking the volumes; averaging the EPI volumes to obtain a mean functional image; coregistration of this mean functional image to the subject's corresponding anatomical image; spatial transformation of the functional data into template space; extraction of BOLD time series from white matter and cerebrospinal fluid using masks obtained from segmenting the structural data; removal of trend and motion effects through linear multiple regression; temporal band-pass filtering; and spatial smoothing using a Gaussian filter. All preprocessed images are available at the website http://neurobureau.projects.nitrc.org.
Connectivity Analysis and Graphs.
A representative set of 400 brain-wide regions of interest (ROIs) was chosen for defining the network nodes used for the connectivity analysis and the construction of the graphs. The ROIs were determined using the method developed by Craddock et al. [16] based on the fMRI data of 650 subjects. This atlas is publicly available at http://www.nitrc.org/plugins/mwiki/index.php/neurobureau:AthenaPipeline. The Pearson correlation coefficient between each pair of ROIs was calculated and regarded as a proxy of functional connectivity. The correlation matrix was equated with the adjacency matrix of an undirected and weighted graph. In addition, binary adjacency matrices were built for each subject by applying three different cut-off values (0.1, 0.15, and 0.25) to the correlation matrix. The cut-offs were defined within this particular range since the network becomes too fragmented and granular to allow a proper graph analysis for higher cut-off values [17]. We evaluated the predictive power of both weighted and unweighted graphs. The following centrality measures of the nodes in the weighted graph were calculated: degree, closeness [18], betweenness [19], eigenvector, and Burt's constraint [20]. The degree, closeness, and betweenness were also calculated for the unweighted graphs.
The mathematical definitions of these measures are given in Table 1, where V is the set of all nodes and E the set of all edges within a network, and N is the number of nodes. An edge between two nodes i and j is represented by a_ij. In the undirected graph case, a_ij = 1 if there is a connection between the nodes i and j; otherwise, a_ij = 0. In the betweenness definition, σ_hj is the number of shortest paths between h and j, and σ_hj(i) is the number of shortest paths between h and j passing through i. In the eigenvector definition, λ is a constant. Note that eigenvector and Burt's constraint are definable only for weighted graphs.

Table 1: Centrality measures (standard formulas matching the notation above).
Degree: k_i = Σ_{j ∈ V} a_ij
Closeness: C_i = (N − 1) / Σ_{j ≠ i} d(i, j), where d(i, j) is the shortest-path distance between i and j
Betweenness: B_i = Σ_{h ≠ j, h ≠ i, j ≠ i} σ_hj(i) / σ_hj
Eigenvector: x_i = (1/λ) Σ_{j ∈ V} a_ij x_j
Burt's constraint: c_i = Σ_{j ≠ i} (p_ij + Σ_{q ≠ i,j} p_iq p_qj)², where p_ij is the proportion of node i's total edge weight invested in node j
Degree is a straightforward and intuitive way to quantify node centrality and is defined as the number of edges connected to a particular node. The closeness centrality is the inverse of the average distance between a given node and all other nodes of the network. Betweenness quantifies the influence of a node and is defined as the number of shortest paths passing through it. The basic rationale underlying eigenvector centrality is that connections with more central nodes increase a node's influence in the network. Hence, different weights are attributed to a vertex depending on the centrality of the connected nodes. Finally, Burt's constraint value is inversely proportional to the number of connections of a node and increases with the number of strong mutual connections [20]. The uses and interpretations of graph theoretical measures in the context of fMRI studies were the central topic of an excellent previous review [7]. All analyses were performed in the R platform for Computational Statistics (R Project for Statistical Computing) (http://www.r-project.org/) using the R igraph package.
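A minimal igraph sketch of the per-subject graph construction follows, assuming `ts` is a (time points × 400 ROIs) matrix of preprocessed BOLD time series. The paper does not specify how negative correlations or edge-weight semantics were handled, so those choices below (zeroing negative weights; igraph's default cost interpretation of weights for path-based measures) are ours.

    library(igraph)

    r <- cor(ts)      # 400 x 400 Pearson correlation matrix (connectivity proxy)
    diag(r) <- 0      # drop self-connections
    r[r < 0] <- 0     # one simple convention for negative correlations (our choice)

    # Weighted graph: correlations used directly as edge weights
    g_w <- graph_from_adjacency_matrix(r, mode = "undirected", weighted = TRUE)
    feat_weighted <- cbind(
      degree      = strength(g_w),            # weighted degree (node strength)
      closeness   = closeness(g_w),           # igraph treats weights as costs here
      betweenness = betweenness(g_w),
      eigenvector = eigen_centrality(g_w)$vector,
      constraint  = constraint(g_w)
    )

    # Unweighted graphs: binarize the correlation matrix at each cut-off
    features_at <- function(cut) {
      g <- graph_from_adjacency_matrix((r > cut) * 1, mode = "undirected")
      cbind(degree      = degree(g),
            closeness   = closeness(g),
            betweenness = betweenness(g))
    }
    feat_binary <- lapply(c(0.10, 0.15, 0.25), features_at)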
Classifier Implementation and Identification of Discriminative ROIs.
The centrality measures of each graph's nodes were used as features (i.e., predictor variables) in an independent classification analysis. Classification was performed using a linear support vector machine (SVM) algorithm [14]. The rationale behind the SVM is that the boundary defined by the predictor variables should maximize the separation margin between the two groups to be classified. The accuracy of the classification model was estimated via a leave-one-subject-out cross-validation procedure. The classifications were based on the discrimination between typically developing children and ADHD patients (both inattentive and combined) and on a comparison between the ADHD-inattentive and ADHD-combined types. For each graph descriptor, two distinct analyses were carried out: (i) an independent site-by-site classification using only the data within a single site to train and test the SVM (leave-one-subject-out score) and (ii) a joint analysis concatenating the data from all sites into a single classification.
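The cross-validation step can be sketched with the e1071 implementation of the linear SVM; `X` and `y` below are hypothetical objects (a subjects × 400 matrix of one centrality measure and a two-level factor of group labels), not taken from the authors' scripts.

    library(e1071)

    pred <- character(length(y))
    for (i in seq_along(y)) {
      fit <- svm(X[-i, ], y[-i], kernel = "linear")    # train without subject i
      pred[i] <- as.character(predict(fit, X[i, , drop = FALSE]))
    }
    cls <- levels(y)
    sensitivity <- mean(pred[y == cls[1]] == cls[1])
    specificity <- mean(pred[y == cls[2]] == cls[2])
    score <- (sensitivity + specificity) / 2    # balanced score used in the paper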
Finally, in order to identify the most discriminative regions, we built brain maps highlighting the 5% of brain regions with the greatest predictive values, using the approach proposed by Mourão-Miranda et al. [21] and Sato et al. [22]. In brief, the decision function of the linear SVM used to predict the group of each subject is a hyperplane equation. This equation is defined by a constant and a set of coefficients, each one associated with an input feature (i.e., a brain region defined by the ROIs). During classifier training, these parameters are tuned in order to define the optimum hyperplane for separating the data. We then used the absolute values of these hyperplane coefficients (taking into account the training with all subjects, not the leave-one-out procedure) to rank the features and highlight the top 5% most discriminative brain regions.
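In e1071, the primal weight vector of a linear SVM can be recovered from the stored dual coefficients and support vectors; the sketch below makes the same assumptions as above (note that e1071 scales inputs by default, so the weights refer to the scaled features).

    # Train on all subjects (not the leave-one-out loop) and rank ROIs by the
    # absolute hyperplane coefficients.
    fit_all <- svm(X, y, kernel = "linear")
    w <- t(fit_all$coefs) %*% fit_all$SV    # primal weight vector (1 x 400)
    top_rois <- order(abs(w), decreasing = TRUE)[1:ceiling(0.05 * ncol(X))]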
Classifier Accuracy.

Table 2 depicts the scores for the between-group condition comparing typically developing children with ADHD patients. The highest score obtained via site-by-site analysis was 73%, using weighted betweenness at the OHSU site. However, this finding was not replicated at the other sites. In the whole-sample analysis, the highest score was 58%, achieved with eigenvector centrality. Table 3 shows the scores for the discrimination analysis between inattentive and combined ADHD subtypes. This analysis was more promising, and several measures achieved scores greater than 65% across multiple sites. The highest score obtained via site-by-site analysis was 77%, when using the degree measure with unweighted graphs (with a 0.15 cut-off) at OHSU. The highest score in the whole-sample analysis was 61%, achieved when using unweighted degree (with a 0.25 cut-off).
Interestingly, the mean score (across sites) and the score from the whole-sample classification were very similar, except when using betweenness and degree in unweighted graphs (Figure 1). In this exception, the mean score was greater than the whole-sample classification score.
Brain Regions with Higher Predictive Value.
Regarding the identification of the brain regions with greater contribution to prediction, we chose only the classifications with accuracy above 70%. Figure 2 illustrates the discriminant regions for weighted betweenness centrality in healthy versus ADHD groups at OHSU. Several cerebellar and cortical regions were observed including left cerebellum, cerebellar vermis, bilateral occipital cortex, left inferior temporal gyrus, left parietal cortex, right dorsolateral prefrontal cortex, and left frontal pole. Figure 3 depicts the regions in which centrality measures contributed to the classification of the ADHD types in the OHSU sample. Betweenness centrality contributed most to classification in the following brain regions: thalamus, left cerebellar cortex, right occipital cortex, right temporal cortex, right precuneus, and right dorsomedial prefrontal and parietal cortices. The brain regions in which degree centrality contributed mostly to classification of ADHD types are also depicted in Figure 3. They include the right temporal and frontal cortices, precuneus and bilateral sensory-motor cortex, dorsal anterior cingulate cortex (dACC), and bilateral parietal regions. In the case of eigenvector centrality, the highest classification scores were obtained in orbitofrontal cortex (OFC), dACC, bilateral temporal cortex, right parietal cortex, motor areas, basal ganglia, and bilateral cerebellum.
Discussion
At present, resting-state fMRI is a well-established tool for the assessment of spontaneous brain activity. Graph theoretical measures provide a suitable framework for the investigation of the structure of complex neural networks. In addition, the application of machine-learning algorithms has had a great impact on the development of more advanced neuroimaging studies of psychiatric disorders [13]. In the present work, we aimed to explore the use of graph-derived measures of the resting-state BOLD signal as features to discriminate between ADHD types and healthy subjects. In order to estimate the "real-world" reproducibility of the classification procedure, we analyzed data collected at five distinct sites, which differed in terms of MRI scanner specifications and acquisition parameters. Finally, we mapped the brain regions in which centrality graph-derived measures showed the greatest contribution to classification. This mapping could provide some insight into the pathophysiological mechanisms of ADHD from a network analysis perspective.
When the whole sample was used, none of the centrality measures had relevant predictive power beyond chance. However, significant prediction values were observed at the OHSU site. Thus, both within- and between-site variability have a negative impact on the extraction of predictive information and consequently on classification. In the OHSU sample, betweenness centrality measures contained predictive information for the classification of ADHD and control subjects with a score of 73%. After an extensive analysis of sample characteristics and acquisition parameters, we hypothesize that the classification score at OHSU was higher than the other scores for two main reasons: (i) the sample was approximately balanced between typically developing controls (42 subjects) and ADHD patients (35 subjects), while the group sizes were very different at the other sites; (ii) the OHSU EPI acquisition had the largest voxel size (3.8 mm) and the 3T system was equipped with a 12-channel head coil (as opposed to 8 channels), which increases the signal-to-noise ratio. When the 5% of nodes with the greatest predictive values were mapped, a sparse pattern of brain regions was observed. In fact, widespread brain alterations in ADHD are supported by findings of impaired interregional connectivity between the nodes of large-scale functional networks (reviewed in [9]), and both task-related and resting-state fMRI studies have described atypical activations in frontal, temporal, and parietal lobes as well as in the cerebellum [23][24][25].
A promising finding was observed for degree centrality in the whole-sample analysis for the classification of the ADHD subtypes. In the within-site analyses, relatively high scores were observed for degree, betweenness, and eigenvector centralities. However, as the sample size is smaller in these cases, variability is increased. Moreover, the mean scores of within-site analyses were almost identical to those from the whole-sample analysis. Brain regions mapped for betweenness measures included nodes of the right frontoparietal network. This network has been implicated in attentional and executive processes and is thought to be impaired in ADHD. Cubillo et al. [23] have shown reduced interregional functional connectivity between frontoparietal network nodes during a stop and switching task in ADHD patients when compared to control subjects. Of particular note is the thalamus, which forms part of this attentional network [26,27] and consequently may play a key role in ADHD. In fact, reduced regional activations in bilateral thalami have been reported in ADHD. Additionally, reduced connectivity between the thalamus and right prefrontal region, occurring concurrently with increased connectivity between the thalamus and occipital lobes, has been found in ADHD in an fMRI study using a sustained attention task [28]. Interestingly, betweenness is the number of shortest paths that pass through a node, which is consistent with the purported structural position of the thalamus as a relay to the whole cortical sheet. We speculate that a high betweenness value for the nodes of the attentional network is compatible with the function of switching attention focus to different stimuli or tasks. The measure of degree centrality, when applied to the separation between ADHD types, produced the highest classification scores in areas of the sensory-motor network and of the DMN, mainly in the parietal cortex and the precuneus. These findings are in agreement with our hypothesis, based on consistent results in the literature [9]. In fact, it is quite intuitive that motor network connectivity should be altered in a disorder characterized by hyperactivity. It is coherent that the measure of degree centrality (the number of nodes that connect to a given node) contains more discriminative information in these areas, since the motor network fundamentally comprises the output of the central nervous system. It is also expected that motor regions contain information which enables discrimination between inattention with or without hyperactivity. Eigenvector centrality was also found to contribute more to classification within the motor network, as well as within orbitofrontal cortex, dorsal anterior cingulate cortex, parietal regions, basal ganglia, and the cerebellum. Orbitofrontal areas have been classically implicated in impulse control mechanisms and appear to have impaired activation in ADHD patients [26]. Finally, alterations of DMN activity have also been proposed as a key part of ADHD pathophysiology [29]. In summary, functional networks implicated in attention, hyperactivity, and impulsivity contained predictive information for the discrimination between ADHD inattentive and combined subtypes.
In conclusion, a novel approach applying graph theoretical measures was shown to be useful for testing our hypothesis regarding resting-state network impairment in ADHD. In particular, distinct patterns of network dysfunction were evident for the inattentive and combined ADHD subtypes. The classification scores for discriminating between ADHD and healthy subjects were close to chance. Clearly, within-site analysis improves prediction levels when compared to whole-sample analysis, suggesting that heterogeneity across sites may strongly limit the application of the method as a potential clinical support. The functional connectivity estimation is strongly dependent on sample characteristics. Thus, in order to advance the pathophysiological knowledge of ADHD, we emphasize the importance of further multicentric studies with more homogeneous acquisitions.
Disclosure
Dr. Luis Augusto Rohde has been a member of the speakers' bureau/advisory board and/or acted as a consultant for Eli-Lilly, Janssen-Cilag, Novartis, and Shire in the last three years. He receives authorship royalties from Oxford Press and ArtMed. He has also received travel awards from Shire for his participation in the 2014 APA meeting. The ADHD and Juvenile Bipolar Disorder Outpatient Programs chaired by him received unrestricted educational and research support from the following pharmaceutical companies in the last three years: Eli-Lilly, Janssen-Cilag, Novartis, and Shire.
Diagnostic accuracy of DNA methylation in detection of gastric cancer: a meta-analysis
Emerging studies demonstrate the diagnostic utility of DNA methylation-based blood tests for gastric cancer. The aim of this meta-analysis is to evaluate the accuracy of blood DNA methylation markers for detecting patients with gastric cancer. Studies published up to November 2016 that evaluated DNA methylation markers in blood specimens to detect gastric cancer were selected to derive pooled sensitivities and specificities. 32 studies including 4,172 patients (gastric cancer (N = 2,098), control (N = 2,074)) met the study criteria. Overall sensitivity of the DNA methylation-based blood test for detecting gastric cancer was 57% (95% CI 50–63%); specificity was 97% (95% CI 95–98%). Among patients who received plasma-based testing, sensitivity was 71% (95% CI 59–81%) and specificity was 89% (95% CI 78–94%). Among patients who received serum-based testing, sensitivity was 50% (95% CI 43–58%) and specificity was 98% (95% CI 96–99%). Using multiple methylated genes had a sensitivity of 76% (95% CI 64–84%) and a specificity of 85% (95% CI 65–95%). The DNA methylation test had a sensitivity of 55% (95% CI 47–64%) and specificity of 96% (95% CI 92–98%) for detecting TNM stage I+II gastric cancer. In conclusion, the blood-based DNA methylation test had high specificity but modest sensitivity for detecting gastric cancer. Evaluating multiple methylated genes or using plasma samples may improve the diagnostic sensitivity.
INTRODUCTION
Gastric cancer is a common cancer worldwide and is associated with high morbidity and mortality. Approximately 984,000 incident cases of gastric cancer and 841,000 attributable deaths were estimated globally in 2013, ranking fifth in cancer incidence and second in cancer-related deaths [1]. Although the incidence of gastric cancer has decreased in recent years, the absence of symptoms, or the presence of only nonspecific symptoms, among patients with early stage cancer prevents early detection and treatment in the majority of patients [2]. Although endoscopy is accurate for detecting early gastric cancer, its cost and invasiveness have limited its use as a primary screening test in large-scale screening programs [3].
In order to improve the detection of early gastric cancer, non-invasive blood biomarkers have received great interest. Although biomarkers such as carcinoembryonic antigen (CEA), carbohydrate antigen-19-9 (CA 19-9), and carbohydrate antigen-72-4 (CA 72-4) have been evaluated for the diagnosis and surveillance of gastric cancer, the low sensitivity of these biomarkers does not warrant their routine use in clinical settings [4][5][6]. Emerging studies have highlighted that the carcinogenesis of gastric cancer involves multiple processes that include epigenetic as well as genetic alterations. Specifically, DNA methylation can lead to inactivation or activation of cancer-related genes [7,8]. Furthermore, methylation of the promoter region can silence tumor suppressor genes that play an important role in regulating DNA repair, cell adhesion, cell-cycle regulation, signal transduction, and apoptosis [9,10]. Modifications in DNA methylation (e.g., Reprimo, hMLH1) are frequently detected in gastric cancer tissues as well as in serum or plasma specimens of patients with gastric cancer, while seldom or absent in controls, suggesting their potential application as non-invasive biomarkers [11,12]. More importantly, several studies have demonstrated the presence of gene methylation in intraepithelial neoplasia, further supporting the role of DNA methylation as an early event in the carcinogenesis of gastric cancer [13,14].
Previous studies evaluating DNA methylation to differentiate patients with or without gastric cancer are limited by small sample sizes or inconsistent results. Therefore, we performed a meta-analysis to assess the diagnostic accuracy of DNA methylation markers as a non-invasive biomarker for detecting gastric cancer.
Literature search
After a thorough literature search, we identified 197 records from Pubmed, two from Cochrane, 204 from Web of Science, and 211 from Embase. Of the 614 studies, 556 were excluded after reviewing the title and abstract for the following reasons: duplicate studies in 174, unrelated to the study aim in 321, review papers in 43, meeting abstracts in 14, editorials in two, a letter to the editor in one, and a book chapter in one. When the 14 meeting abstracts were reviewed in detail, two had been published in full text and their data were included in the analysis. The remaining 12 meeting abstracts were confirmed to not meet the study criteria (unrelated study in six, insufficient data to calculate study outcomes in four, insufficient data to assess bias in two). An additional 26 studies were excluded after reviewing the full text (insufficient data for calculating specificity in 10, non-blood-based testing in nine, prognostic studies of patients with gastric cancer in five, and non-English manuscripts in two). Finally, 32 studies comprising 69 analyses of blood DNA methylation tests for the evaluation of gastric cancer were included in the meta-analysis. The flow chart of the search method is shown in Figure 1.
Characteristics of selected studies
The characteristics of the 32 publications including 69 analyses that compared the frequency of DNA methylation markers between gastric cancer and control patients are shown in Table 1. Of the 32 included studies, nineteen evaluated the methylation status of one gene, five evaluated two genes, and eight evaluated three or more genes. Methylation of a total of 39 genes was analyzed in these studies. P16 was reported in seven studies. E-cadherin and RASSF1A were each reported in four studies. Finally, DAPK, APC, hMLH1, SFRP2, RUNX3, Reprimo, and RNF180 were each reported in two studies.
These studies were conducted in nine countries (China, Japan, Thailand, Chile, Iran, Greece, South Korea, India, Singapore) and were published between 2002 and 2016. Of the 32 studies, 13 reported age and 15 reported gender data only among patients with gastric cancer, while none of the studies provided age or gender data on control patients. Methylation-specific polymerase chain reaction was used in 31 studies to detect DNA methylation in the serum or plasma samples. In addition, 27 studies qualitatively analyzed the frequency of methylation.
Diagnostic accuracy of DNA methylation markers in gastric cancer
Forest plots of the individual and pooled sensitivities and specificities of all DNA methylation markers for diagnosing patients with gastric cancer are shown in Figure 2. The pooled sensitivity and specificity for detecting gastric cancer by the presence of one or more methylated genes were 57% (95% CI 50-63%) and 97% (95% CI 95-98%), respectively. In addition, the pooled +LR (positive likelihood ratio) and -LR (negative likelihood ratio) were 19.1 (95% CI 11.0-33.0) and 0.45 (95% CI 0.38-0.52), respectively. The DOR (diagnostic odds ratio) was 42 (95% CI 24-74). The I² values for sensitivity and specificity were 93.1% and 90.3%, respectively, implying significant heterogeneity between studies. Thus, the random effects model was used to pool the values. In addition, the SROC (summary receiver operating characteristic) curve for the included studies is presented in Figure 3. The AUC (area under the curve) was 0.88 (95% CI 0.85-0.91).
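To make the pooling step concrete, here is a minimal sketch of a DerSimonian-Laird random-effects pool of per-study sensitivities on the logit scale (the counts below are invented for illustration; the published analysis used STATA, as noted in the Methods):

```python
import numpy as np

def pool_logit_dl(tp, fn):
    """DerSimonian-Laird random-effects pool of per-study sensitivities.

    tp, fn: per-study true-positive and false-negative counts.
    Returns the pooled sensitivity and its 95% CI (back-transformed from logits).
    """
    tp, fn = np.asarray(tp, float), np.asarray(fn, float)
    p = (tp + 0.5) / (tp + fn + 1.0)            # 0.5 correction guards zero cells
    theta = np.log(p / (1 - p))                 # per-study logit sensitivity
    var = 1 / (tp + 0.5) + 1 / (fn + 0.5)       # approximate logit variance
    w = 1 / var
    theta_fixed = np.sum(w * theta) / np.sum(w)
    q = np.sum(w * (theta - theta_fixed) ** 2)  # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(theta) - 1)) / c) # between-study variance
    w_star = 1 / (var + tau2)
    est = np.sum(w_star * theta) / np.sum(w_star)
    se = np.sqrt(1 / np.sum(w_star))
    inv = lambda x: 1 / (1 + np.exp(-x))
    return inv(est), inv(est - 1.96 * se), inv(est + 1.96 * se)

# Hypothetical counts from three studies:
print(pool_logit_dl(tp=[30, 55, 12], fn=[20, 40, 15]))
```

The same function pools specificities if given per-study true-negative and false-positive counts instead.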
Assessment of bias
The quality assessment showed the presence of a high or unclear risk of bias for patient selection, given that all the studies were of case-control design (Figure 5). Ten studies did not clarify whether the index test was interpreted without knowledge of the reference standard. All studies showed low risk of bias in the reference standard and flow & timing domains. In addition, publication bias was detected by Deeks' funnel plot asymmetry test (P < 0.05). However, the subgroup analyses of plasma testing (P = 0.41) and multiple DNA methylation markers (P = 0.91) did not demonstrate publication bias (Figure 6).
DISCUSSION
DNA methylation refers to the addition of a methyl group to the carbon-5 position of the cytosine ring of CpG dinucleotides to form 5-methylcytosine. CpG dinucleotides are concentrated in the upstream promoter regions of many genes [15]. Aberrant methylation of the promoter region involved in the inactivation of tumor suppressor genes plays an important role in the tumorigenesis of multiple cancers (head and neck cancer, colon cancer, bladder cancer) and appears to provide promising biomarkers for cancer detection [16][17][18][19]. Emerging studies have also shown that tumor suppressor genes (e.g., RNF180, Zic1) are frequently methylated, not only in gastric cancer, but also among patients with pre-malignant gastric lesions. Therefore, dysregulation of CpG-island methylation is likely to be involved in the early stages of gastric carcinogenesis, and DNA methylation may be utilized to detect early stage gastric cancer.
In the present meta-analysis, we evaluated the diagnostic accuracy of methylated genes for detecting gastric cancer from plasma or serum specimens. Overall, DNA methylation detection in the peripheral blood of gastric cancer patients exhibited potential diagnostic utility, given a modest sensitivity of 57% (95% CI 50-63%), high specificity of 97% (95% CI 95-98%), and a moderate-to-high AUC of 0.88 (95% CI 0.85-0.91), which are superior to other conventional cancer markers. For example, a meta-analysis reported pooled sensitivities of 21% for CEA, 28% for CA19-9, and 30% for CA72-4 for detecting gastric cancer. Although these blood biomarkers are commonly used in clinical practice, sensitivities for detecting early stage cancer are even lower, ranging from 9-23% [20]. Serum pepsinogen is another widely used biomarker to diagnose gastric cancer. A recent meta-analysis evaluating the combination of serum pepsinogen I levels and the serum pepsinogen I/II ratio for detecting gastric cancer showed a sensitivity of 70% (95% CI 66-75%), specificity of 79% (95% CI 79-80%), and an AUC of 0.78 (95% CI 0.72-0.81) [21].
Figure 4: Forest plots of multivariable meta-regression and subgroup analysis for sensitivity and specificity.
Furthermore, we performed additional analyses to determine whether the diagnostic utility of DNA methylation markers differed among certain subgroups. We found that the sensitivity of DNA methylation was higher for plasma-based (71%) compared to serum-based testing (50%), with similar specificities. In addition, multiple DNA methylation markers had higher sensitivity (76%) compared to a single DNA methylation marker (52%). Therefore, a panel of DNA methylation genes may overcome the current limitation of marginal sensitivity in detecting gastric cancer for clinical use.
Finally, we assessed the diagnostic accuracy of DNA methylation markers in detecting early stage gastric cancer. The pooled sensitivity of DNA methylation detection for early stage gastric cancer (TNM stage I + II) was 55%, specificity was 96%, and the AUC was 0.85, while for advanced gastric cancer (TNM stage III + IV) the pooled sensitivity was 68%, specificity was 96%, and the AUC was 0.91. Our results suggest that the DNA methylation markers evaluated to date are limited in diagnosing early stage gastric cancer, which is important for an optimal screening test.
Blood-based testing for detecting gastric cancer is advantageous over the conventional strategy of screening endoscopy, given greater convenience, superior safety, and lower cost when applied to a large screening population [22]. However, currently available biomarkers (CEA, CA 19-9, serum pepsinogen I/II ratio) lack sufficient sensitivity and specificity for detecting early gastric cancer. Recently, a meta-analysis of microRNA as an emerging biomarker for detecting gastric cancer reported a pooled sensitivity of 78% (95% CI 73-81%), specificity of 80% (95% CI 76-84%), and an AUC of 0.86 (95% CI 0.83-0.89) [23]. Although the sensitivity of microRNA appears superior to that of DNA methylation markers, the chemical instability of RNA compared to DNA may pose technical challenges for a diagnostic test. Given current limitations, a combination of the DNA methylation test with other blood-based biomarkers may enhance the sensitivity for detecting gastric cancer and increase its utility as a screening test.
The majority of the studies represented in our meta-analysis utilized methylation-specific PCR (MSP) to determine the methylation status of blood specimens. Although MSP is low-cost, which is an important factor for an optimal screening test, it can evaluate only one or two CpG sites, which may limit sensitivity. Although bisulfite sequencing, which evaluates the methylation status of each CpG locus, has superior precision and is considered the gold standard, its application in clinical practice is currently limited by complexity and high cost. However, improvements in DNA methylation evaluation techniques in the future may increase generalizability as a screening test for gastric cancer [24,25].
Our meta-analysis has limitations. First, all the studies evaluating DNA methylation status as a diagnostic test for gastric cancer included in the meta-analysis were case-control studies, rather than cohort studies, which carry a higher risk of bias. Second, substantial publication bias was detected by Deeks' funnel plot asymmetry test in the primary analysis. However, the subgroup analyses of plasma-based testing and of multiple DNA methylation markers did not demonstrate evidence of publication bias, supporting validity. Subgroup analysis results further suggested that the mode of testing, stage of patients, DNA methylation marker profiles, and the region of study may explain the study heterogeneity, which was additionally confirmed by our meta-regression analysis. Third, the small number of patients from regions other than Asia, and of those with methylation in the promoters of multiple genes, may affect the generalizability of our findings. Our results should be interpreted with caution and will require validation in future population-based studies that encompass patients screened for gastric cancer.
In conclusion, the blood-based DNA methylation test had high specificity but modest sensitivity for detecting gastric cancer. Utilizing multiple rather than single gene methylation tests and evaluating plasma rather than serum samples may improve the diagnostic sensitivity.
MATERIALS AND METHODS
A comprehensive literature search of Pubmed, Cochrane, Web of Science, and Embase for relevant articles to November 20, 2016 was conducted to identify studies assessing the diagnostic accuracy of DNA methylation utilizing plasma or serum in patients with gastric cancer. The following search strategies were used: ((((("Stomach Neoplasms" [Mesh]) OR ("gastric neoplasm*" OR "gastric cancer" OR "gastric tumor" OR "gastric carcinoma" OR "gastric oncology*" OR "stomach neoplasm*" OR "stomach cancer" OR "stomach tumor" OR "stomach carcinoma" OR "stomach oncology*" biomarker OR marker))) AND (blood OR plasma OR serum OR sera). The references of relevant publications were also thoroughly searched for additional studies.
Study criteria
We included articles that met the following criteria: 1) patients with gastric cancer, 2) assays evaluating DNA methylation markers in blood specimens, 3) sensitivity and specificity values for detection of gastric cancer reported or calculable from the primary data. We excluded articles with any of the following criteria: 1) meeting abstracts, reviews, letters, comments, editorials, and meta-analyses; 2) evaluation of outcomes other than gastric cancer; 3) evaluation of the prognosis of patients with established gastric cancer; 4) non-English manuscripts.
Data extraction
Data including the name of the lead author, publication year, country of study origination, types of samples examined, analyzed genes, experimental methods, sample size, and frequency of DNA methylation status in cases and controls were extracted from the selected studies.
Quality assessment
All publications that met our inclusion criteria were evaluated by QUADAS-2 guidelines [26]. Two authors (ZWF, LQF) independently abstracted and assessed the risk of bias for each study using standardized methods that include four key domains: patient selection, index test, reference standard, and flow and timing (Supplementary Table 1).
Study measures
The primary endpoints were the sensitivity and specificity of blood DNA methylation tests for detecting gastric cancer. Secondary endpoints included the +LR, -LR, and DOR of blood DNA methylation tests. In addition, primary and secondary endpoints were calculated in subgroups by blood specimen type (serum vs. plasma), geographic region of study origination (Asian vs. other regions), number of DNA methylation markers (single vs. multiple), and cancer stage (early vs. advanced stage). Early stage gastric cancer was defined by TNM stage I or II, while advanced stage cancer was defined by TNM stage III or IV. Finally, study endpoints were also calculated using the stage of gastric cancer as the outcome (early vs. advanced stage).
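For reference, the secondary endpoints are standard functions of sensitivity and specificity:

```latex
+\mathrm{LR} = \frac{\text{sensitivity}}{1 - \text{specificity}}, \qquad
-\mathrm{LR} = \frac{1 - \text{sensitivity}}{\text{specificity}}, \qquad
\mathrm{DOR} = \frac{+\mathrm{LR}}{-\mathrm{LR}}
```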
Statistical analysis
STATA 12.0 was used to perform the meta-analysis. Sensitivity, specificity, +LR, -LR, and DOR with corresponding 95% CIs were calculated using a random effects model due to significant heterogeneity.
A SROC curve was plotted based on each analysis, and the AUC was used to evaluate the overall diagnostic test accuracy. Furthermore, the I² value was used to evaluate heterogeneity between studies. In addition, meta-regression and subgroup analysis were performed to identify potential sources of heterogeneity. The presence of publication bias was evaluated by Deeks' funnel plot analysis.
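A sketch of Deeks' funnel plot asymmetry test as commonly formulated (regressing the log diagnostic odds ratio on the inverse square root of the effective sample size, weighted by that sample size); the study counts below are hypothetical:

```python
import numpy as np
import statsmodels.api as sm

def deeks_test(tp, fp, fn, tn):
    """Deeks' funnel plot asymmetry test; slope p < 0.05 suggests publication bias."""
    tp, fp, fn, tn = (np.asarray(a, float) + 0.5 for a in (tp, fp, fn, tn))
    log_dor = np.log((tp * tn) / (fp * fn))
    ess = 4 * (tp + fn) * (fp + tn) / (tp + fn + fp + tn)  # effective sample size
    x = sm.add_constant(1 / np.sqrt(ess))
    fit = sm.WLS(log_dor, x, weights=ess).fit()
    return fit.pvalues[1]   # p-value on the 1/sqrt(ESS) slope

# Hypothetical per-study 2x2 counts:
p = deeks_test(tp=[30, 55, 12, 40], fp=[3, 5, 2, 4],
               fn=[20, 40, 15, 30], tn=[60, 80, 50, 70])
print(f"Deeks' test p = {p:.3f}")
```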
Figure 1: Flow diagram of studies identified in the meta-analysis.
Figure 2: Forest plot of individual study and pooled sensitivities (A) and specificities (B) of blood DNA methylation markers for detection of gastric cancer.
Figure 3: Summary ROC curve with confidence intervals and prediction regions around mean operating sensitivity and specificity points for detection of gastric cancer.
Figure 6: Deeks' funnel plot asymmetry test to assess publication bias in estimates of diagnostic odds ratio for (A) plasma-based testing, (B) presence of multiple DNA methylation markers.
"Medicine",
"Biology"
] |
TIGAR/AP-1 axis accelerates the division of Lgr5− reserve intestinal stem cells to reestablish intestinal architecture after lethal radiation
During radiologic or nuclear accidents, high-dose ionizing radiation (IR) can cause gastrointestinal syndrome (GIS), a deadly disorder that urgently needs effective therapy. Unfortunately, current treatments based on natural products and antioxidants have shown very limited effects in alleviating deadly GIS. Reserve intestinal stem cells (ISCs) and secretory progenitor cells are both reported to replenish damaged cells and contribute to crypt regeneration. However, the suppressed β-catenin/c-MYC axis within these slow-cycling cells leads to a limited regenerative response to restore intestinal integrity during fatal accidental injury. The current study demonstrates that post-IR overexpression of TIGAR, a critical downstream target of c-MYC in mouse intestine, mounts a hyperplastic response in Bmi1-creERT+ reserve ISCs, and thus rescues mice from lethal IR exposure. Critically, by eliminating damaging reactive oxygen species (ROS) yet retaining proliferative ROS signals, TIGAR-overexpression enhances the activity of activator protein 1, which is indispensable for initiating reserve-ISC division after lethal radiation. In addition, we identify that TIGAR-induction exclusively gears the Lgr5− subpopulation of reserve ISCs to regenerate crypts, and that intestinal TIGAR-overexpression displays intestinal reconstruction equivalent to reserve-ISC-restricted TIGAR-induction. Our findings imply that precise interventions targeting Lgr5− reserve ISCs are promising strategies for unpredictable lethal injury, and that TIGAR can be employed as a therapeutic target for unexpected radiation-induced GIS.
Introduction
Unexpected radiation exposure during terrorist events (e.g., the use of "dirty bombs") or industrial and nuclear accidents (such as the nuclear disasters in Chernobyl and Fukushima) is a current and continuing threat. Under homeostatic conditions, the rapid turnover of the intestinal epithelium is driven by leucine-rich repeat-containing G protein-coupled receptor 5 (Lgr5) high intestinal stem cells (ISCs), which are especially vulnerable to high-dose ionizing radiation (IR) 1,2 . A dose of 15 gray (Gy) of radiation is sufficient to abrogate the proliferative output of these mitotically active Lgr5 high ISCs, and thus causes severe acute damage to epithelial integrity 2 . Within 7 days of high-dose IR exposure, mice suffering from diarrhea, malabsorption, and weight loss almost always die of the complications known as gastrointestinal syndrome (GIS). Although prophylactic administrations have demonstrated some desirable effects in preventing stem cell exhaustion and epithelial disintegration induced by high-dose IR exposure [3][4][5], current post-IR treatments based on natural products and antioxidants have shown very limited effects in reversing stem cell death and the deadly GIS [6][7][8].
Besides the high-proliferating and radiosensitive Lgr5 high ISCs (i.e., crypt base columnar cells (CBCs)), a slow-cycling and injury-resistant pool of stem cells can be mobilized to divide when the CBCs are depleted 9,10 . These rare "+4" position cells mainly include the reserve ISCs marked by lineage tracing analysis with polycomb complex protein 1 (Bmi1)-creERT 11,12 and Lgr5 + label-retaining secretory progenitor cells, which are regarded as functionally distinct from reserve ISCs 13,14 . These radioresistant "+4" position cells are low-proliferative under homeostasis, but become proliferative from 3-4 days after high-dose radiation 15,16 . However, during lethal IR exposure, the CBCs are exhausted rapidly and the intestinal epithelium always disintegrates around 5 days after radiation, which happens even prior to effective "+4"-position-cell division and crypt regeneration. Hence, further elucidation of the mechanisms leading these quiescent cells to divide after lethal IR-injury is required for mitigating fatal GIS.
The Wnt/β-catenin/c-MYC axis plays a central role in regulating the division of ISCs. However, the suppressed β-catenin/c-MYC pathway within "+4" position cells results in a limited regenerative response within 3 days after lethal radiation 16 . Therefore, targeting the β-catenin/c-MYC signal after lethal IR-injury may be a potential countermeasure for accelerating the regeneration of these quiescent cells and the intestinal epithelium. TP53-induced glycolysis and apoptosis regulator (TIGAR), a downstream target of c-MYC in mouse intestinal crypts 17 , has been indicated to be a critical scavenger of reactive oxygen species (ROS), which promotes DNA damage repair and cellular redox balance during genotoxic stress 18,19 .
In the present study, we demonstrate that overexpression of TIGAR may be promising in ameliorating intestinal architecture and survival during unpredictable lethal injury. Mechanistically, TIGAR acts as a turn-on switch that facilitates cell division of Lgr5 − reserve ISCs in an activator protein 1 (AP-1) dependent manner, which remedies the β-catenin/c-MYC-inhibited "defect" of these cells and gears crypt regeneration efficiently after lethal IR-injury.
TIGAR-induction exclusively gears Lgr5− reserve ISCs to regenerate crypts
By asymmetric division, a single reserve ISC could generate a daughter cell and an Lgr5 + CBC to replenish the active stem cell compartment 15 . The active CBCs then either generate transit-amplifying (TA) cells, which divide rapidly to produce large quantities of enterocytes, or differentiate into secretory progenitor cells which commit to Paneth cells, goblet cells, or enteroendocrine cells 10 .
TIGAR accelerates reserve ISCs toward division by limiting damaging ROS
It was noteworthy that surplus TIGAR activity failed to gear reserve-ISC division under homeostatic conditions. Upon a pulse of 4-OHT in vitro and TIGAR-overexpression within reserve ISCs, the morphology and dynamics of TIGAR-overexpressing organoids remained the same as those of WT cohorts (Supplementary Fig. 3a, b). The reason might be the absence of another "initiating signal" essential for gearing reserve ISCs toward regeneration. It was reported that the proliferative ROS signal is pivotal for CBC division and proliferation 27 . With the administration of N-acetyl L-cysteine (NAC), a traditional antioxidant which indiscriminately scavenges both damaging ROS and pro-proliferating ROS, the regeneration of reserve ISCs was examined to determine whether IR-induced pro-proliferation ROS acted as the "initiating signal" in accelerating reserve-ISC division. As shown in Fig. 4a-d, NAC treatment could only drive Bmi1-creERT + cell division to a limited extent, far below that achieved by TIGAR-overexpression. In vitro analysis also revealed that the AP-1 activity within Bmi1-creERT + cells after 12-Gy irradiation was modestly enhanced by NAC administration, with the degree lagging far behind that of the TIGAR-overexpressing cohort (Fig. 4e, f). To confirm whether TIGAR-induced crypt regeneration followed a dose-dependent pattern, intestinal organoids derived from both homozygous Villin-creERT2;H11-Tigar +/+ (Villin-creERT2;H11-Tigar) mice (Fig. 4g) and heterozygous Villin-creERT2;H11-Tigar +/− mice (Fig. 4h) were irradiated and stimulated by 4-OHT immediately post-IR. Ki67-based immunofluorescence assays illustrated that both homozygous and heterozygous TIGAR-overexpressing miniguts moved into proliferative phases that were notable at 3-5 days after irradiation (Fig. 4i, j). The degree, however, was considerably higher in the homozygous TIGAR-overexpressing organoids than in the heterozygous ones (Fig. 4i, j), whose TIGAR expression level lay between that of WT and homozygous organoids. These data further confirmed that IR-induced pro-proliferating ROS, which is not scavenged by TIGAR, might be a critical "initiating signal" for gearing reserve ISCs toward regeneration. This mechanism also explains why preclinical treatments based simply on traditional antioxidants have had very limited effects in reversing intestinal disintegration and lethal GIS.
The survival rate (Fig. 5f) of Villin-creERT2;H11-Tigar mice resembled that of Bmi1-creERT;H11-Tigar mice after lethal irradiation, indicating that TIGAR-induction failed to promote Lgr5 + secretory progenitor cell division to support crypt regeneration, even when the reserve ISCs were already accelerated to proliferation. Furthermore, the data also revealed that the contribution of other epithelial populations to crypt regeneration might be vanishingly small. Indeed, although crypt cells such as Alpi-CreER-marked TA cells are reported to repopulate the crypt compartment upon IR-injury, little evidence supports their functional importance in epithelial regeneration 28 . Meanwhile, IR-evoked apoptosis of intestinal crypts within 24 h post-WAI was almost the same regardless of whether TIGAR was overexpressed (Fig. 5g-j), suggesting that the post-IR treatment applied in the current study did not attenuate WAI-induced crypt cell death. Hence, it was concluded that the amelioration of intestinal integrity induced by TIGAR-overexpression after lethal IR-injury was predominantly attributable to the accelerated division of Bmi1-creERT + reserve ISCs.
Discussion
A two-stem-cell model is supported by burgeoning studies of the small intestine, involving an actively cycling but radiosensitive stem cell and a long-lived, injury-resistant reserve pool of ISCs regarded as residing upstream of the high-proliferating CBCs 11,24,29,30 . Classic theories of radiobiology hold that a cell's radiosensitivity is positively correlated with its proliferative activity. Indeed, relieving the proliferative suppression of the reserve pool of ISCs before irradiation can result in enhanced epithelial radiosensitivity and aggravated GIS 16 . Conversely, if the suppressed cell division of low-proliferating stem-like cells cannot be released in time, intestinal integrity may also fail to regenerate after lethal IR-injury. Using "cre-loxp" mouse models, the present study indicates the potential of TIGAR-based post-IR treatment to accelerate reserve-ISC division and improve mouse survival under grievous GIS.
To establish the involvement of TIGAR in driving intestinal regeneration, we applied mouse models in which TIGAR could be efficiently induced 18 h after stimulation in vivo (Supplementary Fig. 4a-f). During WAI, the head, neck, thorax, and extremities were shielded to protect the bone marrow (Fig. 1b), thus inducing predominant GIS 3,4 . A single intraperitoneal injection of tamoxifen was performed immediately after 15-Gy WAI to induce TIGAR expression in time. After lethal WAI exposure, Bmi1-creERT;H11-Tigar and Villin-creERT2;H11-Tigar mice revealed equivalent amelioration of intestinal epithelial integrity (Fig. 5c-e) and mouse survival (Fig. 5f), indicating that TIGAR-overexpression primarily accelerated reserve ISCs toward division to reestablish the intestinal architecture after lethal irradiation. It is worth noting that biomarkers of "quiescent" reserve ISCs are also found in a subpopulation of Lgr5 + crypt cells, and around 20% of Lgr5 + intestinal cells are largely quiescent 13,24 . This quiescent Lgr5 + population is mainly comprised of the Lgr5 + (label-retaining) secretory progenitors and a subpopulation of the Bmi1-creERT + reserve ISCs. Critically, by lineage cell tracing analysis, the possibility that TIGAR-overexpressing quiescent Lgr5 + cells rescue mice from lethal GIS was ruled out (Fig. 2i-k; and Supplementary Fig. 1f-h). The mechanisms might be roughly attributed to the following two reasons. On the one hand, when compared with the exact quiescent Bmi1-creERT + reserve ISCs, quiescent Lgr5 + cells were reported to demonstrate far fewer tracing events in response to injury 13,14 , which precluded effective crypt regeneration after lethal WAI exposure. On the other hand, the Lgr5 + characteristics endowed these cells with higher radiosensitivity 31 , so that they had already lost viability or undergone apoptosis before TIGAR was induced (Fig. 5h, j). However, the present study does not eliminate the indispensability of the de novo-generated Lgr5 high CBCs in intestinal regeneration after lethal IR-injury 32 .
Based on lineage tracing analysis, a recent study indicated that Bmi1 + cancer stem cells possess an increased AP-1 activity that drives tumor recurrence 33 , suggesting that AP-1 plays critical roles in endowing Bmi1-creERT + stem cells with proliferative potential. In the present study, a classical inhibitor of AP-1, 3-PA, was used to probe the mechanism of TIGAR-induced proliferation after lethal IR. A significant abrogation of Bmi1-creERT + cell division, especially the first asymmetric division at the early stage (1 day) post-IR, was observed when the transcriptional activity of AP-1 was inhibited by 3-PA (Fig. 3j, k). Interestingly, AP-1 abolishment only dramatically abrogated the Bmi1-creERT + lineage after irradiation, but did not affect the proliferative activity of CBCs under homeostatic conditions (Supplementary Fig. 2c, d). This finding suggests that AP-1 activity is dispensable for CBC-like stem cells during homeostasis, which might be attributable to the high proliferative activity of Lgr5 high CBCs endowed by Wnt/β-catenin signals 31,[34][35][36] . This also suggests that TIGAR-induction remedies the β-catenin-inactivated "defect" of the low-proliferating reserve ISCs, which facilitates the acceleration of cell division and crypt regeneration after lethal IR-injury (Fig. 6). Mechanistically, TIGAR-induced activation of c-Fos/AP-1 might be attributed to increased phosphorylation of c-Fos, rather than upregulation of c-Fos expression. In conclusion, the current study indicates that during unexpected disasters, quiescent Lgr5 − reserve ISCs can be awakened in time by TIGAR/AP-1 activation to reestablish intestinal architecture and improve mouse survival. Meanwhile, our work reveals an unexplored role of TIGAR in accelerating reserve ISCs toward regeneration, and the capability of TIGAR-induction to activate AP-1 demonstrates its significant advantage over traditional antioxidant treatments (Fig. 6).
Materials and methods
If not otherwise stated, only male mice were used, and littermates were randomly and blindly allocated to experimental groups. All experiments were conducted at 8-10 weeks of age. Genotyping was performed following the protocols of the Jackson Laboratory. The study was conducted in compliance with local animal welfare laws, guidelines, and policies. All procedures were approved by the ethics committee of Soochow University (Approval No. ECSU-2019000150).
Mouse irradiation and tamoxifen administration
Mice weighing between 24 and 27 g were anesthetized and treated with a single dose of 15-Gy WAI at a dose rate of 1.6 Gy/min using an X-RAD 320iX Biological Irradiator (Precision X-ray, North Branford, CT, USA). A 3-cm area of each mouse containing the gastrointestinal tract was irradiated (irradiation field), shielding the head, neck, and upper thorax as well as the lower and upper extremities, and thereby protecting a significant portion of the bone marrow. Immediately after irradiation, mice were injected intraperitoneally with tamoxifen. Tamoxifen (Sigma, Cat#T5648) was dissolved in corn oil (Sigma, Cat#C8267) at a final concentration of 20 mg/ml. Cre enzyme was induced by a single injection of tamoxifen at a dose of 4.5 mg per 20 g body weight. The schedules for tamoxifen administration and radiation, as well as mouse grouping, are provided in the relevant figure legends. If not otherwise stated, H11-Tigar mice were used as controls and were given similar doses of tamoxifen.
Survival rate and small-intestine length
After 15-Gy WAI, the survival rate of the mice was monitored every day for up to 30 days. For intestinal length measurement, mice that died before day 5 post-WAI were excluded, and the surviving mice were euthanized on day 5 after 15-Gy WAI to measure small-intestine length.
Histology of small intestine
On days 1, 3, and 5 after 15-Gy WAI, mice were euthanized, and the proximal small intestines were excised for histology. Small intestine tissues were fixed in 10% neutral-buffered formalin overnight. After embedding in paraffin, tissues were cut into 5-μm sections for haematoxylin and eosin (H&E) staining and observation.
Crypt isolation
The mouse small intestine was cut open longitudinally and washed with cold PBS. The villi were collected by scraping the intestine with a microscope slide and stored for protein analysis. The remaining small intestine was cut into 5-mm pieces and incubated in PBS with 2 mM EDTA for 30 min at 4°C. After incubation, the tissue fragments were separated by vigorous shaking. The supernatant, enriched for crypts, was passed through a 70-μm cell strainer (BD Falcon, Cat#352350). After centrifugation, the final fraction of intestinal crypts was used for further culture.
Organoid irradiation and transfection
After 12-Gy irradiation in vitro, transfection with Tigar-overexpressing adenovirus (Adv-CMV-Tigar-3flag) was performed by mechanically separating the organoids from Matrigel, and 4-OHT (10 nM) was added to the medium soon after replanting the organoids. After 24 h, the medium was replaced with normal organoid culture medium.
Fluorescence-activated cell sorting (FACS)
The intestine was cut open longitudinally and incubated with 2 mM EDTA solution at 4°C for 30 min to isolate intestinal crypts. To generate a single-cell suspension, cells were incubated with Accutase (BD Biosciences, Cat#561527) at 37°C for 10 min. Flow cytometry analysis was performed with a CELL SORTER SH800S (SONY, Japan). Cells were gated for single cells based on the profiles of forward-scatter area versus backward-scatter area (FSC-A vs. BSC-A) and forward-scatter height versus forward-scatter width (FSC-H vs. FSC-W). The nozzle size for all sorting was 100 µm.
Analysis of AP-1 activation
The DNA-binding activity of AP-1 was measured using TransAM kits (Active Motif, Cat#44096). Nuclear extracts of Bmi1-creERT + cells containing c-Fos/AP-1 factors were added to multi-well plates precoated with consensus double-stranded DNA oligomers. After incubation, the transcription factor bound to the DNA sequences was detected using antibodies against c-Fos according to the manufacturer's protocol. The absorbance was measured by a Microplate Reader (BioTek, Synergy2, Winooski, VT, USA).
3-PA and NAC treatments
3-PA (MedChem Express, Cat#HY-12270) was dissolved in polyvinylpyrrolidone. For in vivo administration, mice were given 3-PA (120 mg/kg body weight, i.g.) daily for 4 consecutive days before 15-Gy WAI and the subsequent tamoxifen induction. For in vitro administration, 3-PA (10 μM) was added to the organoid growth medium 1 day before 12-Gy irradiation. For NAC treatment, the organoids were treated with NAC (Sigma, Cat#A8199) soon after irradiation at concentrations of 1.5 mM or 4.0 mM, respectively.
Statistical analysis
Data were expressed as mean ± SD from three independent determinations. Differences between groups with similar variance were analyzed by Student's t test. Kaplan-Meier survival analysis with log-rank comparison was performed for survival studies. Asterisks represent p values as follows: *p < 0.05, **p < 0.01, and ***p < 0.001.
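A minimal sketch of the survival comparison described here, using the `lifelines` package for the log-rank test and SciPy for the t test; all numbers are invented placeholders, not data from the study:

```python
import numpy as np
from scipy import stats
from lifelines.statistics import logrank_test

# Hypothetical 30-day survival data (days to death; event=0 means alive at day 30).
days_ctrl   = np.array([5, 6, 5, 7, 6, 8, 5, 6])
event_ctrl  = np.ones_like(days_ctrl)
days_tigar  = np.array([9, 30, 12, 30, 30, 14, 30, 11])
event_tigar = np.array([1, 0, 1, 0, 0, 1, 0, 1])

# Kaplan-Meier comparison via the log-rank test, as in the survival analysis above.
res = logrank_test(days_ctrl, days_tigar,
                   event_observed_A=event_ctrl, event_observed_B=event_tigar)
print(f"log-rank p = {res.p_value:.4f}")

# Two-group comparison of a continuous readout (e.g., intestine length) by t test.
t, p = stats.ttest_ind([28.1, 27.5, 29.0], [33.2, 34.0, 32.5])
print(f"t test p = {p:.4f}")
```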
"Medicine",
"Biology",
"Environmental Science"
] |
Inventory Management System for MSMEs
This research aims to develop a web-based inventory system that focuses on affordability, user-friendliness, and functionality tailored to the needs of Micro, Small, and Medium Enterprises (MSMEs). Through in-depth analysis of MSMEs' specific inventory management requirements, the system not only provides a comprehensive solution for stock management but also integrates the Apriori algorithm for associative analysis of inventory data. The system's development adopts an Agile approach, allowing flexible adaptation to changing needs throughout the development process. Key features include efficient stock management, intuitive Point of Sales (POS) transaction recording, and customizable inventory reporting. The implementation of this system is expected to enhance MSMEs' efficiency in inventory management, provide insightful data to support more informed decision-making, and positively contribute to strengthening competitiveness and growth in a dynamic and competitive business environment.
Introduction
Retail business players in the MSME category are often faced with challenges in planning, controlling stock, and managing finances. One of the main obstacles faced is limited access to adequate information. This challenge arises because the transaction recording system is inadequate, especially in recording purchases, sales, and inventory. This becomes an obstacle to obtaining the information sources needed to optimize business operations.
As a basis for this research, a study was carried out involving a literature review, namely the design of an information system for inventory of goods in MSMEs using Microsoft Excel at the Sumber Anugerah Motor Workshop. Excel was chosen because of problems in administration and inventory management. The qualitative descriptive research method was carried out at the Sumber Anugerah Motor Workshop, Buha Village, Environment 1. The results show that the system supports inventory control through computerized financial reports. Recommendations involve the adoption of the designed application programs and the training of employees in their use [1].
Furthermore, there is research on the application of the Apriori algorithm to web-based MSME inventory applications. The method used in that research involved analysis, system planning, and several diagrams such as use case diagrams, activity diagrams, sequence diagrams, and class diagrams. That research resulted in a web-based inventory application created for MSMEs [2]. Another study, an inventory information system for MSME resellers of basic goods, used the waterfall model and likewise produced a web-based inventory information system for MSMEs [3]. A widely used application of information technology in the business world is web-based point-of-sale, inventory, and cashier information systems [4]. The application of technology, including information technology, by MSME actors to their businesses has an effect on income [5].
Therefore, this research aims to develop and implement a web-based inventory system specifically designed to support MSMEs. This system will prioritize affordability and ease of use. With this solution, it is hoped that MSMEs can increase efficiency in inventory management, reduce overhead costs, and optimize sales opportunities. In addition, this system is also expected to provide better insight into inventory performance, enabling MSME owners to make more informed decisions.
Apriori Algorithm
An algorithm is an effective method expressed as a finite series of well-defined instructions for calculating a function. Association analysis, or association rule mining, is a data mining technique for finding associative rules among combinations of items. An example of an associative rule from purchasing analysis in a supermarket is knowing how likely a customer is to buy bread together with milk [6].
The Apriori algorithm is one of the most popular algorithms in the association group of data mining. This algorithm is suitable when there are several item relationships to analyze. One area where it can be applied is the grouping of inventory data [7].
Agile Method
Agile methods are rapid software development methods that adapt quickly to changing needs. The main idea of Agile development is iterative application development and collaboration. Documentation is reduced so the team can concentrate on working on the application, with close collaboration and communication between two or more people working on a single feature. The goal of Agile development, carried out in iterations, is to respond to and handle each change in a flexible way, thereby reducing project duration and ensuring client satisfaction. Agile development methods are well suited to small projects and small teams [8].
Inventory
Inventory includes all goods or materials needed in the production and distribution process that are used for further processing or sold [9].
Website
A website is a collection of electronic pages that are interconnected and can be accessed via the Internet [10]. Others define a website as a collection of web pages that can be accessed via a specific domain or address on the internet. Each web page contains information in the form of text, images, videos, or other multimedia elements. This information is structured and organized so as to provide an orderly and informative user experience. Websites can cover various topics, from news, education, entertainment, and business to online communities [11].
Data Collection
Sales transaction data were collected to understand purchasing patterns in MSMEs. The sales data form the basis for developing association models with the aim of finding relationships between items that have been sold. This modeling process involves calculating support values and confidence values, which are the basis for analyzing association relationships between items. An association rule is considered significant if the support value exceeds the minimum support value and the confidence value exceeds the specified minimum confidence value. In this context, the Apriori algorithm is used to identify relationships between items, and the analysis results are used by the admin to provide recommendations for potentially related items.
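The support and confidence values referred to above are the standard association-rule quantities; with T the set of transactions:

```latex
\mathrm{support}(A) = \frac{|\{t \in T : A \subseteq t\}|}{|T|}, \qquad
\mathrm{confidence}(A \Rightarrow B) = \frac{\mathrm{support}(A \cup B)}{\mathrm{support}(A)}
```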
Implementation of Apriori Algorithms
The author uses sales data from the period 10 November 2023 to 21 November 2023.
Table 1. Sales Transactions
Next, the frequency of appearance of each individual item (1-itemset) in the toy transaction dataset is identified. For example, Lego appears 7 times, Barbie appears 8 times, and so on. The detailed data are in Table 2.
Table 2. Itemset 1 support values
The next step involves forming itemset 2 (2-itemset) by combining items from itemset 1. The author calculates the frequency of appearance of each itemset 2 in transactions. For example, {Lego, Barbie} appears 4 times, {Barbie, Puzzle} appears 5 times, and so on. The details are in Table 3. In the final step, the author forms association rules by setting support and confidence thresholds. In this example, the resulting association rule is that when customers buy Barbies and Action Figures, they are also more likely to buy Puzzles, with a confidence level of 75.0%. The details are in Table 5.
Table 5. Association rule results
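A self-contained sketch of the level-wise support/confidence computation illustrated in Tables 1-5 (the transactions below are invented stand-ins for the paper's sales data, and the enumeration omits Apriori's candidate-pruning optimization):

```python
from itertools import combinations

# Hypothetical toy-store transactions standing in for the Nov 2023 sales data.
transactions = [
    {"Lego", "Barbie", "Puzzle"},
    {"Barbie", "Action Figure", "Puzzle"},
    {"Lego", "Action Figure"},
    {"Barbie", "Puzzle"},
    {"Lego", "Barbie", "Action Figure", "Puzzle"},
    {"Barbie", "Action Figure"},
]

def support(itemset):
    # Fraction of transactions that contain every item of the itemset.
    return sum(itemset <= t for t in transactions) / len(transactions)

min_support, min_confidence = 0.3, 0.6
items = sorted(set().union(*transactions))

# Level-wise search: keep only 1-, 2-, and 3-itemsets meeting minimum support.
frequent = {}
for k in (1, 2, 3):
    for combo in combinations(items, k):
        s = support(set(combo))
        if s >= min_support:
            frequent[frozenset(combo)] = s

# Derive rules A -> B and keep those meeting minimum confidence.
for itemset, s in frequent.items():
    if len(itemset) < 2:
        continue
    for r in range(1, len(itemset)):
        for antecedent in combinations(itemset, r):
            a = frozenset(antecedent)
            if a in frequent:
                conf = s / frequent[a]
                if conf >= min_confidence:
                    print(f"{set(a)} -> {set(itemset - a)} "
                          f"(support={s:.2f}, confidence={conf:.2f})")
```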
System View
In the image below, you can see the interface of the Inventory Management System for MSMEs.This system is designed to efficiently manage goods data and sales data.
Conclusion
This research succeeded in developing a web-based Inventory Management System specifically designed to support Micro, Small, and Medium Enterprises (MSMEs). With an Agile approach, this system offers affordable, easy-to-use, and functional solutions suited to the needs of MSMEs. The Apriori algorithm is used in the analysis of inventory data associations, providing the ability to uncover patterns of relationships between goods. The results showed that implementation of the system can improve the efficiency of MSMEs in inventory management, with key functions such as stock management, sales recording through an intuitive POS, and customized inventory reporting. In addition, the system can provide goods recommendations to customers through data association analysis, improve sales strategies, and deliver significant added value for MSMEs. The suitability of the system to the MSME business environment is reflected in its affordability and ease of use, which effectively support the growth and competitiveness of MSMEs in a dynamic business environment.
Figure 1. Research stages.
a. Problem Identification: Identify the problem or need to be solved by conducting an in-depth analysis related to the variables to be studied.
b. Data Collection: Data collection is divided into several processes:
1. Interview: Conduct interviews with stakeholders, experts, or related parties, namely MSMEs, to gain valuable perspectives and inputs.
2. Literature Study: Collect data from scientific sources, journals, books, and related research to understand the theoretical basis and the latest findings related to the problem.
c. System Development: Apply Agile methods in system development. The stages include:
1. Product Backlog: Create a priority list of features and tasks that must be implemented in the system.
2. Sprints: Break down work into short iterations called sprints, each lasting 2 weeks.
3. Daily Scrum: Daily meetings to discuss progress, bottlenecks, and teamwork plans.
4. Sprint Review: Evaluate the results of completed sprints to ensure features and functionality have been implemented properly.
d. System Testing and Validation: Test the system to ensure performance, security, and functionality meet expectations.
e. Conclusions and Suggestions: Draw conclusions from the results of the study, highlighting the main findings and their practical implications, and provide recommendations for further actions or development.
Figure 7. System view
Table 3. Itemset 2 support values
Next, itemset 3 (3-itemset) is created by combining itemset 2. The author calculates the frequency of appearance of each itemset 3 in transactions. For example, {Lego, Barbie, Puzzle} appears 2 times, {Barbie, Action Figure, Puzzle} appears 3 times, and so on. The details are in Table 4.
Table 4. Itemset 3 support values
"Business",
"Computer Science"
] |
Analysis of Process Data of PISA 2012 Computer-Based Problem Solving: Application of the Modified Multilevel Mixture IRT Model
Computer-based assessments provide new insights into cognitive processes related to task completion that cannot be easily observed using paper-based instruments. In particular, such new insights may be revealed by time-stamped actions, which are recorded as computer log files in the assessments. These actions, nested within individuals, are logically interconnected. This interdependency can be modeled straightforwardly in a multilevel framework. This study draws on process data recorded in one of the complex problem-solving tasks (Traffic CP007Q02) in the Programme for International Student Assessment (PISA) 2012 and proposes a modified Multilevel Mixture IRT model (MMixIRT) to explore problem-solving strategies. It was found that the model can not only explore whether the latent classes differ in their response strategies at the process level, but also provide ability estimates at both the process level and the student level. The abilities at the two levels differ across latent classes, and they are related to operational variables such as the number of resets or clicks. The proposed method may allow for better exploration of students' specific strategies for solving a problem, and of the strengths and weaknesses of those strategies. Such findings may be further used to design targeted instructional interventions.
INTRODUCTION
The problem-solving competence is defined as the capacity to engage in cognitive processing to understand and resolve problem situations where a solution is not immediately obvious. It includes the willingness to engage in such situations in order to achieve one's potential as a constructive and reflective citizen (OECD, 2014; Kurniati and Annizar, 2017). Problem solving can be conceptualized as a sequential process in which the problem solver must understand the problem, devise a plan, carry out the plan, and monitor progress in relation to the goal (Garofalo and Lester, 1985; OECD, 2013). These problem-solving skills are key to success in all pursuits, and they can be developed in school through curricular subjects. Therefore, it is no surprise that problem-solving competency is increasingly becoming the focus of many testing programs worldwide.
Advances in technology have expanded opportunities for educational measurement. Computer-based assessments, such as simulation-, scenario-, and game-based assessments, constantly change item design, item delivery, and data collection (DiCerbo and Behrens, 2012; Mislevy et al., 2014). These assessments usually provide an interactive environment in which students can solve a problem by choosing among a set of available actions and taking one or more steps to complete a task. All student actions are automatically recorded in system logs as coded and time-stamped strings (Kerr et al., 2011). These strings, called process data, can be used for instant feedback to students, or for diagnostic and scoring purposes at a later time (DiCerbo and Behrens, 2012). For example, the problem-solving assessment of PISA 2012, which was computer-based, used simulated real-life problem situations, such as a malfunctioning electronic device, to analyze students' reasoning skills, problem-solving ability, and problem-solving strategies. The computer-based assessment of problem solving not only ascertains whether students produce correct responses to the items, but also records a large amount of process data on how these items are answered. These data make it possible to understand students' strategies toward the solution. To evaluate students' higher-order thinking, more and more large-scale assessments of problem solving are becoming computer-based.
Recent research has focused on characterizing and scoring process data and using them to measure individual students' abilities. Characterizing process data can be conducted via a variety of approaches, including visualization, clustering, and classification (Romero and Ventura, 2010). DiCerbo et al. (2011) used digraphs to visualize and analyze sequential process data from assessments. Bergner et al. (2014) used cluster analysis to classify groups that behaved similarly. Some other researchers used decision trees, neural networks, and Bayesian belief networks (BBNs) (Romero et al., 2008; Desmarais and Baker, 2012; Zhu et al., 2016) to classify the performance of problem solvers (Zoanetti, 2010) and to predict their success (Romero et al., 2013). Compared to characterizing process data, research on scoring process data is very limited. Hao et al. (2015) introduced "the editing distance" to score students' behavior sequences based on the process data in a scenario-based task of the National Assessment of Educational Progress (NAEP). Meanwhile, these process data have been used in psychometric studies. Researchers analyzed students' sequential response process data to estimate their ability by combining a Markov model and item response theory (IRT) (Shu et al., 2017). It is noteworthy that all these practices have examined process data that describe students' sequential actions to solve a problem.
All the actions, recorded as process-level data nested within individuals, are logically interconnected. This interdependency allows straightforward modeling in a multilevel framework (Goldstein, 1987; Raudenbush and Bryk, 2002; Hox, 2010). This framework is similar to those used in longitudinal studies, yet with some differences. In longitudinal studies, measurements are typically consistent so as to show the development pattern of certain traits. For process data, however, actions typically differ within each individual. These successive actions are used to characterize individuals' problem-solving strategies.

It is common in computer-based assessments that a nested data structure exists. To appropriately analyze process data (e.g., time series of actions) within a nested structure (e.g., processes within individuals), the multilevel IRT model can be modified by allowing process data to be a function of the latent traits at both the process and individual levels. It is noteworthy that in the modified model, the concept of an "item" in IRT is changed to each action in an individual's responses, which is scored based on certain rules.
With respect to the assessment of problem-solving competency, the focus of this study is the ability estimate at the student level. We were not concerned with an individual's ability as reflected in each single action at the process level, since the task needs to be completed by taking a series of actions. Even for individuals with high problem-solving ability, the first few actions may not accurately reflect the test taker's ability. As a result, more attention was put on the development of ability at the process level because it can reveal students' problem-solving strategies. Mixture item response theory (MixIRT) models have been used to describe important effects in assessment, including the differential use of response strategies (Mislevy and Verhelst, 1990; Rost, 1990; Bolt et al., 2001). The value of MixIRT models lies in that they provide a way of detecting different latent groups which are formed by the dimensionality arising directly from the process data. These groups are substantively useful because they reflect how and why students responded the way they did.

In this study, we incorporated the multilevel structure into a mixture IRT model and used the modified multilevel mixture IRT (MMixIRT) model to detect and compare latent groups in the data that have differential problem-solving strategies. The advantage of this approach is the use of latent groups. Although they are not immediately observable, these latent groups, which are defined by certain shared response patterns, can help explain process-level performance, showing how members of one latent group differ from another. The approach proposed in this study was used to estimate abilities at both the process and student levels, and to classify students into different latent groups according to their response strategies.

The goal of this study is to illustrate the steps involved in applying the modified MMixIRT model to a computer-based problem-solving assessment and then to present and interpret the results. Specifically, this article focuses on (a) describing and demonstrating the modified MMixIRT model using one task's process data from the PISA 2012 problem-solving assessment; (b) interpreting the different action patterns; and (c) analyzing the correlation between the characteristics of different strategies and task performance, as well as some other operational variables such as the number of resets or clicks. All the following analyses were based on one sample data set.
Problem Solving Item and Log Data File
This study illustrates the use of the modified MMixIRT model in analyzing process data through one of the problem-solving tasks in PISA 2012 (Traffic CP007Q02). The task is shown in Figure 1. In this task, students were given a map and the travel time on each route, and they were asked to find the quickest route from Diamond to Einstein, which takes 31 min.
The data are from the task's log file (CBA_cp007q02_logs12_SPSS.SAV, data source: http://www.oecd.org/pisa/data/) (an example of the log data file is shown in Appendix 1). The data file contains four variables associated with the process. The "event" variable refers to the type of event, which may be either system generated (start item, end item) or student generated (e.g., ACER_EVENT, Click, Dblclick). The "time" variable is the event time for this item, given in seconds since the beginning of the assessment, with all click and double-click events included. The "event_value" variable is recorded in two rows, as a click event involves selecting or de-selecting a route on the map. For example, in the eleventh row, where the state of the entire map is given, 1 in the sequence means that the route was selected, and 0 means that it was not; the twelfth row records an event involving highlighting or un-highlighting a route of the map, representing the same click event, in the form "hit_segment name" (the notes on the log file data can be downloaded from http://www.oecd.org/pisa/data/). All the "click" and "double-click" events represent that a student performed a click action that is not related to selecting a route. Table 1 shows the label, the route, and the correct state of the entire set of selected routes.
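As a rough illustration of how such a log file can be read and reduced to the route-selection events, consider the sketch below. The file path and column names follow the description above, but the exact layout of the released file is an assumption here.

```python
import pandas as pd  # reading the .SAV file also requires the pyreadstat package

# Load the released SPSS log file (path as given in the text).
logs = pd.read_spss("CBA_cp007q02_logs12_SPSS.SAV")

# Keep only the ACER_EVENT rows, i.e., student route (de)selections,
# dropping the redundant plain click and double-click events.
routes = logs[logs["event"] == "ACER_EVENT"].copy()

# Order the actions by the event time stamp so steps appear in sequence.
routes = routes.sort_values("time")
print(routes[["event", "time", "event_value"]].head())
```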
Sample
The study sample was drawn from the PISA 2012 released dataset, consisting of a total of 413 students from 157 American schools who participated in the Traffic problem-solving assessment (47.2% female). The average age of the students was 15.80 years (SD = 0.29), ranging from 15.33 to 16.33 years.
For the traffic item response, the total effective sample size under analysis was 406, after excluding seven incomplete responses. For the log file of the process record, there were 15,897 records in the final data file, and the average record number for each student was 39 (SD = 33), ranging from 1 to 183. The average response time was 672.64 s (SD = 518.85 s), ranging from 58.30 to 1995.20 s.
Process-Level Data Coding
In this task's log file, "ACER_EVENT" is associated with "click." In this study, however, we only collected the information of ACER_EVENT and deleted the redundant click data. Then, we split and rearranged the data by routes, making each row represent a step in the process of an individual student, and each column represent a route (0 for deselecting, and 1 for selecting). Table 2 shows part of the reorganized data file, indicating how an individual student selected each route in each step. For example, the first line represents that student 00017 selected P2 in his/her first step. Process data were first recoded for analysis purposes. Twenty-three variables were created to represent the total number of available routes that could possibly be selected (similar to 23 items). The correct way to solve this problem is to select the following six routes: Diamond-Nowhere-Sakharov-Market-Lee-Mandela-Einstein (i.e., P1, P5, P7, P8, P13, and P17). For the correct routes, the scored response was 1 if the route was selected, and 0 otherwise; for the incorrect routes, the scored response was 0 if the route was selected, and 1 otherwise. Each row in the data file represents an effective step (or action) a student took during the process. In each step, when a route was selected or deselected, the response for this route was recoded accordingly. When a student finished an item, all the steps during the process were recorded. Therefore, for the completed data set, the responses on the 23 variables were obtained and the steps were nested within students.
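A minimal sketch of this recoding rule, with the six correct routes taken from the text, might look as follows; the example step is hypothetical.

```python
# The 23 routes and the six correct ones, per the task description above.
ALL_ROUTES = [f"P{i}" for i in range(1, 24)]
CORRECT_ROUTES = {"P1", "P5", "P7", "P8", "P13", "P17"}

def score_step(selected_routes):
    """Recode one step's map state into 23 scored responses.

    A correct route scores 1 when selected (0 otherwise); an incorrect
    route scores 1 when left unselected (0 otherwise).
    """
    return {
        route: int(route in selected_routes) if route in CORRECT_ROUTES
        else int(route not in selected_routes)
        for route in ALL_ROUTES
    }

# Hypothetical first step: the student has selected only P2 (cf. student 00017).
scores = score_step({"P2"})
print(scores["P1"], scores["P2"])  # 0 (correct route unselected), 0 (wrong route selected)
```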
The Modified MMixIRT Model Specification
The MMixIRT model has mixtures of latent classes at the process level or at both the process and student levels. It assumes that heterogeneity may exist in response patterns at the process level and therefore should not be ignored (Mislevy and Verhelst, 1990; Rost, 1990). Latent classes can capture the interactions among the responses at the process level (Vermunt, 2003). It is interesting to note that if no process-level latent classes exist, there are no student-level latent classes either. The reason is that student-level units are clustered based on the likelihood of the processes belonging to one of the latent classes. For this particular consideration, the main focus in this study is to explore how to classify the process-level data, and the modified MMixIRT model focuses only on latent classes at the process level.
The MMixIRT model accounts for the heterogeneity by incorporating categorical or continuous latent variables at different levels. Because mixture models have categorical latent variables and item response models have continuous latent variables, latent variables at each level may be categorical or continuous. In this study, the modified MMixIRT includes both categorical (latent class estimates) and continuous latent variables at the process level and only continuous (ability estimates) latent variables at the student level.
The modified MMixIRT model for process-level data is specified as follows.

Process level. For the process level, Equation (1) is the class-specific two-parameter logistic (2PL) model:

$$P(y_{jki} = 1 \mid \theta_{jkg}, C_{jk} = g) = \frac{\exp[\alpha_{ig.W}(\theta_{jkg} - \beta_{ig})]}{1 + \exp[\alpha_{ig.W}(\theta_{jkg} - \beta_{ig})]} \qquad (1)$$

Here i indexes the routes (i = 1, ..., I); k indexes the students (k = 1, ..., K); j indexes the jth valid step of a student during the response process (j = 1, ..., J_k, where J_k is the total number of steps of the kth student); and g indexes the latent classes (C_jk = 1, ..., g, ..., G, where G is the number of latent classes). C_jk is a categorical latent variable at the process level for the jth valid step of student k, which captures the heterogeneity of the route selections in each step. P(y_jki = 1 | θ_jkg, C_jk = g) is the probability of selecting route i in the jth step of student k; α_ig.W is the process-level discrimination parameter in class g (W denotes the within level); β_ig is the location parameter in class g; and θ_jkg is the latent ability of examinee k at a specific step j during the process of selecting routes, which is called the process ability in this study (θ_jkg ∼ N(μ_jkg, σ²_jkg)). The process abilities across different latent classes are constrained to follow a standard normal distribution (θ_jk ∼ N(0, 1)).

Equation (2) gives the joint probability of the actions in the jth step of student k as a mixture over the latent classes:

$$P(y_{jk1} = \omega_1, y_{jk2} = \omega_2, \ldots, y_{jkI} = \omega_I) = \sum_{g=1}^{G} \gamma_{jkg} \prod_{i=1}^{I} P(y_{jki} = \omega_i \mid \theta_{jkg}, C_{jk} = g) \qquad (2)$$

where ω_i denotes either selected or not selected for the ith route. For the correct routes, 1 represents that the route was selected, and 0 otherwise; for the incorrect routes, 0 represents that the route was selected, and 1 otherwise. γ_jkg is the proportion of the jth step in each latent class, with Σ_{g=1}^{G} γ_jkg = 1. As can be seen from Equation (2), the actions y_jki are assumed to be independent of each other given class membership, which is known as the local independence assumption for mixture models.

Student level. For the student level, Equation (3) is the 2PL model for the final responses:

$$P(y_{ki} = 1 \mid \theta_k) = \frac{\exp[\alpha_{i.B}(\theta_k - \beta_i)]}{1 + \exp[\alpha_{i.B}(\theta_k - \beta_i)]} \qquad (3)$$

where α_i.B is the item discrimination parameter (B denotes the between level), β_i is the item location parameter, which is associated with the responses at the final step of the item, and θ_k is the ability estimate at the student level based on the final step of the process, which represents the problem-solving ability of student k in this study (θ_k ∼ N(0, 1)).
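For intuition, the class-specific 2PL of Equation (1) and the mixture joint probability of Equation (2) can be evaluated numerically as in the sketch below; all parameter values are made-up illustrations (the actual estimation was done in Mplus).

```python
import numpy as np

def p_2pl(theta, alpha, beta):
    """Class-specific 2PL selection probability, Equation (1)."""
    return 1.0 / (1.0 + np.exp(-alpha * (theta - beta)))

def step_joint_prob(y, theta_g, alpha_g, beta_g, gamma_g):
    """Mixture joint probability of one step's scored responses, Equation (2).

    y       : 0/1 scored responses for the I routes
    theta_g : process ability under each of the G classes
    alpha_g : discrimination parameters, shape (G, I)
    beta_g  : location parameters, shape (G, I)
    gamma_g : class proportions for this step, summing to 1
    """
    total = 0.0
    for g in range(len(gamma_g)):
        p = p_2pl(theta_g[g], alpha_g[g], beta_g[g])  # P(y_i = 1 | class g)
        total += gamma_g[g] * np.prod(np.where(y == 1, p, 1 - p))
    return total

# Toy example with I = 3 routes and G = 2 latent classes; all values are assumptions.
y = np.array([1, 0, 1])
theta_g = np.array([0.5, -1.0])
alpha_g = np.array([[1.2, 0.8, 1.0], [1.0, 1.1, 0.9]])
beta_g = np.array([[0.0, 0.3, -0.2], [0.5, -0.4, 0.1]])
gamma_g = np.array([0.6, 0.4])
print(step_joint_prob(y, theta_g, alpha_g, beta_g, gamma_g))
```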
Figure 2 demonstrates a modified two-level mixture item response model with within-level latent classes. The squares in the figure represent item responses, the ellipses represent latent variables, and the 1 inside the triangle represents a vector of 1s. As shown in the figure, the response for each route in the jth step [y_jk1, ..., y_jki, ..., y_jkI] is explained by both categorical and continuous latent variables (C_jk and θ_jkg, respectively) at the process level, and the final response of students for each route [y_k1, ..., y_ki, ..., y_kI] is explained by a continuous latent variable (θ_k) at the student level. The arrows from the continuous latent variables to the items (routes) represent item (route) discrimination parameters (α_ig.W at the process level and α_i.B at the student level), and the arrows from the triangle to the item responses represent item location parameters at both levels. The dotted arrows from the categorical latent variable to the other arrows indicate that all item parameters are class-specific.
It should be noted that the MMixIRT model differs from the traditional two-level mixture item response model in the definition of the latent variables at the between level. In the standard MMixIRT model, the between-level latent variables are generally obtained from measurements made by the within-level response variables [y_jk1, ..., y_jki, ..., y_jkI] on between-level latent variables (Lee et al., 2017). In this study, the process-level data mainly reflect the strategies for problem solving, while the responses at the last step represent students' final answers on this task. Therefore, students' final responses are used to estimate their problem-solving abilities (the latent variable at the between level, i.e., the ability at the student level) in the modified MMixIRT model. The Mplus software (Muthén and Muthén, 1998-2015) was used to estimate the parameters of the modified MMixIRT model, as specified above. The detailed syntax is presented in Appendix 5.

Table 3 shows the proportion of each route selected by the students in the correct group and in the wrong group, respectively. The correct group consists of students who selected the right routes, and the wrong group refers to students who failed to do so. There are a total of 476 students, with 377 in the correct group and 99 in the wrong group. The results show that most of the students in the correct group selected the right routes, while a large number of students in the wrong group selected the wrong routes. To further explore the differences in the proportion of students selecting the wrong routes in the two groups, χ²-tests were conducted. No significant differences were found between the correct group and the wrong group in terms of the proportion of students who clicked four of the wrong routes, including P4.
Model Selection
The determination of the number of latent classes has been discussed in many studies (Tofighi and Enders, 2008; Li et al., 2009; Peugh and Fan, 2012). Several statistics are often computed to compare the relative fits of mixture IRT models. Akaike's (1974) information criterion (AIC) incorporates a penalty function for over-parameterization based on model complexity. A criticism of AIC has been that it is not asymptotically consistent because the sample size is not directly involved in its calculation (Janssen and De Boeck, 1999; Forster, 2004). Schwarz (1978) proposed the BIC as another information-based index, which attains asymptotic consistency by penalizing over-parameterization using a logarithmic function of the sample size. For the sample size in the BIC, the number of persons is used in multilevel models (Hamaker et al., 2011) and in multilevel item response models (Cohen and Cho, 2016). Most studies have suggested the BIC as the best choice because it is a sample-size-based index that also penalizes model complexity. However, Tofighi and Enders (2008) indicated in their simulation study that the sample-size-adjusted BIC (aBIC) was an even better index. Smaller AIC, BIC, and aBIC values indicate a better model fit for mixture IRT models. In addition, the entropy value has been used to measure how well a mixture model separates the classes; an entropy value close to 1 indicates good classification certainty (Asparouhov and Muthén, 2014).

The model selection results for the modified MMixIRT models are given in Table 4. The model fit indices show that LL, AIC, BIC, and aBIC decreased consistently as the number of classes increased to eight, and the nine-class model did not converge. As noted above, the best fit for AIC, BIC, and aBIC was determined by the smallest value in the set of models ordered from the least to the most complex. As suggested by Rosato and Baer (2012), selecting a robust latent class model is a balance between the statistical result of the model fit and the substantive meaning of the model. The model that fits best and yields meaningful classes should be retained. In this study the proportions of the latent classes were examined to ensure empirical significance, and the interpretability of each class was considered accordingly. For the 6-class model, the proportions of the classes were 18.1, 30.7, 18.1, 20.1, 7.2, and 5.9%. For the 7-class model, the proportions were 19.9, 13.4, 6.0, 12.3, 13.5, 27.4, and 7.5%. Compared to the 6-class model, the extra class of steps in the 7-class model was similar to Class 2 of the 6-class model, while mixing with Class 4 at the same time. This makes the 7-class model hard to interpret. For the 8-class model, the proportion of one of the classes was too small (only 2.7%). Taking into account both the model fit indices and the interpretability of each class, the 6-class model was retained in this study.
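The indices compared in Table 4 follow standard formulas; the small sketch below (with hypothetical log-likelihood and parameter counts) shows how they are computed, using the number of persons as the sample size, as noted above.

```python
import math

def fit_indices(log_lik, n_params, n_persons):
    """AIC, BIC, and sample-size-adjusted BIC for comparing mixture IRT models."""
    aic = -2 * log_lik + 2 * n_params
    bic = -2 * log_lik + n_params * math.log(n_persons)
    # The adjusted BIC replaces n with (n + 2) / 24 in the penalty term.
    abic = -2 * log_lik + n_params * math.log((n_persons + 2) / 24)
    return aic, bic, abic

# Hypothetical values for one candidate model fit to 406 students.
print(fit_indices(log_lik=-12345.6, n_params=280, n_persons=406))
```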
Description of Class Characteristics
The most likely latent class memberships are displayed in Table 5. In this matrix, the steps from each class have an average probability of being in each class, and large probabilities are expected on the diagonal. The numbers on the diagonal are greater than 0.9. It can be concluded from these results that the modified MMixIRT model can classify students properly based on process data. Figure 3 presents the characteristics of route selection for each class based on the 6-class mixture IRT model, with circled numbers indicating the order of the routes. Based on the results of the modified MMixIRT model, the numbers of clicks of the 23 routes (P1-P23) in each class are listed in Appendix 2. The characteristics of route selection can be obtained from the routes that received more clicks than others in each class, together with the relations among the routes shown in Figure 1. For example, P17, P13, P1, P8, P5, P16, and P7 in Class 1 were clicked more than the other routes; however, Figure 1 shows that there is no obvious relationship between P16 and the other routes. Therefore, the characteristic of Class 1 was defined as P1-P13-P17-P8-P5-P7, and P16 was removed. These routes were sequenced by the number of clicks they received, with the most clicked routes taking the lead. As indicated in Figure 3, different latent classes have typical characteristics depending on their similarity to the correct answer. For example, the route selection strategy of Class 1 best approximated the ideal route required by the item. Based on their last click, almost all the students in Class 1 gave the correct answer. Therefore, Class 1 can be regarded as the correct-answer class, while the remaining classes took different wrong routes.

The numbers in circles in Figure 3 indicate the order of the routes.

As illustrated in Table 6, the classes demonstrated different means of process-level ability. The mean process ability in Class 1 is the highest (0.493), followed by Class 6, Class 2, and Class 4, with Class 5 and Class 3 showing the lowest process-level ability. A closer check of these classes in Figure 3 indicates that the selected routes of Class 5 and Class 3 were far from the correct one and took far more than 31 min. Therefore, it is no surprise that the mean process-level ability estimates of these two classes were the lowest and were both negative (-1.438 and -0.935, respectively). In addition, as can be seen from the numbers of students, almost all the students in Class 1 provided the right answer, demonstrating that different latent classes had different probabilities of producing the correct answer. In summary, the process-level ability differs across latent classes, and it is related to students' different route-selection strategies or cognitive processes.
The Sequence of Latent Classes at the Process Level
Based on the results of the modified MMixIRT model, the characteristics of the strategy shifts between step-specific classes were explored and summarized. To capture the characteristics of students' strategy shifts during the response, it is necessary to identify the typical route-selection strategy of each class in the first place. In this study, if a student applied the strategy of a certain class three or more times consecutively, the student was considered to have employed the strategy of that class at the process level. Three times was chosen as the rule of thumb because it demonstrated enough stability to classify a solution behavior. The strategy shifts of each student during the clicking procedure could then be obtained in order, as sketched after this paragraph. The typical route-selection strategies of the different classes and the class shifts of students in the correct group are presented in Appendices 3 and 4, respectively. The results in Appendix 4 provide useful and specific information about the strategy shifts used by students over time. For example, in the correct group, 58 students shifted from one class to another, including 22 from Class 2 to Class 1, 3 from Class 3 to Class 1, 30 from Class
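The "three or more consecutive steps" rule can be implemented as a simple run-length scan over the step-level class labels, as in the sketch below; the label sequence is a made-up example, not an actual student record.

```python
def strategy_sequence(step_classes, min_run=3):
    """Return the ordered strategies a student employed.

    A class counts as an employed strategy only when it appears in a run of
    at least `min_run` consecutive steps, per the rule described above.
    """
    strategies, run_class, run_len = [], None, 0
    for c in step_classes:
        run_len = run_len + 1 if c == run_class else 1
        run_class = c
        if run_len == min_run:  # the run just became long enough to count
            if not strategies or strategies[-1] != c:
                strategies.append(c)
    return strategies

# Hypothetical step-level class labels for one student: a shift from Class 2 to Class 1.
print(strategy_sequence([2, 2, 2, 3, 1, 1, 1, 1]))  # -> [2, 1]
```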
The Relationship of the Two Level Ability Estimates and Operational Variables
To validate whether students with different patterns of actions have different process-level ability, descriptive statistics of operational variables such as the number of route clicks and resets were computed, together with their correlations with the mean estimate of process-level ability (see Table 7 for details). (Table note: in the "No. of Students" column, the last step of the process within each student is classified into one of the six latent classes; the numbers of students who gave the correct or wrong answer are then summarized by latent class.)
To further explore the differences in click actions between the correct group and the wrong group, several t-tests were conducted. The results indicate that students in the correct group performed significantly fewer resets than their counterparts in the wrong group [t(404) = 2.310, P < 0.05]. No significant differences were detected in the number of routes clicked or in the response time between the correct group and the wrong group [t(404) = 1.656, P = 0.099; t(404) = -0.199, P = 0.843]. The results in Table 7 suggest three things. Firstly, a positive correlation existed between the estimate of student-level ability and that of process-level ability. This means that the process-level ability estimate provides consistent and auxiliary diagnostic information about the process: students with higher process-level ability had higher student-level ability estimates. Secondly, for the process-level ability, a significant negative correlation existed between the mean process-level ability estimate and variables such as the valid number of route clicks and the number of resets for students in the correct group. It can be concluded that in the correct group, the less frequently a student clicked the routes and reset the whole process, the higher the process-level ability he or she was likely to obtain. For students in the wrong group, however, no significant correlations were observed between the mean ability estimate and the variables discussed above. Instead, a significant negative correlation was found between the mean process-level ability estimate and the absolute difference of the route time from 31 min. For these students, process-level ability decreased as the time cost of the wrong routes increased. Thirdly, the mean process-level ability estimate for the correct group was 0.310, in contrast to -0.175 for the wrong group, revealing a significant difference between the two groups [t(404) = 8.959, P < 0.001]. In terms of student-level ability, the estimate for the correct group was also significantly higher than for the wrong group [t(404) = 112.83, P < 0.001].
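Group comparisons of this kind are independent-samples t-tests; the sketch below reproduces the shape of the analysis with simulated reset counts (not the PISA data). The group sizes (307 + 99 = 406) are an assumption chosen only so that the degrees of freedom match the t(404) reported above.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated reset counts standing in for the correct and wrong groups.
resets_correct = rng.poisson(lam=1.2, size=307)
resets_wrong = rng.poisson(lam=1.8, size=99)

t_stat, p_value = stats.ttest_ind(resets_correct, resets_wrong)
df = len(resets_correct) + len(resets_wrong) - 2
print(f"t({df}) = {t_stat:.3f}, P = {p_value:.4f}")
```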
The results in Table 8 indicate that the sequence of latent classes is consistent with the ability estimates at both the process and student levels. For students in the correct group, the mean process-level ability estimate decreased as the numbers of class shifts, clicks, and resets increased. Students with higher process-level ability tended to select the correct route immediately or after a few attempts. Consequently, these students clicked and reset fewer times because they had a clearer answer in mind and were therefore more certain about it. In contrast, for students in the wrong group, the mean ability estimates at both the process and student levels were rather small when the number of class shifts was 0 or 1. When the number of class shifts was 0, students failed to stick with a specific strategy to solve the problem during the process. It took them a longer response time, with about two resets on average; as a result, the time cost of their route selection was nearly twice the target time. When the number of class shifts was 1, these students simply stuck to a totally wrong route the entire time, with a shorter response time and fewer clicks. However, unlike in the correct group, the number of class shifts in the wrong group showed a non-linear relationship with the mean ability at both the process and student levels. At first, when the number of class shifts increased from 0 to 4, the ability estimates at both levels increased as well. The explanation is that because these students figured out the right routes, they should have higher abilities than the zero-shift group that stuck to a wrong route all the time. For example, students with four shifts all ended up using the strategy of Class 1, which was the right-strategy class (Appendix 4). Therefore, they were supposed to have the highest process ability in the wrong group. However, when the number of class shifts increased from 5 to 6, the process-level ability estimate dropped. This has much to do with the fact that too many shifts reflect little consideration and a lack of deep cognitive processing.
DISCUSSION
A modified MMixIRT model was described for modeling response data at the process and student levels. The model developed in this study combined the features of an IRT model, a latent class model, and a multilevel model. The process-level data provide an opportunity to determine whether latent classes or class shifts differ in their response strategies for solving the problem. The student-level data can be used to account for differences in students' problem-solving abilities. The ability estimates at both the process and student levels differ across latent classes. The modified MMixIRT model makes it possible to describe differential strategies based on process-level and student-level characteristics. If a student's specific strategies, and their strengths and weaknesses, can be described in the process of solving a problem, then the assessment of the student's proficiency in problem solving can guide instructional interventions in target areas.

As process data from various computer-based assessments or educational learning systems have become common, there is an urgent call for analyzing such data in an accurate way. The psychometric model-based approach has great potential in this respect. Latent classes and the characteristics of latent class shifts obtained from process data can reveal students' reasoning skills in problem solving. The characteristics of process-level latent classes make it easy to uncover meaningful and interesting action patterns from the process data, and to compare patterns across students. These findings provide valuable information to psychometricians and test developers, help them better understand what distinguishes successful students from unsuccessful ones, and may eventually lead to better test design. In addition, as shown in this study, some operational variables, such as the number of resets and the number of clicks or double clicks, are related to the ability estimates at both the process and student levels and can therefore predict student scores on a problem-solving assessment. Since students' different abilities capture individual patterns in process data, they can be used to score responses or to validate scoring rubrics. Williamson et al. (2006) explain that a "key to leveraging the expanded capability to collect and record data from complex assessment tasks is implementing automated scoring algorithms to interpret data of the quantity and complexity that can now be collected" (p. 2).

The modified MMixIRT approach proposed in this study can be extended in several ways. Firstly, it can be simplified by removing the process-level ability parameters, or extended to include student-level latent classes instead of abilities. Secondly, one of the advantages of the proposed model is that item parameters can be constrained to be equal across the process and student levels, so the abilities at both levels are on the same scale and can be compared and evaluated. Lastly, the main benefits of multilevel IRT modeling lie in the possibility of estimating the latent traits (e.g., problem solving) at each level. More measurement error can be accounted for by considering other relevant predictors such as motivation (Fox and Glas, 2003).

The psychometric model-based approach also has its limitations. First, even though latent class shifts preserve the sequential information in action series, they do not capture all the related information. For instance, for the purpose of convenient analysis in this study, some unstable characteristics of a latent class, such as random shifts, were not used in our definition of class characteristics and class shifts. Fortunately, in many cases, as in this study, this missing information does not affect the results. If it becomes an issue in some cases, it can be addressed by considering more details about the latent class shifts to minimize the ambiguity. Second, this study only takes a single route as the analysis unit, without considering possible route combinations. For example, in some cases where two routes are exclusively linked, it makes full sense to combine them into one for the analysis. In the future, we may consider a transition model for different route combinations, such as Bi-Road. In terms of the generalizability of the modified MMixIRT model for solving complicated problems, if the process data for another single task can be recoded or restructured like the data file in this study, similar models can be applied to explore the latent classes and characteristics of the problem-solving process. However, the difficulty during the analysis lies in how to recode the responses into dichotomous data. For multiple tasks, a three-level model can be applied, with the first level as the process level, the second as the task level, and the third as the student level. If there are plenty of tasks, the ability estimates of the students will be stable. Therefore, while the generalizability of the model may be conditional, the main logic of the MMixIRT approach can be generalized.
AUTHOR CONTRIBUTIONS
HL: research design, data analysis, and paper writing. YL: paper writing. ML: data analysis and paper writing. | 8,204 | 2018-08-03T00:00:00.000 | [
"Computer Science"
] |
Determinants of Secondary School Teachers' Job Satisfaction in Tanzania
This study examined teachers' job satisfaction in Tanzania. It addressed one research question: what factors determine secondary school teachers' job satisfaction? The study was conducted in eight secondary schools in two regions of Tanzania. It used focus group discussion as the data collection tool. Results show that teachers were satisfied by both monetary and nonmonetary incentives, such as community support. They were pleased with fair remuneration packages that related to their labour input, opportunities for career development, a well-defined individual appraisal system, timely promotion, and requisite workplace conditions. The study also showed that teachers' friendship and cooperation with coworkers and students, as well as the respect of community members, enhanced their satisfaction in teaching. Also important to their satisfaction was their students' success in and after school, which reveals the teachers' sense of duty and responsibility. Teachers' job dissatisfaction can lead them to search for other means of economic gain. It is recommended that care be given to addressing teachers' pertinent issues, especially salaries, workplace conditions, and timely promotion, to enhance teachers' physical and mental attachment to their workplaces.
Introduction
Provision of quality education is important for facilitating a nation's development. Research has found that, to improve individual learners' values, attitudes, behaviours, and skills, quality education is of paramount importance [1][2][3]. Teachers are the heart of classroom instruction, so they are key to learners' productivity and hence to society's efficiency. Teachers' effectiveness depends on their competence, both academic and pedagogical, as well as a correlation between their training and skills and their position, workload, and work encouragement [2]. Their satisfaction with their jobs is the focus of this study. While there are many studies on workers' job satisfaction, little has been investigated about teachers' job satisfaction, particularly among secondary school teachers in Tanzania. To gain this insight, the article attempts to answer one important question: what factors determine secondary school teachers' job satisfaction?

Accepting the teaching role calls for sacrifice and devotion. Alongside the demands of teaching, teachers have other duties such as guiding, counselling, and disciplining students and managing classes [4]. For teachers to devote their efforts to serving the community, they need to see that they are valued and are being properly supplied with the things necessary for them to accomplish their duties.

When teachers are at school, they require a conducive workplace environment to conduct their profession effectively. They also need adequate remuneration [5]. According to Narimawati [6], employees are attracted to jobs that make it possible for them to meet their daily needs. Unless these needs are achieved, teachers cannot realise their full potential and will begin to be less committed to teaching. Rasku and Kinnunen [7] found that Finnish secondary school teachers expressed satisfaction in teaching when they were assured of their well-being both economically and in the workplace environment.

With regard to income, teachers' appreciation of their schools is enhanced by the salaries they receive, especially when these salaries correspond to their levels of education, the responsibilities they hold, and the duties they perform in the school [4]. For teachers, financial rewards are an important aspect in relation to their satisfaction in teaching and related services. Arguably, when teachers feel positive about their income, especially their salaries, their accountability is boosted.

It is believed that, to generate teachers' commitment to the school, overall job satisfaction and perceptions of school support are key emotional and cognitive attributes. Teachers' feelings of job satisfaction operate through independent channels to mediate the impact of work experiences on their devotion to the school [8]. Sometimes job satisfaction is affected by whether the outside world is perceived as supportive or not. In Kenya, it is reported that, more than ever before, teachers do not feel supported. Instead, they experience tremendous and constant pressure from politicians, parents, and local communities to deliver quality education [4]. Overwhelming pressure to perform in the absence of support might explain secondary school teachers' dissatisfaction with their work.
Methodology
This qualitative study was designed to explore determinants of job satisfaction among secondary school teachers in Tanzania, using an in-depth exploration of teachers' perceived contentment.The research provided respondents with an opportunity to express their feelings and views about job satisfaction.
Study Sites and Participants.
The study was conducted in the Kilimanjaro and Lindi Regions of Tanzania. These regions were purposely selected on the basis of their performance in the 2015 national examinations. Kilimanjaro was in the higher performing cluster while Lindi was in the lower. In addition, the two regions have different historical relations to education provision in Tanzania. The Kilimanjaro Region had early contact with European missionaries and so accessed modern education ahead of other regions [9]. Lindi had no such contact and consequently remains one of the regions that lag behind in terms of access to secondary education.

In each region, four secondary schools were purposely selected for the study. Schools were chosen based on three criteria: location (urban or rural), performance in national examinations (high or low), and type of school ownership (private or public). For government secondary schools, both old-established and new community schools were chosen. For private schools, the selection of participating schools depended on a variety of ownership categories. Four categories of private school ownership exist: Christian schools, Muslim schools, schools operated by the Tanzania Parents' Association, and schools owned by Trust Funds and Cooperatives. In all, one school from each category was selected.

The wide-ranging types of school ownership allowed the researcher to discover varying teachers' views, reflections, and opinions as they related to the teaching environments. Secondary school teachers in Tanzania are paid differently depending on the income and economic power of their employers. Thus, in each region, two public secondary schools and two private ones in both rural and urban settings were selected, based on the mentioned criteria.

Focus group discussion was the main data collection tool. Groups of between eight and twelve teachers with similar backgrounds were invited to participate. The advantage of focus groups is that participants could interact with each other rather than with just the researcher. The interaction created an opportunity for participants to engage in a conversation regarding the topic of investigation. This ensured confirmability of the data. The role of the researcher was to guide the proceedings of the discussion. Semi-structured questions were prepared beforehand to guide the discussion.
Data Management and Analysis.
Focus group discussions were recorded using a tape recorder. They were transcribed verbatim, and transcripts in Kiswahili were translated into English and printed. Data were analysed following a thematic analysis framework, using the NVivo version 7.0 computer software for analysing qualitative data. Codes were assigned to sections of the transcribed data from which a word or phrase was taken. This facilitated putting together concepts or themes that were raised by respondents, thus helping to develop data categories. Care was taken to ensure that only terms that occurred throughout the whole data set were taken to constitute major categories, which were later developed into themes. Participants' quotes were selected to illustrate the themes and topics that emerged for discussion.
Findings and Discussion
Data on job satisfaction emerging from the inquiry showed factors that were clustered into three categories: first, monetary incentives; second, satisfaction with the school and the work environment created for the teachers; third, satisfaction with society.
Monetary Incentives.
With regard to monetary incentives, most respondents desired their income to correspond to their workload. Thus, such aspects as monthly salaries, transfer allowances, periodic adjustments to their salary scales, and leave allowances had to be realistic. This is in line with Jonathan et al. [5], who found that teachers' job satisfaction would improve if their welfare and workplace conditions, such as salary structures and remuneration packages, were fine-tuned in proportion with other professions.

The reason why monetary incentives matter is that they are a tangible expression of the school and community's valuing of the teacher. The relationship between teachers' labour and the kind of remuneration they were given translates to commendation for work and recognition of their value. As argued by Albee and Piveral [10], appropriate salary levels foster commitment, thereby assuring that capable individuals continue to work in the school.

It was found that teachers are satisfied by both good salaries and flexible teaching schedules. Better incomes and benefits are instrumental in satisfying teachers' economic needs. Salami [11] similarly found that professionals who are typically well paid benefit their organisation throughout their career span. A good salary is also necessary to recruit well-qualified teachers.

Issues of inequitable salary scales for teachers with the same qualifications and work experience in the same schools were raised by respondents in several private secondary schools. Teachers were particularly concerned about such irregularities. Most respondents raised the issue of delayed salaries. One of them had this to say:

We are given different salary scales although having the same qualifications and teaching experience. The salary is unreliable. Sometimes, I go without pay for a period of 3 to 4 months. This is painful. I have adopted survival strategies such as giving private tuition. (Teacher, School E, Lindi Region)

The findings from this study showed that teachers in secondary schools expressed dissatisfaction with their salary levels, fringe benefits, and allowances. Therefore, they felt the need to top up their salaries with nonteaching activities such as private tutoring, small-scale businesses, gardening, and animal keeping. As a result, teachers were much less committed to their primary jobs.

Nguni [12] found that the majority of teachers complained about poor salaries, which explains why they embarked on second jobs, mostly to the detriment of their school and students. The same study revealed, as is also the case in the present study, that many teachers searched for alternative teaching opportunities or changed jobs to increase their income. Teachers' salaries were insufficient to support their families to live decently.

Voices were raised concerning unrealistically low remuneration packages. In addition, delays in receiving their monthly salary hampered teachers' devotion to performing their duties because they found life hard, which affected their intention to remain with their employer. The findings indicated that teachers working in government schools are not paid at the same level as other Tanzanian civil servants despite similarities in academic qualifications. One teacher, for example, pointed out the following:
I am in the teaching profession, but my colleagues who joined other institutions such as the Tanzania Revenue Authority are well paid. We are all university graduates, but the salary scales are different. When I think of this by looking at how I am doing at school, I get discouraged. (Teacher, School A, Kilimanjaro Region)
Attractive remuneration packages enhanced teachers' attachment to the school. Yet government secondary school teachers had fixed salary scales determined centrally by the government. Nguni [12] observed that top administration in schools does not determine teachers' salaries because these are centrally determined by the government, irrespective of the amount of teachers' actual work or the quality of that work.

According to some teachers, the government recognises the insufficiency in salaries and encourages teachers to commit themselves to the school but also denies them the opportunity to seek extracurricular sources of income. The following is one of the comments made by teachers:
In a government school, I feel satisfied because I get enough time to do my personal activities. I have economic activities to attend to, particularly my farm, garden and poultry project. I earn enough money to support my family. The only time I am at school fully is when I teach or when I am on duty. (Teacher, School C, Kilimanjaro Region)
Salary scales in private secondary schools, in contrast, were determined largely by the specific school's income and by negotiations between employers and teachers. School heads or school managers often create packages that are employee-focused. Thus, what teachers obtain differs from one private secondary school to another.

With regard to equitable rewards in the form of pay, several teachers expressed satisfaction and increased devotion to their duties when they perceived such rewards as fair. It is argued, along the same lines as Liu and Wang's [13] study, that employees are satisfied when they perceive that fair decisions are made as a result of policies and procedures that fix salary scales corresponding to their tasks. Thus, it is only when these basic conditions are met, irrespective of whether the school is public or private, that both teachers and employers are satisfied. Teachers quitting, working additional jobs, or shirking their binding responsibilities are the dangerous consequences of not providing adequate teacher salaries.

As experience from other countries shows, employees who are deprived of deserved salary and rewards experience job dissatisfaction [4, 7, 14]. Hence, teachers either strive to seek other means of economic gain or leave to seek work elsewhere.

Diminishing devotion to their duties because of poor remuneration was profoundly true of teachers in government secondary schools, although it was also the case for a few teachers in the sampled private secondary schools. In both cases, teachers experienced unequal treatment in terms of their salary, compared to employees in other public or private sector institutions, despite similarities in academic or professional qualifications. These findings are similar to those proposed by Michael [15] describing private firms. Private firms use calculative, instrumental, and business-oriented approaches in paying employees. They make greater use of performance-based pay, individual appraisal systems, and direct communication with employees, leading to salary negotiations that result in employees' attachment to their places of work.
Satisfaction with the School.
With regard to the contribution of the school to teachers' job satisfaction, the findings revealed that several factors beyond mere remuneration led to teachers' well-being. Most teachers said that timely and regular promotion would contribute to their comfort in the profession, but they reported that often their deserved or expected promotions were not realised. The teachers stated that the lack of timely promotion hindered their readiness to serve their employer. One teacher, for example, pointed out the following:

We are not promptly promoted. Teachers have no problem with working hard, but when our basic rights are compromised, we become dissatisfied. Personally, I am committed to teaching, but when I see that my promotion is delayed, and I am not getting salary increments, I feel disappointed. (Teacher, School F, Lindi Region)

Sharma and Bajpai [16] observe that employees' satisfaction with promotional opportunities depends on several factors, including the probability that employees perceive fairness in the encouragement process in terms of the timing of promotion after meeting the required standards. In the present study, quite a lot of participants reported dissatisfaction resulting from delayed promotion and the consequent failure to receive salary increments.

Overall, teachers displayed happiness with the profession when they were provided with opportunities for academic and professional development as well as timely promotion. This enhanced their advancement in the realm of skills, capacity, and experience. Teachers' loss of morale, especially in government secondary schools, had roots in their loss of hope that the government would address their concerns regarding salary scales and timely promotion. Such results align with findings from a study on predictors of job satisfaction among Nigerian teachers by Ololube [8], which revealed that employees who perceive limited opportunities for career advancement and low salary have decreased job satisfaction.

The issue of teaching and learning materials was of paramount importance for sustaining teachers' eagerness to work. The majority of teachers pointed out that they were frustrated when they had not received the teaching materials needed to help students understand the concepts being taught. They mentioned that the nonavailability of teaching materials was an obstacle to their optimal output. This demoralised them because they were unable to serve students effectively. The following was mentioned by one teacher:
I teach chemistry, but, at the school, there is no laboratory equipment. How can I rely on abstract content teaching? Science needs learning by doing. Students need to see and learn the subject matter practically. (Teacher, School E, Lindi Region)
Another respondent reported the following:
Our school lacks laboratories for all science subjects. Do you expect students to get divisions one and two in national examinations? It is difficult. Students' failure makes me demoralised because the society considers that I am unable to perform and do my work better. (Teacher, School D, Kilimanjaro Region)
Teachers attributed their inability to meet society's expectations of students' performance to the absence of requisite teaching and learning materials. In a study conducted on job satisfaction among secondary school teachers in Transkei, South Africa, Mwamwenda [17] revealed that teachers were also concerned about the inadequate supply of teaching and learning materials.

In addition, the government or community's failure to meet basic infrastructure standards when a school was established put a strain on teachers. One teacher complained as follows:

There are many problems in these newly established community secondary schools. The proper requirements were lacking when the schools were started. This puts a burden on us teachers. (Teacher, School F, Lindi Region)

Teachers' positive feelings about workplace conditions enhanced their work in a way that promoted their attachment to the job. As argued by Odhiambo [18], productivity at the workplace is optimised when workers perceive that sufficient attention is being given to their physical work facilities. Teachers asserted that their teaching life was difficult when workplace conditions lacked basic teaching facilities and resources.

Nearly all respondents were discontented about deteriorating work conditions, especially in most public secondary schools. One teacher, for example, said the following:
The government does not focus on how students learn and whether or not there are good classrooms, libraries, laboratories or learning and teaching materials. It just encourages people in the community to erect sub-standard buildings they call classrooms, but with no teachers or requisite work conditions. How can students acquire knowledge and skills under such conditions? (Teacher, School G, Lindi Region)
The teachers indicated that if education providers (both government and private school managers) did not carefully address the issue of undesirable workplace conditions, their schools would remain buildings with no teachers.At best, teachers would be at their workplace physically, but their commitment would be elsewhere.
Workplace conditions surfaced as a factor that accounted for teachers' satisfaction with their teaching and attachment to their school in a number of ways.Teachers responded differently regarding an issue of workplace conditions, depending on the physical space and facilities that their schools provided.Some government and private schools had averageto-good facilities such as teachers' houses, while others lacked the same.It was observed in one private school, for example, that each teacher had a specific place in which to carry out his or her duties comfortably.The staffroom had a notice board, chairs, and tables for every teacher as well as a computer connected to the Internet.
In general, however, in most government secondary schools, teachers lacked adequate office space for reading, lesson preparation, or marking of students' assignments and exercises. Teachers often had to share office space and furniture, and, ultimately, this affected their concentration and productivity.
These findings imply that addressing workplace conditions is an issue of paramount importance if teachers are to have their total physical and mental presence at their workplace. Employers' failure to meet teachers' basic requirements puts a strain on teachers, resulting in reduced morale.
Regarding teachers' opportunities for professional development, some respondents reported that, through the support and encouragement of school heads, teachers went for further studies at higher learning institutions. Professional growth allowed advancement or increased responsibility. The following was reported:
We are given part of the fees and a per diem when we travel for science laboratory work and examinations, and we use the school library for our studies. This scheme promotes our devotion to our school's advancement. (Teacher, School D, Kilimanjaro Region)
Teachers' opportunities for career advancement exerted an influence on their job satisfaction, comfort in the profession, and readiness to serve their employers. Teachers' satisfaction with the day-to-day execution of their duties was achieved when they believed that their future prospects were good. This finding matches experiences in Israel [19], where teachers' professional development and growth in their current workplace enhance their satisfaction and encourage persistence in teaching.
Support for teachers was also reflected in the way the school administration provided opportunities for them to perform school duties with minimal supervision. There is empirical evidence from various studies that teachers' good relations with their supervisors and coworkers affect job satisfaction [4,20,21]. These researchers agree that job satisfaction is related to employees' opportunities to interact with others at the workplace. Wasserman and Yehoshua [19] emphasize that lowering supervisory pressure on teachers improves their teaching and strengthens their cooperation with the administration and their work colleagues.
Teachers were happy to work hard because of the satisfaction they obtained from the friendships they had established with coworkers, students, and parents. The respect they received from community members, the moral satisfaction of their profession, and the pleasure they acquired from seeing their students excel after school sustained their morale at work. Some teachers reported that they had gained significantly by making friends with their leaders, fellow staff members, and students, whose affection was of help in times of need. Care and support from significant others reinforced teachers' attachment to the school and the teaching profession. One teacher reported the following:
I was once seriously sick. I am single. I do not live with any family member. Fellow teachers and my students helped me. They took care of me. I am glad to have friends. (Teacher, School B, Kilimanjaro Region)
These results support the findings of previous studies, such as that by Sirima and Poipoi [4], who analysed perceived factors influencing public secondary school teachers' job satisfaction in Kenya. Their findings revealed that the greatest need for teachers centres on interpersonal needs. Healthy relationships with colleagues and school leaders significantly increase teachers' concern for delivering good educational services. The results of a study conducted in Tanzania by Jonathan et al. [5] reflect similar observations: the quality of close friendships among employees promotes positive work outcomes. Additionally, friendship opportunities are associated with increased job satisfaction, job involvement, and commitment to the school. Thus, a school should seek to create group cohesion amongst its teachers.
Satisfaction with Society
Most teachers reported that they were generally satisfied with the recognition they received from the surrounding community. One teacher, for example, said the following:
Community members accord me respect. I am teaching their children. This impresses me, and I am happy that my contribution is recognised. (Teacher, School A, Kilimanjaro Region)
Teachers express satisfaction with teaching when they perceive that community members recognise and respect their contribution to education. Teachers work hard at helping students learn because they expect to receive respect for the role they play in addressing students' physical, academic, and moral growth [22], which is for the betterment of individual students, their parents, and the nation as a whole.
The nature of societal orientation in Tanzania influences community members' readiness to support schools' management practices within specific community settings [15]. In other words, pre-liberalisation society in Tanzania lived according to a socialist ideology, which promoted cooperation in building the nation. Teachers were regarded as primary participants in maintaining social cohesion in society, which made the teaching profession satisfying to them.
On the other hand, the absence of community support in recent times has resulted in teachers' decreased morale in their work. Many teachers attributed this lack of support to the change in the perceived purpose of education in liberalised Tanzania. It was observed that the provision of education in a liberalised economy focused on values that stressed individual rather than communal gains from education [15]. A corollary to this is parents' belief that teachers are practising their profession for individual reasons (such as salary, rewards, or other incentives), rather than because they are dedicated to teaching.
In response to the idea that money could potentially motivate them, most teachers reported that they were in fact pleased to be teachers. According to them, teaching gave them the moral and social benefits inherent in the profession, most notably being free from corrupt practices. This result is consistent with the findings on caring teaching as a moral practice by Gholami and Tirri [22], who found that teaching is a practice that gives teachers pleasure when they bring about students' learning. This was substantiated by the field findings:
What makes me satisfied with the teaching career is that, despite everything, I am performing my duties without being prone to corruption, meaning that there is no room for corrupt practices as is the case in other professions! I use my energy to get what makes me live a happy life. (Teacher, School D, Kilimanjaro Region)
Most teachers acknowledged that while a good salary is a concern, they also need moral satisfaction and the appreciation of community members. They reported that a teacher could survive lower wages if he or she is appreciated for his or her performance. Most respondents reported that they were gratified when they saw their former students excel in life after school. One had this to say:
I am satisfied when I see my students excel. When I come across my former students, I am satisfied that I had not only taught that person the subject matter, but I also helped to equip a person who is helping the nation. (Teacher, School E, Lindi Region)
Supporting this assertion, another respondent remarked the following:
Teachers are satisfied when they see their learners excel in the field of their choice. We have the duty of preparing young ones to be good citizens of the country and to excel both academically and socially. Thus, if this is achieved, it makes teachers happy. (Teacher, School C, Kilimanjaro Region)
The teachers' sense of satisfaction derived from students' success after school was rooted in their sense of duty to transform students into responsible citizens. Congruent with this, Gholami and Tirri [22] found that teachers feel a responsibility to be a force for broadening students' intellectual and moral horizons, because it is their role to improve students' moral life, focusing on matters pertaining to what is fair, right, just, and virtuous.
The majority of teachers were gratified that they had helped students grow and realise their goals, not only in examinations but also in their moral, ethical, and religious upbringing. These respondents argued that a teacher is primarily called upon to perform his or her duties, and the issue of earning a daily bread was regarded as secondary. According to respondents, a primary motivation was the teachers' inner urge to help students excel in their future.
Conclusions and Recommendations
Evidence from the study shows that job satisfaction among secondary school teachers in Tanzania is determined by their positive relationships with coworkers, students, and parents, together with respect for and recognition of teachers' contribution to educating society. Teachers were gratified to see their students excel in their studies.
Workplace conditions are currently demoralising in many schools, and there is considerable room for improving teachers' lives and their teaching environment. The unavailability of teaching materials and the absence of laboratory equipment were frustrating. Job dissatisfaction led teachers to seek alternative tuition work or to perform nonteaching activities for economic gain. School administrators should do what they can to create a motivating environment.
It is recommended that schools, both public and private, should, first, ensure competitive salaries to retain teachers in the profession; second, address teachers' timely promotion and career advancement to sustain their satisfaction; and third, enhance workplace conditions. This will promote teachers' commitment to teaching, their physical presence in the classroom, and their dedication to their students.
This study was limited with regard to its thematic analysis. It is usually advised that more than one person read the transcripts and decide together on the emerging themes [23]. The data were analysed using the NVivo computer programme to derive the themes. However, this does not affect the conclusions of the study, because care was taken to link the eight focus group transcripts through codes to the whole data set, from which the major categories and themes were developed.
"Education",
"Economics"
] |
Seismic Vulnerability Assessment from Earthquake Damages Historical Data Using Constrained Optimization Technique
This work falls within the context of efforts made over many years to improve the seismic vulnerability modelling of structures using historical data. Historical data describe the intensity and the damage but give no information about vulnerability, since the concept of vulnerability classes was only introduced in the 1990s through the EMS92 and EMS98 scales. Building on the EMS98 definitions, the RISK-UE project derived a method for physical damage estimation. It introduced an analytical equation, a function of a single parameter (the Vulnerability Index), which correlates the seismic input, in terms of macroseismic intensity, with the physical damage. In this study, we propose a methodology that uses optimization algorithms to combine theory-based and expert-opinion-based assessment data. The objective of this combination is to estimate the optimal Vulnerability Index that fits the historical data and hence gives the minimum error in a seismic risk scenario. We apply the proposed methodology to the El Asnam earthquake (1980), but the approach remains general: it can be extrapolated to any other region and can be applied to predictive studies (earthquake scenarios computed in advance). The mathematical formulation lets the user choose which error to minimize: 1) the error on very little damaged buildings (damage grades D0-D2) or 2) the error on highly damaged buildings (grades D4-D5). These two options suit decision-makers concerned with mitigation measures and urban planning in the first case, and civil protection and urgent action after a seismic event in the second. The insight is used in the framework of seismic scenarios and advances damage estimation for areas in which no recent data, or no vulnerability data, are available.
Introduction
Optimization algorithms have started to be widely used in the field of earthquake engineering because of their capability to calibrate a model when no data are available for important parameters associated with the behaviour of the system (e.g., a system can be a building, a bridge, a network, etc.). The first use of optimization algorithms was at the scale of a single structure, generally for fitting model parameters (e.g., the Young's modulus, the reinforcement bar diameter, the compression resistance, etc.). Several studies have also focused on effective computational methods of cost optimization for bridges [1] [2] [3] and for structures in general [4]. Recently, [5] used an optimization process for the design of structures; the application of their framework is illustrated on a bridge design optimization problem where the column reinforcement bar diameter and the concrete cover are the design parameters. [6] presents a hybrid optimization methodology for the probabilistic finite element model updating of structural systems, in which the model updating process is formulated as an inverse problem, analysed by Bayesian inference, and solved using a hybrid optimization algorithm.
Even if the optimization process is widely used at the scale of a single structure, it is less common at the scale of a group of buildings, of networks, or of a whole town. More recently, optimization algorithms have also been used for natural hazard studies at large scales, and more particularly for managing decisions after a catastrophic event. [7] proposes a simulation model that finds optimum evacuation routes during a tsunami using Ant Colony Optimization (ACO) algorithms. [8] proposes a framework that clarifies the interrelationships between the notions of coping capacity, preparedness, robustness, flexibility, recovery capacity, and resilience, previously espoused as independent measures, and provides a single mathematical decision problem for quantifying these measures congruously and maximizing their values. [9] proposes a fuzzy multi-criteria model to deal with "qualitative" (unquantifiable or linguistic) or incomplete information and illustrates it on the post-earthquake reconstruction problem in Central Taiwan, including the restoration of the safe and serviceable operation of "lifeline" systems, such as electricity, water, and transportation networks, immediately after a severe earthquake.
The seismic risk scenario is now one of the most powerful tools for assessing damage, whether for prevention and mitigation or for crisis management. Many studies [10] [11] [12] [13], more or less detailed, have been published on this subject, which demonstrates the demand of stakeholders and first responders (e.g., civil protection) for a better understanding and evaluation of damages and of the various outcomes of such studies. Different methodologies are applied for the assessment of earthquake damage, generally based on two mandatory terms: the level of hazard and the level of vulnerability of the exposure (i.e., the elements at risk). A very complete state of the art of the development of seismic vulnerability assessment methodologies at variable geographical scales over the past 30 years is presented in [14]. Different methodologies have emerged over the years: the most widely used methodologies, based on a vulnerability index [15] [16] [17] [18]; pushover-based vulnerability analysis [19]; the displacement-based vulnerability assessment procedure [20]; and buildings' vulnerability assessment using the parameterless scale of seismic intensity [21]. [22] [23] provide a detailed description (summary of software, methodology, IT details, exposure module, hazard module, vulnerability module, and output) of the existing software for seismic risk assessment that is either open source or has been made available to the GEM Risk Team. Here we deal with empirical methodologies based on a vulnerability index obtained from observational data gathered after earthquakes. These methods are very useful for representing vulnerability at a large scale (see [13] for more details of the methodology used).
Concerning empirical methods, starting from the first "earthquake damage probability matrices" derived by [24] after the San Fernando earthquake and presented at the Fifth World Conference on Earthquake Engineering, all empirical methods for assessing the seismic damage of structures are based on the relation between the observed intensity and the observed damage. One key moment in the collection and use of earthquake damage data was the development of the European Macroseismic Scale EMS98 [25], the first intensity scale that clearly defines the concept of vulnerability. Table 1 gives the classification of damage in EMS98 for masonry and reinforced concrete buildings. The previous scales, the Mercalli-Cancani-Sieberg scale (MCS) [26], the Modified Mercalli scales (MM-31 and MM-56) [27], and the Medvedev-Sponheuer-Karnik scale (MSK-64) [28], do not clearly correlate the vulnerability of structures with the intensity and the damage scale. One of the main objectives of the EMS98 scale was to be consistent with the previous scales, so that data collected using them can be adapted to the EMS98 definitions. Building the data sets is a very laborious and time-consuming task, but it is the key to all these methods, and their representativeness depends on the accuracy of the collected data. In Italy, the concerted effort to collect earthquake damage data over the past 30 years has led to an extensive database from which vulnerability predictions for the Italian building stock can be derived [29]. In France, a database inventorying historical earthquakes [30] [31] exists, but the information related to damage is quite thin. Some other databases exist, developed by other countries or within the framework of European Commission projects, but access to them, especially to the original "raw" data, is fastidious.
[32] systematically compared statistical modelling techniques for different empirical datasets and explored many of the issues raised regarding the treatment of uncertainty.
Database typologies and their typical issues depend on the manner in which the observations were obtained: for detailed "engineering" surveys and surveys by reconnaissance teams, the main issue is the possibility of unrepresentative samples; for rapid survey methods, the evaluation generally concerns habitability or safety rather than the evaluation of damage; while remotely sensed survey methods, which are quite new, are not yet able to efficiently evaluate damage grades other than collapse or very heavy damage [33]. In all these cases, it is quite rare to have the three essential terms at the scale of the building or census block: the intensity measure, the building typology (and hence its vulnerability), and the damage grade. Moreover, once this "original/unchanged" dataset is defined, data manipulation and combination are usually carried out in order to associate related parameters and to develop new methods for vulnerability assessment.
Hence, in our paper we use an "original/unchanged" dataset available in the literature that is representative of the large majority of available historical datasets, namely a dataset containing only the intensity and the damage grades by census block. Our paper proposes a new mathematical way of treating such existing datasets, in which vulnerability information is not available, and of estimating the best vulnerability index and its characteristics, to be used in seismic risk scenarios, by means of an optimization procedure.
Historical Case Study and Available Observed Data on Damages
A retro-scenario of the October 10, 1980 El Asnam earthquake was performed by the authors [34] [35] using the Armagedom tool [13] developed by the French Geological Survey (BRGM). In this paper, we use only the intensity and damage observations in order to find the vulnerability index that gives the smallest error between the observed damage and the damage calculated numerically with the Armagedom software.
Chlef city (formerly named El Asnam), situated 200 km west of Algiers, the capital of Algeria, is an area that suffers significant seismic activity due to the interaction of the Eurasian and African plates. Major earthquakes rocked the city during the last century; the last event that caused major damage, on October 10, 1980, is known as the El Asnam earthquake and killed around 3000 people. This event was qualified as the largest earthquake ever known in the western Mediterranean region. Different international studies [36] [37] [38] were performed based on, or in collaboration with, the Algerian Technical Inspection of Construction (CTC), which conducted a large field investigation and damage inventory covering 5131 buildings in El Asnam city. Figure 1 presents the ten sectors into which the city was divided for the CTC survey (adapted from [37]).
Based on these data, [36] established a building damage classification in which the damage levels are: green (very little damage; can be occupied immediately), orange (needs further study before the building can be either occupied or condemned), and red (condemned and should be demolished). Table 2 presents the number of buildings and the damage classification (number of red, orange, and green) reported by [36] in each of the ten sectors of the city. A seismic vulnerability assessment of the existing buildings had never been carried out.
The data given in Table 2 represent the collected observational dataset. This representation of information is very common for post-earthquake damage assessment. It concerns old earthquakes as well as the most recent ones (e.g., L'Aquila 2009), for which one might intuitively expect more detailed information to be available, but in reality this is not the case.
Model Description
Figure 2 schematizes the general methodology for seismic damage calculation. It presents each of the most important phases necessary for creating an earthquake scenario: regional seismic hazard, local seismic hazard, etc. In the case of El Asnam, the hazard modules (modules 1, 2, and 3) are not run, since we have the observed macroseismic intensity. All simulations performed here (9800 runs) were executed with the Armagedom tool [13] developed by the French Geological Survey (BRGM) and used for the simulation of damage scenarios.
For damage assessment, the Armagedom software uses the procedure developed within RISK-UE [39]. Firstly, the vulnerability to earthquake shaking of the exposed elements at risk (here, buildings) is characterized by vulnerability indices (V_i), which range from zero (not vulnerable) to one (highly vulnerable). Depending on the buildings' properties (materials, age of construction, height, etc.), buildings are regrouped into categories, commonly known as typologies (see [40] for more details). Therefore, a value of V_i is assigned to each building typology. For each typology, the mean damage degree (μ_D, between zero and five) is estimated from the vulnerability function (Equation (1)):

μ_D = 2.5 [1 + tanh((I + 6.25 V_i − 13.1) / φ)]    (1)

where I represents the seismic hazard described in terms of macroseismic intensity (EMS98 scale), V_i is the vulnerability index, and φ is the ductility index, which is evaluated taking into account the building typology and its constructive features [39]; it controls the slope of the curves and assumes different values to fit the data obtained through damage surveys. For residential buildings, it takes a value of 2.3.
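As a concrete illustration, the following minimal Python sketch evaluates this vulnerability function; the formula and the value φ = 2.3 follow Equation (1) above, while the example intensity and index values are arbitrary.

```python
import math

def mean_damage_degree(I, Vi, phi=2.3):
    """RISK-UE vulnerability function (Equation (1)): mean damage degree
    mu_D on the 0-5 EMS98 scale, for macroseismic intensity I,
    vulnerability index Vi and ductility index phi."""
    return 2.5 * (1.0 + math.tanh((I + 6.25 * Vi - 13.1) / phi))

# Example: intensity IX (as observed at El Asnam) and Vi = 0.70
print(mean_damage_degree(9.0, 0.70))  # approximately 2.80
```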
Finally, the damage distribution is derived using the beta probability density function (Equation (2)) and the beta cumulative density function (Equation (3)):

p_β(x) = Γ(t) / (Γ(r) Γ(t − r)) · (x − a)^(r−1) (b − x)^(t−r−1) / (b − a)^(t−1),  a ≤ x < b    (2)

P_β(x) = ∫_a^x p_β(ε) dε    (3)

with r = t (0.007 μ_D³ − 0.052 μ_D² + 0.287 μ_D). The parameters a, b, t, and r are the parameters of the distribution and Γ is the gamma function. The parameter r controls the shape of the distribution.
The parameter t (hereafter T_beta), which enters the probability calculation through the beta law, determines the dispersion of the values; it therefore represents the propagation of uncertainties that can come from different sources, such as the input data or the variable behaviour of structures of the same typology under seismic hazard. In the Risk-UE method, it is fixed to t = 8, which represents the best fit for European buildings [40]. However, this study leaves it variable within a range of values, so that the optimal value for each typology can be found numerically.
In order to use the beta distribution, it is necessary to refer to the damage grades D_k (k = 0 to 5) defined by the EMS scale; for this purpose, it is advisable to assign the value 0 to the parameter a and the value 6 to the parameter b [18].
The outcome is the distribution over the six levels defined in EMS98, D0 (undamaged), D1 (slight damage), D2 (moderate damage), D3 (heavy damage), D4 (partial collapse), and D5 (total collapse), for each location considered separately, given by Equation (4):

p(D_k) = P_β(k + 1) − P_β(k),  k = 0, ..., 5    (4)
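A short sketch of the damage-grade binning described by Equations (2)-(4) is given below. It relies on the fact that the beta distribution on [a, b] = [0, 6] with shape parameters (r, t − r) has the regularized incomplete beta function as its cumulative distribution; the value mu_D = 2.8 used in the example is arbitrary.

```python
from scipy.special import betainc  # regularized incomplete beta I_x(p, q)

def damage_distribution(mu_D, t=8.0, a=0.0, b=6.0):
    """Probabilities p(D0)..p(D5) from the RISK-UE beta distribution
    (Equations (2)-(4)): mu_D is the mean damage degree, t the T_beta
    dispersion parameter, and [a, b] = [0, 6] the support."""
    # Shape parameter r from the third-order polynomial in mu_D
    r = t * (0.007 * mu_D**3 - 0.052 * mu_D**2 + 0.287 * mu_D)
    # Cumulative distribution P_beta(x), rescaled from [a, b] to [0, 1]
    P = lambda x: betainc(r, t - r, (x - a) / (b - a))
    # Equation (4): p(Dk) = P_beta(k + 1) - P_beta(k), k = 0..5
    return [P(k + 1.0) - P(k) for k in range(6)]

print(damage_distribution(2.8))  # the six probabilities sum to 1
```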
To perform seismic damage scenarios, several building typologies very often coexist in the same study area. Therefore, to run the simulation model (the Armagedom software) we need to describe: • the total number of typologies; • the vulnerability indices (V_i) and T_beta parameters for each typology; • the spatial distribution of the typologies in each district (polygon) of the affected area (nbBat).
In the Armagedom software, these parameters are stored in text files (*.txt) as follows: • the V_i and T_beta values are regrouped in the same file (Figure 3), called *.tvi_t. The file is structured in four columns: the type of building as a character string, the vulnerability index as a float, the T_beta parameter as a float, and the period as a float; each line represents a specific typology. • the nbBat values are stored in another file (Figure 3), called *.nbbat. This file is given as a table in which each line represents an area, the number of columns represents the number of typologies, and each value is a number of buildings.
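As an illustration of these two input files, the following sketch writes a hypothetical three-typology configuration. The file names, the whitespace separator, and all numerical values are assumptions for illustration only, since the exact delimiters are not specified here.

```python
# Hypothetical three-typology setup (A1, A2, A3); all values illustrative.
typologies = [("A1", 0.74, 8.0, 0.3), ("A2", 0.58, 8.0, 0.3), ("A3", 0.42, 8.0, 0.3)]
with open("elasnam.tvi_t", "w") as f:
    for name, vi, t_beta, period in typologies:  # one typology per line
        f.write(f"{name} {vi} {t_beta} {period}\n")

# nbBat: one line per area, one column per typology, values = building counts
nbbat = [[120, 40, 15],
         [80, 65, 30]]
with open("elasnam.nbbat", "w") as f:
    for row in nbbat:
        f.write(" ".join(str(n) for n in row) + "\n")
```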
Vulnerability Assessment Using Inverse Optimization
Here we describe the proposed methodology, which answers the question: "What is (are) the value(s) of the vulnerability parameter(s) that optimize the fit of the model's predictions to the observed damages?" The observed data in Table 2 give two major pieces of information: i) the macroseismic EMS98 intensity, which is uniform (IX) over the whole city; ii) the observed damages, regrouped into three classes: green (regrouping D0, D1, and D2), orange (representing D3), and red (regrouping D4 and D5).
Our problem is clearly underdetermined (for any number of typologies higher than 1) because there are fewer equations than unknowns. The total number of parameters needed to run the model depends on the number of typologies: for each typology, we must determine its vulnerability index V_i, its T_beta value, and its number of buildings in each polygon (nbBat). The expert can also reduce the solution space by introducing additional constraints to obtain more "realistic" solutions. These constraints can be related to operational needs, better knowledge of the study area, etc. The following are examples of questions that can be answered by experts: • What error needs to be reduced?
1) Uniform error on all damage grades D0 to D5; 2) Error on D4 to D5 damage, useful for civil protection in case of a seismic catastrophe; 3) Error on D0 to D2 damage, useful for mitigation and planning. • Which parameters can be fixed? 1) The number of typologies and the corresponding V_i and T_beta; 2) The nbBat parameters, i.e., the repartition of the number of buildings per typology in each area.
• Which parameters can be constrained? 1) The number of typologies, between 1 and 6; 2) The values of V_i, not varying freely from 0 to 1 but constrained by the fuzzy function of the Risk-UE method (Figure 4); 3) T_beta, between 4 and 16.
In our study, we have explored a large number of solutions in order to evaluate the impact of the expert's constraints on the final solutions and, finally, to validate the most appropriate one. We underline that the range of variation of the vulnerability index is governed by the fuzzy function method (Figure 4), which adapts to the number of typologies [18] [40] [41].
For our simulations, we used Armagedom for the forward modelling and the augmented Lagrange multiplier method described in [42] for the inverse modelling.
Figure 5 illustrates the conceptual scheme followed for solving the problem.
The proposed methodology was implemented in the R programming language and environment [43] using the SOLNP algorithm available in the "Rsolnp" package [44]. This process was repeated 100 times to generate 100 sets of V_i, T_beta, and nbBat values. These represent potential solutions, which are analysed by the expert in order to select the most appropriate one; we will show in the next section that the 100 solutions differ very little. The algorithmic parameters used are as follows: population size = 50, generations = 200, crossover fraction = 0.8, and stall generations = 100. For this ill-posed problem, the optimization algorithm leads to local solutions. This limitation, often problematic when solving optimization problems, is very interesting for us: not a single final solution is obtained, but all solutions minimizing the objective function are retained, and the final decision comes to the expert, who validates the most appropriate one. Figure 6 illustrates the organigram of the explored solutions. For each observed damage class k (D0 to D2, D3, D4 to D5) and each polygon j, we compute the misfit error between observed and estimated damage (Equation (5)):

e_k,j = |N_obs(k, j) − N_est(k, j)|    (5)

In order to have a uniformly distributed damage grade error over all the polygons, we define for each damage class the quantity (Equation (6)):

J_k = (1/P) Σ_j e_k,j,  with P the number of polygons    (6)

Finally, the objective function to minimize is defined as a weighted sum of these quantities (Equation (7)):

J = ω_1 J_1 + ω_2 J_2 + ω_3 J_3    (7)
The choice of the weights ω_1, ω_2, and ω_3 depends on the application and should be fixed by the expert. For example, for people who make organizational decisions, J_1 is the most important term, so ω_1 should be greater than ω_2 and ω_3. On the other hand, for civil protection and urgent action after a seismic event, ω_3 should be greater than ω_2 and ω_1.
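A minimal sketch of the weighted objective in Equations (5)-(7) is given below, assuming the per-class misfit is the absolute error in building counts averaged over polygons; the exact normalization used by the authors is not recoverable from the text.

```python
import numpy as np

def objective(weights, observed, estimated):
    """Weighted misfit (Equation (7)). observed/estimated have shape
    (n_polygons, 3): building counts per class D0-D2, D3 and D4-D5."""
    # Equations (5)-(6): per-class mean absolute error over polygons
    J = np.abs(observed - estimated).mean(axis=0)
    return float(np.dot(weights, J))  # omega_1*J_1 + omega_2*J_2 + omega_3*J_3
```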
In this study, we tested four hypotheses: 1) Case 1: error on D4 to D5, useful for civil protection in case of a seismic catastrophe; here the weight ω_3 dominates. 2) Case 2: error on damage grades D0 to D2, useful for mitigation and planning; here ω_1 dominates. 3) Case 3: error on damage grade D3, the most complicated to evaluate; here ω_2 dominates. 4) Case 4: uniform error on all damage grades; here the three weights are equal. The number of simulation runs was chosen so as to provide stable numerical predictions. The choice of 100 runs was guided by two reasons: - The variability of the 100 estimated V_i values is very low, as shown in Section 4 (Figure 8(a1) and Figure 8(a2)).
- We first executed 10, 20, 50, and then 100 model runs during the experimentation phase and clearly observed that the first-order (mean) and second-order (variance) moments for 50 and 100 model runs were similar. As the optimization process is not time-consuming, we fixed the number of model runs to 100.
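The authors' implementation uses R and the SOLNP solver; the sketch below reproduces the same inverse-calibration idea in Python with scipy, for a single typology and hypothetical observed counts, using the bounds discussed above (a fuzzy-function-like range for V_i, 4 to 16 for T_beta). The forward model is a stand-in for Armagedom built from Equations (1)-(4).

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import betainc

I_EMS = 9.0  # observed macroseismic intensity (IX, uniform over the city)
# Hypothetical observed counts per polygon: columns D0-D2, D3, D4-D5
observed = np.array([[120.0, 40.0, 25.0],
                     [80.0, 30.0, 45.0]])

def forward(params):
    """Single-typology stand-in for the Armagedom forward model."""
    Vi, t = params
    mu = 2.5 * (1.0 + np.tanh((I_EMS + 6.25 * Vi - 13.1) / 2.3))  # Eq. (1)
    r = t * (0.007 * mu**3 - 0.052 * mu**2 + 0.287 * mu)          # Eq. (2)
    P = lambda x: betainc(r, t - r, x / 6.0)                      # Eq. (3)
    p = np.array([P(k + 1.0) - P(k) for k in range(6)])           # Eq. (4)
    classes = np.array([p[:3].sum(), p[3], p[4:].sum()])  # green/orange/red
    return observed.sum(axis=1, keepdims=True) * classes  # counts per polygon

def objective(params, weights=(0.0, 0.0, 1.0)):  # crisis-management weighting
    return float(np.dot(weights, np.abs(observed - forward(params)).mean(axis=0)))

# Repeated local optimization from several starts, keeping the best fit
starts = [(vi, t) for vi in np.linspace(0.45, 0.95, 6) for t in (6.0, 8.0, 12.0)]
best = min((minimize(objective, x0, method="Powell",
                     bounds=[(0.4, 1.0), (4.0, 16.0)]) for x0 in starts),
           key=lambda res: res.fun)
print(best.x, best.fun)
```

Collecting the retained local minima from all starts, rather than only the best one, mirrors the paper's idea of presenting a panel of candidate solutions for the expert to validate.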
Results and Comments
In this section, we present a synthetic analysis of the obtained results. We recall that our main objective is the determination of the optimal vulnerability parameters (V_i, T_beta, and nbBat) that best fit the historically observed damages. The originality of the method lies in its practical and operational aspects: the method gives a panel of optimal solutions (the problem is ill posed), and the final choice returns to the experts. For clarity of exposition, we first present the typical results obtained for a specific hypothesis (here Case 4, uniformly distributed error over all damage grades) and for a fixed number of typologies (here four typologies); then we give synthetic results for two cases (D0 to D2 in Figure 9 and D4 to D5 in Figure 8) over all executed simulations (2400 for each case). Figure 7(c) and Figure 7(d) present the errors on the number of structures, given in number and in percentage, for two arbitrary solutions (samples 17 and 61). The barplots present the error on damage grades D0 to D2 (green), D3 (orange), and D4 to D5 (red) for the 10 districts. They show that the distribution is very different for the two solutions; on the other hand, the error remains uniformly distributed. This can be explained by the fact that the problem is ill posed: an infinity of local minima is possible, and very different solutions can satisfy the same optimization criterion. However, this limitation is very interesting in our study, where experts must validate the optimal solution. These results emphasise the idea of considering the results globally, by looking at the error at the scale of the city and in all the districts of the studied area. Looking separately at a zoomed area, the interpretation of the results can be inexact since, for example, for solution 17 the error in district five is very reduced, and conversely for district one of solution 61.
In the following, we present the results obtained for two practical cases. • The first one is very useful for crisis management, where the authorities are interested in the number of highly damaged buildings, here denoted D4 to D5; all other categories are less important in this case.
• The second one is very useful for mitigation and planning, where the concerned authorities need information about habitability.
For each case presented below, we executed 2400 simulations in order to test several hypotheses, which allows us to answer two important questions: 1) Does a higher number of typologies imply a lower misfit error?
2) In the Risk-UE method, the T_beta parameter was calibrated on European buildings and fixed to 8; does changing its value improve the accuracy of the model predictions?
Case 1: Crisis Management
In this section, we summarize the results obtained for the 2400 executed simulations. In Figure 8, the graphics in the left column show results for the T_beta parameter fixed to 8 (calibrated on European buildings), and those in the right column for optimal T_beta. We present the results for the three sets of observational damage grades described above (D0 to D2, D3, D4 to D5).
In Figure 8(a1) and Figure 8(a2), boxplots represent the possible values of the vulnerability indices (y-axis) for each number of typologies (x-axis). Each boxplot contains 100 values. We observe that the variability of the V_i values is very low in both cases, fixed and optimal T_beta. In Figure 8(a1), for one typology, the V_i value is always the same because the optimization process converges to the global minimum. In this specific case, the well-posed problem has a unique solution: V_i equal to 0.70, T_beta fixed to 8, and nbBat equal to the total number of buildings.
In Figure 8(b1) and Figure 8(b2), boxplots represent the possible values of the T_beta parameter (y-axis) for each number of typologies (x-axis). Each boxplot contains 100 values. In Figure 8(b1), all T_beta values are equal to 8 because it is a fixed input parameter; the graphic is plotted only for comparison with Figure 8(b2). The latter shows clearly that, for all numbers of typologies, the T_beta parameter varies from 4 to 16 (the expert's constraint) with a high dispersion.
In Figure 8(c1) and Figure 8(c2), boxplots represent the mean absolute error (y-axis) for each number of typologies (x-axis). Each boxplot contains 100 values. Here we compare the results for three indicators: the mean absolute error on D0 to D2 (green), on D3 (orange), and on D4 to D5 (red). We recall that we are minimizing with a high weight on D4 to D5; therefore, we expect the best performance on D4 to D5 (red) and lower performance on the other indicators. The two graphics, Figure 8(c1) and Figure 8(c2), confirm that the error on D4 to D5 is very small (less than 10%) in both cases, with fixed and with optimal T_beta, compared to the errors on D0 to D2 and D3. Figure 8(d) reports the lower bands of Figure 8(c1) (continuous lines) and Figure 8(c2) (dashed lines) in the same graphic; each plot represents the minimum error (among the 100 runs) for the considered indicator and each number of typologies. The bottom and top of the boxplots in Figure 8 are the 25th and 75th percentiles, the band near the middle of the box is the median, and the ends of the whiskers are the minimum and maximum. These graphics allow the comparison between the two cases (fixed and optimal T_beta).
We observe that: • for the D4 to D5 indicator, for both fixed and optimal T_beta, the misfit error on D4 to D5 is less than 2%, except in the case with one typology. Moreover, increasing the number of typologies does not improve the model's results. This can be explained by two facts. The first is that, as the number of typologies increases, the V_i indices face more constraints on their variation range, given by the fuzzy function.
The second is that T_beta fixed to 8 was accurately calibrated in the Risk-UE project [15] and is representative of European buildings.
• For the D0 to D2 and D3 indicators, taking T_beta variable and increasing the number of typologies reduces the misfit error (by up to 20%). However, this matters little here, because the optimization criterion is highly weighted on D4 to D5.
For the El Asnam area, in the crisis management situation, after analysis of the mathematical solutions, we suggest using five or six typologies for the predictive damage scenario, with V_i equal to the values presented in Figure 4 and T_beta fixed to 8. Varying T_beta does not improve the results, and using fewer typologies gives less information for the same error (less than 2%). The expert's choice is constantly balanced between the acceptable error and the needed information. In the case of El Asnam, if we look only at the D4 to D5 damages, as is generally the case for crisis management, we prefer to choose more typologies (which better represent the geographical distribution of the damage) for the same error. However, if for some reason we are also interested in D3 damages, the error increases to around 30% for five typologies, and hence we recommend using two typologies, for which the error is less than 10%.
Case 2: Mitigation and Planning
Figure 9 summarizes the results of the 2400 simulations for this case; it is organized in the same manner as Figure 8. In Figure 9(a1) and Figure 9(a2), boxplots represent the possible values of the vulnerability index (y-axis) for each number of typologies (x-axis). Each boxplot contains 100 values. We observe minor differences in the variability of the V_i values between the two cases, fixed and optimal T_beta. In Figure 9(a1), for 1, 3, or 5 typologies, the V_i value always converges to a global minimum. This is very specific to the study case and cannot be generalized. The same analysis can be made for Figure 9(a2) with 5 typologies.
In Figure 9(b1) and Figure 9(b2), boxplots represent the possible values of the T_beta parameter (y-axis) for each number of typologies (x-axis). The same analysis applies here as for Figure 8(b1) and Figure 8(b2): the values of the T_beta parameter are highly dispersed.
In Figure 9(c1) and Figure 9(c2), boxplots represent the mean absolute error (y-axis) for each number of typologies (x-axis). Each boxplot contains 100 values. Here we compare the results for three indicators: the mean absolute error on D0 to D2 (green), on D3 (orange), and on D4 to D5 (red). We recall that we are minimizing with a high weight on D0 to D2; therefore, we expect the best performance on D0 to D2 (green) and lower performance on the other indicators. The two graphics give results of equivalent magnitude for the D0 to D2 errors; in addition, we observe that this error grows as the number of typologies increases. Figure 9(d) reports the lower bands of Figure 9(c1) (continuous lines) and Figure 9(c2) (dashed lines); each plot represents the minimum error (among the 100 runs) for the considered indicator and each number of typologies. These graphics allow the comparison between the two cases (fixed and optimal T_beta). We observe that: • for the D0 to D2 indicator, for both fixed and optimal T_beta, the misfit error on D0 to D2 is less than 20%, and increasing the number of typologies increases this error. This can be explained by two facts: first, as the number of typologies increases, the V_i indices face more constraints on their variation range, given by the fuzzy function; second, T_beta fixed to 8 was accurately calibrated in the Risk-UE project (Mouroux et al., 2004) and is representative of European buildings.
• For D0 to D2 and D3 indicators, we show that considering T beta variable and increasing the number of typologies reduce the misfit error (up to 2%) for D4 to D5 indicator.We also notice, that, for 2 typologies the result for D4 to D5 indicators with fixed T beta is better than with optimal one.These results depend on the study case and it's meaningless to extrapolate the results except for D0 to D2 indicator in this case.
For the El Asnam area, in the mitigation and planning situation, after analysis of the mathematical solutions, we suggest using two typologies for the predictive damage scenario, with V_i equal to 0.6 and 0.87 and T_beta fixed to 8. If for some reason the user decides to use a different number of typologies, we recommend using variable values of T_beta, since in that case this improves the results (from Figure 9(d) we can see that the error is quite stable for 3, 4, and 5 typologies with optimal T_beta, but increases for the same numbers of typologies with T_beta fixed to 8).
In Figure 9, the bottom and top of the boxplots are the 25th and 75th percentiles, the band near the middle of the box is the median, and the ends of the whiskers are the minimum and maximum. From the results of the two cases, we can deduce that the methodology is well suited to calibrating the important damage grades, which can be managed by the optimization process, as can be noticed in Figure 8(d), where the error is less than 2% for any number of typologies. The error is harder to manage in Case 2 (mitigation and planning), for which, although stable for 1, 2, and 3 typologies, it grows for the situations with 4, 5, and 6 typologies.
The fact that we use different typologies introduces more error, but it also gives supplementary information on the location of the damages. If the user's question is just the total error, without the distribution by district, then using only one typology can be useful. However, generally, even in a crisis management situation, the second question after the total error is the identification of the most affected districts, in order to send the rescue teams. In this case, the geographical description of the error is required.
Conclusions
In this work, we aimed to develop a unified computational method for the assessment of the vulnerability of structures, to be used for seismic risk scenarios at the town scale, where the underlying uncertainties are modelled using random variables, interval analysis, and/or fuzzy variables. The developed approach is based on the minimization of the error between observed and estimated damages, and it allows the determination of the number of typology classes and the estimation of their vulnerability indices and associated T_beta values, as well as of their spatial distribution over the studied area (nbBat). The method remains general and can be applied to any observed dataset for which the information related to the typologies and their vulnerability is not available. The main insights from the analysis of the results are: • The Risk-UE methodology and the model it proposes fit the D4 to D5 damages very well. When the attention turns to the D0 to D2 damages, the estimation is correct, but damage grade D3 remains the most difficult to assess, because expert diagnoses of this category remain very subjective and its description remains very vague.
• The results of our work corroborate the choice made in the Risk-UE methodology, where T_beta was fixed equal to 8 based on the statistical treatment of all the European data. Our analysis (Figure 9(b2) and Figure 8(b2)) shows that modifying it between 4 and 16 improves the best results by up to 10%. Thus, our approach confirms that this parameter does not bring much variability to the results, and the choice of T_beta = 8 should be respected. • Contrary to expectation, increasing the number of typologies does not reduce the error in the results. Indeed, the expert's constraining of V_i by the fuzzy function restricts the search space. The total error is therefore increased, but, by increasing the number of typologies, the geographic description of the vulnerability in the studied area is improved and brings supplementary needed information (e.g., the ranking of the most damaged areas). The two cases (crisis management, and planning and mitigation) show two different response trends: i) for the mitigation case, increasing the number of typologies increases the misfit error; ii) on the other hand, in the crisis management case, increasing the number of typologies improves the model prediction. In some cases, the optimization algorithm can detect the number of classes that minimizes the error, which means, on the one hand, that a smaller number of classes is insufficient (too few degrees of freedom) and, on the other hand, that spending time dividing and refining the number of classes further is useless. In the El Asnam situation, the optimum number of typologies was not clearly identified. The mathematical formulation of the objective function gives the opportunity to obtain a very good fit of the vulnerability parameters (an error that tends to zero) by minimizing the error for a specific damage grade (e.g., a minimization on the D4 to D5 damages gives the vulnerability parameters most accurate for civil protection and urgent action after a seismic event). Hence, the approach can serve the different actors who work at the scale of the town in different situations (e.g., planning, mitigation actions, retrofitting, crisis management). Of course, the total error (on the total number of buildings, as a sum of the buildings in each area) is much smaller than the error related to each area. This observation should be considered in studies in which only the total error is computed. Such studies certainly have great utility in near-real-time crisis situations, when the first question is a rough estimate of the number of victims; but soon afterwards the spatial localization of damages is asked for, in order to know where to send the first aid, and then the error at the area level becomes very important. The better integration of the vulnerability and loss results offered by the proposed method could allow city councils or regional authorities to plan interventions based on a global view of the site under analysis, leading to more accurate and comprehensive risk mitigation strategies that support the requirements of safety and emergency planning.
Figure 1. Identification of the ten El Asnam sectors for the CTC survey (adapted from [37]): collapse, black solid fill; very heavy damage, dark checkerboard pattern; heavy damage, dark diagonal line pattern; moderate damage, lighter diagonal line pattern; slight or no damage, lighter checkerboard pattern.
Figure 2. General methodology and modules in the Armagedom software for seismic damage calculation.
Figure 3. Required files for the Armagedom software for the case with three typologies (A1, A2, A3): top, the *.tvi_t file containing the vulnerability index V_i; bottom, the *.nbbat file containing the distribution of the typologies in each polygon.
Figure 4. Fuzzy function for the vulnerability classes (from A to F following EMS98) that constrains the vulnerability index in the optimization method.
Figure 5. Conceptual scheme of the optimization process.
Figure 6. Number of runs for each step of the study.
Figure 7. Errors on the number of structures, in number and in percentage, for two arbitrary solutions (samples 17 and 61): damage grades D0 to D2 (green), D3 (orange), and D4 to D5 (red) for the 10 districts.
Figure 8. Results for the crisis management case: (a1) the V_i indices for different numbers of typologies with fixed T_beta, (a2) the V_i indices with optimal T_beta, (b1) the T_beta parameters with fixed T_beta, (b2) the T_beta parameters with optimal T_beta, (c1) the mean absolute error in percentage with fixed T_beta, (c2) the mean absolute error in percentage with optimal T_beta, and (d) comparison between the minimal mean absolute errors in percentage for fixed and optimal T_beta.
Figure 9. Results for the mitigation and planning case: (a1) the V_i indices for different numbers of typologies with fixed T_beta, (a2) the V_i indices with optimal T_beta, (b1) the T_beta parameters with fixed T_beta, (b2) the T_beta parameters with optimal T_beta, (c1) the mean absolute error in percentage with fixed T_beta, (c2) the mean absolute error in percentage with optimal T_beta, and (d) comparison between the minimal mean absolute errors in percentage for fixed and optimal T_beta.
Table 1. Classification of damage to masonry and reinforced concrete buildings (according to EMS98).
"Geology"
] |
A smartphone application for enhancing educational skills to support and improve the safety of autistic individuals
This paper presents a smartphone application that provides learning and communication support to children with autism spectrum disorder (ASD), especially in emergency situations. The application provides learning through video modeling for disaster situations, i.e., fire and rain, to teach children with ASD safety skills. In addition, the application eases collaboration between caregivers and children with ASD. A single-subject design is used to measure the usefulness of the application, and the analysis is performed for two male and one female children with ASD. The results show that the proposed application enhances the satisfaction level of all participants, with significant improvement in learning skills.
Introduction
Autism spectrum disorder (ASD) is a developmental disorder that produces difficulties in thinking, social contact, and verbal/non-verbal communication, as well as challenging behaviours such as hyperactivity and increased anger. It affects how a person perceives and interacts with other people [1,2]. The intensity of ASD can be high or low, as it varies with cognitive functioning and the observed deficit levels that affect an individual. Over the last ten years, computerized technologies have provided a huge advantage to researchers and clinicians in the form of remedial and educational tools for people with ASD [3]. These days, smartphone apps are being used by individuals with ASD to help with various aspects of their lives (e.g., communication, social interaction, daily living, and vocational independence) [4]. Video modeling has been useful in teaching different skills, including behavioral, social, and functional skills, to people with ASD; it gives learners a chance to watch a model depicting target skills before being asked to perform them [5]. People with ASD try to enhance their independence, and they also endeavor to learn how to respond when unpleasant situations occur. Parents of individuals diagnosed with ASD are concerned about their safety because in certain situations they could be in danger; for instance, they may not be able to decide correctly whether they are lost, may not recognize that they are in an emergency situation, or might fail to ask for help in order to be reunited with their caregivers [6]. This is one of the areas where location detection is very helpful for caregivers to keep track of their autistic children [7]. Few studies have examined the genuine usage of GPS technology for people with autism spectrum disorder, dementia, or developmental disabilities [8]. In this study, a smartphone application is introduced for enhancing educational skills to support and improve the safety of children with ASD.
The proposed application is mainly divided into two segments. The first segment, designed for autistic individuals, is further divided into two parts (learning and emergency). The learning part uses video modeling to teach safety skills (e.g., for fire and rain). The emergency part provides a one-touch interface so that autistic individuals can contact their caregivers in emergency situations. The second segment is designed purely for caregivers: they can easily track the location of the autistic individuals, and they can set a safe zone to ensure the safety of their children.
In the remainder of the paper, section two covers the related work describing the technological work done for people with autism. Section three describes the proposed methodology. Section four presents the experimental details along with the data collection and analysis techniques. Section five describes the experimental results. Section six presents the discussion. Finally, section seven presents the conclusion and future directions.
Related work
Increasing independence and interaction with society is an important objective for people with disabilities. However, lack of support from the community can raise major safety issues. In [9], a mobile application is presented that indicates the level of panic of a person with autism in a panic state; once the level of a panic attack is selected, the device detects the context automatically and helps the person by calling a caregiver. Another study [10] evaluated the benefits of the iPhone 4 for adults with a slight intellectual disability, enabling them to send their location with video captions whenever they get lost in public. Goel et al. [11] used smart bands as a means of communication between children and their parents so that parents could keep track of their kids. The training of safety skills is often neglected for people with autism spectrum disorder (ASD). However, the importance of safety skills cannot be ignored, as they could prove life-saving when needed, for example in emergency situations such as fire or rainfall [12]. According to the United States Fire Administration [13], young children are at higher risk of being injured than older children due to a lack of cognitive ability; the importance of smoke alarms to alert children to danger is also described. Different studies have been conducted to teach these skills to children with autism spectrum disorder, regardless of whether they were needed or not. One study [14] evaluated fire safety skills taught to five people with ASD using a virtual reality computer program that includes the detection of fire-related hazards and evacuation in such situations; the results show that four participants did very well in a fire drill. Morrongiello et al. [15] designed a computer game to teach fire safety skills to young children with autism: as a task, the children have to get an animated character out of a fire hazard situation. It was concluded that the game effectively improved the children's knowledge of fire safety skills. A number of recent studies show the importance of video modeling for teaching safety skills to young people with autism using portable digital devices, including laptop computers, handheld personal digital assistant (PDA) devices, iPods, and portable augmentative and alternative communication (AAC) devices used to display video models [16][17][18][19][20][21][22]. Taylor et al. [23] taught children with autism to seek help when lost using behavioral training; the children were taught target skills with the help of video modeling and physical guidance in school and community settings. Another study evaluated the use of a spherical video-based virtual reality (SVVR) smartphone intervention app to teach adaptive skills to adults with autism spectrum disorder. The evaluation process consisted of content experts' reviews and actual testing with adults with ASD; the results indicated the usefulness of the proposed application, as all participants found SVVR easy to use [24]. In [25], the impact of embodied digital technology (DT) on four adults with autism spectrum disorder was assessed for improving daily living skills such as doing the laundry and washing dishes. A reversal single-subject design (RSSD) was used for the evaluation, and the collected data show that the participants completed the task activities without the educators' help.
[26] used a storytelling application for smartphones to affect five kids having ASD so that consciousness about road safety could be enhanced in them. Class teachers were asked to assess the behavior of the participating children with autism spectrum disorder. In the study, two types of storytelling techniques were used, i.e., social stories storytelling technique and digital storytelling technique. Both of these techniques were used to learn about awareness level of children, and later assist them or support them about road safety. Engaging children with ASD to road safety awareness is difficult, and by the obtained results, this study was proved to be beneficial for raising awareness. Both of these techniques were equally important in this regard.
Proposed methodology
In Fig. 1, the core components of the application, along with the communication procedure between both types of user, are shown. The proposed smartphone application is divided into two segments. The first segment is designed for autistic individuals, while the second is designed for their caregivers. The first segment is further divided into two parts (i.e., learning support and emergency support). In learning support, video modeling is used to teach safety skills (e.g., fire and rain) to autistic individuals. In emergency support, children with ASD send their location to caregivers through a one-touch interface if they face an emergency related to fire or rain. In the second segment (i.e., additional support), caregivers can easily track the location of the autistic individual. They can set a safe zone to ensure the safety of their children: once the child moves outside the safe zone, the application notifies the caregivers by sending the child's current location.
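The paper does not publish the application's source, but the safe-zone check it describes amounts to a circular geofence. Below is a minimal Python sketch of that logic; the coordinates, the 200 m radius, and the notify callback are our own illustrative assumptions, not values from the study.

    import math

    def haversine_m(lat1, lon1, lat2, lon2):
        """Great-circle distance between two GPS fixes, in meters."""
        r = 6_371_000  # mean Earth radius (m)
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp = math.radians(lat2 - lat1)
        dl = math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    def check_safe_zone(child_fix, zone_center, radius_m, notify):
        """Return True while the child is inside the circular safe zone;
        otherwise fire the caregiver notification with the current fix."""
        dist = haversine_m(*child_fix, *zone_center)
        if dist > radius_m:
            notify(location=child_fix, distance_m=round(dist))
            return False
        return True

    # Hypothetical fix ~600 m outside a 200 m safe zone: the alert fires.
    check_safe_zone(
        child_fix=(33.6900, 73.0479),
        zone_center=(33.6844, 73.0479),
        radius_m=200,
        notify=lambda **kw: print("ALERT caregiver:", kw),
    )

In a production app the check would run on each location update from the phone's positioning service; the sketch only shows the decision rule.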
Participants and data analysis
Three children with ASD, two males and one female, aged between 14 and 18 years, took part in this study. The Childhood Autism Rating Scale (CARS), a behavior rating scale developed by Schopler et al. [27], was used to diagnose autism in the children; the main purpose of the scale is to differentiate autistic children from those with other developmental disabilities. All the participants were recruited from a nonprofit organization for children with special disabilities, whose main aims are to provide free public education and to teach life skills (e.g., positive social skills and etiquette, verbal/non-verbal communication, self-confidence, overcoming stage fright, work ethics, and so on). All participants had vision within the normal range and were for this reason considered good candidates for learning from video modeling. Moreover, no child had previously received video-based instruction. Informal observation suggested that all participants had imitation skills, but no formal assessment was conducted.
The proposed study was reviewed and ethically approved by the university ethics committee. Before the study was conducted, an informed consent form was completed and returned. As the participants were unable to read and understand the consent form, approval was obtained from their caregivers. Demographic information for each participant, including name, age, gender, and diagnosis, is provided in Table 1. The selection criteria required that each participant have some experience using a smartphone; satisfactory vision and hearing, as revealed by the school system's hearing and vision tests; IEP goals connected to self-help and vocational skills; the ability to attend to a short video segment; and generalized motor imitation.
Safety skill task and equipment
The focus of the intervention was mainly on instructing the participants to complete safety skill tasks related to both (a) fire and (b) rain (see Table 2). These tasks were considered important for each participant to enhance safety skills. The participants' teacher also identified these tasks as vital and reported that the participants had not previously received any instruction on fire and rain safety.
Video modeling: Rain and fire safety skills were taught using video modeling shown on a mobile phone. Two videos were recorded with the help of an adult model and shot from the performer's perspective [28] to teach the target behaviors for fire and rain safety to the children with ASD. Each video consists of eight sequential steps depicting the target behaviors for the fire and rain safety skills. A digital video camera was used to record the videos, which were then uploaded onto a computer for editing. Before each step, the step number was displayed on the screen, followed by a video clip of that particular step (e.g., a video of a hand touching the "Fire Picture" thumbnail to send the current location). Each video was almost 2 min long. Participants watched a video and learned how to perform the task by watching the performer go through the sequential steps of the target behaviors linked with the fire and rain safety skills.
Setting: All sessions took place in a classroom devoted to children with moderate ASD, equipped with a table and chairs. Two mobile phones, a Motorola G4 Plus (running Android 7.0) and a Huawei Y6 (running Android 5.0), were used. All the participants used the first phone, i.e., the Motorola G4 Plus, to perform the tasks. The other device, the Huawei Y6, was used by the caregiver to receive the current state and location of the autistic children. Moreover, the participants were taught to touch the fire or rain thumbnail to alert the caregivers to their current situation (Tables 3 and 4). The eight-step task analyses include the following steps. Fire safety (Table 3): 3. The child exits through the door and walks to the playground area; 4. The child opens the smartphone; 5. The child touches the application thumbnail; 6. The child touches the "Fire Picture" thumbnail; 7. The child waits for the confirmation dialog; 8. The child remains there until the caregiver approaches. Rain safety (Table 4): 2. The child locates the nearest shelter; 3. The child walks to the shelter; 4. The child opens the smartphone; 5. The child touches the application thumbnail; 6. The child touches the "Rain Picture" thumbnail; 7. The child waits for the confirmation dialog; 8. The child remains there until the caregiver approaches.
Outcome measures and data collection
The main dependent measure was the percentage of correct responses for both tasks, recorded each time the participant heard the fire alarm sound (to perform the fire safety task) or the thunder sound (to perform the rain safety task). The first author of this manuscript and a female educator acted as observers during the sessions. The observers' role was to maintain the checklist and evaluate the target behaviors of all the participants. Every participant was evaluated on each target behavior (i.e., fire and rain) after the sound of the fire alarm or thunderstorm. If the target behavior was correct, it was marked with a check [✓] sign; if it was wrong, it was marked with an [x] sign. Each target behavior counted as an opportunity for the child to make a free response. A behavior was regarded as correct if the step was initiated within 10 s and completed within 20 s. Incorrect behavior was defined in several ways: the student did not finish the step within 20 s, did not initiate a target behavior within 10 s, or completed a step out of order according to the task sequence. To obtain the percentage of correct responses, the number of correct responses was divided by the total number of steps in the given task analysis. Training sessions took place two or three times a week, and data were collected during these one-to-one training sessions. Every session was almost 15 min long, and after each session the participants were thanked for participating with verbal praise.
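As a minimal sketch of the scoring rule above (not the authors' instrument), the following Python snippet applies the timing criteria to a single step and turns an observer's checklist into a session percentage; the function names and the sample checklist are illustrative assumptions.

    def step_correct(initiate_s, complete_s, in_order=True):
        """A step counts as correct only if it is initiated within 10 s,
        completed within 20 s, and performed in the task-analysis order."""
        return in_order and initiate_s <= 10 and complete_s <= 20

    def score_session(checklist):
        """Percentage of correct responses: checked steps / total steps.

        `checklist` holds one boolean per step of the eight-step task
        analysis, True where the observer marked [check]."""
        if not checklist:
            raise ValueError("empty checklist")
        return 100.0 * sum(checklist) / len(checklist)

    # A baseline session where only the first two of eight steps were correct:
    marks = [True, True, False, False, False, False, False, False]
    print(score_session(marks))  # -> 25.0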
Data analysis
A single-subject design (SSD) [29], specifically an A-B design, was used in this study, along with a maintenance phase after the intervention. In the design, "A" corresponds to the baseline phase and "B" to the intervention phase. In the baseline phase, the participants were not taught fire and rain safety skills by video modeling, and it was assumed that all the participants were weak in displaying safety skills. In the intervention phase, the participants with ASD were taught safety skills with the help of video modeling on a mobile phone. After they had mastered the set of safety skills in the intervention phase, the maintenance phase was conducted over the following two weeks. In the maintenance phase, each child's skills were evaluated by playing the sound of a fire alarm and of thunder.
No video model was shown to the children in the maintenance phase. To quantify the effect of the intervention on the children's performance, the PND (percentage of non-overlapping data) approach was used. This approach has been labeled a "meaningful index of treatment effectiveness" [30]. The non-overlap calculation gives the percentage of treatment (intervention) phase data points that surpass the maximum value in the pre-treatment (baseline) phase [31]. One important task of visual analysis is to detect the amount of difference, or non-overlap, in the data points across successive conditions; this is why visual analysis sits nicely with non-overlap methods, and these methods deliver important information about treatment effects [32]. A non-overlap score above 90% is considered very effective, a score in the range of 70-90% effective, 50-70% questionable, and below 50% indicates that the treatment was not effective. Moreover, an aggregated analysis was conducted for all phases by finding the average percentage of skill proficiency per subject and the overall skill proficiency in general.
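As a hedged illustration of the PND calculation just described (a sketch, not the authors' analysis script), assuming higher session scores indicate better performance; the sample data are made up but mirror the shape of the study's three baseline and nine intervention probes:

    def pnd(baseline, intervention):
        """Percentage of non-overlapping data: the share of intervention
        points that exceed the maximum baseline point."""
        if not baseline or not intervention:
            raise ValueError("both phases need at least one data point")
        ceiling = max(baseline)
        above = sum(1 for x in intervention if x > ceiling)
        return 100.0 * above / len(intervention)

    def interpret(score):
        """Map a PND score onto the effectiveness bands used in the text."""
        if score > 90:
            return "very effective"
        if score >= 70:
            return "effective"
        if score >= 50:
            return "questionable"
        return "not effective"

    baseline = [25, 25, 25]  # three baseline probes (% correct)
    intervention = [25, 25, 50, 75, 88, 100, 100, 100, 100]  # nine probes
    score = pnd(baseline, intervention)
    print(f"PND = {score:.0f}% -> {interpret(score)}")  # PND = 78% -> effective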
Procedure
Prior to baseline: Two training sessions were held before the baseline phase. Session one included instructions on fire safety skills, and session two included instructions on rain safety skills. Training sessions were conducted twice per week, and the maximum duration of each session was almost 15 min.
Baseline: During the baseline phase, participants had to perform the desired tasks for both safety skills. In the fire safety session, the participant was brought to the classroom, sat on a chair, and was told to perform the required fire safety tasks after hearing the sound of a fire alarm. For the rain safety session, the participant was brought to the playground and asked to perform the desired rain safety tasks after hearing the sound of thunder. Each task was evaluated over three sessions or until the baseline data stabilized. If a participant performed a task inaccurately or gave no response, the observer intervened to help, completed the step himself/herself, and then gave the student the opportunity to complete the next step in the list. Throughout these sessions, the observer recorded the number of correctly performed steps. A session was ended if the participant could not initiate the first step within 10 s or failed to finish the previous task within 20 s.
Intervention: During the intervention phase, the participants were given the smartphone with the application installed, module two already opened, and the video for the targeted task ready to play. Video modeling was used for training. Participants were directed to carry out a task through an instruction like "Watch this." The participant then touched the screen and watched the video showing how to do the task. They were then asked to perform the task after the observer said, "Now you do it," and tried to copy the behavior shown in the video clip about fire and rain safety skills once they heard the sound of the fire alarm or thunder. Participants also received verbal praise, e.g., "Nice job," after every third step performed correctly. They were given 10 s to initiate the task and 2 min to finish it. If the participant failed to finish the task within 2 min or to initiate it within 10 s, the session was dismissed. Unsuccessful tasks were left incomplete because the tasks had to be completed in the specified order of the respective task analysis. No other prompts, responses, or instructions were delivered. Throughout the intervention phase, the safety skills of participants were assessed twice a week for a total of nine data points.
Maintenance: In the maintenance phase, participants were not shown any video clip about fire and rain safety skills. They were brought to the classroom and to the school ground for the fire and rain safety tasks, respectively. In the fire safety task they heard the sound of a fire alarm, in the rain safety task they heard the sound of thunder, and they were then told to perform the desired behaviors taught during the intervention sessions. A maximum of two sessions was required for the evaluation of each task. If performance declined below the accepted level, students were allowed to watch the videos again to see whether their performance recovered.
Results
The percentage of steps of the fire safety task performed correctly by each participant is presented in Fig. 2. Child A finished 14 sessions distributed across the three phases. At baseline, he showed very low proficiency (25%) on one of the three data points in the fire safety drill. He improved his skill proficiency throughout the training phase over weeks 1-4, completing all the steps in the last session of the intervention phase and attaining 100% proficiency. He performed all the fire safety steps with 100% proficiency in the maintenance phase. Child A's PND for the fire safety skill was 78% in the intervention phase and 100% in the maintenance phase. On the basis of these results, the PNDs suggest that video modeling was effective in improving the child's fire safety skills. At baseline, Child B was likewise assessed as having very low proficiency (25%) in the fire safety skill. He significantly increased his proficiency from session four (25%) to session six (75%) during the intervention phase, and achieved 100% proficiency in the fire safety skill by the tenth session. In the eleventh session he dropped back to 88% proficiency, but he completed all fire safety steps by the twelfth session, displaying very high skill proficiency (100%). In the maintenance phase, Child B showed 88% proficiency in fire safety skills, as depicted in Fig. 2. Child B's PND for the fire safety skill was 89% in the intervention phase and 100% in the maintenance phase, suggesting that video modeling was very helpful and effective in improving his fire safety skill. During baseline, Child C obtained 13% proficiency in the fire safety skill, which gradually increased to 100% in the intervention phase. In the maintenance phase, Child C exhibited 100% proficiency in the fire safety skill, as shown in Fig. 2; this result was obtained without the use of a video model, and all steps were retained during this two-session follow-up. Child C's PND for the fire safety skill was 100% in both the intervention and maintenance phases, again suggesting that video modeling was very helpful and effective.
The percentage of steps of the rain safety task performed correctly by each participant is shown in Fig. 3. When Child A was evaluated at baseline, he had very low rain safety proficiency (13%); at that point he only showed the ability to perform the first step of rain safety, "Child remains calm on thunder sound." Child A significantly increased his proficiency during the intervention phase, from the fourth session (13%) to the end of the intervention (100%). In the maintenance phase, Child A maintained 100% proficiency in demonstrating the rain safety skill. Child A's PND for the rain safety skill was 89% in the intervention phase and 100% in the maintenance phase, suggesting that video modeling was effective in improving his rain safety skill. (Fig. 3 caption: Graph describing the percent of steps performed correctly by all three participants during baseline, intervention, and maintenance in the fire safety skill.) Figure 3 illustrates the 14 sessions attended by Child B, divided into the baseline, intervention, and maintenance phases. At baseline, he had very low proficiency (13%) across the three data points. He gradually improved from 13% at the beginning to 100% by the end of the intervention phase: at the tenth and eleventh sessions Child B reached 75% proficiency, and in the last session of the intervention phase, i.e., the twelfth session, he completed all the steps and showed 100% proficiency. Child B's PND for the rain safety skill was 100% in both the intervention and maintenance phases. Based on these results, the PNDs suggest that video modeling was effective in improving his rain safety skill. Child C finished 14 sessions distributed across the three phases. At baseline, she was found to have very low proficiency (13%). At the beginning of the intervention she initiated the first three rain safety steps; by the end of the seventh session she showed a relatively higher proficiency of 75%. At session eight, Child C regressed to 63% proficiency, but she then improved throughout the remaining sessions and reached 100% proficiency by the twelfth session, as shown in Fig. 3. Moreover, she maintained her rain safety proficiency during the maintenance phase. The resulting PNDs are 100% for both the intervention and maintenance phases. Hence, learning with the help of video modeling was more effective for improving safety skills than traditional techniques (Fig. 4).
Aggregated scores
At the baseline phase, all three children had very low proficiency in demonstrating fire safety skills, averaging 19%. This improved in the intervention phase, during which all three children showed relatively better proficiency in demonstrating fire safety skills, averaging 75%. The rate increased again in the maintenance phase, where the children showed higher proficiency than in the other two phases, averaging 96% in fire safety skills.
At the baseline phase, all three children likewise had very low proficiency in demonstrating rain safety skills, averaging 13%. This improved in the intervention phase, during which all three children showed relatively better proficiency in demonstrating rain safety skills, averaging 64%. The rate increased again in the maintenance phase, where the children showed higher proficiency than in the other two phases, averaging 96% in rain safety skills.
Discussion
Safety is a major problem for young people with autism, and thus a concern for caregivers, because these individuals are at a higher risk of being hurt. Safety skills education is important for children with ASD as it promotes changes in behavior. There are different types of safety skills, and this paper has focused on two of them: (1) fire safety and (2) rain safety skills. The current study adds to the literature on fire and rain safety skills for children with ASD by developing a smartphone application that assists children with ASD in unpleasant situations and instructs them on how to deal with them. To measure the effectiveness of the application in assisting children with ASD, the percentage of non-overlapping data (PND) technique was used. The PND score of every participant was above 70%, which shows that the proposed application is quite useful for providing assistance related to fire and rain safety skills. Additionally, video modeling is an effective approach for teaching children with ASD. The means of the baseline, intervention, and maintenance phases for fire safety skills are 19, 75, and 96 percent, respectively; for rain safety skills they are 13, 64, and 96 percent, which clearly shows that the proposed application effectively improves the learning skills of autistic children. Furthermore, the satisfaction of the autistic individuals was measured with questionnaires filled in by their teachers and caregivers. The questionnaire results indicate that they were satisfied with the proposed application, and the teachers and caregivers reported that the children were not annoyed and did not show any hyperactivity while using it.
Conclusion and future guidelines
Developmental disabilities such as autism spectrum disorder (ASD) pose numerous challenges to autistic individuals in particular areas of life, especially communication, social interaction, imagination, learning, self-help, and independent living. At the same time, safety is a major issue for these individuals, and lacking safety skills can be harmful. Therefore, individuals with ASD need to be aware of potential dangers in the environment and become familiar with the proper safety skills to stay safe. With the help of video modeling, different types of skills can be taught to individuals with autism spectrum disorder; it is an effective way of teaching from which numerous learners can benefit at once. This study provides assistance regarding safety skills (e.g., fire and rain) to individuals with ASD. It also reports that the learning skills of autistic individuals were gradually enhanced with the help of video modeling. Moreover, the results show that autistic individuals felt satisfied and remained engaged while using the proposed smartphone application. There are several lines of future work related to the current study. In the future, a generalization phase will be considered, with evaluation using several different sounds (alarms or thunder) that deliver different auditory stimuli to trigger the fire or rain safety behaviors. It would also be valuable to generalize these skills to different environments (e.g., home and work settings). Future research should consider increasing the number of participants, which would reinforce validity. Finally, extending the range of age groups could strengthen the intervention procedures. (Fig. 4 caption: Graph depicting the percentage of steps performed correctly by all three participants during baseline, intervention, and maintenance in the rain safety skill.)
"Education",
"Psychology",
"Medicine",
"Computer Science"
] |
Past Facts and the Nature of History
We defend a realist account of history: past facts are discoveries, not creations. We show how 'moderate' realists, who admit the critical role of perspective while insisting on history's metaphysical independence from historians, can accommodate Paul Roth's arguments in favor of irrealism. Moreover, our position is consistent with a dynamic past: as history unfurls, past events gain new properties. Realism is necessary, we argue, to capture substantive disputes within history. It also grounds history's reflexivity: the point of the continual re-examination of history (and history's history!) turns in part on there being mind-independent past facts to be had.
papers and a recent monograph.3 Roth draws on the insights of past philosophers, especially Danto and Mink, to argue that historical events metaphysically depend upon the actions of historians: A narrative traces a path of development, a path not defined or marked by any known laws or the like. The event emerges as an event only because our interests call it into being; events so constituted do not embody some natural kind.4 For the irrealist, past events depend metaphysically on historians, who 'call them into being'. Realists, by contrast, hold that narrative events are sometimes metaphysically independent of the actions of historians: past facts - even dynamic past facts - and the relationships between past facts, can hold regardless of what historians have said about them. Although we defend realism, we share Roth's worries about naïve forms of it. Roth's exemplar realist is Mandelbaum.5 As Roth puts it: [for Mandelbaum] Historical pictures are successively filled in by collecting more evidence concerning the events of interest. The picture is always partial, but what history provides is an ever clearer picture of things as they actually were … The work of a historian, on Mandelbaum's conception, is more like that of a scribe than an author.6 This is in a relevant sense similar to the view developed by Hempel7 that casts historical narratives as "best explanatory sketches," wherein greater historical detail (and appeal to natural law) makes for better explanations. The job of the historian qua explainer is to continually "fill out" the sketch.
Following Roth, historians are not like scribes: history is too complex, too multi-faceted, and our interests too multitudinous to think of the historian as simply 'copying out' accounts from some pre-determined history. On our view, however, his is a point about the epistemic practices and difficulty of doing history, not the metaphysical status of past facts. Although we don't think he establishes his metaphysical thesis, aspects of Roth's views are friendly to our position. First, the past is dynamic: a class of past facts change and emerge as history moves forwards. Second, it is in capturing this dynamic process that narrative forms and structures become necessary for historical explanation. 8 We won't deny such claims here. 9 We'll begin (section 2) by arguing that the moderate realist need not worry about Roth's arguments, before turning to our positive project: reconciling moderate realism with dynamic facts. Beginning with an example from biology (section 3), we'll demonstrate how the past can be dynamic without mediation or intervention from historians. We'll then provide a positive argument in favor of forms of moderate realism (section 4). We'll claim that (1) historians have substantive disagreements which turn in part on past facts, and that irrealists cannot accommodate those disagreements; (2) in some circumstances, historians do influence temporally dynamic facts, but do not do so in a privileged or special way; it is simply that the practices of historians constitute one way in which historical events can take on new properties. Moreover, we'll argue, moderate realism can capture the motivations behind the reflexivity of historical practice.
A note on terminology. Throughout we'll refer to past 'facts', 'properties' and 'events'. At base, we'll take 'events' as a catch-all term for entities, processes, and so forth. Events have properties; properties are what we might ascribe to them. We'll take a 'fact' to be a true description, sentence, or proposition about a property ascribed to an event. We don't think this way of speaking presupposes realism, anti-realism or irrealism. On Roth's view, there are facts about properties, but such facts hold only under historians' descriptions of events. That is, the historian categorizes a past event, and as such creates new properties, and statements about them can only be said to be true or false relative to the categories the historian themself created. By contrast, on our view, the events and properties are (except under circumstances we'll clarify) independent of historians. Further, although (on some versions of minimal realism) facts might be relative to a description or sentence, their being facts turns critically on events and their properties. And on other versions, facts can be understood as propositions whose truth is entirely indifferent to token descriptions or sentences, uttered by historians or otherwise.
8 Roth, "Narrative explanations: The case of history"; P.A. Roth, "Essentially narrative explanations", Studies in History and Philosophy of Science Part A, 62 (2017), 42-50. 9 The reader should note that Roth's conception of narrative has it that historical narrative is both descriptive and explanatory; that is, description within a narrative and narrative explanation are essentially the same kind of activity. This idea is controversial, and has been criticized by F. Dewulf, "Paul A. Roth, The Philosophical Structure of Historical Explanation", OEconomia. History, Methodology, Philosophy, (10-2) (2020), 363-367. Our argument requires no stake in this.
Irrealism
Roth's arguments begin with three points about narrative explanation, intended to underwrite his irrealism. Two of them, 'non-standardization' and 'non-aggregativity', are, we think, red herrings. That is, Roth's arguments do not distinguish irrealism from moderate realism (although they are telling against naïve realism). His third point, concerning indetachability, however, is critically important for understanding the dynamism of the past and the nature of narrative. But even this does not save irrealism from the realist challenges we develop in later sections.
2.1 Two Red Herrings
Two of Roth's arguments for irrealism should not trouble moderate realists. Our aim here is not to argue in favor of realism (that is for later sections), but rather to demonstrate that many non-naïve minimal realist positions can accommodate his position (hence the arguments being 'red herrings').
Roth argues that narrative events are non-standardized. This is to say that they do not come 'typed', treated as instances of regularities: they are not lawlike. Roth's arguments for the non-standardization of historical explananda do not threaten moderate realism. First, it is not obvious that all such explananda are non-standardized. Historians do craft at least some narratives around something like historical regularities.10 This is, for instance, the strategy of Peter Turchin11 in his work on the dynamics of empires. He points to regular causal processes arising from imperial frontiers as central to explaining these cyclical processes. This might fall short of "lawfulness," but it is nonetheless in some sense a "regularity." This should be familiar to philosophers of science. There is a long tradition of examining 'law-like regularities', ceteris paribus laws and the like: a regularity need not be necessary for it to play legitimate roles in science,12 and so the same goes for history.13 Second, even if historical events are non-standard, they're plausibly linked to processes that are, and this fact can help to underwrite the realist's explanatory aspirations. That some shard of ancient pottery is found in some location in a particular sedimentary formation might be non-standard, but archaeologists still make use of nearby regular, standardized causal processes to bolster their explanations.14 In short, moderate realists might agree that historical events are not always 'typed', but this doesn't require conceding that those events are metaphysically dependent on historians.
10 Sterelny, "Contingency and History"; A. Currie and K. Sterelny, "In defence of story-telling", Studies in History and Philosophy of Science Part A, 62 (2017), 14-21. 11 P. Turchin, "Population dynamics and internal warfare: a reconsideration", Social Evolution & History, 5(2) (2006).
Onto the second red herring. Drawing on Mink,15 Roth emphasizes the non-aggregativity of historical narratives. Different narratives often characterize events in non-consistent ways. In virtue of this, their accounts cannot be woven into a unified narrative. You can't just capture the past in terms of a set of narratives joined via conjunction without inconsistencies. This is because past events are sensitive to description; how the events are characterized (what they 'are', if you want) is partly determined by our explanatory interests in them.16 The 'same' event might, for one historian, count as the beginning of a new movement; for another it could be the middle of an unfolding process. The argument here seems to be that realism requires that events be strung together in a maximally coherent way: that there is one 'grand narrative' to be had. But realism need not assume any such thing.
Philosophers of science have long considered the critical role of distortion and omission - abstraction and idealization - in scientific work, particularly models.17 Idealization is often presented as a challenge for scientific realism: if science aims at truth, why is it that so many scientific representations contain untruths? Answers are varied and typically pluralist. Some idealizations function as something like approximations: they are true enough for our purposes. Others mitigate a lack of understanding or a lack of computational power. Others isolate causal dynamics thought to have a privileged explanatory role and aim for unification across systems with those dynamics. In light of this, some philosophers adopt what might appear to be anti-realist positions: the view that models are 'convenient fictions'18 or that they primarily aim for understanding rather than truth.19 But crucially, such views are not anti-realist in the relevant way. All claim an important relationship between the worldly systems the models aim to represent and the model's success. Faced with scientific idealizations, philosophers of science do not then claim that facts metaphysically depend upon scientific representations. If Roth is right about the consequences of non-aggregativity, this is surprising, because differently idealized models seem of a piece with historical narratives: their targets are sensitive to description, and they capture the events in strikingly different, non-consistent ways. So idealized models, like narratives, do not aggregate into a unified system. Philosophers of science are not irrealists about scientific facts in the face of non-aggregativity because it simply doesn't follow from our making idealizations - even ineliminable idealizations - that the facts are not 'out there' to be had. There is an important sense in which non-aggregativity does not actually lead to inconsistency. Non-aggregativity is due to events being sensitive to description; that is, the events are not treated as 'bare', but as events-qua-some-perspective. It is perfectly consistent to capture some event qua perspective a, and the 'same' event qua perspective b, even if event-qua-a and event-qua-b would be inconsistent if treated as either perspectiveless or from the same perspective. It does follow that there is no consistent, single, non-perspectival 'god's-eye-view' to be had.20 The full picture will be irredeemably pluralistic. But pluralism is not in conflict with realism insofar as it amounts to the denial of those events only existing because of scientists taking those perspectives.21 Why will become clearer below, when we consider substantive, cross-perspective
12 S. Schiffer, "Ceteris paribus laws", Mind, 100 (1). 15 Mink, "Narrative form as a cognitive instrument." 16 Roth might deny that 'sensitivity to description' is the right reading here; after all, he thinks that the events are metaphysically dependent on description! Our reply is simply that sensitivity to description captures just as well the phenomenon he is interested in, and moreover doesn't beg the question against realism or irrealism, as the latter reading appears to. 17 M. Weisberg, "Three kinds of idealization", The Journal of Philosophy, 104 (12). 20 To put things in another way, realists need not (and should not) be committed to the possibility of a kind of 'ideal chronicle' which, in a unified, non-perspectival way, captures all the past facts.
debates between historians. The inference from an event being sensitive to description to that event being brought into being by that description is invalid. Relatedly, Roth argues that events themselves do not come pre-carved into categories; thus it is our categorization practices which bring those categories - and thus the events themselves - into existence. "Events simpliciter cannot be shown to exist; they are not known to be of nature's making rather than of ours. Events exist only by proxy."22 This is a non-sequitur: it doesn't follow from our not being able to show that events exist simpliciter that they exist only via proxy, that is, only because historians have done their categorization work. At most, it speaks in favor of agnosticism about 'events simpliciter'. However, as we've already seen, realists need not be committed to the existence of events simpliciter, nor of non-perspectival, privileged 'god's-eye-view' events. One way, then, to gloss a realist response to Roth's argument for non-aggregativity is to say that nature (and a fortiori history) is structured or patterned. A particular phenomenological pattern corresponds, roughly, to one potential carving of the world's natural (or political, economic, etc.) history. Some of the ways that historians carve the world are more similar to true patterns of events than others, and this fact is what underwrites the substantiveness of historical debate (as we will argue below). The moderate realist, then, can deny that these potential carvings are the inventions of historians; they are instead the events, processes, and patterns that historians seek to find and understand.
2.2 Indetachability
So, neither narrative events failing to aggregate nor their being non-standardized provides a route to irrealism. At best, these arguments hold against the kind of realist who thinks there is a single privileged description of the past. But moderate realists commit to no such thing. Roth's stronger argument comes from what he calls non-detachability. This is expressed both as a feature of narratives and as a feature of the nature of past facts. Let's begin with Danto's notion of a narrative sentence.23 A narrative sentence defines some past fact in terms of some later fact. For instance: The 40th President of the United States hosted General Electric Theatre in the 1960s.
The structure of a narrative sentence is indetachable in the following sense. The earlier fact (the subject hosted General Electric Theatre) is defined in terms of later facts (the subject's being elected president in 1980). As such, one cannot make sense of the earlier event understood in those terms without
its being related to that later fact. This matters for Roth (and Danto) in part because it tells us something about the structure of narrative explanations. Where for some explanations the explanans and explananda may be logically decoupled, thus allowing the explanation to be represented as a deductive argument, this is impossible for a narrative argument, as the content of the explanans and the content of the explananda overlap. If represented as an argument, narratives would be circular. This, for Roth, marks a syntactic difference between 'scientific' explanations and historical narratives (though it is worth noting that explanatory relations, as understood in much contemporary work on scientific explanation, need not be in any sense deductive).24 We're happy to go along with these points about narratives for the purposes of this paper. However, there are also metaphysical conclusions drawn. The indetachability of narrative sentences derives from the connection between the content of a past event and a later event. The fact that a T.V. host became president is only true in light of the electoral process that unfolded at some later time (namely, 1980). This is not a mere feature of linguistic practice: in a very real sense it wasn't true that the 40th President hosted General Electric Theatre until he won the later election. That is, the truth of the sentence, which refers to the 1960s, depended on later events occurring as they did. Roth captures this in terms of historical practice: it is in virtue of historians' categorization practices that the events are the events they were. As he puts it: [narrative sentences make] vivid and logically explicit why retrospective characterizations of the past add truths to past times not knowable at those times.25 Note that Roth infers from the claim that some past truths depend on later occurrences (what we'll soon call 'temporal dynamism') to the claim that it is in characterizing past truths that the truths are 'added'. That is, the past's dynamism depends upon the historian's characterizations; they act as a kind of truth- (or event-)making mediator. But as we'll see, there are bountiful cases of temporal dynamism where no mediation is required (indeed, Reagan's case is one!).26 Roth's central point regarding the dynamism of the past concerns the formation of new concepts which couldn't have been known or applied by past actors. We might want to say that, for instance, "ritual sacrifice" is an example of a "social control strategy" even though that concept was not available to any of the historical actors under the event description as given by historians. As such, for Roth, the new concept's generation entails the generation of the past fact. The reasoning here is fairly common in the philosophy of social science, particularly that drawing on Hacking's work.27 In short, what options are open to us as agents are in part constrained and enabled by what conceptions we have of what we might be or do. There is, then, a feedback between the conceptual environment and social patterns and ways of being.
22 Roth, The Philosophical Structure of Historical Explanation, 30. 23 A.C. Danto, "Narrative sentences", History and Theory, 2(2) (1962), 146-79. 26 This point has been raised elsewhere: L. Tsilipakos, "Descriptive Accuracy in History: The Case of Narrative Explanation", Philosophy of Social Science, 50(4) (2020), 283-312, argues, along a similar vein, that the temporal dynamics of past facts is not really captured by our adding truths to the past; rather, we enhance our epistemic position with respect to past facts when we come across new, relevant findings. There is, then, a historical dynamism to historical facts, but a relatively metaphysically modest one.
This is how some categorizations can be 'self-fulfilling prophecies': the very act of categorizing changes the behavior of those in the society. However, it doesn't follow from our legitimately categorizing using a non-actor's category that historians create the events that are so-legitimately categorized. Or at least the realist need not acquiesce to the inference.
Temporal Dynamism
As we've seen, Roth argues from non-standardization, non-aggregativity and indetachability to irrealism. However, for moderate realists his arguments involve a series of non-sequiturs. The inference from an event's sensitivity to description to its being created by that description doesn't follow; inferring from the past being dynamic to those new facts being created via the mediation of historians doesn't follow. The same can be said, mutatis mutandis, for any argument that takes event description to be ineluctably linked to something like a "framework", as might be the case with certain kinds of positivism, especially Carnap.28 We see no substantive difference between the notions of "sensitivity to description" and "framework dependence" (or similar), given that each points essentially to a kind of linguistic determination of the status of events qua metaphysical entities, which we deny. At best, these positivistic arguments speak in favor of agnosticism: we simply haven't enough evidence to pick between a past with historian-created events and one with independent events. In this section, we'll turn to a biological example to demonstrate how temporal dynamism occurs without historians' inventiveness. We'll then analyze the case in terms of a distinction between temporally dynamic and static facts. Looking forward, this distinction will underwrite our argument in favor of moderate realism.
A Turn to Biology
Marc Ereshefsky has argued that biological species are 'path-dependent entities'.29 That is, what makes a species the species it is turns not on its intrinsic properties or the details of its origin, but on its unique trajectory. Importantly for our purposes, he argues that a speciation event's being the event it is depends upon downstream occurrences.
… prominent theories of speciation imply that speciation is a path-dependent process. They imply that whether a branch (on the Tree of Life) is a species is determined by events in the path of that branch, not merely at its initial branching event.30 On allopatric models of speciation, speciation occurs when a population is divided, blocking interbreeding and allowing variation between the subpopulations to accumulate. Classic examples are geographical: some segment of the population makes it to an island, or some natural event (the rising of a new mountain range, say) cuts off subpopulations. This spatial division removes homogenizing processes like interbreeding. Over time, the two subpopulations accumulate differing traits as differing mutations arise and have differing successes depending on the environments they find themselves in. Island gigantism or dwarfism are classic examples. A subpopulation arrives on an island, finding itself in an environment profoundly different from the mainland: potentially with different flora and fauna, different niches, and different resource availability. Over time, the subpopulation adapts, individuals becoming smaller in response to resource availability, or larger to occupy new niches. 29 M. Ereshefsky, "Species, historicity, and path dependency", Philosophy of Science, 81(5) (2014), 714-726. 30 Ibid., 717.
Downstream, due to reproductive isolation and accumulating mutations, the subpopulation becomes increasingly divergent from its mainland cousins. Looking back, we can identify the populations splitting -the subpopulation arriving on the island -as the beginning of a speciation event. Critically however, that the branching event is a speciation event is not knowable until the two populations have diverged.
… we see that a branching event, a unique origin, does not make for a new species. Whether there is a new species at that branching event depends on what happens later. It depends on the historical path of that branch.31 To see this, consider circumstances where a subpopulation arrives on an island but fails to thrive: perhaps the right mutations do not arise, or the individuals fail to compete in the new environment. More dramatically, say the island falls prey to a natural disaster: a volcano erupts, the island sinks, etc. Here, despite the branching event, we have no speciation event, as the subpopulation did not evolve into a new species.32 The upshot is that the branching event - the population's splitting - at t1 is not, from the perspective of t1 alone, a speciation event. It is only from the perspective of a later time, t2, where we have two different species, that t1 counts as speciation. Speciation has the same structure as narrative sentences. As with historical narratives, in speciation we posit that the event of the population's division gains a new property once the speciation has occurred. The populations' initial isolation being a speciation event is indetachable from their diverging genetically and phenotypically. But from where does the new property (and fact) emerge? On Roth's model, we would say that it is via the intercession of biologists. When a biologist points at a population splitting and names it a speciation event, a new fact comes into existence. But this is implausible: the population division is a speciation event because it led to two new species. The species do not care whether biologists notice them. 31 Ereshefsky, "Species, historicity, and path dependency", 720. 32 We may note here, however, that as regards the issues of indetachability and non-standardization, it is plausible to think that a great deal of explanatory information, in this evolutionary case, is contained at a high level of generality. Though a particular speciation may be path dependent, the processes that generate species, and so explain the event, will cover a range of type-level mechanisms.
Consider Roth's discussion of the notion of a 'career', such as that of the American President Roosevelt: Roosevelt's career does not exist until constituted by a historian. The grouping represents an artifact, a colligation by historians studying a particular person or period.33 Consider an analogous statement: the speciation event did not exist until constituted by a biologist. The event's being a speciation is an artifact, a colligation by biologists studying that lineage at that time. We might agree that the event is an artifact insofar as biologists represent it in various ways, and that it has a life through biological practice.34 But recognizing that representations are artifacts doesn't require claiming that the existence of the event or property - that the speciation is a speciation - requires the mediation of biologists. In fact, it is the representational activities of biologists that allow us to uncover past facts of interest, such as speciation events. Similarly, we'll argue, neither does Roosevelt's career require intercession by historians to exist, and neither does it take our construction of a narrative sentence for it to be a fact that the 40th President of the United States hosted a television show sponsored by General Electric.
33 Roth, The Philosophical Structure of Historical Explanation, 52. 34 It is even possible to agree that this is, in fact, a kind of "colligation" on the part of biologists (or historians) without making the further claim that this threatens realism.
Dynamic and Static Facts
We've seen that both in human and natural history earlier events gain new properties as later events occur. Two populations being isolated becomes a speciation event once they diverge into differing species. But this doesn't seem true of all past facts: that the populations became isolated, or the population's phenotypic and genotypic makeup at some time-slice, do not seem to have this dynamic property. Let's capture this difference conceptually: some facts (or properties) are dynamic while others are static.
Static facts can be understood as facts which are not sensitive to future events, while dynamic facts are those which are sensitive to future events. In virtue of what might a fact or event be dynamic or not? This, we think, depends on whether the facts turn on temporally spread processes. A process is 'temporally spread' when it takes time to occur, and thus has space for defeaters. What could turn out to be a speciation event might not turn out to be such if, say, the island in question sinks, the right mutation doesn't arise, and so forth. And indeed: Reagan could have lost the 1980 election. All, or almost all, events require temporally spread processes to occur. Because things take time to happen, whether or not a fact is dynamic is indexed to a temporal scale. More carefully: Some property p of an event e is dynamic at time t1 just in case e's having p at t1 depends upon the occurrence of another event e* at a later time, t2.
Once e* occurs at t2, e's having p at t1 becomes static.
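Stated schematically in LaTeX notation (our gloss on the prose definition above; the symbols follow the text, but the formalization itself is our own assumption):

    \mathrm{Dynamic}(p, e, t_1) \iff \exists e^{*}\, \exists t_2 \, \big( t_2 > t_1 \;\wedge\; [\, e \text{ has } p \text{ at } t_1 \,] \text{ depends on } [\, e^{*} \text{ occurs at } t_2 \,] \big)

    \mathrm{Occurs}(e^{*}, t_2) \implies \mathrm{Static}(p, e, t_1) \ \text{at every } t \geq t_2

Read this way, dynamism is explicitly indexed to the time of evaluation, as the indexing to a temporal scale requires.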
Note that our account refers to properties, not facts, but as we take facts to refer to properties, this is an innocuous difference.35 Compare the populations' being isolated at t1 and the isolation's being a speciation event at the same time. If we take our temporal index to be t1, then the population's isolation is a static property, as the temporal processes of isolation have occurred. Once we take the temporal index to include t2, once the speciation event has occurred, then the speciation becomes static. But this doesn't mean that the speciation at t1 is detachable from the two species' existence at t2. The defining feature is that the event was not the event that it was (that is, possessing the properties it does) except in light of the later event. Dynamic properties can be expressed using narrative sentences, but there is no challenge for realism here. We can, then, identify two kinds of static properties. The first concerns properties detectable at a time-slice. These might include, say, the genotypes and phenotypes present in the populations at some time, the number of individuals in a population, their locations, and so on. The second concerns properties which were dynamic at some earlier time, but have since become fixed as the temporally spread processes have completed. The former we might call time-slice properties, the latter completed-process properties. Whether a property is of the time-slice or completed-process variety may turn on certain sensitivities
to description (e.g., explanatory interests). But, according to moderate realism, they are no less properties of the world for that. "Population number" may be a time-slice property relative to one description and a completed-process property relative to another, but even so, propositions about these properties are truth-apt and mind-independent. Concerning dynamic events, note that there are many - potentially infinitely many - new properties that may arise as time passes. This is because events accumulate downstream causal influences.36 Say that our population-splitting event is the seed for a great radiation: the two new species themselves speciate, spreading into many new forms and niches. By a later time, t3, the progeny of the original population dominate a variety of ecosystems. Now we might say that the event at t1 was both a speciation event and the beginning of a macroevolutionary radiation. At t2 the event at t1 gains the property of being a speciation event; at t3 it gains the property of being the beginning of a radiation. Further, we might pick out different aspects of the event. Say that the success of the radiation is due to some novel trait. Now more and more completed-process properties are added to the event at t1.
35 Further, we make no particular commitments concerning any particular ontology to underwrite our characterization of dynamic facts. That past facts are in some sense dynamic we take to be a broadly empirical claim; one need only examine some case studies in order to see that this is the case. This can be made compatible with an ontology that takes processes as fundamental, entities as fundamental, events as fundamental, etc. This is an interesting question in its own right (one worthy of further exploration), but we do not attempt to answer it here.
So, we can understand the speciation case in terms of temporally dynamic properties becoming stable. At t1, the population-splitting event is not a speciation (although it could turn out to be such). But by t2, when the speciation has occurred, the event at t1 becomes a speciation event (that is, the event takes on the property of being a speciation). Assuming the process is complete, at this point the speciation becomes a completed-process event. So, as time goes by, new facts are added to past events as they gain new properties.
A crucial aspect of temporal dynamism is the openness of past events to acquiring new properties. Although many processes are complete, it doesn't follow that the event itself has therefore ceased, or will cease, to acquire new properties as new events occur. We've already seen this in our biological example. Although (say) at t2 the population isolation at t1 has gained the property of being a speciation event, it is not therefore closed: at t3 it gains yet another property, being the basis of a macroevolutionary radiation. So, on our account the past isn't simply dynamic; it is in principle open-ended. As we'll see, this open-endedness comes to the fore especially in light of the reflexivity of human investigation of the past. 36 C.E. Cleland, "Methodological and epistemic differences between historical science and experimental science", Philosophy of Science, 69 (3) (2002), 447-451.
Dynamic Historical Realism
We've thus far seen that Roth's arguments do not conflict with moderate historical realism, which happily makes perspectival or ecumenicist claims about history - there is no one best, maximally unified account of history - but nonetheless insists that historical events are discovered; historians do not bring them into being. What we've not yet seen is a positive argument for taking on such a view. Our aim in this section is to provide exactly that. To begin, let's reconsider Roth's argument. Roth argues that historical events are the events they are in virtue of historians categorizing them into narrative explanations. On this view, past events gain new properties (or facts) or, more strongly, new events come into existence, via the historians' actions. As such, historical facts (events, etc.) are metaphysically dependent on historians. But in the last section, we saw biological examples wherein past events gained new properties without the biologist as an intermediary. Events like speciation are temporally dynamic: the event's being the event it is depends upon later events. Such events, then, transition from dynamic to static as the temporally extended processes they depend upon run to completion. This at the very least suggests that appeal to the powers of historians can only partially explain the temporal dynamism of historical facts.
On our picture of dynamic properties, there is a dependency relation between the dynamic property and the outcome of a temporally extended process. Until the process has completed, whether the dynamic property holds is undecided. How might Roth's historian intermediaries fit here? Roth must either replace the dependency between dynamic property and temporally extended process with one between the dynamic property and the historian, or make the dynamic property rely on two things: the process and the historian. The first cannot make sense of historical practice; the second adds a conceptually unnecessary extra ingredient.
We'll first make explicit how our account of dynamic past properties underwrites realism, and then turn to a historical case study to defend it.
4.1 How to Be a Realist
What is required to be a realist about historical events? If our contrast is Roth's irrealism, then the realist must claim that history's events exist independently of historians. If you want, we might say that history comes 'pre-carved'. But this is a highly misleading metaphor, as the realist need not say that there is a single, unified, privileged carving. Instead, the 'carving' will be multi-faceted, sensitive to description and potentially open-ended. There's no need, for the moderate realist, to appeal to any special sense of 'carving'. Historical processes lead to a patterned history, with some patterns highly contingent and others more robust, and this patterning and patchiness are the targets of historical discovery.
A crucial aspect, and a point of agreement between ourselves and Roth, is how past events afford, or are amenable to, a multitude of characterizations and complex interrelations with other facts.37 Let's go back to our toy biological example. We discussed a population becoming isolated at t1, becoming a speciation at t2, and a macroevolutionary radiation at t3, and how the temporal relations holding between them determine what kinds of events they are. Now let's imagine that this radiation was shortly followed by an externally-caused mass-extinction event at t4 (an asteroid impact, say). From one perspective, that of the mass extinction, the events from t1 to t3 may be insignificant: they made no difference (or at least very little) to how that mass-extinction played out. But from another perspective (that of the radiation, say) the events from t1 and t2 are critically significant. There is an apparent contradiction here: t1 is both significant and not significant. But as we've seen this is innocuous: significance is relative to description, indexed to a perspective. Whether it is true that t1 mattered from the index of the radiation turns on what actually happened: whether the seed of the radiation did in fact turn on the population's being isolated. And whether it is true that t1 didn't matter from the perspective of the mass extinction turns on what actually happened. On the realist take, then, sensitivity to description just marks off the way different kinds of facts are connected to each other relative to particular questions and the like. The connections (or carvings) already exist, but different ones will be selected given a different set of questions and explanatory concerns.
There is, then, a set of realist positions fully compatible with a dynamic past. All the realist needs to concede is that some events being the events they are depends upon later events, and the in-principle open-endedness of past events. The historian's narrative, then, typically aims to pick out a static, process-completed event: that, say, Reagan in fact did win the 1980 US election. In this sense, speaking of narratives as 'describing' past properties and events rather than creating them is happily consistent with those events being dynamic.
37 For discussion of the relationship between complexity and pluralism in historical explanation see: Førland, "The ideal explanatory text in history: A plea for ecumenism"; Currie, "Narratives, mechanisms and progress in historical science"; A. Currie, "Simplicity, one-shot hypotheses and paleobiological explanation", History and Philosophy of the Life Sciences, 41(1) (2019), 10; Currie and Walsh, "Frameworks for historians and philosophers"; K. Sterelny, "Explanatory pluralism in evolutionary biology", Biology and Philosophy, 11(2) (1996), 193-214; T.A. Grantham, "Explanatory pluralism in paleobiology", Philosophy of Science, 66 (1999), S223-S236.
Further, when the historian generates a new narrative, or new categorization, they do not thereby 'create' a new event or fact. They rather describe (or at least attempt to describe) a static, process-completed event. The truth-conditions of the historian's analysis are rooted in what actually occurred in the past. Let's situate this view. We can pick out flavors of historical realism and anti-realism by considering what factors constrain historical narratives. Naïve realists (monists) hold that there are a set of static facts, and of these there is a privileged set more or less independent of explanatory interests. A more minimal kind of realism (we think this a plausible reading of Danto) claims there are a set of past facts which constrain narratives (a chronology), but, in constructing narratives, historians make significance attributions to various events in that chronology.38 On this view, the events picked out must have occurred, but there are no empirical constraints on significance-attributions. Roth's antirealism goes further, denying that there is a chronology to be had in the first place.
On our account, chronologies are multi-faceted and perspectival, but the admissibility of an historical narrative is partially constrained by those chronologies (by what actually happened). Thus, it is a form of realism. However, it is stronger than Danto's because we also think there are constraints on significance attributions. That is, there is often a fact of the matter to be had about whether some past event is significant, given some explanatory interest. To see the importance of these constraints on significance properties, we'll turn to an example of a substantive historical dispute: barter economies.
4.2 Substantive Historical Disputes & Barter Economies
One of realism's critical advantages is that it can make good sense of the claim that historians have substantive disputes about what occurred in the past. Irrealism, however, cannot.
Irrealism situates historical truth in the models and frameworks historians employ. The inference, which we've claimed is a non-sequitur, is to then argue that the events and truths metaphysically depend on (are 'created by') those models and frameworks. But then historical disputes appear to involve warring created facts. Historical disputes, as Roth describes them, turn on non-empirical factors concerning frameworks (their fruitfulness, say).39 However, there are many historical disputes that are substantive, and explicitly involve denying that some 'created facts' are facts at all. Some of these are mundane: disagreeing about temporally static properties such as the date someone died, the actual population at a time, and so forth. But others concern what were once temporally dynamic properties. Here, historians argue about whether some categorization of an event gains purchase: a rival narrative's characterization of the event turns out to be false or inapt. Diagnosing false characterization is a deeply empirical activity and, we think, requires a realist treatment.
38 See Gallie's discussion of narrative "turning points" for some helpful background. Turning points within a narrative are, for him, something like the "crucial moments" that give a developmental narrative its distinctive shape (Gallie).
In substantive historical disputes the rubber hits the road: the aptness of competing narratives turns on what actually happened in the past. This, we argue, necessitates realism, because the independence of the past from historians is required for the rubber to hit the road. To see this, we'll examine a case study in which conceptual and empirical aspects are related in deep and interesting ways. As we'll see, the debate is interwoven, which is to say that empirical, conceptual and interpretive issues are brought into iterative contact. But it is also substantive, which is to say the debate critically turns on new empirical and material discoveries about the actual past. This latter feature underwrites an argument for realism.
In a barter economy I exchange something I have and do not need or necessarily want (but you do) for something you have and do not necessarily want (but I do). Let's say Angela has finally finished working her way through her copy of Hilberg's The Destruction of the European Jews, and is interested in reading some relevant philosophy of history. Jang-Mi has just completed Roth's The Philosophical Structure of Historical Explanation (less daunting length-wise than Hilberg!) and Roth's discussion has piqued her curiosity about Hilberg's book. Happily, Angela has something that Jang-Mi wants, and Jang-Mi has something that Angela wants. The two can now negotiate and agree on an exchange.
The concept of a barter economy owes its popularity largely to Adam Smith, who used it in The Wealth of Nations much as Hobbes used his 'state of nature' (although the concept's history is long, looming large in discussions of political justice in Plato and Aristotle). For Hobbes, the pre-state condition of the human world is marked (derogatorily) by general anarchy; a struggle of all against all. Contractual obligations, and eventually the state, emerge necessarily as a response to this chaos, which we are rationally obligated to avoid.40 Similarly, for Smith, in some prior state humans exchanged via barter and near-inevitably developed money.41 The crux of the problem with barter is that it requires what economists call a double-coincidence of wants. In order for the barter to be successful, I must want what you have, and you must want what I have. Jang-Mi might reflect on the almost 1400 pages of Hilberg's book and decide she doesn't want it after all; without the coincidence of wants, the barter collapses and Angela (assuming she has nothing else Jang-Mi wants) will have to look elsewhere for Roth's tome. That is, unless she offers to pay Jang-Mi. As Smith puts it:

But when barter ceases, and money has become the common instrument of commerce, every particular commodity is more frequently exchanged for money than for any other commodity. The butcher seldom carries his beef or his mutton to the baker or the brewer, in order to exchange them for bread or for beer; but he carries them to the market, where he exchanges them for money, and afterwards exchanges that money for bread and for beer.42

Smith's model is neat: because exchange in a barter economy is fundamentally limited by people wanting each other's stuff, the invention of money opens up economies by endowing a set of tokens with general exchange-value. So long as someone wants something, they'll likely be willing to sell their surplus goods for cash. Later economists, most prominently Robert W. Clower,43 employ transaction costs to explain the emergence of money from barter economies. Finding folks who have what you want and who want what you have can be tricky, so people will begin to gather together for such purposes, and whatever the most commonly exchanged item is will, he argues, inevitably become monetized. We can, then, identify two ways of understanding barter. First, barter as a model incorporating transaction costs and the double-coincidence of wants. Second, barter as a historical claim that economies develop from barter to monetized systems.
39 Or, if there are empirical disputes for Roth, they are about empirical facts that are entirely "theory laden" (see Roth, The Philosophical Structure of Historical Explanation, chapter 3). But this doesn't help Roth, since disputes over empirical facts will have to involve appeals to the theories (or models) which constitute the empirical facts. Thus it would appear such disputes still lack the kind of substantiveness we claim to be present in such debates.
Barter as a model of exchange dynamics still looms large in economics, and is apparently taken seriously as a real historical claim in popularizing and pedagogical contexts (economics textbooks, for instance).44 It also still turns up in serious economic work:

Modern monetary theory shows that monetary exchange takes place if individuals are sufficiently specialized in consumption and production in the sense that they frequently end up in situations where the double coincidence of wants does not hold, and the good serving as money does not lose its value too quickly over time.

Anthropologists and historians, however, have challenged the historical claim: ethnographic studies find little evidence of economies built on pure barter preceding money, and conceptual work, notably Humphrey's, recasts barter in terms of negotiation or bargaining in exchange. This latter notion opens the door to a 'barter' system whereby even cash can be an object of barter. "By definition, barter is a complementary exchange in which each participant bargains until he or she is satisfied."54 These claims about the nature of barter economies were popularized in David Graeber's Debt: The First 5,000 Years (chapter 2). There, Graeber draws on the kinds of historical and anthropological evidence we've discussed in an attempt to overturn the Smith-based model.
The economist Julio Huato's response to Graeber is illuminating:

… if it is plausible to argue that barter imposed large opportunity costs on transacting parties (by requiring from them an improbable "double coincidence of wants"), then barter cannot be expected to have existed as a regular, stable, or dominant social practice in any well-defined historical period - and, therefore, to be readily observable in the historical record.55

Huato is pointing out that, according to Smith-like models of barter economies, we should expect them to be fleeting, and thus invisible in the historical record. It might be tempting to read this as an attempt to accommodate the anthropological and historical data into the pre-existing account, but this is too quick. Rather, the argument demonstrates that Smith-like models are still helpful in discussing a reality of market dynamics far more complex than the barter-then-money picture has it. In short, folks like Huato retain the barter model (perhaps heuristically), while denying the veridicality of Smith's historical claim. Let's highlight some important features of the debate, before demonstrating how Roth's account cannot accommodate it and moderate realists can. First, the debates are interwoven. That is, historical inferences, conceptual machinery and explanatory models are heavily interlinked. The argument that barter economies are not an early stage of the development of economies relied upon (1) empirical observations, such as the lack of barter in ethnographic studies; (2) conceptual developments, such as Humphrey's negotiation-based conception of barter; and (3) explanatory models, such as Huato's suggestion that Smith-models can be put to work in explaining the transient nature of barter economies. Roth's account is well-placed to accommodate these interwoven aspects. However, the debates are also substantive, which is to say they turn crucially on new empirical discoveries: on confronting the conceptual and theoretical with the historical and ethnographic records. Although the conceptual and the empirical are interwoven, the rubber hits the road with the empirical record. Responses like Huato's do not involve sticking to theoretical guns, but rather showing how that conceptual machinery can nonetheless retain utility given the new empirical information. Humphrey's conceptual innovations are due to her interacting carefully with ethnographic information. It is this substantiveness that irrealism cannot accommodate. Roth argues that debates between narratives are primarily theory-driven, not evidence-driven:

[testing hypotheses] will primarily be a function of assessing competing explanations, and so draw on evaluative criteria more akin to theory appraisal than to hypothesis confirmation.56

Or, more strongly:

The significance of 'the empirical' disappears on the assumption that theories either determine what counts as experience or explain away any apparently discordant evidence. What comes to be termed 'empirical' can readily become instead an artifact of theorizing. The empirical so understood then ceases to have a determinate function in the assessment of theories under consideration.57

It is true that the empirical and theoretical interweave, but it does not follow from this that the empirical disappears. But this isn't to commit to a naive empiricism, either. A steady diet of Kuhn over the last several decades has disabused all parties of the notion that theoretical and empirical entities can be plausibly treated as independent of one another.
But the examples above, we argue, show that the relation of determination can't flow strictly from the theoretical to the empirical; the recalcitrance of the empirical data is what invites so much reworking of the same problems, so the empirical facts seem to be in the driver's seat, even if there is partial determination flowing in both directions at once. The empirical data appears to be the stronger force vector, in some cases at least. On Roth's account, why or how this should be so is unclear: if historians are merely comparing different theoretical frameworks in terms of "how they focus and shape subsequent inquiry and debate" (81), then the role of empirical data is left mysterious. If irrealism is true, then we're not sure why historians should hunt down original texts, attend to the material and ethnographic record, and generally have concern for the veridicality of their claims; after all, on the irrealist picture, their debates are not substantive, merely turning on theoretical virtues. Moderate realism makes sense of substantive historical disputes. They do not merely turn on theoretical preferences, but on our evidence for what occurred in the past. There is a fact of the matter about whether past economies were originally barter economies, and whether such economies formed a basis for monetary economies. No doubt determining this is tricky and involves conceptual innovation, but this in itself is no reason to deny the existence of those past facts prior to historians' discussing them. And no doubt the past is complex: it may be that the barter economy model applies better in some instances than others, but determining this itself partly turns on the empirical facts historical work uncovers. Empirical data turns out to do what nothing seems able to do on Roth's account of narrative evaluation: dislodge some historical narratives in favor of others. It is only from some form of a minimally realist perspective that this is sensible. Realism provides the necessary anchor for explaining the substantiveness of historical debate.
56 Roth, The Philosophical Structure of Historical Explanation, 66.
57 Ibid., 127.
4.3 Historian-Created Facts
An advantage of Roth's approach is its emphasis on the reflexivity of historians' practices: history is often not built directly from interaction with inferred past facts or primary texts, but from interaction with the work of other historians. If past facts are constituted by the actions of historians, this is unsurprising. However, realists can also make sense of this reflexivity; in fact, we'll argue, they do so better than irrealists.
In our example involving barter economies, it seems relatively clear that the earlier categorization of certain kinds of economic activities as "bartering" (as in the case of Smith) opened up a set of interesting conceptions for future historians and social scientists. The conceptual refinements that served to clarify our picture of early economic activity were, in some sense, made possible by the extended dialectical process in which historians are engaged. Humphrey could only turn the classical model of "barter first, then currency" on its head because the introduction of the initial model itself opens up a dialectical space where particular questions can be asked, concepts refined, and, consequently, new facts introduced (or created).
How might the iterativity and reflexivity of historical practice threaten realism? Roth (2020, chapter 3) argues (roughly following Hacking58) that there is no stable way to sort human actors into natural kinds (especially regarding things like social behaviors and categories). The reason for this instability of kinds, from a synchronic perspective, is relatively clear. As we've already said, the behavior of human agents can turn out to be sensitive to the way that such behavior is categorized, and this can be a source of kind instability.59 It's far less clear how this kind instability is epistemically significant from a historical (or diachronic) perspective. While past human actors can't turn out to be sensitive to description in anything like Hacking's terms (economic actors in the depths of human history can't be sensitive to being described as "barterers," mainly because they're dead), Roth does think there is a kind of instability of kinds at play in the characterization of past actors, in the sense that descriptions we now give can't be true of past actors prior to our descriptions, since they could not have conceived of themselves as we now categorize them. Roth's position, then, involves a kind of Kuhnian incommensurability of historical kinds. Roth understands this as a major threat to realism: because past actors would not understand themselves in terms of our categories, it follows that statements about past actors are not true or false, but only true or false relative to a historico-conceptual model, which we are not entitled to think of as even an approximately true description of the past.
The first thing to say in response is that there are many plausible counterexamples to the thesis of historical incommensurability. Consider a rather armchair counterexample: let's say we were, through some technological marvel, able to resurrect some early human, and find some means of communicating information to them concerning the work of historians and archaeologists on primitive economic behavior. While we would certainly be missing quite a bit of information on cultural context and economic milieu, it seems entirely plausible that such a person would be able to discriminate between explanatory models that possess more descriptive adequacy and those having considerably less. This wouldn't involve anything much different from, say, unpacking the concepts philosophers use when talking about theories of knowledge with first-year undergraduates. At first the concepts seem foreign, as if in another language. But, through careful dissection, a clearer picture emerges over time, and we can then position ourselves to have substantive discussion and disagreement over which ones have epistemic purchase. We see no reason why this should not also be true in hypothetical cases involving past actors and imagined conversations with them.
59 Although for an interesting development of Hacking see: J. Laimann, "Capricious kinds", The British Journal for the Philosophy of Science, 71(3) (2020), 1043-1068.
More pressing, though, are cases where historians do seem to create facts. Historians' traditions are often built upon falsehoods. Is it not true that the concept of the barter economy is crucial for understanding the history of economics (and how modern economies have been shaped) even though it was based on a falsehood? The realist responds: yes, it is. But the relevant historical event here is not the existence or otherwise of barter economies, but the previous claims and interpretations of folks like Smith. Similarly, but more subtly, historical disagreement often turns on the inaptness of previous models: although they get many of the facts right, they problematically over-emphasize some factors over others, that is, they get the significance wrong. To see this play out in realist terms, let's dip our toes into a final case, taken from the history of philosophy.
The Rationalist-Empiricist Distinction, although still extremely common as a framing device in pedagogical contexts,60 is increasingly either abandoned or significantly complexified by historians of philosophy.61 In a simplistic form, the distinction frames the early modern period as characterized by a canon of works and figures: Hume, Locke and Berkeley on the Empiricist side, and Descartes, Spinoza and Leibniz on the Rationalist side. Further, disputes are understood as centered on the foundation of knowledge: whether it depends on experience or not. Historians of philosophy have pointed out that the actors at the time would not have recognized the dispute along those lines; in fact, the narrative and the canon were constructed by Kant's students in order to emphasize Kant's importance in resolving the dispute, a narrative that was only really codified and accepted in the late 19th Century.62 How might a realist characterize this dispute?
The realist can happily say that, for instance, Descartes' Meditations is significant because it forms part of the Rationalist canon. This is a narrative sentence: later philosophers' construction of the canon, and its becoming solidified through the 20th Century, completed a dynamic process, adding a new property to that work. Kant's followers, indeed, created a new fact. However, there is nothing metaphysically mysterious about this. Just as the speciation event added a new fact, so did the Kantian interpretation of the Early Modern Period: historians, after all, are part of unfolding causal history just as much as species are. But now historians challenge this orthodoxy (as Humphrey and others did for barter). Aspects of their arguments are non-empirical: the canon reinforces particular conceptions of philosophy that ought to be challenged (that, for instance, epistemology should start with questions of knowledge's foundations). But much of the argument turns on interpretations of the original texts and their historical context: it turns on facts of the matter about Early Modern debates. That there are such facts to be had, independently of historical practice, makes sense of this reflexivity on the part of historians.
We can say that understanding 20th Century philosophy requires understanding the significance of, say, Meditations. But it might be that 21st Century philosophy takes a different turn, emphasizing different aspects of philosophy's history as the canon fragments, expands and is perhaps abandoned. And those events might well take on new properties (become significant in new ways) as past works and figures take on new significance: history's dynamism is in principle open-ended. But understanding these processes requires seeing that past facts are not only multitudinous, complex and sensitive to description, but also independent of us in a critical sense. The fact that Meditations is significant for 20th Century philosophy is indifferent to what 21st Century philosophers think; that a speciation has occurred does not depend on biologists knowing about it; that barter economies did not predate monetary economies is independent of economic historians; the fact that the 40th President of the United States hosted television shows cares not a whit about how future historians characterize the event.
Only a minimally realist take on history, then, can accommodate the substantive and reflexive nature of historical practice.
"Philosophy",
"History"
] |
A New Approach for Satellite-Based Probabilistic Solar Forecasting with Cloud Motion Vectors
: Probabilistic solar forecasting is an issue of growing relevance for the integration of photovoltaic (PV) energy. However, for short-term applications, estimating the forecast uncertainty is challenging and usually delegated to statistical models. To address this limitation, the present work proposes an approach which combines physical and statistical foundations and leverages the satellite-derived clear-sky index (k_c) and cloud motion vectors (CMV), both traditionally used for deterministic forecasting. The forecast uncertainty is estimated by using the CMV in a different way than standard CMV-based forecasting approaches and by implementing an ensemble approach based on a Gaussian noise-adding step applied to both the k_c and the CMV estimations. Using 15-min average ground-measured Global Horizontal Irradiance (GHI) data for two locations in France as reference, the proposed model is shown to largely surpass the baseline probabilistic forecast Complete-History Persistence Ensemble (CH-PeEn), reducing the Continuous Ranked Probability Score (CRPS) by between 37% and 62%, depending on the forecast horizon. Results also show that this is mainly driven by improving the model's sharpness, which was measured using the Prediction Interval Normalized Average Width (PINAW) metric.
Introduction
The effect of the variability of renewable power production increases with the growing penetration of photovoltaic (PV) generation. This raises challenges for power system operators, increasing their operational costs [1]. As a result, PV power generation forecasting has become an active field of research to mitigate the effects of this variability [2].
The gains from integrating deterministic solar forecasts in different contexts have been shown [3][4][5][6]. Nonetheless, generating information on the uncertainty associated with forecasts has been acknowledged to increase the value of forecasting itself [7,8]. Examples of this include electricity trading on power markets [9], resource estimation for the financing of new PV power plants [10], smart grids or microgrids operation [11], or unit commitment for the balancing of power grids [12]. The relevance of this topic has also motivated the publication of several benchmark [13][14][15] and verification works [16].
However, characterizing this uncertainty is a difficult task. The variability of PV production is strongly related to that of surface solar irradiance (SSI), which is present at different time and space scales and caused by complex underlying physical phenomena [17]. Short-term variability, i.e., a few minutes to a few hours, is mainly driven by the formation and motion of clouds, which are among the most challenging issues in meteorological forecasting. Thus, most research efforts on solar forecasting in this time range address this source of variability [18]. The sources of observation used to forecast irradiance are largely dependent on the considered forecast horizon. Review works commonly highlight how Numerical Weather Predictions (NWP) are used for hours- to days-ahead forecasts, satellite imagery for a few minutes to a few hours ahead, and ground measurements and sky imaging for minutes to one hour ahead [18][19][20]. Probabilistic forecasting of both PV power and irradiance mainly began with hours- and day-ahead forecasts, but an increasing share of research now focuses on minutes- to hours-ahead probabilistic forecasting [21].
However, short-term probabilistic solar forecasts present some challenges. The methods that are the most commonly used in this range of time horizon rely on historical data, but ground-based measurements and instantaneous maps of irradiance derived from satellite images or all-sky pictures cannot grasp the uncertainty beyond the tendency observed in the latest records. In such cases, the uncertainty estimation is delegated to statistical data-driven models, generally by learning through the model a relationship between the current observed tendency and existing patterns observed in similar conditions in the past.
More precisely, various approaches have been proposed to relate the uncertainty of an observed situation to those of similar situations observed in the past. On one hand, non-parametric approaches either model the forecast distribution by a non-parametric bootstrap of errors conditional on the Sun position [22] or rely on experimentally observed correlations between the derivative of the SSI and the forecast errors [23]. Other works explore statistical methods with in-situ records. For example, in Reference [24], the authors used quantile regression solved with an Extreme Learning Machine (ELM) with past measurements of PV power, wind speed, and temperature. Other authors address autoregressive models [25,26], only considering past values of the forecasted variable. More recently, new methods have appeared, such as Markov-chain mixture distribution models [27]. Another approach consists of leveraging spatially distributed solar time series to better grasp weather dynamics. In Reference [28], the authors explore, for an hourly time scale, a small but significant cross-correlation between the irradiance forecast errors of three different sites. Integrating past PV measurements from neighboring PV systems has also been shown to improve forecasting performance, by using them as input for a quantile regression with an L1 regularization [29] or a gradient boosting model [30]. Another possible source of such data is satellite imagery, with various authors directly ingesting satellite-derived irradiance values into a statistical model. In Reference [31], the authors used satellite-derived irradiance as one of the inputs for an Artificial Neural Network (ANN). Similarly, in Reference [32], consecutive satellite images were used as input of an ANN. In Reference [33], the authors used satellite images as a conditioning feature of k-Nearest-Neighbor searches to provide probabilistic forecasts at a single location. Quantile regression combining ground-based measurements, a 10-min running variability estimation based on standard deviation, and satellite-derived albedo was used in Reference [34]. On the other hand, in Reference [35], a Gaussian Process model is used with a reduced version of consecutive satellite images to forecast PV power, so as to lower the dimensionality of the problem. However, none of these approaches leverage the cloud motion vectors (CMV), which can be extracted from a sequence of images [36,37]. CMV is the foundation for advective forecasting methods [38,39], where motion vectors are derived for all pixels of an image, and a forecast is produced by extrapolating the latest image accordingly. However, the results of such methods are often deterministic, making it difficult to derive the uncertainty associated with the forecast. The aim of this work is to propose a CMV-based probabilistic forecasting method of global horizontal irradiance (GHI). This is achieved by adding Gaussian noise to the CMVs. This noise is meant to account for the uncertainty in the estimation of the norm and direction of the cloud motion vectors.
Proposed CMV-Based Probabilistic Approach
In this section, we present the method we developed to obtain probabilistic forecasts of Global Horizontal Irradiance (GHI) at a given location. This method consists of four steps:
1. calculation of the clear-sky index k_c for each pixel of a given satellite sub-image, centered at the location of interest with a radius of typically 50 km to 100 km; in this paper, a radius of roughly 50 km is used;
2. estimation of the Cloud Motion Vectors (CMV) using an Optical Flow (OF) approach;
3. identification of the pixels that are likely to advect to the location of interest for different time horizons (named hereinafter upcoming or converging k_c pixels or values); and
4. building of the Probability Distribution Function (PDF) of the upcoming k_c based on the candidate pixels.
Calculation of the Clear-Sky Index k_c
The first step is to estimate the clear-sky index k_c from images of upwelling radiance acquired every 15 min by the SEVIRI sensor on the geostationary satellite Meteosat Second Generation at longitude 0°. The clear-sky index is defined as:

k_c = GHI / GHI_cls,

where GHI_cls is the GHI under clear-sky conditions. Thus, k_c varies between 1 when the sky is perfectly clear and values close to 0 in overcast conditions. To assess this clear-sky index from the satellite images, we use the Heliosat-2 method [40].
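For concreteness, a minimal sketch of this computation (the array inputs and the clipping bounds are our own illustrative choices; the paper derives both the GHI and clear-sky GHI maps with Heliosat-2):

```python
import numpy as np

def clear_sky_index(ghi, ghi_cls, eps=1e-6):
    """Compute k_c = GHI / GHI_cls on a per-pixel basis.

    ghi, ghi_cls: 2D arrays (satellite-derived GHI map, clear-sky GHI map).
    """
    kc = ghi / np.maximum(ghi_cls, eps)   # guard against division by zero at low sun
    return np.clip(kc, 0.0, 1.2)          # tolerate slight over-irradiance at cloud edges
```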
Identification of the Cloud Motion Vectors
To obtain the motion vectors of the pixels of the transformed satellite images, we used an OF technique. This technique refers to the apparent motion of patterns in a sequence of images that results from the individual motions of moving objects in the observed scene. This means that 3D movements are summarized as 2D flows that can be evaluated using image processing techniques. OF methods use two consecutive images to derive the motion vectors, and have been in use for a long time, with a wealth of related publications [41].
The basic assumption of the OF method is the brightness constancy assumption. Assuming that the picture is a field of intensity values for each pixel located with coordinates (x, y) at a time t, i.e., I(x, y, t), the brightness constancy states that:

I(x + dx, y + dy, t + dt) = I(x, y, t).

This assumption means that the intensity structures will remain the same between two consecutive images, which can be decomposed in two points: (i) there is no change in lighting between two images (thus, k_c is used instead of GHI, which can also be derived from the Heliosat-2 method but is more sensitive to the Sun's relative position); and (ii) no clouds are supposed to appear, disappear or transform between two consecutive images. Using a first-order Taylor expansion, this assumption writes:

(∂I/∂x) dx + (∂I/∂y) dy + (∂I/∂t) dt = 0.

The elements that we want to identify are the pixel velocities v_x = dx/dt and v_y = dy/dt. Rewriting the brightness constancy assumption gives:

(∂I/∂x) v_x + (∂I/∂y) v_y + ∂I/∂t = 0.

The x- and y-derivatives ∂I/∂x and ∂I/∂y can be estimated from the current picture, and the temporal derivative ∂I/∂t can be estimated from two consecutive pictures. There are then two unknowns, v_x and v_y, and one equation, so that one additional constraint is required to derive the motion vectors (ill-posed problem).
There are various methods used to obtain a well-posed system from which to eventually derive the motion vectors, notably the use of a regularization term [41]. In this paper, we pursued the work of Reference [42] and used an efficient method from Reference [43] which combines a Gaussian-pyramid-based coarse-to-fine approach with Iteratively Reweighted Least Squares (IRLS).
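As an accessible illustration of dense CMV estimation on two consecutive k_c maps, the sketch below uses OpenCV's Farneback optical flow as a stand-in for the coarse-to-fine IRLS method of Reference [43]; all parameter values here are assumptions, not the paper's settings:

```python
import numpy as np
import cv2  # OpenCV; Farneback flow is a stand-in for the IRLS method of Reference [43]

def cloud_motion_vectors(kc_prev, kc_curr):
    """Estimate a dense motion field (v_x, v_y) between two consecutive k_c maps."""
    # Farneback expects single-channel 8-bit images
    prev = (255 * np.clip(kc_prev, 0, 1)).astype(np.uint8)
    curr = (255 * np.clip(kc_curr, 0, 1)).astype(np.uint8)
    flow = cv2.calcOpticalFlowFarneback(
        prev, curr, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    # Units: pixels per frame (per 15 min); multiply by pixel size / 0.25 h for km/h
    vx, vy = flow[..., 0], flow[..., 1]
    return vx, vy
```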
In addition, many works have shown that smoothing is beneficial to CMV approaches using OF, as it reduces spatial uncertainty [44]. In typical spatial smoothing, all pixels in the neighborhood are used to smooth a forecast field. This results in a very smooth forecast that minimizes the risk of high error values. However, in our approach, we do not consider all pixels in the neighborhood but, rather, pixels selected based on the characteristics of the optical flow. A second difference is that, while the smoothing approach takes average values, our algorithm provides the probability distribution of the forecast value, which contains more information. If we took the ensemble mean, our approach would be very close to the well-known smoothing approach. From this perspective, our approach can be thought of as a generalization of the smoothing approach, one which contains more information, so we did not use standard spatial smoothing.
Identification of the Candidate Pixels
Once we have derived the clear-sky index k_c for each pixel, along with the motion vectors v_x and v_y, the next step is to identify the pixels that can potentially have an impact on the location of interest. To do so, we define a monitored perimeter around this location of interest. Then, we compute for each pixel whether the motion indicates that it will enter this perimeter, and at what time. It is important to note that, to perform this computation, we use an Eulerian approach where each pixel is assumed to have a constant motion vector and to move in a straight line. Many works using CMV approaches consider, rather, a Lagrangian approach, where each pixel moves in the motion field and, thus, follows a trajectory that can be different from a straight line. However, the Eulerian approach is simpler and more efficient to compute; since we are working on forecast horizons shorter than 1 h, no significant difference is to be expected between the two approaches.
For each forecast horizon, we can determine the pixels that can potentially have an impact at that horizon, and we consider that the k_c values identified for these pixels are plausible upcoming k_c values at the location of interest.
Standard CMV-based forecasting approaches generally proceed in a different way. Indeed, as described by the recent paper of Reference [38], a standard CMV-based forecasting approach uses CMV to "extrapolate the cloud index map" from the time when the forecast is initiated to the lead time. This extrapolation is performed with a simple 2D resampling, generally followed by the application of a smoothing kernel.
This "forward" extrapolation approach presents the advantage of providing forecasted cloud-index maps for each pixel at the same time. However, this approach does not handle CMV situations when different locations appears to "converge" for the same time of horizon to the monitored perimeter. Depending on the implementation of the resampling, this procedure will, at best, do an average on the converging k c values and more plausible a "random choice" of the k c value eventually selected.
The proposed approach is a "backward" one, i.e., on the contrary, centered to a given monitored perimeter of interest and is enabled to handle explicitly situations of convergence from different locations as superposition of possible states as an ensemble of plausible clear-sky indexes, from which will be derived empirical Cumulative Distribution Function (CDF).
Considering a cloud on a pixel with coordinates (x, y) and motion components v_x and v_y, we can estimate the distance at time t between the cloud and the location of interest at coordinates (0, 0) with:

d(t) = √((x + v_x t)² + (y + v_y t)²).

By equating the derivative of this distance with respect to time to 0, we can find the time at which this distance is smallest:

t_min = −(x v_x + y v_y) / (v_x² + v_y²).

Then, the distance between the cloud and the sensor at that time is:

d_min = d(t_min) = |x v_y − y v_x| / √(v_x² + v_y²).

So, the candidate pixels that can potentially impact the location of interest between future times t_1 and t_2 are identified with the condition:

d_min ≤ R_m and t_1 ≤ t_min ≤ t_2,

where R_m is the radius of the monitoring perimeter.
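A sketch of this candidate selection, assuming pixel coordinates in km relative to the location of interest and velocities in km/h (variable names are ours, and the selection condition follows the reconstruction above):

```python
import numpy as np

def candidate_pixels(x, y, vx, vy, t1, t2, r_m=1.0):
    """Select pixels whose straight-line (Eulerian) trajectory passes within
    r_m km of the site between future times t1 and t2 (in hours).

    x, y: pixel positions relative to the site (km); vx, vy: CMV (km/h).
    Returns a boolean mask and the closest-approach distance of each pixel.
    """
    speed2 = np.maximum(vx**2 + vy**2, 1e-9)          # avoid division by zero
    # Time of closest approach: t* = -(x vx + y vy) / (vx^2 + vy^2)
    t_star = -(x * vx + y * vy) / speed2
    # Distance at closest approach: |x vy - y vx| / sqrt(vx^2 + vy^2)
    d_min = np.abs(x * vy - y * vx) / np.sqrt(speed2)
    mask = (d_min <= r_m) & (t_star >= t1) & (t_star <= t2)
    return mask, d_min
```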
To improve the calibration properties of the probabilistic forecasts, we followed a Monte-Carlo procedure by adding a centered Gaussian noise to the norm and direction of the motion vectors. We consider that the noise realization is the same over each image. For a given image, an error on the motion vector direction and norm, dθ and dr, is drawn from a centered Gaussian distribution. Then, the new motion vectors can be estimated as:

v_x' = (r + dr) cos(θ + dθ), v_y' = (r + dr) sin(θ + dθ),

where r = √(v_x² + v_y²) is the norm and θ the direction of the original motion vector. Thus, to obtain the candidate pixels, we identify the pixels that satisfy the selection condition above not only for the motion vector map calculated with the OF method but also for an arbitrary number N_mc of maps obtained by drawing the noise of the motion vectors N_mc times.
This Monte-Carlo procedure improves the calibration properties of the model. Since we draw the error components from centered Gaussian distributions, i.e., dr ∼ N(0, σ_r) and dθ ∼ N(0, σ_θ), the application of the Monte-Carlo procedure requires the estimation of two additional parameters, σ_r and σ_θ.
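A minimal sketch of one Monte-Carlo drawing, using the norm/direction perturbation reconstructed above with the paper's parameter values as defaults:

```python
import numpy as np

def perturb_cmv(vx, vy, sigma_r=2.0, sigma_theta=np.pi / 12, rng=None):
    """Draw one Monte-Carlo realization of the CMV field: the same Gaussian
    perturbation of norm (km/h) and direction (rad) is applied to the whole map."""
    rng = np.random.default_rng() if rng is None else rng
    dr = rng.normal(0.0, sigma_r)          # single draw shared by all pixels
    dtheta = rng.normal(0.0, sigma_theta)
    r = np.hypot(vx, vy)                   # original norm
    theta = np.arctan2(vy, vx)             # original direction
    return (r + dr) * np.cos(theta + dtheta), (r + dr) * np.sin(theta + dtheta)
```

Candidate selection is then repeated on each of the N_mc perturbed fields, and the selected k_c values are pooled.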
We found that standard quadratic error measures, such as the Continuous Ranked Probability Score (CRPS; see definition in Section 3), tend to favor smooth forecasts that avoid large errors. Using an optimization loop with such criteria to obtain optimized parameters would then result in very large parameter values, so the noise added would be very important. In the end, forecasts would simply be the average of the satellite pictures, and all information about the cloud motions would be lost. A similar trade-off has been reported in deterministic settings when performance assessment relies on quadratic error measures [45]. Devising an adequate error measure is a research task on its own, and it is out of the scope of this work.
Thus, we adopted a trial-and-error approach to estimate the parameters, while keeping the physical mechanisms at play in consideration:
• a standard deviation of 2 km/h for the Gaussian noise added to the cloud speed (σ_r), which is roughly one order of magnitude lower than the typical wind speed in the lower atmosphere (a few dozen km/h);
• a standard deviation of π/12 radians for the Gaussian noise added to the cloud direction (σ_θ); significantly higher values were considered, so as to represent the difficulty in accurately estimating a CMV and the fact that it varies over time (which the Eulerian approach used here disregards), but larger values resulted in the loss of all information regarding trajectories (i.e., considerably worse sharpness values); and
• a monitoring radius of 1 km, intentionally lower than the 3 km satellite resolution; larger values increase the number of selected candidates (with the new ones being farther from the sensor location), which would then describe a variability that is less similar to that of a point sensor.
Regarding the monitoring perimeter, it is important to emphasize that these parameters consider not the original position of a given neighboring pixel but, instead, where it would be after considering a given CMV and time interval. Thus, it should be seen as giving some flexibility to the candidate selection process, as it is very unlikely that any advected pixel overlaps perfectly with the sensor. At the same time, it accounts for CMV uncertainty as a perimeter deems a range of vectors as suitable candidates (i.e., compliant with this selection criterion).
Additionally, we chose a single set of parameters which leads to good performance and reliability for the two test locations, so as to avoid overfitting and further validate the model (and its parameter choice). Although extremely high or low values result in bad quality forecasts, we found that, once reasonable values were found, the resulting forecasts were not very sensitive to these parameters. This explains why the same parameters could be used for two different locations with similar forecasting performance.
Building of the Empirical Distribution of the Clear-Sky Index k_c
Once we have obtained a set of candidate pixels, we consider that the corresponding k_c values are plausible for the location of interest at a given horizon. However, to improve the calibration of the forecasting system, we add a Heliosat-2 k_c estimation error component. To do so, estimation errors are collected at the location of interest based on the GHI measurements and the corresponding Heliosat-2 k_c estimations on a training set. The CDF of these errors is then estimated conditional on the Sun's elevation and the level of k_c. To do so, both the Sun's elevation and the k_c level were binned into 5 intervals of equal population (i.e., intervals defined by the 20th, 40th, 60th, 80th, and 100th quantiles). This results in 25 possible CDFs. When additional noise is drawn for the k_c at a given pixel, it is drawn from the appropriate CDF, depending on the k_c level of that pixel and the Sun's elevation at the time when the image was acquired. This constitutes a second loop of Monte-Carlo drawings.
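A sketch of how such conditional error samples might be assembled (the 5 × 5 equal-population binning follows the text; function and variable names are hypothetical):

```python
import numpy as np

def quintile_edges(values):
    """Bin edges at the 20/40/60/80th percentiles (5 equal-population bins)."""
    return np.quantile(values, [0.2, 0.4, 0.6, 0.8])

def conditional_error_groups(errors, elevation, kc, elev_edges, kc_edges):
    """Group Heliosat-2 k_c estimation errors into 25 (elevation, k_c) bins;
    each group acts as an empirical error CDF to sample noise from,
    e.g. rng.choice(groups[(a, b)])."""
    i = np.digitize(elevation, elev_edges)   # bin index 0..4
    j = np.digitize(kc, kc_edges)            # bin index 0..4
    return {(a, b): errors[(i == a) & (j == b)]
            for a in range(5) for b in range(5)}
```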
The k_c values of the candidate pixels, calibrated with an estimation error component, define a set of candidate clear-sky indexes k_c,candidates. We consider that the closer a cloud passes to the location of interest, the more plausible its k_c value is. Thus, we construct the CDF as the empirical CDF of the set k_c,candidates, each k_c being weighted by the inverse of d_candidates, the minimal (closest-approach) distance at which the cloud passes, as computed above.
In other words, the CDF is computed as:

F̂(k) = Σ_i w_i · 1{k_c,i ≤ k} / Σ_i w_i, with w_i = 1 / d_i.

This is the same as modeling the PDF as a sum of Dirac distributions with weights w_i:

f̂(k) = Σ_i w_i · δ(k − k_c,i) / Σ_i w_i.

Finally, we can obtain the CDF of the irradiance by multiplying the clear-sky indexes of the CDF by the clear-sky GHI, which was estimated using the McClear model, notably accounting for water vapor and aerosol effects as forecast by the Copernicus Atmosphere Monitoring Service (CAMS) [46]. For illustration purposes, Figure 1 presents a "snapshot" of the CMV-based probabilistic forecast for one of the considered test locations, described in more detail in a later section, for an arbitrary date (11 July 2016) at 12:00 UT and a 1-h ahead horizon. An overall flowchart summarizing all the components of the method is presented in Figure 2.
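A sketch of the inverse-distance-weighted empirical CDF and its quantiles, under the weighting just described (names are ours):

```python
import numpy as np

def weighted_quantiles(kc_candidates, d_candidates, probs):
    """Quantiles of the candidate k_c values, weighted by 1/d (closest pass)."""
    w = 1.0 / np.maximum(d_candidates, 1e-6)      # inverse-distance weights
    order = np.argsort(kc_candidates)
    kc_sorted, w_sorted = kc_candidates[order], w[order]
    cdf = np.cumsum(w_sorted) / np.sum(w_sorted)  # weighted empirical CDF
    return np.interp(probs, cdf, kc_sorted)

# e.g. a median and 90% central interval of the upcoming clear-sky index:
# q05, q50, q95 = weighted_quantiles(kc_cand, d_cand, [0.05, 0.5, 0.95])
# GHI quantiles then follow as q * ghi_clear_sky (McClear)
```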
Performance Assessment
The proposed probabilistic forecasting approach is evaluated in both a deterministic and a probabilistic setting. We use the following acronyms in the evaluation section:
• The proposed probabilistic CMV approach is noted Pr-CMV.
• We can also provide deterministic forecasts using the Pr-CMV, by taking the median of the forecast distribution as the deterministic forecast. This method of obtaining deterministic forecasts is noted Pr-CMV-Det.
We also used several baseline models to compare with our approach. These are:
• A standard CMV approach with no Monte-Carlo procedure, noted St-CMV. With this method, the motion vectors are propagated assuming an Eulerian approach to estimate the upcoming GHI map for the next time step.
• Two baseline models that make no use of satellite information: the Smart Persistence model, noted Persistence, for deterministic evaluation, and the Complete-History Persistence Ensemble, noted CH-PeEn, for probabilistic evaluation. These models are defined in Section 3.2.
The deterministic evaluation is essentially performed as a preliminary validation step. As the median motion vectors of the probabilistic approach are identical to a traditional CMV, using the Pr-CMV-Det approach is expected to perform similarly to the deterministic St-CMV.
Metrics
Three classical normalized indicators of performance were considered for the deterministic evaluation: the normalized bias (nBIAS), the normalized mean absolute error (nMAE), and the normalized root mean squared error (nRMSE). These are notably defined in Reference [47]; in this paper, the normalization is performed relative to the mean value of the observations over daylight periods (i.e., when the sun elevation is larger than 10°).
Evaluating the performance of probabilistic forecasts is considerably more complex than that of deterministic forecasts, as there are numerous desirable properties which can be contradictory to each other. In this paper, we follow the evaluation paradigm presented in Reference [48], focusing on the reliability and sharpness of a forecasting model.
Reliability is defined as the consistency between the forecast probability of a given event and its observed frequency. This is assessed using reliability diagrams, where the empirical frequencies of the quantiles obtained on the testing set are plotted against their nominal quantile levels. For example, for the quantile of level 25%, we quantify the frequency at which the ground measurement falls below the 25% level quantile and plot it against the value 0.25. For a perfectly reliable forecast, the measurement should fall below the forecast quantile of level α exactly α% of the time. Visually, the diagram of a perfectly reliable system corresponds to the identity line, whereas large deviations from it indicate a lack of calibration. Deviations from the diagonal line can also occur because of the finite size of the testing set. To address this, we use Reference [49] to draw intervals that show the range in which a perfectly calibrated system could be located due to the finite size of the evaluation set, with an α = 5% confidence level. This means that any deviation outside the interval allows us to reject the null hypothesis that the probabilistic CMV is perfectly calibrated, with a 5% significance level.
To numerically quantify the reliability property, we use the Mean Reliability Deviation (MRD), which is the mean of the absolute deviations between the diagonal line and the actual reliability diagram. For a set of N quantiles q_α of levels α_i and their actual frequencies α̂_i, the MRD is defined as:

MRD = (1/N) Σ_{i=1}^{N} |α̂_i − α_i|.

While the reliability property is highly desirable, it does not guarantee a model is of practical use. For example, a climatological model that forecasts each quantile of the upcoming distribution as the empirical quantile observed on a given training set is perfectly reliable, although it contains no predictive information besides the averages of the training set. Thus, it is important to assess a model's sharpness, as it quantifies the typical width of the forecast distribution. While a climatological system has a perfect reliability, it has a very low sharpness since it considers all observations from the training set equally, so that the forecast interval is large. On the other hand, a model whose forecasts are conditional on some predictive exogenous inputs is generally sharper.
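A minimal sketch of the MRD computation under these definitions (the array layout is our assumption):

```python
import numpy as np

def mean_reliability_deviation(y_obs, quantile_forecasts, levels):
    """MRD: mean absolute gap between nominal levels and observed frequencies.

    y_obs: (n_samples,) observations.
    quantile_forecasts: (n_samples, n_levels) forecast quantiles.
    levels: (n_levels,) nominal quantile levels, e.g. [0.1, ..., 0.9].
    """
    observed = (y_obs[:, None] <= quantile_forecasts).mean(axis=0)  # alpha-hat
    return np.mean(np.abs(observed - np.asarray(levels)))
```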
Sharpness is quantified using the Prediction Interval Normalized Average Width (PINAW), which is the average width of the interquantile interval for a given level of confidence. For a confidence level β, a number N_f of quantile forecasts q̂, and the GHI measurements y_i, the PINAW is defined as:

PINAW_β = (1 / (N_f · ȳ)) Σ_{i=1}^{N_f} (q̂_{(1+β)/2, i} − q̂_{(1−β)/2, i}),

where ȳ is the mean of the measurements. In our evaluation, we will also use the Mean Prediction Interval Normalized Average Width (MPINAW), which is the average PINAW over all confidence levels β available given the forecast quantiles:

MPINAW = (1/N_β) Σ_β PINAW_β.

Both concepts are often in tension: improving the reliability of a model can worsen its sharpness and vice-versa. However, forecasts can always be re-calibrated to have a good reliability, while sharpness is usually an inherent property of the forecasting model that cannot be modified. Thus, in Reference [48], it is stated that reliability should be a prerequisite for a model, while sharpness should be quantified after verifying that the model is reliable. Thus, we will first evaluate the reliability of the probabilistic CMV, and then assess its sharpness.
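A corresponding sketch for the PINAW of one central interval, normalized by the mean observed GHI as in the text:

```python
import numpy as np

def pinaw(y_obs, q_lo, q_hi):
    """PINAW for one central prediction interval.

    q_lo, q_hi: forecast quantiles of levels (1-beta)/2 and (1+beta)/2.
    """
    return np.mean(np.asarray(q_hi) - np.asarray(q_lo)) / np.mean(y_obs)
```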
Finally, the Continuous Ranked Probability Score (CRPS), a composite metric which considers both reliability and sharpness, is calculated. For a forecast CDF F and a verification value y, the CRPS is defined as:

CRPS(F, y) = ∫ (F(x) − 1{x ≥ y})² dx.

The CRPS is then derived for each individual forecast, and the average value is considered. In this work, the CRPS is presented in %, with respect to the mean value of GHI during the daylight periods, over the whole dataset of reference.
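For an ensemble forecast such as the Pr-CMV's, the CRPS can be estimated with the standard identity CRPS = E|X − y| − ½ E|X − X′|; a sketch (this estimator is our choice, not necessarily the paper's implementation):

```python
import numpy as np

def crps_ensemble(members, y):
    """Empirical CRPS of an ensemble forecast for a single observation y."""
    members = np.asarray(members, dtype=float)
    term1 = np.mean(np.abs(members - y))                                # E|X - y|
    term2 = 0.5 * np.mean(np.abs(members[:, None] - members[None, :]))  # 0.5 E|X - X'|
    return term1 - term2
```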
Baseline Models
As a deterministic baseline, the smart persistence (Persistence) model was considered, which can be defined as follows:

GHI_pers(t + h) = k_c(t) · GHI_cls(t + h),

i.e., the current clear-sky index is assumed to persist over the forecast horizon h. For probabilistic forecasting, the Complete-History Persistence Ensemble (CH-PeEn) [50] was considered. The CH-PeEn consists of building an ensemble of k_c values by taking all the k_c in the history that have the same forecast time, independently of the lead time (i.e., no dependency with respect to the time horizon). For example, to perform a forecast for a given day at 09:00 a.m., all the available k_c values measured at the same time of day are used to build the ensemble. The quantiles of the distribution are then computed as the empirical quantiles of the ensemble. Thus, the forecast distribution depends only on the time of the day.
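A sketch of the two baselines as defined here (the persistence equation follows the reconstruction above; names are ours):

```python
import numpy as np

def smart_persistence(kc_now, ghi_cls_future):
    """Smart persistence: hold the current clear-sky index constant."""
    return kc_now * ghi_cls_future

def ch_peen_quantiles(kc_history_same_tod, ghi_cls_future, probs):
    """CH-PeEn: empirical quantiles of all historical k_c observed at the same
    time of day, rescaled by the future clear-sky GHI."""
    return np.quantile(kc_history_same_tod, probs) * ghi_cls_future
```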
Data
To perform our probabilistic forecasts, we used data from two sites located in France. For each case, we have ground-based GHI measurements from pyranometers, as well as satellite-derived GHI maps over a radius larger than 50 km around the locations of the sensors. The characteristics of both sites and the period for which data are available are summarized in Table 1. The satellite-derived data are time-series of GHI maps with a native time step of 15 min and a spatial resolution of approximately 3 × 4.5 km. This satellite dataset has been extracted from the database HelioClim-3 version 5, based on the Heliosat-2 method applied to images from the SEVIRI sensor of Meteosat Second Generation (0° Service). This database is available online at http://www.soda-pro.com (last access: 23 June 2021) and is used both for research and industrial developments. It can be used for historical analysis, starting in February 2014 up to the current day, and is also available in near real-time with a time lag of less than a few minutes. Numerous validation and performance analyses of HelioClim-3 for different regions and climates have been performed [51][52][53][54] and demonstrate the good reliability of this database in capturing the spatial and temporal variability of the surface solar irradiance.
The ground-based measurements of GHI used as reference are first quality-checked at their native time resolutions: 1-min and 5-min time steps, respectively, for Carpentras and Signes. This quality check first uses an automatic procedure based on upper and lower limits for extremely rare intervals, as described by Reference [55], and then a visual inspection. These time-series of GHI are then averaged to 15 min to be consistent with the satellite data acquired every 15 min. Only data corresponding to a sun elevation angle larger than 10° are considered, both for ground measurements and satellite estimations.
Due to the native time resolution of the considered data, forecasts were issued at 15-min intervals, up to a 1-h horizon. However, it is important to note that, just as with deterministic CMV-based models, the Pr-CMV is very flexible: beyond the constraint set by the resolution of the data, it can be adapted to any choice of time horizons.
The CDFs of the Heliosat-2 estimation errors are obtained from a whole year of training data (2015), distinct from the whole year (2016) used for the performance analyses, so that the dataset is representative of the four seasons. The values of the different parameters required in the simulation are reported in Table 2.
Deterministic Performance
As discussed in Section 3, a preliminary validation of the proposed approach is performed by comparing its deterministic performance to that of the St-CMV model. All the indicators are computed and reported in Table 3, broken down by forecast horizon and class of day variability.
This classification is based on a simple threshold for the mean of the absolute k_c variations over a given day. To estimate the threshold, we produced a scatter plot of the mean k_c variation against the irradiation level. We found a threshold of 0.075, which appears to separate well the days when persistence is most efficient (either very cloudy or sunny days, i.e., low or high mean daily k_c) from the more variable days; a sketch of this classification is given below. Thus, days with a mean k_c variation below 0.075 were considered low-variability days. The results over all forecast horizons are summarized in Table 3. On average, persistence-based forecasts perform better for low-variability days and for horizons shorter than 30 min, while the CMV-based forecasts perform better for high-variability days and horizons longer than 30 min. This is in accordance with literature results that recommend the use of in-situ measurements for better forecast performance in the very short term, and of satellite imagery for longer horizon ranges.
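A one-function Python sketch of the day classification; treating the k_c variation as the mean absolute first difference over the day is an assumption consistent with the description above:

```python
import numpy as np

def classify_day(kc_day: np.ndarray, threshold: float = 0.075) -> str:
    """Label a day by the mean absolute step-to-step variation of the
    clear-sky index k_c, using the 0.075 threshold estimated above."""
    mean_variation = np.mean(np.abs(np.diff(kc_day)))
    return "low variability" if mean_variation < threshold else "high variability"
```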
All the proposed models have very good calibration properties, with low nBIAS values. However, the persistence model seems to have the best calibration, with an nBIAS always lower than 1%, whereas it can reach 2% for the CMV-based forecasts. There is also a trend whereby the persistence generally performs better in terms of nMAE, while the CMV has a lower spread and performs better in terms of RMSE.
Finally, the Pr-CMV-Det behaves as expected: the difference between the two satellite-based approaches is minimal, so there is no drawback to using the Pr-CMV in a deterministic setting, as its performance is very similar to that of a standard model.
Probabilistic Performance
We derived the reliability diagrams, the PINAW, and the CRPS of the Pr-CMV and CH-PeEn models. The results are presented in terms of reliability, sharpness, and CRPS, the latter two by time horizon every 15 min up to 1 h, for the two sites. Results show that the calibration of the CH-PeEn is better on average, with very low MRD values. However, when comparing the models by day variability, the reliability of the CH-PeEn decreases, while the reliability of the Pr-CMV remains similar. In any case, both models show sufficient reliability, with deviations from the diagonal line lower than 10% in the worst case.
Plotting the reliability diagrams further reveals that, for Carpentras, there is a global tendency of underestimation in the forecasts. For the Signes power plant, the lower-level quantiles of the CDF are slightly overestimated, while the higher-level quantiles are underestimated, resulting in a forecast distribution that is globally too narrow. These issues could be corrected with a fine tuning of the parameters σ_r, σ_θ, and R_m.
On the other hand, the forecasts from the Pr-CMV are much sharper than those of the CH-PeEn, which seems natural, as the CH-PeEn uses the whole history to compute the forecasts independently of the state of the atmosphere; this results in very wide forecast intervals, as for a climatological forecast. However, an increase in sharpness is not necessarily an improvement, as it can be obtained at the cost of decreased reliability. By computing the CRPS, we can assess whether the forecast from the Pr-CMV is closer to the true distribution than that of the CH-PeEn. The CRPS values are indeed lower for the Pr-CMV than for the CH-PeEn for every time horizon and every class of days, which ultimately demonstrates that the cloud motion modeling step added in the Pr-CMV method provides real added value compared to a naive method such as the CH-PeEn.
Conclusions
This paper addresses the difficulty of estimating the uncertainty in the upcoming irradiance for forecast horizons shorter than one hour. In this horizon range, forecasts are typically obtained using in-situ measurements, which contain little predictive information on the future state of the atmosphere. This ultimately results in forecast intervals that are too narrow and do not accurately represent the upcoming uncertainty.
Several approaches use satellite information to incorporate knowledge of the future state of the atmosphere, thanks to its spatial and temporal perspectives. However, they provide deterministic forecasts, while most decision-making problems in the energy sector require estimation of the uncertainty.
In this paper, we propose a forecast model that uses satellite information and generates probabilistic forecasts. We showed that this model performs well compared to standard CMV approaches in a deterministic setting. Moreover, it has good probabilistic properties, so the uncertainty information it provides can be reliably used in a decision-making problem. The paper does not intend to propose a definitive method but, rather, aims at opening a new way of using CMV for short-term probabilistic forecasting with satellite imagery as a source of real-time observation.
Acknowledgments: Satellite data from the HelioClim-3 database have been provided by Transvalor SoDA at www.soda-pro.com (last access: 23 June 2021).
Conflicts of Interest:
The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, nor in the decision to publish the results.
Abbreviations
The following abbreviations are used in this manuscript: CDF, Cumulative Distribution Function; CH-PeEn, Complete-History Persistence Ensemble; CMV, Cloud Motion Vector; CRPS, Continuous Ranked Probability Score; GHI, Global Horizontal Irradiance; MPINAW, Mean Prediction Interval Normalized Average Width; PI, Prediction Interval; PINAW, Prediction Interval Normalized Average Width.
"Environmental Science",
"Engineering",
"Physics"
] |
Mechanics of wedge turns in alpine skiing
A simple approximate theory of snow machining is applied to modelling successive wedge turns in alpine skiing. The model involves predefined control functions describing the skier's control over the turns via the angle of attack, edge angle, and loading of the skis. To demonstrate the model's potential, reasonable control functions with a small number of free parameters are designed and used in an attempt to reproduce the data obtained in a previous field study by other researchers. The results are in semi-quantitative agreement with the data. In particular, the model explains the nature of the abnormally high values of the "coefficient of friction" deduced in that study. Future field studies of wedge turns should aim at measuring the angle of attack, edge angle, and loading of the skis. This will make it possible to determine the control functions from the experimental data and hence to conduct a more stringent verification of the model.
Introduction
A wedge turn is the first type of turn introduced to students in most schools of alpine skiing. In this turn, the skis form a wedge pointing in the direction of motion (see Fig. 1). This increases the braking power of the skis and hence makes it possible to keep the speed of descent as low as needed for the students to feel safe and able to focus on improving their technique. The turning action is achieved by preferentially loading one of the skis, which leads to turning in the opposite direction. For example, if the skier needs to turn right, then the left ski has to be loaded more than the right ski.
During a wedge turn, both skis are set at an angle to the direction of motion (the angle of attack). As a result, the motion of each ski is a combination of motion along the ski's longitudinal axis and motion perpendicular to it. Moreover, the ski bases are set at an angle to the local slope surface (the edge angle), and the perpendicular motion involves removal of a top layer of snow. Overall, the ski motion becomes very different from simple gliding over the snow surface.
This snow-removal action is similar to the removal of material in the manufacturing process of machining. In this process, the cutting tool is subject to a reaction force from the machined material, which can be considered as a combination of friction and pressure. The pressure force arises mostly at the rake face of the tool, where it pushes the chip out. Its component tangent to the machined surface is called the cutting force. In skiing, the role of the rake face is played by the ski base, and hence the cutting force is perpendicular to the ski. It has a component opposed to the direction of motion, which promotes braking. Moreover, it has a component perpendicular to the direction of motion, and this component is the reason behind the turning action of skis set at a non-vanishing angle of attack.
The machining of snow and ice has been studied in the laboratory [1][2][3][4][5]. These authors also derived empirical expressions for the snow reaction force and used them to model skidded ski turns. Moreover, Brown [6] applied to skiing the theory of metal cutting developed in [7] for the case of continuous (type-2) chips. However, snow and ice are highly brittle materials, and instead of a continuous chip their machining normally results in a spray of ice particles.
Recently, the work of Brown [6] was extended by Komissarov [8], where a simpler theory of snow machining was developed. The main assumptions of this theory are (1) the ski edge is perfectly sharp and (2) the Coulomb friction between the ski and the cut snow is negligibly small. The main advantage is the very simple analytical expressions for the turning and braking components of the cutting force, which in turn allow for rather simple mathematical models of ski manoeuvres involving side-slipping (skidding). In [9], this theory was applied to side-slipping down the fall line, the hockey stop, and skidded traversing. The results agreed with skiing practice. Moreover, they made it possible to explain the earlier results of the experimental study of traversing by Kaps et al. [10] without invoking values of the kinetic coefficient of friction that are abnormally high compared to gliding.
Even higher values of the coefficient of Coulomb friction were obtained by Sahashi and Ichino [11] in their investigation of skidded ski turns. Among several different types of such turns, they explored the wedge turn. This appears to be the only study of wedge turns published so far. Here we apply the theory of snow machining developed in [8] to wedge turns. Our main aim is to investigate whether this model can explain the data of Sahashi and Ichino [11] without resorting to abnormally strong Coulomb friction.
Basic equations
The key forces acting on a skier during a ski run are the gravity force, the snow friction, the snow reaction force, and the aerodynamic drag force. The snow friction is usually low, thanks to the slippery nature of ice, meltwater lubrication, and waxes. Hence, we will assume that the braking action due to the snow-cutting component of the snow reaction force is much more important and will ignore the usual friction altogether. The aerodynamic drag force is also low because wedge turns are executed only at rather low speeds. Hence we will ignore the aerodynamic drag as well. As the result of these simplifications, the equation of motion reduces to

M dv/dt = M g + F_r ,   (1)

where M is the skier mass, v is the skier's velocity, g is the gravitational acceleration vector, and F_r is the snow reaction force due to its machining by the skis. Accounting for the contributions from both skis, we write

F_r = F_r^(l) + F_r^(r) ,

where "(l)" and "(r)" stand for the left and the right skis, respectively. According to our theory of snow machining,

F_r^(i) = N^(i) k̂ + N^(i) tan Ψ^(i) n̂_s^(i) ,

where N^(i) k̂ is the normal component of the snow reaction force, k̂ is the outgoing unit vector normal to the plane of the ski slope, n̂_s^(i) is the unit vector in the plane of the slope which is normal to the edge of the i-th ski and points to the side opposite to the direction of side-slipping, and Ψ^(i) is the edge angle of the i-th ski [8]. For simplicity, we assume that both the skis and the skier's centre of mass (CM) move with the same velocity and denote as m̂ the unit vector in the direction of motion. Hence

v = V m̂ ,

where V is the speed; ŝ^(i) denotes the unit vector aligned with the i-th ski. Using Cartesian coordinates with the basis vectors k̂ (normal to the ski slope), î (parallel to the fall line), and ĵ (perpendicular to the other two), we can write

g = g (sin α î − cos α k̂) ,   m̂ = cos ψ î + sin ψ ĵ ,   ŝ^(i) = cos(ψ + δ^(i)) î + sin(ψ + δ^(i)) ĵ ,

where α is the slope inclination angle, ψ is the angle of traverse, and δ^(i) is the angle of attack. Here we assume that both ψ and δ are measured in the counter-clockwise direction as seen from above the slope (see Fig. 2). The reference direction for ψ is î, and the reference direction for δ^(i) is m̂. Hence sgn δ^(r) = +1 and sgn δ^(l) = −1 (see Fig. 2).
If we ignore the up-and-down motion of the skier's CM, then k̂ ⋅ dv/dt = 0, and projecting Eq. (1) onto the direction of k̂ we find

N^(l) + N^(r) = M g cos α ,

so that the loading factors A_N^(i) = N^(i)/(M g cos α) satisfy A_N^(l) + A_N^(r) = 1. Projecting Eq. (1) onto the direction of motion m̂ then yields

(1/g) dV/dt = sin α cos ψ − cos α [A_N^(l) tan Ψ^(l) |sin δ^(l)| + A_N^(r) tan Ψ^(r) |sin δ^(r)|] ,   (10)

while the projection onto the in-slope direction normal to m̂ yields the evolution equation for the angle of traverse, dψ/dt = gF(ψ)/V (Eq. 12), where F(ψ) collects the net turning terms. The skier trajectory is governed by the equations

dx/dt = V cos ψ   (13)   and   dy/dt = V sin ψ .   (14)

So far, we have four scalar equations (10, 12, 13, and 14) determining the evolution of four dynamic variables: V(t), ψ(t), x(t), and y(t). However, in addition to these variables, the equations involve six more parameters, A_N^(i), δ^(i), and Ψ^(i), which also vary during ski runs. Their evolution is determined not so much by the laws of mechanics but rather by the conscious actions of skiers, and for this reason they can be called control variables.
In the experiments run by Sahashi and Ichino [11], the skier was given the task of executing repetitive and symmetric turns. To achieve such turns, a skier has to aim at synchronising their actions with the variation of ψ in general, and at terminating turns at some chosen extreme values of the angle of traverse ±ψ_m in particular. Hence the control variables can be defined as functions of ψ. It is also important to differentiate between left and right turns. For example, when turning, a skier puts more weight on their outside ski, which is the left ski for a right turn and the right ski for a left turn. Hence, at the same angle of traverse, the control variables take different values depending on the turn direction, and there should be two sets of control functions, one per turn direction.
This analysis invites us to consider using ψ as an independent variable instead of t, because this simplifies the procedure of switching between the two sets of control functions. The required substitution can be done using Eq. (12), and it yields Eqs. (15–18) governing V, t, x, and y as functions of ψ. During a left turn, these equations are integrated in the positive direction of ψ (towards the transition point +ψ_m) using the control functions for left turns. At the transition point, the direction of integration is reversed, and the control functions are replaced with those for right turns. The values of V, t, x, and y found at the end of the previous turn become the initial conditions for the next turn. The integration then continues towards the transition at −ψ_m, where the direction of integration and the control functions are swapped again, and so on until the desired number of turns is completed. A sketch of this procedure is given below.
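A schematic Python sketch of this turn-by-turn integration. Since the displayed forms of Eqs. (15–18) were lost in extraction, the right-hand side is left as a user-supplied callback; all names and signatures here are assumptions:

```python
import numpy as np
from scipy.integrate import solve_ivp

def simulate_wedge_run(rhs, controls_left, controls_right,
                       psi_m, state0, n_turns, eps=1e-4):
    """Integrate successive wedge turns with psi as the independent
    variable. `rhs(psi, state, controls)` must return
    d[V, t, x, y]/dpsi (Eqs. 15-18); `controls_*` hold the control
    functions A_N, delta, Psi for each turn direction. Integration
    stops just short of +/-psi_m, where F(+/-psi_m) = 0 makes the
    equations singular.
    """
    state = np.asarray(state0, dtype=float)
    psi_start = -(1.0 - eps) * psi_m          # a left turn starts here
    segments = []
    for _ in range(n_turns):
        left = psi_start < 0.0                # left turn: psi increases
        controls = controls_left if left else controls_right
        psi_end = -psi_start                  # swap at the transition
        sol = solve_ivp(lambda p, s: rhs(p, s, controls),
                        (psi_start, psi_end), state, dense_output=True)
        segments.append(sol)
        state = sol.y[:, -1]                  # seeds the next turn
        psi_start = psi_end                   # reverse direction
    return segments
```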
At the transition between turns, the ski/skier trajectory exhibits a smooth inflection [11]. Accordingly, we adopt the condition dψ/dt = 0 at ψ = ±ψ_m and hence F(±ψ_m) = 0. This implies a singularity in Eqs. (15–18) at ψ = ±ψ_m. Provided ±ψ_m is not a stationary (equilibrium) point and can be reached in finite time (which can be considered as a condition imposed on the control functions), this is only a minor technical disadvantage. The singularity can be avoided by starting and terminating the integration slightly off these extreme values of ψ.
Control functions
As we have already discussed, the control functions close the dynamical equations of wedge turns by introducing the conscious actions of skiers aimed at controlling their descent down the ski slope. Hence they cannot be specified uniquely based on some general principles and may vary a lot from run to run and even during individual runs. Comprehensive experimental studies can establish these functions for the specific runs under investigation, and thereafter they can be entered into the model to see how well it captures these runs. In the absence of such data, which is the case here, we are forced to invent them. In this process, we are guided by the well established elements of the wedge turn technique.
According to Eq. (12), the turning action of the i-th ski increases with the loading of the ski A_N^(i), its angle of attack δ^(i), and its edge angle Ψ^(i). For the left ski, δ^(i) < 0 and hence it promotes turning to the right (towards positions with smaller ψ). On the contrary, for the right ski δ^(i) > 0 and hence it promotes turning to the left. This conflict between the turning actions of the left and right skis is the main difficulty of wedge turns, which has to be mitigated for better performance. The only way to do this is via reduction of the turning action of the inside ski (which obstructs the currently executed turn), which can be done by reducing its load, angle of attack, and edge angle. Indeed, this seems to be a common feature of the wedge-turn technique, as supported by openly available YouTube video lessons by qualified ski instructors. Moreover, students are explicitly instructed to keep more weight on the outside ski. The lower angle of attack of the inside ski is also manifest in the data of Sahashi and Ichino [11]. These observations will be used below for designing suitable control functions.
Angle of attack
Throughout the wedge turn, the wedge angle between the skis, δ_w = δ^(r) − δ^(l), is usually kept approximately the same [e.g. 11]. Hence, it makes sense to put

δ^(r) = A_δ^(r)(ψ) δ_w ,   δ^(l) = −A_δ^(l)(ψ) δ_w ,

where the functions A_δ^(l) and A_δ^(r) satisfy the constraint

A_δ^(l)(ψ) + A_δ^(r)(ψ) = 1 .

According to the study of Sahashi and Ichino [11], the angle of attack of the inside ski is close to zero (in agreement with the analysis above), and hence the angle of attack of the outside ski is close to ±δ_w, with the sign + for the right ski and the sign − for the left ski. Moreover, the transition phase between turns is rather quick. Based on these observations, we adopt the following simple model:

A_δ^(l)(ψ) = (1/2)[1 − (1 − ψ²/ψ_m²)^{1/2}]   (24)

for left turns, and its mirror image

A_δ^(l)(ψ) = (1/2)[1 + (1 − ψ²/ψ_m²)^{1/2}]   (25)

for right turns. One can see that δ^(r) = −δ^(l) = δ_w/2 for ψ = ±ψ_m, and hence at the transition point the ski wedge is symmetric with respect to the direction of motion. The graph of A_δ^(l)(ψ) is shown in the left panel of Fig. 3.
Edge angle
In wedge turns, the angle between the skier's legs, Ψ_m, is more or less constant. Basic geometrical considerations show that, for straight legs, a flat ski slope, and a small wedge angle, Ψ_m ≈ Ψ^(l) + Ψ^(r) (see the left panel of Fig. 4). Hence we may adopt the following model for the ski edging:

Ψ^(i) = A_Ψ^(i)(ψ) Ψ_m ,   where   A_Ψ^(l)(ψ) + A_Ψ^(r)(ψ) = 1 .

Unfortunately, Sahashi and Ichino [11] did not provide any information on the ski edging that could be used to specify the edging functions A_Ψ^(i)(ψ). However, our analysis of the conflict between the turning actions of the inside and outside skis and the ways of its mitigation shows that the mitigation is most efficient when the attack and edge angles vary in unison. Hence we simply put

A_Ψ^(i)(ψ) = A_δ^(i)(ψ) .

In this model, (1) Ψ^(l) = Ψ^(r) = Ψ_m/2 at the transition point between turns (at ψ = ±ψ_m), and (2) in the middle of the turn (at ψ = 0), Ψ = Ψ_m for the outside ski and Ψ = 0 for the inside ski.
To determine a reasonable range for Ψ_m, let us suppose that the ski's head section (from the boot mounting point to the tip) is of the same length as the skier's leg. In the experiments by Sahashi and Ichino [11], the skis were 180 cm long, and the photograph of the skier shown in their Figure 1 suggests that this assumption is quite reasonable. If we further assume that the ski tips touch each other, then the geometry of the problem implies Ψ_m ≃ δ_w (see the middle and right panels of Fig. 4). In reality, the tips are usually kept somewhat apart (see Fig. 1 in [11]), resulting in a larger Ψ_m compared to δ_w. Hence one can use as a guide

Ψ_m = κ δ_w ,   (29)

where κ ≳ 1.
Loading factor
Unfortunately, Sahashi and Ichino [11] did not provide any information on the ski loading either. When searching for suitable load distribution functions, one has to take into account that at the transition between turns (ψ = ±ψ_m), the angle of traverse function ψ(t) takes extreme values and hence dψ/dt = 0. Combining this condition with Eqs. (8) and (12), one finds the two constraints (30) and (31) on the load distribution function. More constraints appear if we want to control the load distribution elsewhere in the turn. For example, one can fix the relative loading of the outside ski at ψ = 0, the point where the skier moves in the direction of the fall line:

A_N^(l)(0) = A_0   (32)

for right turns, with the mirror condition for left turns. In general, three constraints fully specify functions with three parameters, which suggests approximating the loading function by quadratic polynomials, A_N^(l)(ψ) = aψ² + bψ + c. After solving the constraint equations for the coefficients of such a polynomial (as sketched below), one finds the corresponding loading functions for left and for right turns. These functions were used in the simulations described in the next section; the resulting A_N^(l)(ψ) is illustrated in the right panel of Fig. 3.
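A small Python sketch of the quadratic ansatz; the end-point values prescribed by constraints (30) and (31) are passed in as inputs, because their displayed expressions were lost in extraction:

```python
def quadratic_loading(psi_m, value_plus, value_minus, a0):
    """Coefficients (a, b, c) of A_N(psi) = a*psi**2 + b*psi + c
    satisfying A_N(+psi_m) = value_plus (constraint 30),
    A_N(-psi_m) = value_minus (constraint 31), and A_N(0) = a0 (Eq. 32)."""
    c = a0
    b = (value_plus - value_minus) / (2.0 * psi_m)
    a = (value_plus + value_minus - 2.0 * a0) / (2.0 * psi_m**2)
    return a, b, c
```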
Suitability of control functions
In principle, a skier may decide to stop turning and continue their motion in a straight traverse. During such a traverse, the angle of traverse is constant and hence all its time derivatives vanish. It is therefore possible to choose control functions such that ψ = ψ_m is a stationary solution. In this case, other solutions will be able to reach ψ_m only asymptotically as t → ∞.
The evolution equation for ψ(t) (Eq. 12) has the form

dψ/dt = K(ψ) ,

where K(ψ) = gF(ψ)/V(ψ) and hence vanishes at ψ_m. If K(ψ) were infinitely differentiable at ψ_m, then all the higher-order time derivatives of ψ would vanish there as well, implying that ψ = ψ_m is a stationary solution. For example, ψ_m is approached only asymptotically, with ψ_m − ψ ∝ exp[K′(ψ_m) t], if K′(ψ_m) is finite. Hence, for ψ = ψ_m to be a point of transition between turns, K(ψ) must not be infinitely differentiable at this point (the same applies to ψ = −ψ_m). In fact, such non-differentiability of K(ψ) is ensured by our choice of the control functions for the angle of attack (Eqs. 24 and 25, and the left panel of Fig. 3). It is easy to see that their derivative diverges at ±ψ_m.
Expanding K(ψ) in the vicinity of ψ_m in powers of ε = 1 − ψ²/ψ_m² leads to

dψ/dt = A ε^{1/2} + O(ε) ,

where A ≠ 0 is a constant. Integrating this equation with the initial condition ψ(t_m) = ψ_m, one finds the asymptotic solution

ψ(t) = ψ_m [1 − (A²/(2ψ_m²))(t − t_m)²] ,

which confirms that ψ_m is indeed not a stationary point and can be reached in finite time.
Results
In this section, we investigate how well the above model of wedge turns can reproduce the results of the field study by Sahashi and Ichino [11]. Their experimental data are presented in the form of plots which show the trajectory of the skis and the variation of some key parameters with the distance down the fall line: the angle of traverse for the midpoint of the ski wedge, the angles of attack for both the left and the right skis, the speed of the midpoint, "the curvature radius of the ski track", and the effective coefficient of snow friction. These data describe two consecutive turns of a single run. The effective coefficient of friction was obtained on the basis of the observed acceleration of the midpoint and a model in which this acceleration is attributed to the competition between the component of the gravity force along the direction of motion and Coulomb friction. The way the radius of curvature was calculated is not described. Given that, in the wedge turn, skis leave a rather wide trail, this introduces a significant degree of uncertainty about this parameter.
The ski slope used in the study had the inclination α = 7°. Based on the plots, the maximum angle of traverse is ψ_m = 40°, the wedge angle is δ_w = 20°, and the skier speed is V ≈ 3 m/s. Given our selection of ψ as the independent variable, the computation of the whole ski run splits into computations of individual turns. During left turns, ψ increases from −ψ_m to +ψ_m, and during right turns it decreases from +ψ_m to −ψ_m. Because F(±ψ_m) = 0, the integrated Eqs. (15)–(18) are singular at the turning points. To avoid the singularity, the numerical integration of these equations begins and terminates at ψ = ±0.9999ψ_m. The parameters found at the end of the previous turn determine the initial conditions for the next turn. The initial conditions for the whole run are ψ = −0.9999ψ_m, V = V_0 = 3 m/s, x = 0, and y = 0. With ψ_m, δ_w, and V_0 immediately determined by the experimental data, the only two model parameters which remain free are the relative loading of the outside ski at the fall line, A_0, and the maximum edge angle as represented by the factor κ in Eq. (29). The realistic ranges for these parameters are A_0 ∈ (0.5, 1) and κ ∈ (1, 2). Hence we explored this region of the parameter space, looking for a close similarity between the model and the experimental data. As a first step, the loading parameter was set to A_0 = 0.8, and κ was varied until there was no strong systematic variation of the skier speed V with x, as seen in the experimental data (after the short initial phase of skiing down the fall line during which the speed was growing). At the same time, the distance between two points corresponding to the same turn phase reached approximately the same value as in the experimental data, ≈ 4.5 meters. This is rather surprising because one would not expect a model to fit two independent sets of data by adjusting only one free parameter. The obtained κ = 1.67 corresponds to the reasonable ski edge angle Ψ_m = 34°. Figure 5 shows the variation of several kinematic parameters with the distance down the fall line obtained in the model with A_0 = 0.8 and κ = 1.67. All these parameters were measured in the experiment, and their actual variation is shown in Figure 3 of [11].
In the context of this study, the most important of these parameters is the effective coefficient of friction, defined as the ratio of the total machining braking force to the normal load, i.e., the bracketed term in Eq. (10):

μ_eff = A_N^(l) tan Ψ^(l) |sin δ^(l)| + A_N^(r) tan Ψ^(r) |sin δ^(r)| .   (37)

Using this definition, one can write Eq. (10) as

dV/dt = g (sin α cos ψ − μ_eff cos α) ,   (38)

which looks exactly the same as in a model where the snow reaction force is replaced with Coulomb friction with the coefficient μ_eff. As in the experimental plots, μ_eff varies between 0.05 and 0.2. In the model, μ_eff increases on the approach to the fall line (ψ = 0°) and decreases after passing it. However, in the experimental data the peak is reached after the fall line.
The variation of the angle of traverse is similar to what is seen for the first turn in [11], but the transition between the first and the second turn of the experimental run is noticeably sharper. The speed plot shows qualitatively the same evolution as in the experiment, with maxima just before the fall line, around the phase where the effective coefficient of friction is about halfway between its extreme values. However, the amplitude of the speed variation is about half of that in the experimental data.
The largest difference between the model and the data is in the values of the curvature radius of the trajectory, R = |dl/dψ|, where l is the distance along the trajectory. While the shape and position of the theoretical curve are very similar to those of the experimental curve, the radii we find at the minima are about half as large. This is somewhat in conflict with the apparent similarity between our trajectory of the CM and the ski trajectories presented in [11]. To illustrate this point, we scanned their plot and superimposed it on our trajectory plot. The result is shown in Fig. 6. One can see that the theoretical and the experimental trajectories agree. In fact, the theoretical curve traces the inside ski, as is indeed expected in the case where it runs flat, and hence the CM is located right above it (with respect to the normal of the ski slope).
Presumably, the explanation of the apparent conflict between the computed and measured curvature radii lies in the difference between their definitions. Unfortunately, Sahashi and Ichino [11] do not describe their procedure for measuring R.
Overall, the computed run is similar to the ski run presented in [11]. Without tabulated data, we cannot do a proper fitting of the theoretical model. Moreover, the experimental data describe only two turns, which are not identical, and hence the outcome of such a fitting would not provide any useful statistical information. We therefore opted not to proceed in this direction. Instead, we simply repeated our procedure with somewhat different values of A_0 to see if this would lead to a noticeably different outcome. In this way we found that for a higher A_0 a similar outcome can be obtained with a lower value of κ, and vice versa. For example, the solutions corresponding to A_0 = 0.7 with κ = 1.78, and to A_0 = 0.9 with κ = 1.57, are not much different from the reference solution described above.
We finish this section by considering a simpler manoeuvre involving wedged skis that can also be used for experimental verification of our model, namely the motion straight down the fall line. In this case, ψ = 0, A_N^(l) = A_N^(r) = 1/2, δ^(r) = −δ^(l) = δ_w/2, Ψ^(l) = Ψ^(r) = Ψ_m/2, and Eq. (10) reads

dV/dt = g [sin α − cos α tan(Ψ_m/2) sin(δ_w/2)] .   (40)
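Since the right-hand side of the reconstructed Eq. (40) is constant, it integrates immediately to a linear speed profile; in LaTeX, with V_0 the initial speed,

```latex
V(t) = V_0 + g\left[\sin\alpha - \cos\alpha\,\tan(\Psi_m/2)\,\sin(\delta_w/2)\right] t .
```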
Discussion
The complex movements of the skier's body performed during skiing are designed mostly to achieve the desired interaction between the skis and the snow. This interaction allows skiers to make turns and control their speed. Hence, a clear understanding of the ski-snow interaction is a primary objective for the theory of skiing. Here I focused on the case when skis are set on edge and side-slip during their motion on hard snow. This results in the removal of the top layer of snow, which makes the ski-snow interaction similar to the process of machining in manufacturing. The key force in this process is the snow reaction force normal to the ski base, which has both braking and turning components.
In the theory of snow machining, the key parameters determining the turning and braking forces acting on the skis are their loading, angle of attack, and edge angle. How exactly skiers control these parameters is a topic of sports biomechanics, and it is not addressed in this paper. Instead, these parameters are simply described as functions of the turn phase (the angle of traverse). This choice is convenient for modelling repetitive ski runs (periodic or quasi-periodic solutions), which simplifies the study of the turn dynamics.
The theory requires proper verification via comparing its predictions with skiing practice. The main aim of this paper was to build a model that could be used in experimental studies of wedge turns. I used as a guide the study by Sahashi and Ichino [11], the only field study of wedge turns so far. Moreover, I wanted to explore if the strong braking force reported in this study could be attributed not to the abnormally high and variable coefficient of Coulomb friction, as done in [11], but to the cutting force of machining.
Unfortunately, the study gives no information on the ski loading and edging, and only rather limited graphic data on the angle of attack. Hence, I specified the control functions using as a guide the video recordings of wedge turns available on YouTube and my personal skiing experience. Admittedly, the videos reveal significant stylistic variety and hence the choice of control functions is far from unique.
To simplify the comparison between the model and the experiment, the control functions had only two free parameters, one adjusting the loading of the outside ski and another adjusting its edging. Although this reduced the flexibility of the model, it still succeeded in reproducing quite closely the wedge turns studied in [11], including the wedge angle, skier's trajectory, and speed. Finally, the evolution of the effective coefficient of snow friction with the turn phase was also very similar to the one observed in the experiment, including the abnormally high values near the peak.
Unfortunately, Sahashi and Ichino [11] did not provide any information on the load distribution between the inside and outside skis or on the ski edge angles. So, there remains a possibility of disagreement between the theory and experiment with regard to these parameters, and future experiments should aim at measuring them as well. In relation to this, the modelling has revealed a degeneracy in the plane of the parameters used to adjust the loading and edging functions: models with higher loading and lower edge angle of the outside ski yield results similar to models with lower loading and higher edge angle.
One small problem with experiments focused on repetitive ski runs is that, in reality, every individual turn is still somewhat different from the others. Human errors and variations of terrain contribute to this inconsistency. However, the results of such imperfect runs can be analysed using a statistical approach.
The simpler a ski manoeuvre is, the easier it is to model and to analyse, and the more reliable the conclusions on the ski-snow interaction can be. In this regard, skiing down the fall line in a symmetric wedge configuration is worthy of attention. In the theory, this motion is described by just one simple differential equation (40), which can be integrated analytically. In an experiment, the skier is not required to repeat their movements turn after turn, but only to keep the position of their body and skis unchanged. Although such an experiment cannot be used for studying the turning action of skidded skis, it is very suitable for investigating their braking action.
The simplified theory of snow machining used in this study treats skis as flat planks with straight edges. In reality, most modern skis are shaped and do not have straight edges anyway. Moreover, due to their finite longitudinal and torsional stiffness, skis bend and twist when interacting with snow [e.g. 2,12]. As a result, the angle of attack becomes a function of the position along the ski edge [e.g. 2,13]. For turns with a large mean angle of attack, like the wedge turn, this is likely to be a secondary effect. However, for the purpose of experimental verification of the basic machining theory, it is still preferable to use stiff skis with a large sidecut radius.
For turns with a small angle of attack, approaching pure carving, the bending and twisting of skis may become crucial for determining their braking power. Even a relatively small angle of attack at the skidding front section of the ski may be sufficient for the snow cutting force originating at this section to dominate Coulomb friction. Further investigation involving more complex models and computer simulations is needed to clarify this effect.
Conclusion
This paper described the first attempt to model consecutive wedge turns using a simplified theory of snow machining. The results are in at least semi-quantitative agreement with previously published experimental data and are hence encouraging. In particular, the snow machining mechanism explains the abnormally high, turn-phase-dependent values of the friction coefficient found in that field study. However, additional experiments, providing statistically sufficient information for a larger set of parameters, are required to make further advances in this area.
"Engineering",
"Environmental Science"
] |
A new model of multi-visceral and bone metastatic prostate cancer with perivascular niche targeting by a novel endothelial specific adenoviral vector
While modern therapies for metastatic prostate cancer (PCa) have improved survival, they are associated with an increasingly prevalent entity, aggressive variant PCa (AVPCa), lacking androgen receptor (AR) expression, enriched for cancer stem cells (CSCs), and evidencing epithelial-mesenchymal plasticity with a varying extent of neuroendocrine transdifferentiation. Parallel work revealed that endothelial cells (ECs) create a perivascular CSC niche mediated by juxtacrine and membrane-tethered signaling. There is increasing interest in pharmacological metastatic niche targeting; however, targeted access has been impossible. Here, we discovered that the Gleason 7-derived, androgen receptor-negative IGR-CaP1 cell line possessed some but not all of the molecular features of AVPCa. Intracardiac injection into NOD/SCID/IL2Rg−/− (NSG) mice produced a completely penetrant bone, liver, adrenal, and brain metastatic phenotype, noninvasively and histologically detectable at 2 weeks and necessitating sacrifice 4-5 weeks post injection. Bone metastases were both osteoblastic and osteolytic. IGR-CaP1 cells expressed the neuroendocrine marker synaptophysin, near-equivalent levels of vimentin and E-cadherin, all of the EMT transcription factors, and activation of the NOTCH and WNT pathways. In parallel, we created a new triple-targeted adenoviral vector containing a fiber knob RGD peptide, a hexon mutation, and an EC-specific ROBO4 promoter (Ad.RGD.H5/3.ROBO4). This vector was expressed in metastatic microvessels tightly juxtaposed to IGR-CaP1 cells in bone and visceral niches. Thus, the combination of IGR-CaP1 cells and NSG mice produces a completely penetrant metastatic PCa model emulating end-stage human disease. In addition, the metastatic niche access provided by our novel Ad vector could be therapeutically leveraged for future disease control or cure.
INTRODUCTION
Despite enormous strides in therapeutic development, metastatic prostate cancer remains fatal. Treatment with abiraterone, an inhibitor of cytochrome P450 17A1 (CYP17A1; 17α-hydroxylase/17,20-lyase)-mediated androgen synthesis, or enzalutamide, which inhibits three androgen receptor (AR) functions (ligand binding, nuclear translocation, and DNA binding), has increased quality of life and life span. However, these more potent drugs and the newer taxanes, alone or in combination, also appear to foster an increasingly evident clinical and pathological entity, "aggressive variant prostate cancer" (AVPCa) [1][2][3]. AVPCa is accompanied by visceral (liver, adrenal, and brain) in addition to osseous metastases [4,5]. AVPCa metastases possess a range of histological phenotypes, lack androgen receptor protein expression, and express a variable number of neuroendocrine markers [1,6,7]. The combination of visceral with bone metastases confers a poor prognosis, and patients die soon after diagnosis [8][9][10].
While the proximate cause of AVPCa development is selection under AR signaling inhibition [3,11], the target cell for this selection is likely the cancer stem/tumor-initiating cell (CSC) [12]. This cell population is maintained by a collection of host cells termed the niche [13]. Malignant cell proliferative quiescence appears to require continuous suppression, which is achieved by niche cell secretion of molecules and cell surface integrin display [14]. Metastatic niche signaling is predominantly short range; that is, molecules secreted by niche cells tend to be either membrane-tethered or stromal matrix-bound. As niche signaling requires intimate interactions between malignant cells and host niche components, it is not surprising that lineage tracing and immunofluorescence have revealed tumor-niche juxtaposition [15]. The predominance of particular niche cellular components appears to vary, in part, with the host organ. In the bone, osteoblasts and mesenchymal cells are prominent [16]. However, one near-universal niche component that appears to be the principal arbiter of the proliferatively quiescent metastatic cell population is the vascular endothelial cell (EC). In bone marrow, ECs are one major component of the hematopoietic stem cell niche [17]. Perivascular niches also appear to play significant roles in metastatic cell quiescence in brain glioblastomas, breast metastases, and hematological malignancies [15]. The relatively recent availability of a reliable endothelial-targeted Cre recombinase transgenic mouse has enabled the genetic discovery of short-range signaling ligands secreted by endothelial cells in response to damage, during development, and in malignant niches in both primary and metastatic cancer [18]. This short-range signaling has been termed "angiocrine" [19]. Angiocrine functions have been shown to be necessary for hematopoietic recovery in the bone marrow following sublethal radiation or 5-fluorouracil chemotherapy [19]. Endothelial secretion of a soluble form of the NOTCH ligand Jagged-1 was required for the growth of colon cancer liver metastases and for lymphoma maintenance [20]. Given its potential importance in metastatic persistence and therapeutic recalcitrance, the tumor-perivascular niche is an ideal candidate for therapeutic manipulation. Unfortunately, discrete access to this tissue compartment has been impossible other than in genetic mouse models.
Here, we have used a recently described PCa cell line, IGR-CaP1, that forms osteoblastic metastases following intratibial or systemic arterial injection [21]. We have discovered that IGR-CaP1 cells closely emulate aggressive PCa. Highly immunodeficient NOD/SCID/IL2Rγ−/− (NSG) mice [22] evidence a 100% incidence of bone, liver, and adrenal experimental metastases, detectable by 2 weeks post intracardiac injection. IGR-CaP1 cells lack AR protein expression while upregulating neuroendocrine markers. The cells appear to be in dynamic equilibrium between epithelial and mesenchymal fates and express stem cell markers. They also appear to have extensive DNA damage. In parallel with model creation and molecular interrogation, we have constructed a novel endothelial-targeted adenoviral (Ad) vector that we genetically modified for enhanced peripheral uptake, diminution of liver hepatocyte sequestration, and enhanced tumor-associated endothelial expression. Systemic injection of this vector produces transgene reporter expression in the microvasculature immediately juxtaposed to metastatic cells in bone and visceral metastases. Thus, we now have both a mouse model and unprecedented access for testing the necessity of a growing list of signaling pathways thought to be crucial for metastatic niche maintenance. These vectors could potentially be used as new standalone therapies or to pinpoint drug targets that could enable control or cure of otherwise lethal metastatic PCa.
Development of a completely penetrant, rapid onset, model of experimental bone and multivisceral organ metastatic prostate cancer
In the original reports of the IGR-CaP1 cell line, nude (nu/nu) mice were the recipient immunodeficient hosts [21,23]. However, the time for metastases to reach appreciable size, 7-9 weeks, and the bone metastatic incidence, 55%, motivated us to test mouse strains with greater degrees of immunodeficiency. Inspired by the purported enhanced human cell receptivity of highly immunodeficient NSG mice, we tested IGR-CaP1 bone and visceral experimental metastatic frequencies following intracardiac injection [24]. To compare our work with the original studies of the IGR-CaP1 cell line, we injected the same number of tumor cells, 5×10^5, into the left ventricle [21,23]. To facilitate both noninvasive and fluorescence detection of microscopic metastatic foci, we created a new IGR-CaP1 cell line expressing both click beetle red luciferase and mCherry. BLI signals were detected at 2 weeks, increasing in intensity by 4 weeks post tumor cell injection (Figure 1B, C). Mice required sacrifice at 5 weeks post injection. Histopathology and fluorescence revealed that 97-100% of NSG mice evidenced liver, adrenal, and bone metastases (Figure 1A, D-H). Brain and kidney metastases were detected in 77% and 50% of mice, respectively (Figure 1A, F; kidney not shown). Bone metastases were detected in the regions most commonly affected in prostate cancer patients, including the tibia, vertebral column, femur, humerus, maxilla, and mandible (Figure 1B, C, G, H, and data not shown).
More detailed histopathological analysis revealed that IGR-CaP1 liver, kidney, and adrenal metastases were relatively well-circumscribed nodules of poorly differentiated carcinoma growing in sheets and nests with scant stroma and only focal gland-like spaces (Figure 1D, E, and Supplementary Figure 1). Occasional cells with denser, eosinophilic cytoplasm, smudgy nuclei, and prominent cherry-red nucleoli were also observed, mostly distributed toward the periphery of tumor cell nests. Mitotic activity was brisk and included atypical forms. In the brain, groups of well-differentiated metastatic cells surrounded microvessels, a structure similar to the perivascular pseudorosettes detectable in brain ependymomas (Figure 1F) [25]. At the time of sacrifice, 5 weeks post injection, metastatic liver and adrenal tumors were extensive, replacing large areas of liver parenchyma and nearly all of the adrenal gland. These "late-stage" tumors evidenced multifocal single-cell as well as central "comedo" necrosis (Supplementary Figure 1B-1C) [26]. In some areas, degeneration and necrosis in the tumor cell nests imparted a pseudopapillary architecture, though these areas lacked true papillae with fibrovascular cores (Supplementary Figure 1A, arrow). In addition, focal tumor gland formation was also detected (Supplementary Figure 1A, arrowhead). The extensive hepatic and adrenal metastases likely underlay the necessity for mouse sacrifice at this time point. Collectively, the IGR-CaP1/NSG experimental metastasis model was markedly accelerated compared with the original reports wherein nu/nu mice were used as hosts [21,23]. As genetic drift in our IGR-CaP1 cell stock could be one explanation for our metastatic frequency differential compared to the original report, we had STR chromosomal marker analysis done by an outside collaborator. Our stock evidenced the same markers as in the prior study [23].
IGR-CaP1 cells stimulate osteoblastogenesis, consistently forming mixed osteoblastic/ osteolytic tumors
To further explore the metastatic biology and the tumor-host niche interactions of experimental IGR-CaP1 metastatic tumors in NSG hosts, we used a combination of histopathology and molecular immunofluorescence analysis. Skeletal metastases were profoundly osteoblastic, with marked new bone formation particularly evident using Masson trichrome staining (Figure 2A and 2B). To further investigate osteoblastogenesis, osteocalcin immunofluorescence was performed (Figure 2C and 2D). There was an increased number of osteocalcin-positive cells with an osteoblastic morphology (Figure 2C and 2D, arrowhead and Latin cross). Metastatic IGR-CaP1 cells were detected in close juxtaposition to the osteocalcin-positive cells. Scattered individual bone metastatic IGR-CaP1 cells also expressed osteocalcin (Figure 2C, arrow, magnified in 2D), suggesting the initial stages of osteomimicry, although IGR-CaP1 cells failed to mineralize in bone-forming media in culture (data not shown). As both human data and prior work demonstrated that IGR-CaP1 bone metastases also stimulate osteoclastogenesis, we tested tartrate-resistant acid phosphatase (TRAP) activity levels in our NSG-based model. In contrast to previous work [21], there was only a low-level increase in TRAP activity compared to normal bone, markedly lower than the osteoclastogenesis and osteolytic activity induced by 786-O renal carcinoma cells (Figure 2E-2G).
To globally investigate IGR-CaP1 cell alterations of bone morphology, we used microCT analysis of tumor-bearing (n=2 mice) versus non-tumor-bearing (n=1) femur, tibia, and spine (Figure 3). 3D rendering of the entire femurs and tibias revealed multiple sites of cortical discontinuity, particularly within or immediately adjacent to the metaphyseal regions of both bones at the knee (Figure 3B and 3C, white and black asterisks). Cross-sectional analysis of the metaphyseal regions using transaxial 3D imaging revealed increases in metaphyseal bone volume in both femur and tibia in the tumor-bearing compared to the non-tumor-bearing mice (Figure 3A and 3B). A circular discontinuity in the distal diaphysis of one of the femurs was also detected (Figure 3C). Histological step sectioning revealed that the cortical defect was filled by, and likely caused by, a juxtacortical tumor mass (Figure 3D). These data, indicative of osteolysis, were surprising given the paucity of TRAP activity (Figure 2F). Transient focal increases in osteoclast activity could be one explanation for their infrequency in areas of cortical disruption (Figure 3D). In contrast to the femoral and tibial metaphyseal bone remodeling, there were no detectable abnormalities in the proximal femur at the hip, in the distal tibia at the ankle, or in the spine.
IGR-CaP1 cells express a subset of molecules associated with aggressive PCa
The combination of dual bone and visceral organ metastases led us to question whether IGR-CaP1 cells model the emerging clinical entity, aggressive PCa, at the molecular level [7]. Aggressive PCa fails to express androgen receptor (AR), displays varying degrees of neuroendocrine transdifferentiation, possesses distinctive oncogenic and tumor suppressor gene amplifications or loss-of-function mutations, activates the DNA damage response, and upregulates signaling pathways stimulating proliferation [7]. First, we tested for IGR-CaP1 cell expression of AR and one of its target genes, prostate-specific membrane antigen (PSMA), in a representative PCa cell line panel composed of known AR(+) cell lines (LNCaP and its derivative, C4-2B) and AR(-) cell lines (DU145 and PC3). As in the original report, IGR-CaP1 cells were AR protein-negative (Figure 4A) [23]. Testing for expression of synaptophysin, a neuroendocrine marker, revealed data diametrically opposed to those for AR: the AR(+) cell lines were synaptophysin-negative, whereas the AR(-) cell lines, including IGR-CaP1, expressed synaptophysin (Figure 4A). Confocal microscopy revealed synaptophysin localization in multiple vesicles dispersed throughout the cytoplasm in both IGR-CaP1 and PC3 cells, the latter being the positive control in this experiment (Figure 5). Further immunoblotting for neuroendocrine markers revealed that CD56/NCAM was solely detectable in IGR-CaP1 cells compared to all other interrogated PCa cell lines, including PC3 and DU145 (Figure 4A). In contrast, IGR-CaP1 cells did not detectably express other markers of neuroendocrine transdifferentiation, such as N-Myc (data not shown) or N-cadherin, the latter being expressed in PC3 cells (Figure 4A). In addition to neuroendocrine transdifferentiation, other molecular features such as apoptosis resistance, loss of retinoblastoma protein (pRB), and gene amplification of Aurora, polo-like kinase (PLK1), and c-Myc also coordinate the aggressive PCa phenotype [7,27]. IGR-CaP1 cells possessed one of these attributes, overexpression of the anti-apoptotic Bcl-xL protein (Figure 4A). However, IGR-CaP1 and the remainder of our AR(+) and AR(-) cell lines did not overexpress c-Myc, Aurora A, or PLK1, and they retained retinoblastoma protein expression (Figure 4A). IGR-CaP1 immunoblotting and ICC revealed constitutive high-level, nuclear-localized p53 expression, consistent with the original characterization of these cells (Figures 4A and 5) [23]. ICC also suggested heterogeneity, with some cells containing high-level nuclear p53 expression and other cells wherein p53 protein was solely detected in nuclear foci (Figure 5). LNCaP cells displayed even greater p53 protein heterogeneity, with many cells lacking detectable protein expression. As the p53 mutation reported for IGR-CaP1 cells, Y126C [23], has been shown to lack transcriptional activity [28], it was surprising that the p53 target p21 was coordinately upregulated in IGR-CaP1 cells (Figure 4A). However, further investigation using ICC revealed that p21 was localized to the cytoplasm in the majority of IGR-CaP1 cells (only 19/394 cells positive for nuclear p21 expression) (Figure 5, representative high-magnification confocal image). One explanation for these data is enhanced phosphorylation of p21 at threonine 145, which sterically hinders p21 nuclear translocation [29]. Indeed, T145 phosphorylation was present in IGR-CaP1 cell protein extracts (Figure 4A).
In contrast, p21 was predominantly localized to the nucleus of LNCaP cells, which also expressed high levels of p53 and p21 but low levels of phosphorylated p21 T145 (Figures 4A and 5). Proliferative quiescence could be one explanation for p21 cytoplasmic retention. However, the fact that most IGR-CaP1 cells expressed nuclear Ki67 (Supplementary Figure 2) obviated that explanation. The loss of cell cycle control suggested by the Ki67 and p21 expression levels and localization indicated that IGR-CaP1 cells were under oncogenic stress [30]. Elevated levels of γ-H2A.X pS139 detected by immunoblotting, and its nuclear foci localization by ICC, supported that hypothesis (Figures 4A and 5).
Given the hyperproliferative and oncogenic stress phenotype of the IGR-CaP1 cells, we interrogated the expression of proteins belonging to signaling pathways that stimulate or regulate proliferation and are additionally associated with AVPCa (Figure 4B). One of these signaling modules, HGF-c-MET, is frequently overexpressed in AVPCa and AR inhibitor-treated cancers [31]. HGF was expressed at similar levels in all of our interrogated PCa cell lines. In contrast, detectable c-MET protein expression was restricted to AR(-) cells, with a marked differential elevation in IGR-CaP1 cells compared to the other AR(-) counterparts. In addition, phosphorylated c-MET Y1234/1235, the initial site activated following ligand-induced oligomerization and an activator of receptor kinase activity, was solely detectable in the IGR-CaP1 cells, suggesting that the marked receptor overexpression sensitized these cells to (autocrine) growth factor stimulation. Additional support for elevated c-MET-mediated signaling in IGR-CaP1 cells was provided by the differentially elevated JAK-STAT3 and markedly elevated c-RAF/MAPK kinase pathway phosphorylation, both downstream of c-MET receptor activation (Figure 4B). While other growth factors and RTKs could stimulate the MAPK kinase pathway, we did not detect differential EGFR or PDGFRβ phosphorylation (data not shown). Collectively, these data suggest that IGR-CaP1 cells possess some but not all of the molecular attributes of AVPCa.
IGR-CaP1 cells possess an EMT transition cell phenotype
As IGR-CaP1 cells were reported to be enriched for cancer stem cell (CSC) activity and marker expression [23], and since CSCs presumably underlie the development of pan-therapeutic resistance in AVPCa [32], we explored this further in our NSG mouse model. Epithelial-mesenchymal transition (EMT) has been shown to be a CSC function [33]. Therefore, we tested for evidence of EMT using E-cadherin and vimentin immunofluorescence (Figure 6). In liver and brain metastases, distinct clusters of IGR-CaP1 cells were positive for either E-cadherin or vimentin, while cells expressing both proteins were sporadically detectable (Figure 6Aa and 6Ab). In contrast, IGR-CaP1 bone metastases were mainly comprised of cells possessing both plasma membrane-localized E-cadherin and cytoplasmic vimentin (Figure 6Ac and 6Ad), suggesting that the bone microenvironment might promote an EMT transition phenotype [34]. To test for molecular evidence of EMT, we immunoblotted our PCa cell line panel for expression of E-cadherin, vimentin, and a collection of EMT-coordinating transcription factors. All of our AR(-) and AR(+) PCa cell lines expressed E-cadherin protein, with greater expression levels in the latter compared to the former (Figure 6B). Vimentin was solely detectable in AR(-) PCa cell lines (Figure 6B). Both DU145 and PC3 cells expressed massive levels of vimentin compared to E-cadherin. In contrast, the expression ratio of these two proteins in IGR-CaP1 cells was approximately equivalent, with a slight predominance of vimentin (Figure 6B). Expression of the master EMT transcriptional regulator ZEB1 was abundant and restricted to AR(-) cell lines. Slug (SNAIL2) expression was also differentially elevated in AR(-) cell lines compared to low-level expression in AR(+) lines, while Twist was equivalently expressed in each cell line independent of AR status (Figure 6B). While this work validates the IGR-CaP1 cell EMT transition phenotype, additional experiments will need to be done to delineate the mechanisms maintaining this intermediary state in these cells.
Activation of CSC-related developmental pathways in IGR-CaP1 cells
As EMT can be a CSC precursor, IGR-CaP1 metastases were interrogated for molecules and signaling pathways known to regulate CSC niche maintenance and adherence. As both NOTCH and WNT have been shown to maintain the CSC phenotype in several types of cancer [35], including prostate [36,37], we examined these pathways in cultured IGR-CaP1 cells. We discovered that each of the AR(+) and AR(-) cell lines evidenced NOTCH1-3 receptor expression, with differentially elevated levels of the gamma secretase-cleaved, functional NOTCH intracellular domain transcription factor in the AR(-) cell lines (Figure 7A). In contrast, marked Jagged-1 ligand expression was uniquely detected in IGR-CaP1 cells compared to the other cell lines in our panel. At least three WNT ligands, WNT3a, WNT5a/b, and WNT2, were expressed in IGR-CaP1 cells (Figure 7B). Of interest, the non-canonical ligand WNT5a/b was expressed at the highest level in IGR-CaP1 cells. To further explore WNT pathway activity, we tested the responsiveness of IGR-CaP1 cells, and of the cell line most closely mimicking their bone metastatic phenotype, the LNCaP-derived C4-2B cells, to either WNT3a or R-spondin-1 (RSPO1) stimulation; the latter molecule is reported to be enriched in bone marrow stroma (Figure 7C) [25]. Both WNT3a and RSPO1 elevated Jagged-1 expression in IGR-CaP1 cells. In contrast, Jagged-1 expression was not upregulated by either WNT3a or RSPO1 in C4-2B cells. As Jagged-1 is a validated β-catenin target, these data are consistent with WNT pathway hyperresponsiveness in the IGR-CaP1 cells [38]. Finally, to further investigate expression of molecules associated with CSCs, we tested for CD44, CXCR4, and SDF1 expression in tissue sections from experimental IGR-CaP1 metastases. Both CD44 and CXCR4 were upregulated and plasma membrane localized in IGR-CaP1 metastases in skeletal and visceral organs (Supplementary Figure 3). SDF1 expression was markedly induced specifically in the tumor cells within each of the organs in our analysis. In bone marrow, the cell type-specific expression pattern was more complex: SDF1 was differentially overexpressed in tumor cells, but also detectable in sinusoidal arteries and reticular stromal cells (Supplementary Figure 3).
RGD.H5/3.ROBO4 Ad vector is endothelial cell specific with a tumor endothelial versus host organ expression bias
As AVPCa is the therapeutic terminus of widespread metastatic disease, new approaches to targeting resistance-fostering niches are desperately needed for this increasingly frequent patient cohort [3]. The crucial contribution of endothelial cells (ECs) to metastatic niche maintenance, reported in other types of malignancies [20], led us to expand our prior work on endothelial transductional and transcriptional targeting of Ad vectors [39], with the goal of first testing for differential localization of vector expression in metastatic PCa as opposed to host vasculature.
Previously, we had created and tested an endothelial cell (EC)-targeted Ad vector containing 3 kb of the ROBO4 enhancer/promoter [39]. While transcriptionally targeted to vascular endothelium with a tumor microvessel bias, this vector required warfarin depletion of coagulation Factor X for significant tumor vascular delivery [39]. As warfarin could be contraindicated in metastatic PCa in general, and particularly in aggressive disease with liver metastasis, we created a new vector that would be "detargeted" from hepatic sequestration independent of pharmacological coagulation factor depletion (Figure 8A). As in previous work, we swapped the wild-type hexon Factor X binding site amino acid sequences for those from Ad serotype 3 [40]. In addition, prior work has repeatedly demonstrated the infection (transductional) tropism of a cyclized RGD-4C peptide fiber/knob HI loop addition for either tumor cells (direct injection) or endothelial cells (systemic injection) [41]. As such, we created our final Ad vector, RGD.H5/3.ROBO4, which uniquely incorporates three crucial facets: enhanced tumor EC adhesion (fiber/knob RGD display), augmented extrahepatic gene payload delivery (capsid hexon serotype swap), and tumor microenvironment-induced transcriptional upregulation (EC-specific ROBO4 enhancer/promoter) (Figure 8A).
As the major clinical challenge is systemic control or cure of multi-organ metastatic disease, we tested the RGD.H5/3.ROBO4-EGFP reporter vector administered intravenously to NSG mice bearing IGR-CaP1 experimental bone and visceral metastases 4 weeks after intracardiac administration. For these experiments, we created an IGR-CaP1 cell line constitutively expressing a histone 2B-red fluorescent protein (H2B-RFP) reporter. Intense RFP fluorescence, concentrated in condensed chromatin, facilitated single-cell detection of metastases (Figures 8B-8G). Expression of our triple-targeted Ad vector was evident in most of the microvessels adjacent to and within tumor metastases in liver, kidney, adrenal, brain, and bone (Figures 8B-8G). Single metastatic IGR-CaP1 cells were discovered intimately associated with the abluminal surface of RGD.H5/3.ROBO4-EGFP vector-expressing ECs, particularly in the kidney, adrenal, brain, and bone marrow (Figures 8C-8G). We also performed a comparative analysis of the expression extent of the RGD.ROBO4 Ad vector in a host organ panel in non-tumor-bearing mice (Supplementary Figure 4). Expression was detected in liver, adrenal, lung, and throughout normal bone marrow. Notably, skin, heart, kidney cortex, brain, and intestinal vasculature did not evidence detectable Ad vector expression. Our genetic strategy and the data described above motivated us to designate this new vector as "triple targeted" because its three genetic alterations enhance tumor endothelial uptake, evade liver sequestration, and augment tumor endothelial expression.
DISCUSSION
There are three crucial aspects of our study. First, using the combination of the highly immunodeficient NSG mouse with the IGR-CaP1 cell line, we have created a new and highly penetrant preclinical model of metastatic prostate cancer. Second, both the concomitant visceral and osseous metastases and our protein expression profiling of this model strongly suggested that it emulates some, but not all, of the biological and molecular features of AVPCa, which is increasingly the final stage of disease progression in the modern era of potent androgen blockade and chemotherapy [1,2]. Third, as the pan-therapeutic resistance of AVPCa and its increasing frequency demand new treatment strategies [42], our creation of a "triple-targeted" Ad vector enabling access to metastatic niches via EC tropism offers new possibilities to therapeutically manipulate the perivascular microenvironment, either to eliminate malignant cells when used alone, or to break pan-resistance and re-establish responsiveness in combination with conventional treatments.
A particularly outstanding feature of the model was the exquisite osteotropism of the IGR-CaP1 cells in NSG hosts following intracardiac injection. Bone metastatic modeling can also be produced using direct intratibial injection [43]. While this approach preserves the opposite limb, enabling a contralateral control in the same mouse, it induces an injection-activated wound reaction, fails to model circulating tumor cell implantation, and lacks extraosseous disease. Moreover, the extensive osteoblastic phenotype of the IGR-CaP1 cells, also reported in prior work with these cells [21], was striking. However, in contrast to the previous studies, our approach of whole-tissue histological imaging and fluorescence-based marker delineation provided a definitive picture of the extent of this process and of the alterations these cells induced in the bone marrow. Osteoblastic metastatic disease was thought to be rare in mouse models, and certainly spontaneous bone metastases are rare in genetically engineered mice (GEM) [44]. However, starting with the derivation of the LNCaP subline C4-2B [45], an increasing number of patient cell lines have been reported to evoke osteoblastic metastases in mice [46,47]. In fact, the extensive intracavitary new bone formation we detected was similar to that of the patient-derived MDA-PCa2b cell line and the more recent serially transplantable MDA118b xenograft line, both of which were created by the same group [46,48]. While the MDA-PCa2b cells retain AR expression (albeit mutant AR), the MDA118b xenografts lack AR expression, similar to IGR-CaP1 cells. One surprising feature of the IGR-CaP1/NSG model was the modest microCT evidence for osteosclerosis despite extensive histological new bone formation. Most likely, the extensive metastatic tumor replacement of the liver and adrenal glands is responsible for the rapid lethal progression of the model, which prevents sufficient new bone mineralization in metastatic tumors. Future work will focus on the derivation of new IGR-CaP1 cell lines isolated from bone metastases, as described in other PCa cell line models, with the goal of extending survival (see below) to achieve greater bone remodeling versus solid organ metastatic tumor growth [45].
While the accelerated metastatic growth of the IGR-CaP1/NSG model offers the advantage of rapid phenotypic screening for genetic manipulations, it fails to recreate the usual pace of the slow progression of human metastatic prostate cancer. IGR-CaP1 cells were derived from an intermediate-stage, Gleason 7 primary prostate cancer. During the serial cell passaging necessary for cell line establishment, they evidently were selected for loss of AR expression. AR expression loss could be due to outgrowth of an AR-negative cell present in the primary cancer [49,50], or to loss of AR expression during cell culture in androgen-depleted medium [51]. Moreover, our study has shown that they evidence both molecular and immunofluorescence elements of a transitional EMT profile, hallmarks of epithelial plasticity. Of interest, epithelial plasticity has been shown to facilitate bone metastases in general [34,52], and in PCa in particular [53]. Collectively, the IGR-CaP1/NSG mouse model appears to closely emulate the increasingly evident, treatment failure-related clinical entity, AVPCa [7,27,54,55]. Similar to IGR-CaP1/NSG mice, AVPCa patients suffer from both osseous and multivisceral metastases [5]. The addition of visceral organ spread has been shown to be rapidly lethal in patients [8,9]. Thus, the rapid time course of IGR-CaP1/NSG mouse experimental metastases is entirely consistent with the clinical time course of AVPCa.
The other compelling facet of IGR-CaP1 cells is that they shared some, but not all, of the molecular attributes of AVPCa. Detection of an IGR-CaP1 neuroendocrine marker subset had not been reported in the prior work with these cells [23]. The elevated levels of nuclear localized p53, c-MET and activation of its downstream signaling outputs were also consistent with aggressive PCa, and loss of AR function [7]. The combination of enhanced cell cycle activity, evidenced by near universal Ki67 expression with the predominant frequency of p21 nuclear exclusion suggested that these cells possessed considerable cell cycle dysregulation. As loss of cell cycle control produces oncogenic stress [56], it was not surprising that IGR-CaP1 cells evidenced gamma-H2A.X upregulation consistent with extensive DNA double strand breaks [57]. In addition, the apparent recruitment of the NOTCH and WNT developmental pathways was also consistent with expression profiling of end-stage metastatic disease [58,59]. However, other molecular attributes of AVPCa, in particular overexpression of c-Myc, Aurora A, N-Myc, and PLK1 and loss of retinoblastoma expression were not evident in IGR-CaP1 cells [7,27].
Another aggressive disease hallmark is EMT [58]. IGR-CaP1 metastases appeared to be on the cusp of this transition, with some tumors displaying an epithelial phenotype while other deposits displayed a mesenchymal one, as evidenced by E-cadherin versus vimentin expression. Individual in-transit cells expressing both molecules, albeit in distinct plasma membrane versus intracytoplasmic compartments, were also prominent in bone metastases. This "hybrid" epithelial/mesenchymal phenotype has been described in both breast and prostate models [34]. Overexpression of the master EMT transcription factor ZEB1, along with Slug, Snail, and Twist, provided additional support for an EMT program in IGR-CaP1 cells [61]. Intriguingly, and consistent with the IF images, the near-equivalent E-cadherin and vimentin levels reinforced their transition status. Importantly, hybrid EMT cells appear to impart cancer stem cell plasticity, thus facilitating a continuous generation of therapy-resistant metastatic cell populations [62]. Collectively, the visceral and bone target organ proclivity, protein expression, signaling pathway, and hybrid EMT data are consistent with cells and tumors that have crossed the aggressive disease threshold but have not fully attained all of its features [7]. The enhanced likelihood of concomitant host toxicities mediated by systemic targeting of multiple cell cycle regulatory or stem cell maintenance pathways provides a compelling rationale for our efforts at EC-focused metastatic niche targeting.
The induction of multiple cell signaling and fate determination pathways evident in aggressive PCa highlights its associated therapeutic challenges. While small molecule and chemotherapeutic cocktails have successfully inhibited the growth of subcutaneous xenografts, the clinical application of this approach could be fraught with toxicities, particularly in susceptible organs with rapid cell turnover such as intestine, skin, and bone marrow. While the target repertoire of small molecule inhibitors is increasingly being narrowed, targeted delivery of therapeutics to specific cellular components of the metastatic microenvironment could obviate host toxicity yet enhance growth inhibitory efficacy. One approach is manipulation of the vasculature to usurp EC angiocrine function [39]. This could be achieved via viral vector gene therapy. Disease-specific vascular endothelial cell targeting following systemic vector administration has been a long-sought-after goal in gene therapy [63]. The endothelium is the first contact cell layer during intravenous injection, offering the opportunity for "first pass" cell infection. However, endothelial cells express low to undetectable levels of the Coxsackie adenovirus receptor (CAR), the principal adhesive receptor for the gene therapy "workhorse", serotype 5 Ad vectors. Specificity for diseased versus normal host endothelium has also been a challenge. One solution has been insertion of candidate or phage display-selected peptides onto the fiber knob [64]. These insertions have been enabled by the presence of the HI loop in the knob protein structure [64]. The HI loop projects perpendicular to the fiber knob and is of sufficient length to accommodate peptide insertion. Endothelial selectin-binding peptides have been one class inserted into the fiber knob, motivated by the upregulation of these molecules in vessels in response to inflammatory environments both in benign diseases and in tumors [65]. The other commonly used peptide is the αvβ3/αvβ5 integrin-binding fragment arginine (R), glycine (G), aspartate (D). Cyclization of this peptide has been shown to markedly increase binding and peptide stability, and that is the form inserted into the Ad vector HI loop [66]. Many tumor histotypes also upregulate these integrins, and RGD-displaying Ad vectors have been used for direct intratumoral injection, with recent impressive antitumor responses [67]. The other strategy is transcriptional targeting using enhancer/promoter elements activated in tumor endothelium [68]. There has been a plethora of DNA regulatory elements used in these vectors. Similar to transductional peptides, the focus has been on enhancer/promoters activated in tumor endothelial cells. The two most intensively studied have been a human VEGFR2 promoter fragment and a composite, modular preproendothelin enhancer/promoter (PPE-1-3x) [68,69]. Both elements, but particularly the PPE-1-3x promoter, are induced by hypoxia, commonly present within most tumor microenvironments. This latter vector, now named VB-111, has been tested in Phase I trials [69]. In all cases, the goal of this work has been microvessel ablation. One challenge in this exhaustively investigated field has been a paucity of data on the multiplicity of vector-expressing tumor endothelial cells. Moreover, stringent efforts to detect the distribution and number of non-tumor-bearing host organs expressing vascular targeted vectors have been limited.
Here, we created a new adenovirus incorporating three genetic modifications designed to address three challenges of systemic vector administration: first-pass target organ infection, hepatocyte sequestration, and cell type-specific gene expression. The cyclized RGD peptide was used for enhanced tumor vessel infection/transduction. A serotype 3 domain was swapped into the hexon, replacing the native serotype 5 counterpart, to obviate the coagulation Factor X binding that mediates hepatocyte sequestration. Biased tumor vascular endothelial expression was achieved with the use of the human ROBO4 promoter [39]. This promoter is both hypoxia responsive and contains an ETS binding element that likely facilitates transgene expression in tumor-activated endothelium [70]. This RGD.H5/3.ROBO4 vector produced widespread intratumoral vascular expression. While host vessel expression was still evident in a delimited organ set, this vector was universally expressed in metastatic tumor niches, strikingly so in the bone marrow. Residual host vessel expression could be this vector's Achilles heel. However, host toxicity likely rests on the targets of the vector payloads. Our present focus will be on expression of secreted protein traps for ligands maintaining the metastatic niche. There is evidence for differential sensitivities of tumor versus host stem cells to small molecule niche-mobilizing drugs [71]. Whether this will also be true for our vectors remains to be investigated. That said, we are also constructing next-generation vectors, based on the RGD.H5/3 platform, containing enhancer/promoter elements that potentially possess greater tumor vascular specificity. As systemic vector administration has been repeatedly demonstrated to be safe in humans, the field of vascular targeting is being rejuvenated. The strategy of vector-mediated perivascular niche eviction now offers the exciting, though still unproven, promise of therapy for the most recalcitrant and lethal form of PCa malignancy.
Adenoviral vector construction
Replication-incompetent RGD.H5/3.ROBO4-EGFP adenovirus was created using a two-plasmid rescue method, as described previously [39]. Details about the vector construct are provided in Supplementary Methods.
Cell culture
Human prostate IGR-CaP1 cells were a generous gift from Anne Chauchereau, Institut Gustave Roussy (Villejuif, F-94805, France). STR analysis by an independent laboratory confirmed their maintenance of the originally reported profile (data not shown), thus serving as validation of this cell line. LNCaP, PC3, DU145 cell lines were obtained directly from ATCC. The LNCaP derivative C4-2B cells were obtained from Christopher Maher at WUSTL. Details of cell line propagation and RSPO or WNT3a stimulation experiments are available in Supplementary Data.
Mouse model
Experimental procedures involving mice were carried out under a protocol approved by the Washington University Animal Studies Committee. Immunodeficient NOD.Cg-Prkdc(scid) Il2rg(tm1Wjl)/SzJ (NSG) mice (The Jackson Laboratory, Stock No. 005557) were bred in the Washington University School of Medicine aseptic barrier mouse facility. To establish experimental tumor metastasis, NSG mice were anesthetized and injected with 5 × 10^5 parental, H2B-RFP-labeled, or CBR-Luciferase/mCherry-labeled IGR-CaP1 cells in 50 μl of PBS into the left cardiac ventricle using 30G needles. Tumor growth necessitated mouse sacrifice 4.5-5.5 weeks post injection. Further details of organ harvest and processing are presented in Supplementary Methods.
Bioluminescence imaging
In vivo bioluminescence imaging (BLI) was performed in the weeks indicated on an IVIS Lumina (PerkinElmer, Waltham, MA; Living Image 3.2; 1 min or 1 s exposure, bin 8, FOV 12.5 cm, f/stop 1, open filter). Mice were injected intraperitoneally with D-luciferin (150 mg/kg in PBS; Gold Biotechnology, St. Louis, MO), and both dorsal and ventral sides were imaged 10 min later under isoflurane anesthesia (2% vaporized in O2). Total photon flux (photons/sec) was measured from fixed regions of interest (ROIs) over the entire mouse using Living Image 2.6.
Tissue harvest and section preparation
Four to five weeks after tumor injection and 72 hours after intravenous Ad vector injection, mice were anesthetized with 2.5% 2,2,2-tribromoethanol (Avertin, Sigma-Aldrich, St. Louis, MO) and perfused via the left ventricle with phosphate-buffered saline (PBS) followed by 10% neutral buffered formalin. Bones and organs were harvested and processed as detailed further in Supplementary Methods.
Histochemical and immunofluorescence staining
Details regarding immunofluorescence are presented in Supplementary Methods.
MicroCT
Methods and details of bone processing and imaging for microCT are described in Supplementary Methods.
Immunoblotting
Overall methods of protein extract preparation were similar to previous work [39] and provided in detail in Supplementary Methods.
Imaging/microscopy techniques and microscope/objective specification
Fluorescence and bright-field microscope images were collected using a DP80 dual color/monochrome sensor CCD camera (Olympus America, Center Valley, PA) with CellSens Dimension software (Olympus Soft Imaging Solutions) using the Extended Focal Imaging (EFI) function. Wide-field images were also collected using the defined scanning area mode with the multiple image alignment (MIA) algorithm. Imaging experiments were repeated at least three times on independent sets of vector-injected mice. Confocal fluorescence microscope images were collected using an Olympus FV1000 confocal microscope equipped with a UPlanApo 100×/1.35 numerical aperture oil immersion objective and analyzed with Fluoview version 1.7a software (Olympus, Center Valley, PA). Collected images were processed into standard tagged image file (TIF) format using CellSens Dimension software.
Further Materials and Methods details are provided in the Supplementary Information.
"Biology"
] |
Research on Accelerating Single-Frequency Precise Point Positioning Convergence with Atmospheric Constraint
: An increasing number of researchers have conducted in-depth research on the advantages of low-cost single-frequency (SF) receivers, which can effectively use ionospheric information when compared to the dual-frequency ionospheric-free combination. However, SF observations inevitably increase the number of unknown parameters and prolong the convergence time. It is therefore desirable to reduce the convergence time through external information constraints, for example atmospheric constraints, which include ionosphere or troposphere constraints. In this study, ionospheric delay constraints, tropospheric delay constraints, and their dual constraints were considered, and a total of 18,720 test experiments were performed. First, the nearest-neighbor extrapolation (NENE), bilinear (BILI), bicubic (BICU), and Junkins weighted interpolation (JUNK) methods for Global Ionospheric Map (GIM) grid products were analyzed. Statistical comparison of the percentage of converged solutions, the average convergence time, and the computation time shows a clear advantage for BILI. Next, the influences of global troposphere and ionosphere constraints on the convergence time of SF Precise Point Positioning (PPP) were analyzed. It is verified that the ionosphere constraint (TIC2) significantly shortens the convergence time in both the horizontal and vertical components, while the troposphere constraint (TIC1) has a better effect on the convergence time in the vertical component within certain thresholds. The dual constraint (TIC3) yields the shortest average convergence time, which is at least 46.5% shorter in static mode and 5.4% shorter in kinematic mode than standard SF PPP (TIC0).
Introduction
The concept and technology of Precise Point Positioning (PPP) were first proposed and implemented for the Global Positioning System (GPS) by the American Jet Propulsion Laboratory (JPL) in the late 1990s [1]. PPP has attracted significant interest over the intervening years due to its high accuracy without needing a specific reference station, providing correctional information, simple operations, and cost effectiveness due to reductions in labor and equipment costs. Therefore, it has been extensively used in many areas. Unlike dual-frequency PPP, SF PPP cannot form combinations or between-frequency differences to eliminate or attenuate part of the errors. If accurate external information related to these parameters is available, the precision of traditional SF PPP can be largely improved and the convergence time shortened. Zhang et al. [31] studied real-time GIM and its application in SF positioning, Aggrey and Bisnath [32] studied the effect of atmospheric constraints on the convergence time of dual- and triple-frequency PPP, and Gao et al. [33] applied the Inertial Navigation System (INS) to ionosphere-constrained PPP to overcome the drawbacks that accompany unexpected and unavoidable substandard observation environments.
This study uses the GIM products and the tropospheric zenith path delays from the IGS as the constraining information for SF PPP. The organization of this study is as follows: Section 2 details the mathematical models of standard SF GPS PPP, troposphere-constrained SF GPS PPP, and ionosphere-constrained SF GPS PPP, and the details of the four interpolation methods for GIM products. Afterwards, in Section 3 we compare and analyze the influence of the interpolation methods and constraint methods on SF GPS PPP convergence time, and finally in Section 4 we draw conclusions. Later, we will study ionospheric delay and tropospheric delay prediction models to provide virtual atmospheric delay observations for real-time PPP and also provide a priori information for the constraint processing.
Methods
The GPS PPP observation models are first derived. Afterwards, different interpolation methods for GIM products are introduced.
GPS PPP Observation Models
This study uses an undifferenced and uncombined GPS PPP model, as compared with the traditionally used IF-PPP model. These models include a standard SF GPS PPP model, a troposphere-constrained SF GPS PPP model, an ionosphere-constrained SF GPS PPP model, and a troposphere- and ionosphere-constrained SF GPS PPP model.
Standard Single-frequency GPS PPP Observation Model
The distance from the satellite to the receiver can be measured using pseudorange and carrier phase observations [34]; a sketch of the standard expressions is given after this paragraph. Here, P^s_r and L^s_r are the original pseudorange and the carrier phase observation Φ^s_r multiplied by the wavelength λ_1, for the specific receiver r and satellite s; ρ^s_r is the geometric distance from the satellite to the receiver; c is the speed of light in vacuum; dt_r and dt^s are the clock errors of the receiver and the satellite; I^s_r is the slant ionospheric delay on the first frequency f_1; M_w is the wet mapping function; Z_w is the zenith wet delay; M_h is the hydrostatic mapping function; Z_h is the zenith hydrostatic delay; d_r is the frequency-dependent receiver uncalibrated code delay (UCD) with respect to satellite s; d^s is the frequency-dependent satellite UCD; N^s_r is the integer phase ambiguity; b_r and b^s are the frequency-dependent uncalibrated phase delays (UPDs) for the receiver and the satellite; and ε^s_r and ξ^s_r are the sums of the measurement noise of the pseudorange and carrier phase observations and the error caused by the multipath effect. Other errors have been modeled in advance.
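A minimal sketch of the undifferenced SF observation equations implied by the variable list above; the exact grouping of the UCD and UPD terms is an assumption based on standard formulations:

P_r^s = \rho_r^s + c\,(dt_r - dt^s) + I_r^s + M_w Z_w + M_h Z_h + c\,(d_r - d^s) + \varepsilon_r^s

L_r^s = \lambda_1 \Phi_r^s = \rho_r^s + c\,(dt_r - dt^s) - I_r^s + M_w Z_w + M_h Z_h + \lambda_1 (N_r^s + b_r - b^s) + \xi_r^s

The opposite signs of I_r^s in the two equations reflect the code delay versus the phase advance caused by the ionosphere.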
The original pseudorange and carrier phase observation equations are linearized, and the receiver clock offset absorbs only the common part of the frequency-dependent receiver UCDs. Supposing that m satellites are tracked simultaneously at a certain epoch by receiver r, the standard SF PPP model can be written in matrix form [35]; a sketch is given after this paragraph. Here, μ_r is the unit vector of the coordinate component between the receiver and the satellite; x is the vector of the receiver position increments relative to the a priori position; and I_t is a unit vector of 2m rows and one column, corresponding to the receiver clock parameter dt_r. In matrix K, the elements corresponding to p^s_r and l^s_r are 1 and −1, respectively, corresponding to the ionospheric parameter I^s_r. R is the matrix corresponding to the ambiguity parameters N^s_r, and its elements corresponding to p^s_r and l^s_r are 0 and 1, respectively. Q_L is the stochastic model of the observed-minus-computed (OMC) observables. The IGS precise satellite clock is calculated from the IF combination observables, so the IF combination of the satellite UCDs is absorbed by the satellite clock offsets [2].
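A hedged sketch of the linearized system and the satellite clock bias absorption described above; the matrix partitioning follows the variable definitions in the text, while the stacking order and sign conventions are assumptions:

[ p ; l ] = [ A   I_t   M_w   K   R ] [ x ; c\,dt_r ; Z_w ; I ; N ], with stochastic model Q_L,

where A stacks the unit vectors μ_r for the m satellites, once for the code rows and once for the phase rows. Because the IGS precise clocks are estimated from IF observables, the satellite clock commonly absorbs the IF combination of the satellite UCDs, which can be written as

c\,d\tilde{t}^s = c\,dt^s + \frac{f_1^2 d_1^s - f_2^2 d_2^s}{f_1^2 - f_2^2}.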
Troposphere-Constrained Single-Frequency GPS PPP Observation Model
According to the tropospheric delay product published by the IGS, a virtual observation is added to the observation equation, and a troposphere-constrained SF GPS PPP model is then constructed; a sketch of the constraint equations is given after this paragraph. Here, I_T is a unit vector of m rows and one column, corresponding to the zenith wet delay parameter Z_w; T̂_r is derived from the external tropospheric products, provided for each IGS station at 5 min intervals by the IGS Analysis Center; ε_{r,trop} is the corresponding noise; Q_T denotes the stochastic model of the virtual troposphere observables; and 0 is a zero matrix. Compared with the standard SF GPS PPP, the ionosphere-constrained SF GPS PPP adds the GIM product as a virtual observation to constrain the ionospheric parameters. In the corresponding constraint equation [35], Î^s_r is derived from external ionospheric products with the corresponding noise ε_{r,ion}; I_I is a unit vector of m rows and one column; and Q_I denotes the stochastic model of the virtual ionospheric observables.
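A minimal sketch of the two virtual-observation (constraint) equations described above; the scalar, per-satellite form is an assumption consistent with the stated dimensions of I_T and I_I:

\hat{T}_r = Z_w + \varepsilon_{r,trop}, with stochastic model Q_T,

\hat{I}_r^s = I_r^s + \varepsilon_{r,ion}, \quad s = 1, \dots, m, with stochastic model Q_I.

Appending these rows to the design matrix tightens the estimates of Z_w and I_r^s in the filter, which is what shortens the convergence time.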
Troposphere- and Ionosphere-Constrained Single-Frequency GPS PPP Observation Model
If the external troposphere and ionospheric products are both used as virtual observations, the two sets of constraint equations above are stacked together with the standard observation equations to form the dually constrained model.
Different Interpolation Methods for GIM Products
It is necessary to interpolate the ionospheric grid points from the GIM products to obtain the observation station VTEC before performing the ionosphere-constrained modeling. In this study, a comparative analysis of four interpolation methods is performed. These methods include nearest-neighbor extrapolation [36], bilinear interpolation [18], Junkins weighted interpolation, and bicubic interpolation; the principles of the Junkins weighted and bicubic interpolation methods are described below, and a code sketch of the bilinear method follows this paragraph. Figure 1 shows the schematic of Junkins interpolation. In Figure 1, the latitude and longitude of the interpolation point are b and l, that is, p(b, l), and the four GIM grid points adjacent to it are p11(b1, l1), p12(b1, l2), p21(b2, l1), and p22(b2, l2). It is assumed that the TEC values at p11, p12, p21, and p22 are TEC11, TEC12, TEC21, and TEC22, respectively. The VTEC of point p can then be interpolated from the VTEC of the surrounding four grid points as a weighted combination of the four node values.
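Before turning to those two methods, the bilinear scheme, which the experiments below ultimately favor, can be sketched in a few lines of Python. This is an illustrative sketch, not the paper's code; the function name, argument order, and node indexing (first index latitude, second longitude, following Figure 1) are assumptions:

    def bilinear_vtec(b, l, b1, b2, l1, l2, tec11, tec12, tec21, tec22):
        """Interpolate VTEC at p(b, l) from the four surrounding GIM nodes.
        tec_nm is the VTEC at node p_nm(b_n, l_m)."""
        x = (l - l1) / (l2 - l1)  # normalized longitude offset, in [0, 1]
        y = (b - b1) / (b2 - b1)  # normalized latitude offset, in [0, 1]
        return ((1 - x) * (1 - y) * tec11   # weight of p11(b1, l1)
                + x * (1 - y) * tec12       # weight of p12(b1, l2)
                + (1 - x) * y * tec21       # weight of p21(b2, l1)
                + x * y * tec22)            # weight of p22(b2, l2)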
Junkins Weighted Interpolation
The weighting function w_n(x, y), n = 1, 2, 3, 4, assigns each of the four nodes a weight based on its normalized distances from p, and the interpolated VTEC is the weight-normalized sum of the four node values. We can use nearest-neighbor extrapolation when point p has only two proximate GIM grid points.
Bicubic Interpolation
The bicubic interpolation method [37] uses 16 adjacent points for interpolation (Figure 2). Bicubic interpolation is an extension of cubic interpolation to two-dimensional space. The cubic interpolation kernel is an approximation, based on a cubic polynomial, of the convolution interpolation of the ideal sampling sinc function on [−2, 2] [38]. Typically, cubic interpolation produces interpolation coefficients based on a third-order polynomial in q, the distance between the interpolation point and the reference point, with a tunable parameter w; the best result is obtained with w = −0.5, verified with a large amount of data [38]. A sketch of this kernel is given after this paragraph. Bicubic interpolation is similar to bilinear interpolation: it is also decomposed into two one-dimensional interpolations, horizontal and vertical. Figure 2 shows the interpolation process. First, vertical interpolation is performed to obtain four virtual values, and then the value at point p is obtained by interpolating these four virtual values. We can use bilinear interpolation when the number of GIM grid points around point p is less than 16.
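The description above matches the widely used Keys convolution kernel; the following reconstruction assumes this is the kernel intended by [38]:

W(q) = \begin{cases} (w+2)|q|^3 - (w+3)|q|^2 + 1, & |q| \le 1 \\ w|q|^3 - 5w|q|^2 + 8w|q| - 4w, & 1 < |q| < 2 \\ 0, & \text{otherwise} \end{cases}

which for w = −0.5 simplifies to

W(q) = \begin{cases} 1.5|q|^3 - 2.5|q|^2 + 1, & |q| \le 1 \\ -0.5|q|^3 + 2.5|q|^2 - 4|q| + 2, & 1 < |q| < 2 \\ 0, & \text{otherwise.} \end{cases}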
Tropospheric delay interpolation is similar to inverse distance-weighted interpolation [19], and it is performed according to the relationship between the tropospheric delay value and time in the tropospheric products.
Experimental Data Set and Analysis
The PPP results based on raw observations are evaluated with 78 global IGS stations for the whole month of September 2017 to verify the improvement from the troposphere and ionosphere constraints on GPS SF PPP performance. For every station, the 24-h observations were divided into eight 3-h sessions to evaluate the performance of PPP. Therefore, a total of 18,720 experiments were tested and compared. Figure 3 shows the worldwide distribution of these stations. Figure 4 shows a flowchart of our study procedure, while Table 1 lists the models and strategies used in this study.
The PPP performance in terms of convergence time in the horizontal and vertical components is evaluated at two confidence levels (68% and 95%) in two modes (static and kinematic). The criterion for convergence is that the positioning error in the horizontal and vertical components is less than 0.5 m at the 95% level and 0.3 m at the 68% level, respectively [11].
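As an illustration of this criterion, the following sketch scans an epoch-wise error series and returns the convergence time for one component. It is a hedged example: the 30 s epoch interval and the rule that the error must stay below the threshold for the remainder of the session are assumptions, not the paper's stated processing code:

    def convergence_time(errors, threshold, epoch_s=30):
        """errors: per-epoch positioning errors (m) for one 3-h session.
        Returns minutes until the error first drops below `threshold`
        and stays below it, or None if the session never converges."""
        for i in range(len(errors)):
            if all(e < threshold for e in errors[i:]):
                return i * epoch_s / 60.0
        return None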
The Effect of Interpolation Method on Ionosphere-constrained GPS Single-frequency PPP
For the convenience of description, Table 2 shows the abbreviations of the four interpolation methods introduced above. For the GIM grid products, this study discusses the convergence time of the four interpolation methods under the ionosphere constraint and the average computation time over the 78 stations. Figures 5 and 6 show the percentage of convergence time. From these two figures, whether at the 95% or 68% level, and whether in static or kinematic mode, for convergence times below 10 min the percentage for the BILI method is slightly lower than for the other three methods, while for the subsequent thresholds (below 20 min, 30 min, etc.) the percentage for the BILI method is consistently higher than for the other three methods, as shown in the green histogram. That is to say, the convergence time of the BILI interpolation method is the shortest compared to the other three methods. Figure 7 shows the statistical results of the root mean square (RMS) error in the horizontal and vertical components in the static and kinematic modes, respectively. The RMS errors are based on the statistics of the last 15 min of the position solution error [44]. In the static mode, the percentage of BILI solutions exceeds 40% in both the horizontal and vertical components when the RMS is less than 5 cm, while the other three methods remain below 40%; in the subsequent statistical stages, the percentage for the BILI method stays above that of the other three methods up to 100%. In the kinematic mode, the percentage for the BILI method in the horizontal component is 60% when the RMS is less than 15 cm, with the other three methods below this value; in the vertical component, the BILI method exceeds 40% while the other three methods do not reach 40%, and the percentage for the BILI method in the other statistical stages is also higher. In summary, the BILI method achieves higher final positioning accuracy than the other three methods in both static and kinematic modes, in both the horizontal and vertical components.
The average convergence time (excluding sessions with a convergence time of 180 min, i.e., sessions that did not converge within three hours) is shown in Table 3. Compared to the JUNK method, which has the longest average convergence time, the average convergence time of the BILI method at the 95% level improves from 3.87 to 3.14 min (18.9%) in the horizontal component (H(95%)) and from 19.44 to 15.18 min (21.9%) in the vertical component (V(95%)) in the static mode, while in the kinematic mode it improves by 15.3% in the horizontal component and 13.2% in the vertical component. At the 68% level, the average convergence time in the horizontal component (H(68%)) is significantly reduced, by 17.9% from 17.55 to 14.40 min, and by 23.6% from 42.92 to 32.81 min in the vertical component (V(68%)) in the static mode; in the kinematic mode, it lessens by 6.8% and 2.3% in the horizontal and vertical components, respectively. The computation time of these 18,720 tests was normalized to the FLRS station (Table 4). We used a Lenovo computer running Windows 8.1 Professional with an i5-3470 Central Processing Unit (CPU) and 4 GB of installed Random Access Memory (RAM). From Table 4, it can be concluded that the computation time of the BILI interpolation method is almost the same as that of the Junkins weighted interpolation method, which takes the shortest time. According to all of the above analysis, the interpolation method adopted for GIM products in the following ionosphere-constrained model is bilinear interpolation.
Single-Frequency PPP with the Constraints
In this section, we analyze the effects of the unconstrained, troposphere-constrained, ionosphere-constrained, and dually constrained schemes on the convergence time of single-frequency GPS PPP. Table 5 summarizes the four constraint schemes.
TIC0: standard single-frequency PPP model
TIC1: troposphere-constrained single-frequency PPP model
TIC2: ionosphere-constrained single-frequency PPP model
TIC3: troposphere- and ionosphere-constrained single-frequency PPP model
Figures 8 and 9 show the percentage statistics of the effects of the different constraint schemes on GPS convergence time. It can be concluded from these two figures that, in the horizontal component, the convergence time of TIC3 has the largest percentage in the same range, while TIC1 has less influence on the convergence time. In the vertical component, the percentage of convergence time of TIC3 in the same range is still the largest, but the influence of TIC1 on the convergence time is higher than that of TIC2 within a certain range. On the whole, the convergence performance of TIC0 is the worst in both components, since neither GIM products nor tropospheric products are used as constraints. The convergence performance improves over TIC0 after introducing TIC1 or (and) TIC2 in the GPS SF PPP processing. At the same time, Table 6 shows that TIC1 performs notably better than TIC0 in the vertical component and marginally better than TIC0 in the horizontal component. TIC2 performs notably better than TIC0 and TIC1 in the horizontal component, while in the vertical component TIC2 is better than TIC0 and worse than TIC1 only in the 68% level static mode. TIC3 performs best among the handling schemes. Compared with TIC0, the average convergence time of TIC1 at the 95% level improves from 27.17 to 26.07 min (4.0%) in the horizontal component and from 23.21 to 18.17 min (21.7%) in the vertical component. At the 68% level, the convergence time in the horizontal component is reduced by 4.3% from 42.11 to 40.29 min and by 28.8% from 39.34 to 27.98 min in the vertical component. However, in kinematic mode, the improvement of TIC1 over TIC0 is modest, with a maximum of 11.7% and a minimum of 2.1%. TIC2 gives a larger boost compared with TIC0: in static mode, the minimum improvement is 16.6% in the vertical component at the 68% level and the maximum is 88.4% in the horizontal component at the 95% level; in kinematic mode, they are 2.5% and 73.9%, respectively. TIC3 gives the largest boost of the three schemes: in static mode, it reaches 88.8% (from 27.17 to 3.04 min) at most and 46.5% (from 39.34 to 21.05 min) at least; in kinematic mode, the minimum improvement is 5.4% and the maximum is 74.5%.
Three stations (FLRS, TRO1, and ZECK station) were randomly selected from 78 IGS stations to measure RMS error in the horizontal component, vertical component, and three-dimensional. It can be seen from Figures 10 and 11 that the final positioning error reaches the sub-decimetre level in the kinematic mode, and it reaches the centimetre level in the static mode. However, the final positioning errors are substantially close in the four schemes.
Conclusions
In this study, we discussed the effects of four interpolation methods (NENE, BILI, BICU, and JUNK) on the percentage of convergence time, the average convergence time, and the single-station computation time of SF TIC2 PPP. The numerical results show that: (1) the convergence time of the estimated parameters obtained with the BILI method is the shortest under every condition; (2) in the static mode, when the RMS is less than 5 cm, only the BILI method exceeds a 40% share, while in the kinematic mode, when the RMS is less than 15 cm, only the BILI method exceeds 60% and 40% in the horizontal and vertical components, respectively, and under the other identical RMS thresholds the percentage for the BILI method is higher than for the other three methods; (3) in the static mode, the average convergence time with the BILI method is reduced by at least 2.2% and at most 23.6% compared to the other three methods, while in the kinematic mode the figures are 1.9% and 16.6%, respectively; and (4) for the average computation time of a station, the BILI method takes only 0.121 s more than the shortest computation time in the static mode and 0.038 s more in the kinematic mode. Therefore, we chose the BILI method for calculating the TEC at the required time when performing the TIC2 or TIC3 computation based on the GIM grid products.
In order to verify the different constraint methods, a total of 18,720 tests were performed with 78 stations. The experimental results revealed the following findings: (1) the TIC1 convergence-time percentage is slightly higher than that of TIC0 at each convergence-time threshold, while TIC2 is larger and TIC3 is the largest; that is to say, convergence can be accelerated under the constraint conditions. (2) Compared with the TIC0 method, the average convergence time of the TIC1 method is shortened by at least 4.0% and at most 28.9% in static mode. The TIC2 method has a larger percentage of shortening, at least 16.6% and at most 88.4%; however, in the vertical component, the TIC1 method outperforms the TIC2 method at the 68% level. Compared with TIC0, the TIC3 method is shortened by at least 46.5% and at most 88.8%, and it is better than both the TIC1 and TIC2 methods. In kinematic mode, the percentage of time shortened by the constraints is smaller than in static mode: the average convergence time of the TIC1 method is at least 2.1% and at most 11.7% shorter than that of the TIC0 method; the TIC2 method shortens it more than the TIC1 method, with a minimum of 2.5% and a maximum of 73.9%; and the TIC3 method is superior to the TIC1 and TIC2 methods, with a minimum reduction of 5.4% and a maximum of 74.5% compared to TIC0. (3) Through comparative analysis, it is concluded that the final positioning errors of the TIC0, TIC1, TIC2, and TIC3 methods are basically the same; no method is better. In summary, under the constraints of atmospheric error products, convergence can be accelerated, while the final accuracy is not affected.
Author Contributions: R.W., J.G., N.Z. and Z.L. conceived and designed the experiments; R.W. and Y.Y. performed the experiments and analyzed the data; L.Z. drew pictures; R.W. and Y.W. wrote the paper; all authors reviewed the paper.
"Physics"
] |
RETURNS TO SCALE AND SCALE ELASTICITY IN TWO-STAGE DEA
Abstract: Data Envelopment Analysis (DEA) provides a method to evaluate the relative efficiency of peer Decision Making Units (DMUs) that have multiple inputs and outputs. The production process in two-stage DEA is performed in two consecutive phases, and DMUs have intermediate measures in addition to their inputs and outputs. A unique feature of the intermediate measures is that the outputs of the first stage are treated as inputs of the second stage. The aim of this paper is to determine the returns-to-scale (RTS) classification and scale elasticity (SE) in two-stage DEA. Therefore, an approach is introduced for estimating the RTS situation of DMUs with a two-stage structure, based on the SE quantity in each of the individual stages. The utilization of the proposed approach is demonstrated with a real data set.
INTRODUCTION
Data envelopment analysis is a scientific method for the performance analysis of peer decision making units in the presence of multiple inputs and outputs. In recent years, a number of DEA studies have focused on measuring the relative efficiency of DMUs with a two-stage structure (e.g., see [1-6]). In two-stage DEA, DMUs have a two-stage structure and intermediate measures exist between the two consecutive stages. Namely, the first stage uses the inputs to generate intermediate measures, and the second stage then uses them to produce outputs. Consequently, the intermediate measures determined by the first stage are all of the second stage's inputs.
Meanwhile, returns to scale and scale elasticity are two important topics in production theory, and since the beginning of DEA research, RTS has been widely discussed as an important economic implication of DEA efficiencies. These two concepts can determine the optimal size of efficient DMUs under variable returns to scale technology. Most of the previous attempts to deal with two-stage DEA have only addressed measuring the performance of such two-stage processes. Therefore, this research attempts to measure RTS in the analytical framework of two-stage DEA. To achieve this goal, the production space in two-stage DEA under variable returns to scale (VRS) technology is investigated, and a method for measuring the RTS quality in this setting, based on the SE quantity and RTS classification in each of the individual stages, is proposed.
The structure of this research unfolds as follows: In Section 2, two-stage DEA under variable returns to scale (VRS) technology is introduced, and the cost minimization model is applied for evaluating each DMU with a two-stage structure. The proposed method for estimating RTS and SE in two-stage DEA is discussed in Section 3. Section 4 includes an application of the proposed method to 26 branches of an Iranian commercial bank. Finally, the concluding remarks are provided in Section 5.
TWO-STAGE DEA UNDER VARIABLE RETURNS TO SCALE
It is assumed that there are n DMUs whose production activities are performed in two phases: in the first stage, each DMU uses its inputs in order to produce D intermediate products, which then serve as the inputs of the second stage. Figure 1 visually describes this production process for DMU_j. The Production Possibility Set (PPS) under variable returns to scale (VRS) technology for each stage can be defined by standard DEA formulations, and evaluating DMU_o over these sets yields the cost minimization model (1). Two row vectors of dual variables are related to the first and second sets of constraints in model (1); similarly, two further row vectors are related to the third and fourth sets of constraints in model (1). Finally, if the collection of dual variables is an optimal solution of model (2), then the following lemmas hold.
Lemma 1. For each feasible solution of (1), the associated dual quantities satisfy the following inequalities. Considering the third set of constraints in (2), it follows from (3) and (5) that the first inequality holds; similarly, the second one is obtained from the fourth set of constraints in (2). Mathematically, the complementary slackness conditions can be specified explicitly; the claims then follow from (6), (7) and (10), and from (8), (9) and (11), respectively.

Remark 2. The cost minimization model is a non-radial DEA model. Therefore, a problem of multiple projections can occur, and this issue is an effective factor in RTS measurement. Sueyoshi et al. [9] have investigated how to solve this difficulty. In this study, it is assumed that the projection is unique.
Remark 3. Scale elasticity (SE) is an important topic in performance analysis related to RTS. In fact, it represents the quantitative part of RTS, namely the proportional change in outputs resulting from an equi-proportionate change in inputs. Considering the optimal solution of model (2), the scale elasticity of each stage is determined directly from the optimal dual values [9,10]. A problem associated with the RTS measurement is that sometimes a supporting hyperplane of each stage cannot be uniquely determined. In other words, it is necessary to consider an occurrence of multiple optimal solutions for the dual values in model (2). For dealing with this problem, see [9]. In this paper, it is assumed that model (2) has a unique optimal solution.
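Since the explicit statement of model (1) did not survive extraction, the following Python sketch records one assumed reading of the two-stage VRS cost-minimization model: a common intermediate vector z links the two stages, each stage is enveloped by the observed data, and the convexity constraints eλ = 1 and eμ = 1 impose VRS. The function and variable names are ours, and the constraint set is an illustration of the idea, not the authors' exact model.

```python
import numpy as np
from scipy.optimize import linprog

def two_stage_cost_min(X, Z, Y, cx, cz, o):
    """X: (m, n) inputs, Z: (D, n) intermediates, Y: (s, n) outputs,
    cx: (m,) input costs, cz: (D,) intermediate costs, o: index of DMU_o.
    Decision vector v = [x (m), z (D), lam (n), mu (n)], all >= 0."""
    m, n = X.shape
    D, s = Z.shape[0], Y.shape[0]
    nv = m + D + 2 * n
    c = np.concatenate([cx, cz, np.zeros(2 * n)])   # minimise cx'x + cz'z

    A_ub, b_ub = [], []
    for i in range(m):                               # stage 1: X lam <= x
        row = np.zeros(nv); row[m + D:m + D + n] = X[i]; row[i] = -1.0
        A_ub.append(row); b_ub.append(0.0)
    for d in range(D):                               # stage 1: Z lam >= z
        row = np.zeros(nv); row[m + D:m + D + n] = -Z[d]; row[m + d] = 1.0
        A_ub.append(row); b_ub.append(0.0)
    for d in range(D):                               # stage 2: Z mu <= z
        row = np.zeros(nv); row[m + D + n:] = Z[d]; row[m + d] = -1.0
        A_ub.append(row); b_ub.append(0.0)
    for r in range(s):                               # stage 2: Y mu >= y_o
        row = np.zeros(nv); row[m + D + n:] = -Y[r]
        A_ub.append(row); b_ub.append(-Y[r, o])

    A_eq = np.zeros((2, nv))                         # VRS: e'lam = e'mu = 1
    A_eq[0, m + D:m + D + n] = 1.0
    A_eq[1, m + D + n:] = 1.0

    return linprog(c, A_ub=np.array(A_ub), b_ub=b_ub,
                   A_eq=A_eq, b_eq=np.ones(2), bounds=[(0, None)] * nv)
```

Feeding the observed data matrices and a unit-cost row such as the one in Table 1 into `two_stage_cost_min` would return the cost-efficient projection (x*, z*) in the first m + D entries of `res.x`.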
MEASUREMENT OF RTS AND SE IN TWO-STAGE DEA
To introduce a new approach for RTS measurement in two-stage DEA, consider the following brief description of the relation among a supporting hyperplane, RTS and SE in DEA [8,9,11].
Case 2) If SE1 = 1 and SE2 < 1, then the overall RTS will depend on the existence or non-existence of a contraction possibility in stage 1. Consequently, model (15) is proposed to investigate this possibility.
Case 3) If SE1 = 1 and SE2 > 1, then the existence or non-existence of an expansion possibility in stage 1 is the recognition criterion for the RTS measurement. In this case, model (17) examines this possibility.
Similarly, the type of RTS at the corresponding projection point is determined, and a discussion similar to the above holds when SE1 < 1 and SE2 > 1.
APPLICATION
In this section, the proposed method is applied to 26 branches of an Iranian commercial bank, each of which has a two-stage structure. The two inputs to the first stage are personnel privilege and interest on deposit. The two intermediate measures (or the outputs from the first stage) are the total sum of four deposits and the bank commission. The three outputs from the second stage are facilities, bank interest and other resources. Table 1 reports the data set; its last row indicates the unit costs associated with the inputs and intermediate measures.
Note that the first input, i.e. the personnel privilege, is composed of several factors affecting the quality of the personnel, including the record, university degree, educational major, skills, salary, and so on. This input is obtained after normalizing these factors and is therefore a non-dimensional quantity. Moreover, the dollar has been considered as the unit of the other inputs and outputs. The interest on deposit is an amount that each branch pays to the clients for long-term and short-term deposit accounts. The first intermediate measure contains the sum of four deposits which are opened by the clients in each branch. These deposits are long-term and short-term deposit accounts and current and savings accounts. Since each branch allocates 20 percent of each deposit as the interest to the depositor, $0.2 is considered as the unit cost associated with the first intermediate measure. The bank commission is an amount that each branch receives from the clients for providing different services. The facilities are the loans and other credit facilities that each branch pays to the clients. The second output, i.e. bank interest, is the amount of interest that each branch receives from the customers for providing facilities. The last output, i.e. other resources, comprises the revenues that each branch makes by investing in different projects.
Table 2 reports the results from models (1), (2), (15) and (17). The optimal inputs and intermediate measures from model (1) for each branch are reported under columns 2, 3, 4 and 5. The cost efficiency corresponding to each branch appears in the 6th column. Using model (2), columns 7 and 8 of Table 2 represent the SE quantity for stages 1 and 2, respectively. The next column reports the optimal value obtained from models (15) or (17) for the RTS measurement of some branches; note that each branch needs this value for improving its size according to relations (16) and (18). The last column of Table 2 reports the state of the overall RTS for the cost-efficient projection of each branch.
CONCLUDING REMARKS
The current paper discusses the problems of SE and RTS measurement in two-stage DEA. In a two-stage process, the outputs of the first stage, which are the intermediate measures, serve as the inputs of the second stage. In fact, this research has extended the RTS concept from classical DEA to two-stage DEA. The proposed method determines the SE quantity and the type of overall RTS for the two-stage process by considering the SE quantity in each stage.
Congestion indicates an economic state where inputs are overly invested. In other words, congestion is identified when an increase in one or more inputs causes the worsening of one or more outputs. From the viewpoint of economic theory, the issues of RTS and congestion are closely interrelated. Therefore, investigating the congestion concept in two-stage DEA can be a future research issue.
Each DMU_j is characterized by the two consecutive production stages, with data vectors related to inputs, intermediate measures and outputs, respectively. Semi-positive vectors of weights connect the inputs and intermediate measures of the n DMUs in stage 1, and the intermediate measures and outputs of the n DMUs in stage 2, respectively. e is a row vector with all elements equal to 1, and 0 is a zero vector whose dimension depends upon its corresponding vector comparison.
Figure 1: The production process of a DMU with two consecutive stages.

In model (1), two column vectors carry the input costs in stage 1 and the intermediate-measure costs in stage 2, respectively. Based on an optimal solution of this model, the cost efficiency can be defined, and the cost-efficient projection of DMU_o is obtained. For introducing a common z* between stage 1 and stage 2, the values of the slack variables have been ignored in the cost-efficient projection. The dual of model (1) is expressed by model (2).
The optimal dual values attached to the fifth and sixth constraints of model (1) enter the scale elasticities of stages 1 and 2, respectively. The proof is not difficult, regarding the constraints of model (1) and the complementary slackness conditions.

Remark 1. Note that the cost-efficient projections are points on the efficiency frontiers of stages 1 and 2, respectively, and we want to measure the RTS at these points in two-stage DEA. For each real scaling factor, the corresponding equations of the supporting hyperplanes hold, so that the standard relation among SE1, the optimal dual value of stage 1, and the type of RTS in stage 1 at the projection (x*, z*) applies; a similar relation exists among SE2, the optimal dual value of stage 2, and the type of RTS in stage 2. With the consideration of the SE quantity in stages 1 and 2, five different cases must be perused.

Case 1) If SE1 = 1 and SE2 = 1, then the two-stage overall RTS will be considered as CRS with SE_O = 1 (SE_O denotes the overall SE of the two-stage process).

In Case 2) above, if there is no contraction possibility in stage 1, CRS with SE_O = 1 will be considered as the two-stage overall RTS.

In Case 3) above, (i) if there is no expansion possibility in stage 1, CRS with SE_O = 1 will be considered as the two-stage overall RTS; (ii) if the optimal value of model (17) is positive, then the two-stage overall RTS will be IRS with SE_O = SE1 × SE2 at the projection point.

Case 4) and Case 5): If SE1 < 1 and SE2 < 1 (SE1 > 1 and SE2 > 1), then DRS (IRS) with SE_O = SE1 × SE2 will be considered as the two-stage overall RTS, and for resizing, model (15) (model (17)) must be solved. If SE1 > 1 and SE2 < 1, three possibilities for the overall RTS can occur. In this case, first the value of k is calculated according to the following equation:

k = SE1 × SE2 (19)

Accordingly, three options may happen: (a) if k = 1, then CRS with SE_O = 1 will be considered as the overall RTS; (b) if k < 1, then the type of RTS will be determined by solving model (15), as in Case 2; (c) if k > 1, then the type of RTS will be determined by solving model (17), as in Case 3.
Table 1. Data Set
"Economics"
] |
Minimum $L^q$-distance estimators for non-normalized parametric models
We propose and investigate a new estimation method for the parameters of models consisting of smooth density functions on the positive half axis. The procedure is based on a recently introduced characterization result for the respective probability distributions, and is to be classified as a minimum distance estimator, incorporating as a distance function the $L^q$-norm. Throughout, we deal rigorously with issues of existence and measurability of these implicitly defined estimators. Moreover, we provide consistency results in a common asymptotic setting, and compare our new method with classical estimators for the exponential-, the Rayleigh-, and the Burr Type XII distribution in Monte Carlo simulation studies. We also assess the performance of different estimators for non-normalized models in the context of an exponential-polynomial family.
Introduction
One of the most classical problems in statistics is the estimation of the parameter vector of a parametrized family of probability distributions. It presents itself in a significant share of applications because parametric models often contribute a reasonable compromise between flexibility in the shape of the statistical model and meaningfulness of the conclusions that can be drawn from the model. As a consequence, all kinds of professions are confronted with the issue of parameter estimation, be it meteorologists, engineers or biologists. Throughout the last decades, a vast number of highly specialized estimation procedures for all kinds of situations has been provided, but the procedure that is arguably used most often remains the maximum likelihood estimator. Apart from its (asymptotic) optimality properties, its popularity is presumably in direct relation with its universality: For the professions mentioned above, and many more, whose prime interest is not the study of sophisticated statistical procedures, it is essential to have at hand a method that is both easily communicated and applicable to a wide range of model assumptions. A second class of methods incorporates the idea of using as an estimator the value that minimizes some goodness-of-fit measure. To implement this type of estimators, the empirical distribution, quantile or characteristic function is compared to its theoretical counterpart from the underlying parametric model in a suitable distance, and the term is minimized over the parameter space, see Wolfowitz (1957), or Parr (1981) for an early bibliography. These procedures provide some freedom in adapting the estimation method to the intended inferences from the model, and they regularly possess good robustness properties [see Parr and Schucany (1980) as well as Millar (1981)]. An example which was discussed recently, and which goes by the name of minimum CRPS estimation, see Gneiting et al. (2005), is tailored to the practice of issuing forecasts: As argued there, a good probabilistic forecast minimizes a (strictly) proper scoring rule such as the continuous ranked probability score (CRPS), and after constructing a suitable model it appears somewhat more natural to use as an estimator the one that minimizes the scoring rule instead of a classical estimation method like maximum likelihood [for a comparison see Gebetsberger et al. (2018)]. As it happens, these rather universal procedures listed above easily run into computational hardships. Just consider that even for 'basic' models, density functions can take complicated forms, and distribution or characteristic functions, or even normalization constants, may be nowhere near an explicit formula. This is where we tie in. In a recent work, Betsch and Ebner (2019a) established distributional characterizations that, from a practical point of view, are comparable to the characterization of a probability distribution through its distribution function. Their results, which are given in terms of the derivative of a density function and the density itself, provide explicit formulae that simplify the dependence of the terms on the parameters (even for rather complicated models), and extend characterizations via the zero-bias or equilibrium transformation [Goldstein and Reinert (1997), Peköz and Röllin (2011), respectively] that arise in the context of Stein's method, cf. Chen et al. (2011).
The aim of this work is to investigate these characterizations, which were already used to construct goodness-of-fit tests [see Betsch and Ebner (2020), Betsch and Ebner (2019b)], more closely in the context of parameter estimation. An advantage of the resulting estimators lies in the way the density function of the underlying model appears in the characterization, and thus also in the estimation method. When considering, for some (positive) density function p, the quotient $p'/p$, the term no longer depends on the integration constant which ensures that the function integrates to one, but only on the functional form of the density. As indicated before, our estimators depend on the underlying model precisely via this quotient, so they are applicable in cases where the normalization constant is unknown. Models of this type occur (though often in discrete settings) in such applied areas as image modeling [using Markov random fields, see Li (2009)] and machine learning, or in any other area where models are complex enough to render the calculation of the normalization constant impractical. For more specific discussions of such applications, we refer to the introduction of the work by Uehara et al. (2019a). The problem was already addressed by Hyvärinen (2005), who set out to find an estimation method which only takes into account the functional form of a density. The approach introduced there goes by the name of 'score matching', and the estimation method involves terms of the form $p''/p - \frac{1}{2}(p'/p)^2$ and hence does not depend on the normalization constant either. In the univariate case we discuss here, our method provides a good supplement as it contains no second derivatives and may thus be applicable in cases where other methods fail. Also note that several other approaches, by Pihlaja et al. (2010), Matsuda and Hyvärinen (2019), and Uehara et al. (2019b), are available. Later on we also discuss noise-contrastive estimation, a concept introduced by Gutmann and Hyvärinen (2010). All these references indicate that statistical inference for non-normalized models is a topic of very recent investigation that also interests researchers in machine learning, a fact which we further allude to at the end of the following section.
In Section 2 we introduce this new class of parameter estimators, which are comparable, in their range of applicability in the given setting, to the maximum likelihood and minimum Cramér-von Mises distance estimators [as discussed by Parr and Schucany (1980) or Parr and De Wet (1981)]. We rigorously deal with the existence and measurability of our estimators in Section 3. In Section 4 we provide results on consistency. Thereafter, we give as (normalized) examples the exponential (Section 5), the Rayleigh (Section 7), and the Burr Type XII distribution (Section 8). For each of the three parametric models we compare our new method to classical methods like the maximum likelihood and minimum Cramér-von Mises distance estimator in competitive Monte Carlo simulation studies. The Burr distribution [cf. Burr (1942), Rodriguez (1977), Tadikamalla (1980), Section 6.2 of Kleiber and Kotz (2003), or Kumar (2017)] is a relevant model in econometrics, initiated by Singh and Maddala (1976) [see also Schmittlein (1983)], and in other areas like engineering, hydrology, and quality assurance, see Shah and Gokhale (1993) for corresponding references. However, the parameter estimation is non-trivial and can even cause computational issues. Thus, providing a new estimation method could prove useful in applications. In Section 9 we discuss an exponential-polynomial model for which the normalization constant is intractable, and we compare the new estimators with the score matching and noise-contrastive estimation approaches.
The new estimators
To be specific, recall that the problem of parameter estimation for continuous, univariate probability distributions presents itself as follows. Consider, for $\Theta \subset \mathbb{R}^d$, a parametric family of probability density functions $\mathcal{P}_\Theta = \{ p_\vartheta \mid \vartheta \in \Theta \}$, and let $X_1, \dots, X_n$ be a sample consisting of independent real-valued random variables with a distribution from $\mathcal{P}_\Theta$, that is, there exists some $\vartheta_0 \in \Theta$ such that $X_i$ has density function $p_{\vartheta_0}$ ($X_i \sim p_{\vartheta_0}$, for short) for $i = 1, \dots, n$. Denote by $P_\vartheta$ the distribution function corresponding to $p_\vartheta$. The task is to construct an estimator of the unknown $\vartheta_0$ based on $X_1, \dots, X_n$.
For the construction of our new estimation method, we first recall in a non-technical fashion a famous distributional characterization that can be traced back to Charles Stein, see Chapter VI of Stein (1986). In the more elaborated version of Ley and Swan (2013) it establishes that, given a suitable probability density function p, the distribution of a real-valued random variable X is given through the density function at hand if, and only if,
$$\mathbb{E}\Big[ f'(X) + \frac{p'(X)}{p(X)}\, f(X) \Big] = 0$$
for a large enough class of suitably chosen test functions f. Motivated by the well-known zero-bias distribution, Betsch and Ebner (2019a) used the above characterization in a recent publication to derive explicit identities which retain the essence of the characterizing property. Indeed, they were able to derive from the Stein characterization that, for a suitable density function p on the positive axis with few technical assumptions (which we adopt below), the distribution of a positive random variable X (satisfying a weak integrability property) is given through p if, and only if, the distribution function $F_X$ corresponding to X satisfies
$$F_X(t) = \mathbb{E}\Big[ -\frac{p'(X)}{p(X)}\, \min(X, t) \Big], \quad t > 0. \tag{1}$$
As we intend to use this result as a foundation for our estimation method in parametric models for non-negative quantities, assume that the support of each density function in $\mathcal{P}_\Theta$ is $(0, \infty)$. In particular, suppose that each $p_\vartheta$ is positive and continuously differentiable on $(0, \infty)$. Also assume that $\lim_{x \to \infty} p_\vartheta(x) = 0$. These presumptions were made by Betsch and Ebner (2019a) to derive the characterization given above, and they are straightforward to check for most common density functions. Particularly the last condition is exhaustively discussed in Proposition 3.7 of Döbler (2015). Let X be a positive random variable with
$$\mathbb{E}\left| X\, \frac{p_\vartheta'(X)}{p_\vartheta(X)} \right| < \infty, \tag{2}$$
and define the function
$$\eta(t, \vartheta) = \mathbb{E}\Big[ \frac{p_\vartheta'(X)}{p_\vartheta(X)}\, \min(X, t) \Big] + F_X(t)$$
for $(t, \vartheta) \in (0, \infty) \times \Theta$. Then, the characterization of Betsch and Ebner (2019a), as built up in Equation (1) and as given in their Corollary 3, states that X has density function $p_\vartheta$ if, and only if, $\eta(t, \vartheta) = 0$ for every $t > 0$. Therefore, if we assume initially that $X \sim p_{\vartheta_0}$ [note that (2) is satisfied by requirement on $p_\vartheta$], then $\|\eta(\cdot, \vartheta)\|_{L^q} = 0$ if, and only if, $\vartheta = \vartheta_0$.
Here, $L^q = L^q\big((0,\infty), \mathcal{B}(0,\infty), w(t)\,\mathrm{d}t\big)$, $1 \le q < \infty$, denote the usual $L^q$-spaces over $(0, \infty)$, w is a positive and integrable weight function, and, for $f \in L^q$ and $g \in L^{q'}$ ($1/q + 1/q' = 1$), $\|f\|_{L^q}$ and $\langle f, g \rangle$ are the usual norm and duality in $L^q$. Thus, with the empirical version of η, based on a sample of independent and identically distributed (i.i.d.) random variables $X_1, \dots, X_n$ with $X_1 \sim p_{\vartheta_0}$, namely
$$\eta_n(t, \vartheta) = \frac{1}{n} \sum_{j=1}^n \Big[ \frac{p_\vartheta'(X_j)}{p_\vartheta(X_j)}\, \min(X_j, t) + \mathbf{1}\{X_j \le t\} \Big], \tag{3}$$
a reasonable estimator for the unknown $\vartheta_0$ is
$$\hat\vartheta_{n,q} = \arg\min\big\{ \|\eta_n(\cdot, \vartheta)\|_{L^q} \,\big|\, \vartheta \in \Theta \big\}, \tag{4}$$
that is, we choose $\hat\vartheta_{n,q}$ such that $\|\eta_n(\cdot, \hat\vartheta_{n,q})\|_{L^q} \le \|\eta_n(\cdot, \vartheta)\|_{L^q}$ for each $\vartheta \in \Theta$. Heuristically, $\|\eta_n(\cdot, \vartheta)\|_{L^q}$ approximates $\|\eta(\cdot, \vartheta)\|_{L^q}$, so $\hat\vartheta_{n,q}$ should provide an estimate for the minimum of $\vartheta \mapsto \|\eta(\cdot, \vartheta)\|_{L^q}$, which coincides with $\vartheta_0$, the (unique) zero of this function. At this point, of course, there arise questions of existence and measurability of such an estimator, and we will handle these questions in full detail in Section 3. Intuitively, one might argue to replace $F_X$ and the empirical distribution function in the definition of η and $\eta_n$, respectively, with the theoretical distribution function $P_\vartheta$. However, there is a bit of a technical point involved, and the characterizations by Betsch and Ebner (2019a) do not include results that give a rigorous handle on this slightly (yet decisively) different situation. There are, however, similar characterizations for univariate distributions with other supports than the positive half axis. We allude to that setting in Section 10. Note that the availability of the term $p_\vartheta'/p_\vartheta$ for the model in consideration is rather essential. If this term is not amenable explicitly, it might still be calculable using numerical differentiation (and so $\eta_n$ and the estimator could be computed numerically), but it would make it hard to theoretically justify the validity of the conditions on $p_\vartheta$. In our experience, however, the term $p_\vartheta'/p_\vartheta$ is readily available whenever $p_\vartheta$ can be differentiated explicitly, and this seems a manageable assumption.
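To make the procedure concrete, the following minimal sketch implements the estimator (4), with $\eta_n$ as in (3), for the exponential model $p_\vartheta(x) = \vartheta e^{-\vartheta x}$ (so that $p_\vartheta'/p_\vartheta \equiv -\vartheta$), using the weight $w(t) = e^{-at}$ and q = 2. Sample size, grid resolution and the tuning parameter are illustrative choices of ours, not prescriptions from the text.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
theta0 = 2.0
X = rng.exponential(scale=1.0 / theta0, size=100)

a, q = 1.0, 2                                  # weight w(t) = exp(-a t), L^2 case
t = np.linspace(1e-4, 3.0 * X.max(), 2000)     # integration grid in t

M = np.minimum.outer(X, t).mean(axis=0)        # (1/n) sum_j min(X_j, t)
Fn = (X[:, None] <= t).mean(axis=0)            # empirical distribution function

def psi_q(theta):
    # eta_n(t, theta) from (3), with (p'/p)(x) = -theta for the exponential model
    eta = -theta * M + Fn
    return np.trapz(np.abs(eta) ** q * np.exp(-a * t), t)

res = minimize_scalar(psi_q, bounds=(1e-6, 50.0), method="bounded")
print("theta_hat =", res.x)                    # close to theta0 = 2
```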
As we have outlined above, our new estimators are eventually based on Stein characterizations which rely on some suitable class of test functions [for an overview in the univariate case, and a record of the vast amount of literature on these identities, see Ley et al. (2017b)]. The goal of Betsch and Ebner (2019a) was to derive from these Stein identities new characterizations that no longer involve the classes of test functions. While this approach apparently leads to feasible applications in statistics, other methods are based directly on the Stein characterizations, using Stein discrepancies, which gradually become popular in machine learning. The idea in the context of parameter estimation, in heuristic terms, boils down to choosing as a parameter estimator the value which (approximately) minimizes
$$\sup_f \Big| \mathbb{E}\Big[ f'(X) + \frac{p_\vartheta'(X)}{p_\vartheta(X)}\, f(X) \Big] \Big|,$$
where the supremum is over all test functions in consideration. By the Stein characterization detailed above, the expectation is 0 for every test function precisely when $\vartheta = \vartheta_0$, as we assume that $X \sim p_{\vartheta_0}$. However, it is not clear how to calculate the supremum in practice, taking that the class of test functions is very large. The theory developed around Stein discrepancies has produced different formal methods to evaluate such terms. Other than the fact that they are based on the Stein characterization, the identities derived by Betsch and Ebner (2019a) are not related to the framework of Stein discrepancies, and so it is surprising that merely measuring the difference between the functions in (1) in an $L^q$-norm, which is what we do to construct our estimators, leads back to so-called feature Stein discrepancies. Indeed, upon defining the 'feature' function $\Phi(x, t) = \min\{x, t\}$, $x, t > 0$, and considering the Langevin-Stein operators as applied to suitable functions $f : (0, \infty)^2 \to \mathbb{R}$, we obtain precisely the right-hand side of Equation (1) in the paper by Huggins and Mackey (2018). Retracing their calculation, our minimization can be rewritten as a feature Stein discrepancy over the class $\mathcal{G}_{\Phi, q}$ of test functions as defined by Huggins and Mackey (2018) (the precise form of which is inessential at this point). This means that we can embed our setting into the framework of these feature Stein discrepancies, as the construction of our estimator culminating in (4) corresponds to minimizing the quantity at the beginning of this paragraph which sought to motivate these discrepancies. Now, of course, the starting point of our estimation method being the characterizations by Betsch and Ebner (2019a), we already had our estimator at hand explicitly and could choose the feature function accordingly. Still, the fact that both the characterization of Betsch and Ebner (2019a) and the (feature) Stein discrepancy approach (for the above feature function), when translated into an estimation method, lead to the same procedure is remarkable and makes it worthwhile to study the method further, as we were assured that it can be rather hard to find explicit examples for which the Stein discrepancy approach is feasible in practice.
To complete this insightful tour into the realm of Stein discrepancies, we mention some contributions that base solutions of various statistical problems on those discrepancies. In particular, Chwialkowski et al. (2016), Liu et al. (2016), and Yang et al. (2018) construct tests of fit, Gorham and Mackey (2015) measure sample quality, and Barp et al. (2019) develop estimation methods for non-normalized statistical models.
Existence and measurability
We discuss the measurability properties of η n and derive an existence result for a measurable version of (approximate) estimators of the type in (4). The result that is central to us in this section can be found in Chapter III of Castaing and Valadier (1977) [see the references therein and Chapter 8 by Cohn (2013) for further background]. Before we summarize these results, recall that a Suslin space is a Hausdorff topological space which is the image of a separable, completely metrizable topological space under a continuous map [for an overview, consult Chapter II of Schwartz (1973)]. See also Remark A.1 in Appendix A for more information.
Theorem 3.1. Let (Ω, F, P) be a complete probability space and $(S, \mathcal{O}_S)$ a Suslin topological space with Borel-σ-field $\mathcal{B}(S)$. Assume that Γ maps Ω into the non-empty subsets of S, and that the graph $\{(\omega, s) \in \Omega \times S \mid s \in \Gamma(\omega)\}$ lies in $\mathcal{F} \otimes \mathcal{B}(S)$. Then, there exists an $(\mathcal{F}, \mathcal{B}(S))$-measurable map $\vartheta : \Omega \to S$ such that $\vartheta(\omega) \in \Gamma(\omega)$ for every $\omega \in \Omega$. Additionally, for every $\mathcal{F} \otimes \mathcal{B}(S)$-measurable function $g : \Omega \times S \to \overline{\mathbb{R}}$, the map $\omega \mapsto \inf_{s \in \Gamma(\omega)} g(\omega, s)$ is $(\mathcal{F}, \overline{\mathcal{B}})$-measurable. Here, $(\overline{\mathbb{R}}, \overline{\mathcal{B}})$ denotes the extended real line with its usual σ-field, and we write ⊗ for the product of σ-fields.
To apply Theorem 3.1, we first have to investigate the measurability properties of η n . In the setting of Section 2, assume the following regularity condition.
Let (Ω, F, P) be a complete probability space, which is assumed to underlie all random quantities of the previous and subsequent sections. Notice that the function $\eta_n$ defined in (3) depends on the random variables $X_1, \dots, X_n$ defined on Ω; hence $\eta_n$ (as a random quantity) can be understood as a map $\Omega \times (0, \infty) \times \Theta \to \mathbb{R}$. Exploiting the structure of $\eta_n$, we obtain the following lemma. The proof is simple, and the basic thoughts can be found in Appendix A.
Whenever we refer to an estimator that satisfies (4), we mean precisely such an (approximate) measurable version. This settles the existence problem, and for our asymptotic studies we have the measurability of $\hat\vartheta_{n,q}$ at hand.
Consistency
In this section, we investigate the asymptotic behavior of our estimators. Unfortunately, we cannot apply the general results for minimum distance estimators given by Millar (1984), since a major assumption in that work is that the term in the norm is differentiable (with respect to ϑ) with derivative not depending on ω, that is, in a sense, the parameter and the 'uncertainty' have to be separated, which is clearly not the case in our setting. Thus, we need to deal with the empirical process involved.
Assume the setting from Section 2. For brevity, we keep the notation $\psi_{n,q}(\vartheta) = \psi_{n,q}(\omega, \vartheta) = \|\eta_n(\omega, \cdot, \vartheta)\|_{L^q}$ and set $\psi_q(\vartheta) = \|\eta(\cdot, \vartheta)\|_{L^q}$. Recall from the construction that $\hat\vartheta_{n,q}$ (approximately) minimizes $\psi_{n,q}$ [see (6)], and $\vartheta_0$ is the unique minimum of $\psi_q$. The heuristic of the consistency statement proven in this section is as follows. If $\psi_{n,q}$ converges to $\psi_q$ in a suitable function space, then the random minimal points $\hat\vartheta_{n,q}$ converge to $\vartheta_0$. In order to establish convergence of $\psi_{n,q}$, we need the functions to be sufficiently smooth in ϑ. In most applications the mapping $\vartheta \mapsto p_\vartheta(x)$ will be continuously differentiable for every x > 0, which can often be used to derive the following regularity condition.
Now, let $K \ne \emptyset$ be an arbitrary compact subset of Θ. Then on Ω and for $\vartheta^{(1)}, \vartheta^{(2)} \in K$, we have
$$\big| \psi_{n,q}(\omega, \vartheta^{(1)}) - \psi_{n,q}(\omega, \vartheta^{(2)}) \big| \le H\, \big\| \vartheta^{(1)} - \vartheta^{(2)} \big\|^\alpha,$$
with H and α as in (R3). In particular, $K \ni \vartheta \mapsto \psi_{n,q}(\omega, \vartheta)$ is continuous for every $\omega \in \Omega$, and, by Lemma 3.2, it constitutes a product measurable map. This already implies that $\vartheta \mapsto \psi_{n,q}(\vartheta)$ is a random element of $C(K)^+$ [see Lemma 3.1 of Kallenberg (2002)], the space of continuous functions from K to $[0, \infty)$, which is a complete, separable metric space (endowed with the usual metric that induces the uniform topology). From (R3) it also follows that $K \ni \vartheta \mapsto \psi_q(\vartheta)$ is an element of $C(K)^+$. We can now state the convergence results for $\psi_{n,q}$ that are essential for our consistency proof.
The proof of this lemma is rather technical and deferred to Appendix B. Note that the term $\inf_{\vartheta \in F} \psi_{n,q}(\vartheta)$ is a random variable by Theorem 3.1 (cf. the measurability of $m_{n,q}$ in the previous section). The following theorem uses the above lemma to establish consistency. In the second statement, we assume that the parameter space Θ is compact, thus rendering Lemma 4.1 applicable on the whole of Θ, which will turn out essential to prove strong consistency. For most practical purposes this is sufficient, as parameters relevant for modeling in applications can be taken to stem from some (huge) compact set. Note that with this compactness assumption we actually do not need the $\varepsilon_n$-term in (6), since $\psi_{n,q}$ is lower semi-continuous by (R1) and Fatou's lemma, and thus attains its minimum in Θ. The first statement of the following theorem shows that if the sequence $\hat\vartheta_{n,q}$ is already known to be tight, no compactness assumption is needed, but we can only expect weak consistency in general; we denote by '$\xrightarrow{P}$' convergence in probability. After the proof of the theorem, we provide an insight in which cases this is possible (Remark 4.3).
Proof. In the proof of (i) we follow Theorem 3.2.2 from van der Vaart and Wellner (2000), but we adapt the reasoning to our setting, using the measurability properties we established in Section 3, and Lemma 4.1. For completeness, as well as to prepare the proof of the second result, we give a full proof. We start with a preliminary observation, establishing that the minimum at $\vartheta_0$ is (locally) well separated. If K is a compact subset of Θ, and O an open subset of $\mathbb{R}^d$ which contains $\vartheta_0$, then
$$\inf_{\vartheta \in K \setminus O} \psi_q(\vartheta) > 0. \tag{7}$$
Indeed, if this is not the case, we find a sequence in $K \setminus O$ along which $\psi_q$ vanishes in the limit; by compactness, it admits a limit point in $K \setminus O$ which, by the continuity of $\psi_q$, is a zero of $\psi_q$ different from $\vartheta_0$, a contradiction. In the following, $B_\varepsilon(\vartheta_0)$ denotes the open ball in $\mathbb{R}^d$ of radius ε around $\vartheta_0$. Applying Lemma 4.1 and (7) to suitable sets K and F, together with (6) and the Portmanteau theorem [cf. Theorem 2.1 of Billingsley (1968)], yields the required probability estimate; note that if $F = \emptyset$, the inequality holds trivially. Since both ε and δ were arbitrary, the claim follows. For this first part of the proof, we only needed the convergences provided by Lemma 4.1 to be valid in probability. For the following proof of (ii), we rely on the stronger result. The arguments we use are scattered over Section 3 of the work by Sahler (1970). For reasons alluded to in Remark A.1, and since that work contains some typos, we provide the adapted arguments. Let ε > 0 and define $\beta_\varepsilon = \inf_{\vartheta \in \Theta \setminus B_\varepsilon(\vartheta_0)} \psi_q(\vartheta)$. By (7), we have $\beta_\varepsilon > 0$. Using the well-known equivalent criterion for almost sure convergence, Lemma 4.1 gives the uniform convergence of $\psi_{n,q}$ to $\psi_q$ on Θ, P-a.s. By definition of $\beta_\varepsilon$ this implies that, P-almost surely and for n large enough, $\psi_{n,q}(\vartheta) > \beta_\varepsilon / 2$ for every $\vartheta \in \Theta \setminus B_\varepsilon(\vartheta_0)$. Moreover, $\psi_{n,q}(\vartheta_0) + \varepsilon_n \to \psi_q(\vartheta_0) = 0$ P-a.s., as n → ∞, and thus, eventually, the (approximate) minimizer $\hat\vartheta_{n,q}$ has to lie in $B_\varepsilon(\vartheta_0)$. Putting everything together, $\hat\vartheta_{n,q} \to \vartheta_0$ P-a.s., as n → ∞.
Remark 4.3. [A priori tightness of the sequence of estimators]
We provide a tool for proving tightness of the estimators before having established consistency, which we can use in Theorem 4.2 to get consistency even for unbounded parameter spaces. The statement essentially yields that if $\psi_{n,q}$ is strictly convex, then $(\hat\vartheta_{n,q})_{n \in \mathbb{N}}$ is tight. More precisely, suppose that conditions (R1)-(R3) hold. Let Θ be convex with $\vartheta_0 \in \Theta^\circ$, the interior of Θ. Further, let $\psi_{n,q}$ be strictly convex (almost surely). Then the sequence of estimators $\hat\vartheta_{n,q}$ is tight in Θ. The proof is straightforward and some hints are given in exercise problem 4 in Section 3.2 of van der Vaart and Wellner (2000) (for more details, find the proof in Appendix B).
Example: The exponential distribution

Let Θ = (0, ∞) and consider the exponential densities $p_\vartheta(x) = \vartheta e^{-\vartheta x}$, x > 0, which trivially form an admissible class of density functions. Moreover, let $\vartheta_0 \in \Theta$, $X \sim p_{\vartheta_0}$, and take a sample $X_1, \dots, X_n$ of i.i.d. copies of X. An easy calculation gives an explicit expression for $\psi_q$ which nicely illustrates that $\vartheta_0$ is indeed the unique zero of this function. For the particular choice of weight $w(t) = \exp(-at)$, t > 0, with some tuning parameter a > 0, and in the case q = 2, straightforward calculations give an explicit formula for $\psi_{n,2}^2$ in terms of quantities $\Psi_n^{(1)}$ and $\Psi_n^{(2)}$ and the ordered sample $X_{(1)} < \dots < X_{(n)}$. Using that $e^{-aX_{(k)}} < e^{-aX_{(j)}}$ P-a.s. for j < k, and since $1 + aX_{(j)} < e^{aX_{(j)}}$ P-a.s., we have $\Psi_n^{(1)} > 0$ almost surely. Therefore, $\psi_{n,2}^2$ is strictly convex (almost surely), and has a unique minimum. By Remark 4.3 and Theorem 4.2 (i), the estimator is consistent for $\vartheta_0$ (over the whole of Θ). Note that we have not made the dependence of $\Psi_n^{(1)}$ and $\Psi_n^{(2)}$ on 'a' explicit to prevent overloading the notation. With a similar argument as above, we may show that $\Psi_n^{(2)} < 0$ almost surely, and thus we can calculate $\hat\vartheta_{n,2}$ explicitly. To provide insight on the performance of this estimator, we compare it with the maximum likelihood estimator and the minimizer of the mean squared error (for n ≥ 3), which are given as
$$\hat\vartheta_n^{ML} = \frac{n}{\sum_{j=1}^n X_j} \quad \text{and} \quad \hat\vartheta_n^{MSE} = \frac{n-2}{\sum_{j=1}^n X_j},$$
respectively, as well as with the minimum Cramér-von Mises distance estimator discussed in the introduction, namely
$$\hat\vartheta_n^{CM} = \arg\min_{\vartheta > 0} \int_0^\infty \big( F_n(x) - P_\vartheta(x) \big)^2 \, \mathrm{d}P_\vartheta(x),$$
where $P_\vartheta(x) = 1 - \exp(-\vartheta x)$, x > 0, denotes the distribution function of the exponential distribution, and where $F_n$ is the empirical distribution function of $X_1, \dots, X_n$. For this comparison we simulate (for fixed values of n and $\vartheta_0$) D = 100,000 samples of size n from an exponential distribution with parameter $\vartheta_0$, calculate the values of the estimator for each sample, yielding values $\vartheta_1, \dots, \vartheta_D$, and approximate the bias and mean squared error (MSE) via
$$\frac{1}{D} \sum_{i=1}^D (\vartheta_i - \vartheta_0) \quad \text{and} \quad \frac{1}{D} \sum_{i=1}^D (\vartheta_i - \vartheta_0)^2,$$
for each of the above estimators. We perform all simulations with Python 3.7.2 (as provided by the Python Software Foundation, https://www.python.org, accessed 28 August 2019). For the minimization required to calculate the minimum Cramér-von Mises distance estimator, we choose as initial value the maximum likelihood estimator and use a sequential least squares programming method ('SLSQP') [cf. Kraft (1988)] implemented in the 'optimize.minimize' function of the Python module 'scipy', see Jones et al. (2001). Tables 1 and 2 below contain the results for the bias and MSE values. Table 1: Approximated biases calculated with 100,000 exponentially distributed Monte Carlo samples.
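A compressed version of the described Monte Carlo comparison (with a smaller replication number than the paper's D = 100,000) might look as follows; the closed forms used for the maximum likelihood and minimum-MSE estimators are the standard ones for the exponential rate parameter.

```python
import numpy as np

rng = np.random.default_rng(0)
theta0, n, D = 2.0, 50, 10_000        # D kept smaller than the paper's 100,000

est = {"ML": np.empty(D), "minMSE": np.empty(D)}
for i in range(D):
    s = rng.exponential(scale=1.0 / theta0, size=n).sum()
    est["ML"][i] = n / s              # maximum likelihood estimator
    est["minMSE"][i] = (n - 2) / s    # minimum-MSE rescaling (needs n >= 3)

for name, values in est.items():
    bias = np.mean(values - theta0)
    mse = np.mean((values - theta0) ** 2)
    print(f"{name}: bias = {bias:+.4f}, MSE = {mse:.4f}")
```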
As for the biases, the maximum likelihood estimator and the minimum MSE estimator perform almost identically in terms of the absolute bias, and the minimum Cramér-von Mises distance estimator has a slight edge. Our new estimator outperforms all other methods (virtually) uniformly. More precisely, it seems as if for larger tuning parameters 'a' the bias decreases. We will show, however, that this observation is not correct in that generality. The results for the mean squared error reveal that the minimum MSE estimator is the best method with respect to this measure of quality, which is no surprise as it is constructed to minimize the MSE. For sample size n = 10 the superiority is particularly obvious, but for larger samples, the maximum likelihood estimator is only slightly worse. Our new estimator shows almost identical results (for a = 0.25) as the maximum likelihood estimator, underlining that the method is sound and powerful. In contrast to the observation with the bias values, the MSE appears to increase with 'a'. This nicely illustrates the variance-bias trade-off commonly observed in the context of estimation problems.

The case a → ∞

As discussed previously, the simulation results for the exponential distribution somewhat indicate that as the tuning parameter 'a' grows, the bias decreases while the MSE increases. Interestingly, we can put the observations for a → ∞ on a rigorous theoretical basis. To be precise, observe the following general result.
Theorem 6.1. Consider the setting from Section 2 with weight function $w(t) = e^{-at}$, a > 0. For the quantity $\psi_{n,q}(\vartheta, a) = \psi_{n,q}(\vartheta) = \|\eta_n(\cdot, \vartheta)\|_{L^q}$ from the end of Section 3, we make the dependence on the tuning parameter 'a' explicit. Then, on a set of measure one,
$$\lim_{a \to \infty} a^{q+1}\, \psi_{n,q}(\vartheta, a)^q = \Gamma(q+1)\, \Bigg| \frac{1}{n} \sum_{j=1}^n \frac{p_\vartheta'(X_j)}{p_\vartheta(X_j)} \Bigg|^{q},$$
where Γ(·) denotes the Gamma function.
The proof consists of an almost trivial application of an Abelian theorem for the Laplace transform, see p. 182 of Widder (1959), or the work by Baringhaus et al. (2000). Since a, q > 0, the functions $\psi_{n,q}(\vartheta)$ and $a^{q+1} \psi_{n,q}(\vartheta)^q$ attain their minimum in the same point. Thus, in the limit a → ∞, our procedure essentially yields as an estimator the minimizer of the quantity $\big| \frac{1}{n} \sum_{j=1}^n p_\vartheta'(X_j) / p_\vartheta(X_j) \big|$. In the situation of the exponential distribution as discussed in Section 5, the result reduces to $\lim_{a \to \infty} a^3\, \psi_{n,2}(\vartheta, a)^2 = 2\vartheta^2$, so in the limit a → ∞, the procedure will choose ϑ = 0 ∉ Θ as the estimator, which leads to a bias of $-\vartheta_0$ and an MSE of $\vartheta_0^2$. The observation from the simulations is, therefore, not universal. An example for which the limit in Theorem 6.1 is less trivial is the Rayleigh distribution.
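The exponential case of the limit can be checked numerically: for a fixed sample, $a^3 \psi_{n,2}(\vartheta, a)^2$ should approach $2\vartheta^2$ as the tuning parameter grows. The sketch below does this under the empirical form of $\eta_n$ given in (3); grid choices and sample size are ours.

```python
import numpy as np

rng = np.random.default_rng(3)
theta = 1.5
X = rng.exponential(scale=1.0, size=50)

def a3_psi_sq(a, m=50_000):
    # the weight exp(-a t) makes contributions from larger t negligible
    t = np.linspace(1e-8, 50.0 / a, m)
    eta = -theta * np.minimum.outer(X, t).mean(axis=0) + (X[:, None] <= t).mean(axis=0)
    return a ** 3 * np.trapz(eta ** 2 * np.exp(-a * t), t)

for a in (10.0, 100.0, 1000.0):
    print(f"a = {a:6.0f}: {a3_psi_sq(a):.4f}  (limit 2*theta^2 = {2 * theta**2:.4f})")
```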
Example: Rayleigh distribution
Let Θ = (0, ∞) and take the density function of the Rayleigh distribution with parameter ϑ ∈ Θ,
$$p_\vartheta(x) = \frac{x}{\vartheta^2} \exp\Big( -\frac{x^2}{2\vartheta^2} \Big), \quad x > 0.$$
It is easy to check that the Rayleigh density satisfies all regularity conditions stated throughout the work, and that we have
$$\frac{p_\vartheta'(x)}{p_\vartheta(x)} = \frac{1}{x} - \frac{x}{\vartheta^2}.$$
The limit in Theorem 6.1 thus takes the form
$$\lim_{a \to \infty} a^{q+1}\, \psi_{n,q}(\vartheta, a)^q = \Gamma(q+1)\, \Bigg| \frac{1}{n} \sum_{j=1}^n \Big( \frac{1}{X_j} - \frac{X_j}{\vartheta^2} \Big) \Bigg|^{q},$$
where $X_1, \dots, X_n$ are i.i.d. random variables which follow the Rayleigh law, $X_1 \sim p_{\vartheta_0}$, for some unknown scale parameter $\vartheta_0 \in \Theta$. In the case q = 2, it is easy to calculate that the minimum of the above function over ϑ > 0 is given through
$$\tilde\vartheta_n = \Bigg( \sum_{j=1}^n X_j \Big/ \sum_{j=1}^n X_j^{-1} \Bigg)^{1/2}.$$
Strikingly, this asymptotically derived moment-type estimator is itself consistent for $\vartheta_0$, as $\tilde\vartheta_n \to \vartheta_0$ P-a.s., as n → ∞, where we used the law of large numbers, as well as the fact that $X_1, \dots, X_n$ all follow the Rayleigh distribution with parameter $\vartheta_0$ [so that $\mathbb{E} X_1 = \vartheta_0 \sqrt{\pi/2}$ and $\mathbb{E} X_1^{-1} = \sqrt{\pi/2}\,/\,\vartheta_0$]. We compare this estimator with other methods. Among them is our new estimator $\hat\vartheta_{n,2}^{(a)} = \arg\min\{ \psi_{n,2}(\vartheta)^2 \mid \vartheta > 0 \}$, whose objective can again be written explicitly in terms of quantities $\Psi_n^{(1)}$ and $\Psi_n^{(2)}$ involving the weights $e^{-aX_{(j)}}$, where $X_{(1)} < \dots < X_{(n)}$ denotes the ordered sample. It is easily seen that if both $\Psi_n^{(1)} > 0$ and $\Psi_n^{(2)} < 0$ P-a.s., then the minimum can be calculated explicitly through the ratio $-\Psi_n^{(1)} / \Psi_n^{(2)}$, and indeed, using that $e^{-aX_{(k)}} < e^{-aX_{(j)}}$ for j < k and $1 - e^{-aX_{(j)}} - aX_{(j)} e^{-aX_{(j)}} > 0$ P-a.s., we have $\Psi_n^{(1)} > 0$ and, with similar thoughts, $\Psi_n^{(2)} < 0$ P-a.s. Additionally, we consider the maximum likelihood estimator and a moment estimator, which are given as
$$\hat\vartheta_n^{ML} = \Bigg( \frac{1}{2n} \sum_{j=1}^n X_j^2 \Bigg)^{1/2} \quad \text{and} \quad \hat\vartheta_n^{Mom} = \sqrt{\frac{2}{\pi}}\, \bar X_n,$$
respectively. Note in particular that the moment estimator is unbiased and we can expect it to outperform the other estimators in this regard. Finally, we include the minimum Cramér-von Mises distance estimator, given through the analogue of the objective used for the exponential model, where we solve the minimization numerically via a sequential least squares programming method as in the case of the exponential distribution in Section 5, using as initial value the maximum likelihood estimator. The execution of the comparison is as in the example on the exponential distribution, and the results are displayed in Tables 3 and 4. Table 3: Approximated biases calculated with 100,000 Rayleigh-distributed Monte Carlo samples.
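For reference, the two closed-form competitors for the Rayleigh scale are easily coded; the formulas are the standard ones implied by $\mathbb{E} X^2 = 2\vartheta^2$ and $\mathbb{E} X = \vartheta \sqrt{\pi/2}$.

```python
import numpy as np

def rayleigh_ml(x):
    # maximizer of the likelihood for p(x) = (x / th^2) exp(-x^2 / (2 th^2))
    return np.sqrt(np.mean(x ** 2) / 2.0)

def rayleigh_moment(x):
    # unbiased: E[sqrt(2/pi) * mean(X)] = theta, since E X = theta * sqrt(pi/2)
    return np.sqrt(2.0 / np.pi) * np.mean(x)

rng = np.random.default_rng(7)
x = rng.rayleigh(scale=2.0, size=1000)
print(rayleigh_ml(x), rayleigh_moment(x))    # both should be close to 2
```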
Apparently, the moment estimator $\hat\vartheta_n^{Mom}$ outperforms the other estimators with respect to the bias values, while the maximum likelihood estimator $\hat\vartheta_n^{ML}$ attains the smallest MSE. The estimator we obtained via the limit results from the previous section seems sound in itself but is not competitive with the other methods. In terms of bias, the minimum Cramér-von Mises distance estimator is preferable to the maximum likelihood method, and both are outdone by our new estimator, which even keeps up with the unbiased moment estimator for the smaller values of the parameter $\vartheta_0$. Notice that the maximum likelihood and moment estimators tend to underestimate the parameter, while the other procedures tend to a slight overestimation. As for the MSE, the moment estimator and our new method perform similarly and follow the maximum likelihood estimator closely. The minimum Cramér-von Mises distance estimator is a bit behind. To summarize, the maximum likelihood and moment estimators for the Rayleigh parameter are both simple and very convincing, but the newly proposed method keeps up (for suitably chosen tuning parameter) and appears to find a good compromise between bias and MSE. The only grave weakness shows for the large parameter value $\vartheta_0 = 10$ and small sample sizes n = 10, 25.
Example: The Burr Type XII distribution
Consider the density function
$$p_\vartheta(x) = c\, k\, x^{c-1} \big( 1 + x^c \big)^{-k-1}, \quad x > 0,$$
where ϑ = (c, k) ∈ (0, ∞)² = Θ. It is not exactly trivial, but still straightforward, to prove that this is an admissible distribution in terms of the setting in Section 2 [see also Betsch and Ebner (2019a)] and the conditions (R1)-(R3). With q = 2 and weight $w(t) = e^{-at}$, where a > 0, the function $\psi_{n,2}(\vartheta) = \|\eta_n(\cdot, \vartheta)\|_{L^2}$ from Section 3 (see also Section 2) can be calculated explicitly in terms of the ordered sample $X_{(1)} < \dots < X_{(n)}$. Our estimator, as defined in (4), can be calculated as the minimizer of this function over Θ. We use the 'L-BFGS-B'-method [L-BFGS-B algorithm, see Byrd et al. (1995) and Zhu et al. (1997)] implemented in the 'optimize.minimize' function of 'scipy' to solve the minimization numerically, using (1, 1) as initial values. (Note that in preliminary simulations we have tried several other optimization routines, like a truncated Newton algorithm or the 'SLSQP' from previous sections, but the 'L-BFGS-B'-method appeared to be the most reliable for our purpose.) Table 5: Approximated biases calculated with 100,000 Burr-distributed Monte Carlo samples.
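Since the explicit formula for $\psi_{n,2}$ in the Burr case is not reproduced above, the following sketch instead evaluates the objective by numerical integration of $|\eta_n|^2 w$ on a grid and minimizes it with 'L-BFGS-B' from initial values (1, 1), as in the text; the sampling-by-inversion step, grid and tolerances are our own choices.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(11)
c0, k0, n = 2.0, 1.5, 200
U = rng.uniform(size=n)
X = ((1.0 - U) ** (-1.0 / k0) - 1.0) ** (1.0 / c0)   # Burr XII via inversion

a = 3.0
t = np.linspace(1e-4, 3.0 * np.quantile(X, 0.99), 2000)
Fn = (X[:, None] <= t).mean(axis=0)                  # empirical CDF on the grid
M = np.minimum.outer(X, t)                           # min(X_j, t), shape (n, len(t))

def score(x, c, k):
    # p'_theta / p_theta for the Burr XII density
    return (c - 1.0) / x - (k + 1.0) * c * x ** (c - 1.0) / (1.0 + x ** c)

def objective(theta):
    c, k = theta
    eta = (score(X, c, k)[:, None] * M).mean(axis=0) + Fn   # eta_n as in (3)
    return np.trapz(eta ** 2 * np.exp(-a * t), t)

res = minimize(objective, x0=(1.0, 1.0), method="L-BFGS-B",
               bounds=[(1e-3, None), (1e-3, None)])
print(res.x)                                         # should land near (c0, k0)
```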
As competitors to our estimator we consider the maximum likelihood estimator, with implementation as suggested by Shah and Gokhale (1993) [for a different algorithm, see Wingo (1983)], and the minimum Cramér-von Mises distance estimator. More precisely, for the former we use the Newton-Raphson method (with initial value c = 1) to find the root of the likelihood equation in the c-parameter, which involves terms of the form $\log(1 + X_j^c)$; for the latter, the minimization is solved numerically, similar to our new estimator. Note that there have been further contributions to the estimation of the Burr parameters [see Schmittlein (1983), Shah and Gokhale (1993), Wingo (1993), and Wang and Cheng (2010)]. Like for the exponential and Rayleigh distributions, we approximate bias and MSE of these estimators and show the results in Tables 5 and 6. For each value of $\vartheta_0$ and n, the first line corresponds to the bias/MSE of the estimator for the c-parameter, and the second line corresponds to the k-parameter. As before, it becomes evident that our new procedure outperforms the maximum likelihood and minimum Cramér-von Mises distance estimator in terms of the bias. Unlike for the exponential distribution, the dependence on the tuning parameter 'a' is less clear: For a great deal of parameter values and sample sizes, the estimator $\hat\vartheta_{n,2}^{(3)}$ yields the best result, but in some cases (mostly for the k-parameter) the estimator $\hat\vartheta_{n,2}^{(0.25)}$, with tuning parameter from the other end of the spectrum, performs best. Also observe the oddity that in some cases the estimator fares noticeably worse for a = 0.5 than for both smaller and larger tuning parameters. Thus, if one seeks to minimize some measure of quality of the estimators, an optimal, data-dependent choice of the tuning parameter would be useful (more on this in Section 10). In the light of our simulations, we suggest the use of $\hat\vartheta_{n,2}^{(3)}$ in practice as long as no adaptive tuning is available. Both in the bias and in the MSE simulation, the maximum likelihood estimator ran into computational issues for sample size n = 10. The minimum Cramér-von Mises distance estimator is more stable in this regard, but still a lot less so than our new estimators, which show notably slighter outliers only for large values of the Burr parameters. Once samples get larger (n = 50+), the asymptotic optimality properties of the maximum likelihood estimator appear to kick in, as its performance stabilizes. Still, for suitably chosen tuning parameter, our estimators are very close in virtually all instances. The small sample behavior of the maximum likelihood estimator poses a huge drawback for applications, and the problem is well-known.
Example: Exponential-polynomial models
We now proceed to consider an example of a non-normalized parametric model, one of the major motivations for this work. In particular, let
$$p_\vartheta(x) = C(\vartheta)\, \exp\big( \vartheta_1 x + \vartheta_2 x^2 + \dots + \vartheta_d x^d \big), \quad x > 0, \tag{8}$$
where $C(\vartheta)$ denotes the normalization constant. These density functions correspond to a so-called exponential-polynomial model, which constitutes a special type of exponential family. It is trivial to see that these density functions obey the regularity assumptions (R1)-(R3), and also not hard to verify that the regularity conditions stated by Betsch and Ebner (2019a) (as summarized in Section 2) are satisfied. Thus, we can first of all note, as a corollary to Theorem 3 of Betsch and Ebner (2019a), the following characterization.

Corollary 9.1. A positive random variable X with $\mathbb{E} X^d < \infty$ follows the exponential-polynomial model in (8) if, and only if, the distribution function $F_X$ of X satisfies
$$F_X(t) = \mathbb{E}\Bigg[ -\Bigg( \sum_{k=1}^d k\, \vartheta_k\, X^{k-1} \Bigg) \min(X, t) \Bigg], \quad t > 0.$$
This is the characterization which underlies our new estimation method as constructed in Section 2.
Notice that C(ϑ) cannot be written in closed form, so maximum likelihood estimators are not readily available for the model in (8). Using the method of holonomic gradient descent, introduced by Nakayama et al. (2011), Hayakawa and Takemura (2016) identify a differential equation which allows one to numerically calculate C(ϑ) and its derivatives, and thus to get an approximation of the ML estimator. In our simulations, however, we focus on methods that do not try to approximate C(ϑ) numerically, but get rid of the normalization constant altogether. Namely, we consider our new method and compare it to the well-known score matching approach of Hyvärinen (2007), in generalization of his method introduced in Hyvärinen (2005), as well as to the noise-contrastive estimation technique of Gutmann and Hyvärinen (2012). In the case of non-negative, univariate observations, the score matching approach boils down to finding the minimum of an empirical objective $J_{NN}$ which depends on the density only through $p_\vartheta'/p_\vartheta$ and its derivative [see Section 3 of Hyvärinen (2007)], where $X_1, \dots, X_n$ are i.i.d. random variables with $X_1 \sim p_{\vartheta_0}$, for some unknown $\vartheta_0 \in \Theta$. Clearly, the quantity does not rely on C(ϑ). As for the estimator constructed in this paper, fixing q = 2 and the weight $w(t) = e^{-at}$, where a > 0 is a tuning parameter, we may calculate $\psi_{n,2}(\vartheta) = \|\eta_n(\cdot, \vartheta)\|_{L^2}$ explicitly (see Sections 2 and 3) in terms of the ordered values $X_{(1)} < \dots < X_{(n)}$ of $X_1, \dots, X_n$. This formula is notably more complicated than the one resulting from the score matching approach, but in the two-parameter setting we now turn to, both estimators can be calculated explicitly. More precisely, to keep the presentation clear, we intend to focus on a two-parameter case, but in order not to end up with a Gaussian-type model, we consider d = 3 and fix $\vartheta_2 = 0$, thus effectively considering the model
$$p_{(\vartheta_1, \vartheta_3)}(x) \propto \exp\big( \vartheta_1 x + \vartheta_3 x^3 \big), \quad x > 0.$$
In this case, each time by solving a quadratic equation in $\vartheta_1$ and $\vartheta_3$ (which is obtained by simplifying the above quantities $J_{NN}$ and $\psi_{n,2}$ further), we obtain the estimators explicitly. The score matching estimators for $\vartheta = (\vartheta_1, \vartheta_3)$ are given in terms of the empirical moments $m_k = \sum_{j=1}^n X_j^k$, and our new estimators in terms of weighted sums involving differences $e^{-aX_{(j)}} - e^{-aX_{(k)}}$. Moreover, we consider the noise-contrastive estimators in the refined version of Gutmann and Hyvärinen (2012) that generalizes the initial results of Gutmann and Hyvärinen (2010). The idea is motivated by a binary classification problem and proceeds to consider the unknown normalization constant as an additional parameter to be estimated. The objective function is constructed in such a way that the obtained estimator for the normalization constant truly provides (in numerical approximation) a normalized density, without any further constraints on the optimization. Following Gutmann and Hyvärinen (2012), we implement this technique as follows. Given the sample $X_1, \dots, X_n$, choose the noise sample size $T_n = \nu \cdot n$ (for some fixed ν ∈ ℕ, in our case ν = 10) and sample from the noise distribution (in our case, the exponential distribution with rate parameter $\lambda_n = n / \sum_{j=1}^n X_j$) to obtain values $Y_1, \dots, Y_{T_n}$. Then, minimize the corresponding classification objective to obtain an estimator $\hat\vartheta_n^{NC}$ for the unknown parameters $(\vartheta_1, \vartheta_3)$ as well as for the logarithm of the inverse of the normalization constant. In our simulations we used the 'L-BFGS-B'-method, which we have also applied in previous examples, for this optimization [with initial values (0, -0.1, 0) and with the second parameter constrained to the negative numbers].
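A sketch of the noise-contrastive step as we read Gutmann and Hyvärinen (2012): the log-ratio G = ln p_model - ln p_noise enters a logistic classifier with prior correction ln ν, and the free constant c plays the role of the log-normalization. The data below are faked with an exponential sample (which corresponds to ϑ₁ = -1, ϑ₃ = 0 and c = 0 in the model), so the fit should land near (-1, 0, 0); the objective is our rendering of the method, not a verbatim copy of the authors' code.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(5)
X = rng.exponential(scale=1.0, size=500)   # fake data; true (th1, th3, c) = (-1, 0, 0)

nu = 10                                    # T_n = nu * n noise points
lam = X.size / X.sum()                     # exponential noise, rate n / sum X_j
Y = rng.exponential(scale=1.0 / lam, size=nu * X.size)

def log_ratio(u, th1, th3, c):
    # G = ln p_model - ln p_noise, with c the free log-normalization parameter
    return (th1 * u + th3 * u ** 3 + c) - (np.log(lam) - lam * u)

def neg_nce(theta):
    th1, th3, c = theta
    h_x = expit(log_ratio(X, th1, th3, c) - np.log(nu))   # P(class = data | X_j)
    h_y = expit(log_ratio(Y, th1, th3, c) - np.log(nu))   # P(class = data | Y_t)
    return -(np.log(h_x).sum() + np.log1p(-h_y).sum()) / X.size

res = minimize(neg_nce, x0=(0.0, -0.1, 0.0), method="L-BFGS-B",
               bounds=[(None, None), (None, -1e-8), (None, None)])
print(res.x)                               # approximately (-1, 0, 0)
```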
It is immediate that our new estimator and the noise-contrastive estimator outperform the score matching method distinctively over all tuning parameters, sample sizes, and parameter values for both the bias and the MSE, the only exception being the parameter vector (1, -0.05), for which the estimator $\hat\vartheta_{n,2}^{(5)}$ fares worse than the score matching approach in MSE terms. We propose as a very good compromise choice of the tuning parameter the use of $\hat\vartheta_{n,2}^{(1)}$ as an estimator. This particular estimator outperforms the score matching method by factors of (at least) 4 in terms of MSE and also fares notably better in terms of the bias. It also outperforms the noise-contrastive estimation method uniformly, except for four instances in the MSE values (in three of which our method still performs better when another tuning parameter is chosen). The simulation in this non-normalized model conforms with the observation from previous examples that the new method fares remarkably well bias-wise. We also note that all of the estimators admit a large mean squared error for very small sample sizes, a behavior to be expected. From our simulations we conclude that the new estimation method is to be preferred clearly over the other approaches in this univariate setting of the exponential-polynomial models, but of course larger-scale simulations involving different types of multi-parameter versions of the model would be needed to further strengthen this position [also, generalizations of the score matching technique, like Yu et al. (2019), could be taken into account]. One massive advantage of the score matching and noise-contrastive estimation approaches, however, is that they readily generalize to the multivariate situation, a generalization we were not (yet) able to establish for our approach (see the last paragraph of Section 10).
Remark 9.2. We observed in our simulations that the noise-contrastive estimators can run into computational problems when the exponentials in the objective function raise an overflow warning. A step-by-step analysis of the code suggests that for large noise sample sizes $T_n$ (that is, for large ν) one tends to obtain some large values in the sample $Y_1, \dots, Y_{T_n}$ which are cubed in the exponential terms and thus become very (if not too) large. The behavior seems to appear more often for small parameter values $\vartheta_3^{(0)}$, but it seems to affect only single evaluations of the objective function during the optimization routine. We believe that most values for the noise-contrastive estimation approach in the table are intact, and they also replicated when we reran the whole simulation, with a bit of an exception in the case of the parameter vector (1, -0.05), where the values show a rather noticeable dependence on the initial value chosen for the optimization (though this does not happen for the other parameter values). One possible way to reduce the occurrence of overflows lies in choosing small initial values for the $\vartheta_3$-parameter in the optimization routine, but this has an impact on the performance of the estimator. Another way out could be to adopt noise distributions with extremely short tails. It could prove useful to see if our observations replicate in other simulation studies. Note that no computational issues arise for the score matching and our new approach, where the estimators can be calculated explicitly.
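One way (an assumption on our part, not the authors' fix) to sidestep the overflow just described is to never exponentiate the log-ratio at all: both ln h and ln(1 - h) can be written through logaddexp.

```python
import numpy as np

def log_h_and_log_1mh(G, log_nu):
    """Stable ln h and ln(1 - h) for h = 1 / (1 + nu * exp(-G));
    G is the log-ratio ln p_model - ln p_noise, log_nu = ln nu."""
    z = log_nu - G
    log_h = -np.logaddexp(0.0, z)        # ln h     = -ln(1 + e^z)
    log_1mh = z - np.logaddexp(0.0, z)   # ln(1-h)  =  z - ln(1 + e^z)
    return log_h, log_1mh

# even for extreme log-ratios no overflow occurs:
print(log_h_and_log_1mh(np.array([-1e4, 0.0, 1e4]), np.log(10.0)))
```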
Notes and comments
Note that there remain some problems for further research on our newly proposed estimators, the discussion or extension of which would be too extensive for this contribution. First, for all estimators we considered explicitly, we incorporate a tuning parameter 'a' on which the performance depends strongly. It would be beneficial to have an adaptive choice of this parameter [see Allison and Santana (2015), and the refinements by Tenreiro (2019), who discuss such a method in the context of goodness-of-fit testing problems], probably adaptable to which criterion (minimal bias etc.) the estimator should satisfy. In the context of deriving results for a → ∞, we obtained another consistent estimator for the Rayleigh parameter, and it would be interesting to see if such results can be derived for other distributions. Also, we have not used in practice the flexibility gained by providing all results for the general $L^q$-spaces, but restricted our attention to the case q = 2, mostly because of the explicit formulae obtainable in that case. If no closed formula for $\psi_{n,q}$ is feasible, either because of the use of some q ≠ 2 or because some advanced weight function w is chosen, the integral in $\psi_{n,q}$ has to be solved numerically, which could lead to a computationally highly demanding procedure overall. As for the choice of a specific weight function w, to the best of our knowledge there exist no theoretical results which favor specific choices over others. Considering the vast amount of weighted $L^2$-statistics put to use in goodness-of-fit testing problems, it seems we cannot hope for general results in that direction. As such, the choice of the weight function provides some flexibility, but without clear guidance, other than $\psi_{n,q}$ being calculable explicitly, to satisfy specific objectives.
We have proven in a quite usual setting the consistency of our estimators. Surely, a limit theorem of the type
$$s(n)\, \big( \hat\vartheta_{n,q} - \vartheta_0 \big) \xrightarrow{\mathcal{D}} P,$$
where s(n) → ∞, as n → ∞, and where P is some limit distribution (e.g. the normal distribution), is desirable. Such a result would pave the way for constructing confidence regions for the true parameter based on our method. The main hurdle in direct approaches to proving such limit results, like some Taylor expansion or methods from empirical process theory, is that the terms involved in such calculations become too complicated and make the endeavor appear impractical to us. One hope is that, since Barp et al. (2019) provide limit results for special classes of Stein discrepancy-based estimators, the interpretation of our estimation method in terms of the feature Stein discrepancy might at some point lead to advances.
Moreover, a larger-scale simulation study, involving more underlying parameters, sample sizes, and distributions, could provide further insight into the estimation method. Improvements from a numerical point of view would, of course, benefit the approach. From a theoretical perspective, an important step in this direction is to study whether the minimization method that is used in cases where the estimators cannot be calculated explicitly will always find a global minimum, or, if not, in which situations it is likely to get stuck in some local minimum.
Note that Betsch and Ebner (2019a) also give characterization results for density functions on bounded intervals or on the whole real line. These can be used to construct similar estimation methods in the corresponding cases. To sketch the idea in the case of parametric models on the whole real line, assume that the support of each density function $p_\vartheta$ in $\mathcal{P}_\Theta$ is the whole real line (and that some mild regularity conditions hold). Let X be a real-valued random variable and consider the corresponding function $\eta(t, \vartheta)$ for $(t, \vartheta) \in \mathbb{R} \times \Theta$. Then, similar to our elaborations in Section 2, Theorem 4.1 of Betsch and Ebner (2019a) shows that $X \sim p_{\vartheta_0}$ if, and only if, $\eta(t, \vartheta_0) = 0$ for every $t \in \mathbb{R}$. Therefore, if, initially, $X \sim p_{\vartheta_0}$, then $\|\eta(\cdot, \vartheta)\|_{L^q} = 0$ if, and only if, $\vartheta = \vartheta_0$. Here, $L^q = L^q(\mathbb{R}, \mathcal{B}^1, w(t)\,\mathrm{d}t)$, $1 \le q < \infty$, with a positive weight function w satisfying suitable integrability conditions. Thus, with an empirical version $\eta_n$ of η, a reasonable estimator for $\vartheta_0$ is $\hat\vartheta_{n,q} = \arg\min\{ \|\eta_n(\cdot, \vartheta)\|_{L^q} \mid \vartheta \in \Theta \}$.
Apparently, once we switch to density functions supported on the whole real line, the characterization result due to Betsch and Ebner (2019a), and thus our estimator, take slightly different forms, but using the results from Section 3, we could still prove existence and measurability for this type of estimator, and give a formal definition as in (6). Moreover, a classical proof via the law of large numbers for random elements in separable Banach spaces and the Arzelà-Ascoli theorem [considering the modulus of continuity, as employed by Billingsley (1968)] yields the convergence results from Lemma 4.1 for ψ_{n,q}(ϑ) = ‖η_n(·, ϑ)‖_{L^q}, but with all convergences only in probability. That result can then be used to derive consistency as in Theorem 4.2, again with all convergences only in probability. However, choosing a fixed (i.e. parameter-independent) weight function on R with a mere scale-tuning, as we employ it throughout (using the weight t ↦ e^{−at}), appears not to be sufficient to account for the possible location-dependence of the model. Thus, in simulations (for instance with the Cauchy distribution) the problem seems empirically more involved to us and is therefore not addressed in the work at hand. Still, we deem it possible to apply our new type of estimator to models which are supported on any connected subset of R, as indicated in the previous lines. Of course, the next question which forces itself on us is whether a similar method can be devised for multivariate models. Here the frontiers are somewhat blurry: the Stein-type identity which appears at the beginning of Section 2 is not yet fully understood in the multivariate case. Concerning measurability, a classical result is that of Brown and Purves (1973), but it requires σ-compactness of the parameter space, thus essentially reducing the study to Euclidean parameters (a Banach space is σ-compact if, and only if, it is of finite dimension, which follows easily from Baire's category theorem). Of course this is enough for our purposes, but currently the interest in statistical inference for infinite-dimensional models is growing remarkably. Hence, if a statistician were to investigate measurability of an estimator for some infinite-dimensional quantity, she would have to resort to a result of the generality of Theorem 3.1. Another reason for us to build on Theorem 3.1 is that other measurability results known to us do not quite fit the construction of our estimators. For instance, Sahler (1970) considers minimum discrepancy estimators, where discrepancies are (certain) functions on the Cartesian product of a suitable set of probability measures with itself. It is (formally) not possible to identify such a set of probability measures in our setting, as we ought to introduce the empirical distribution of a sample into the discrepancy function, while only considering parametric distributions with a continuously differentiable density. Even though we believe this to be a purely formal issue which might be resolved to render the results from Sahler (1970) applicable, additional caution is needed that Theorem 3.1 does not require. Likewise, the setting considered by Pfanzagl (1969) does not cover our estimators.
Note that since completing (the σ-field of) an underlying probability space does not interfere with measurability properties of random maps, nor does it meddle with push-forward measures, the corresponding assumption in Theorem 3.1 is no restriction. If S is a complete, separable metric space and the map Γ from Theorem 3.1 takes compact subsets of S as values, the condition imposed on the graph is equivalent to Γ being measurable with respect to the Borel-σ-field generated by the Hausdorff topology [see Theorems III.2 and III.30 by Castaing and Valadier (1977)]. Likewise, if S is a locally compact, complete, separable metric space and Γ maps into the closed subsets of S, the condition is equivalent to Γ being measurable with respect to the Borel-σ-field generated by the Fell topology [this can be proven using results from Beer (1993) and Castaing and Valadier (1977)].
Proof of Lemma 3.2. First recall the following standard lemma on product-measurability, the proof of which is an easy exercise: let (Ω, A) be a measurable space, I ⊂ R an interval, and T a separable metric space; if h : Ω × I → T is A-measurable in its first argument and continuous in its second, then h is (A ⊗ B(I), B(T))-measurable.
Since {P^{ϑ_{n,q}} : n ≤ n_0} is a finite set of measures, there exists a compact set K ⊂ R^d such that P(ϑ_{n,q} ∈ K) ≥ 1 − ε/2 for all n ≤ n_0. The set K ∩ B ⊂ Θ is a compact subset of R^d and thus also of Θ, for a compact metric space is a compact subset of every metric space it embeds into continuously [see p. 21, Theorem 3, of Kuratowski (1968)]. By the choice of the sets, P(ϑ_{n,q} ∈ K ∩ B) ≥ 1 − ε, which is the claim.

[tokens: 12,839.4 | created: 2019-08-30 | fields: Mathematics, Computer Science]
A Survey of Fault-Injection Methodologies for Soft Error Rate Modeling in Systems-on-Chips
The development of process technology has increased system performance, but the system failure probability has also significantly increased. It is important to consider the system reliability in addition to the cost, performance, and power consumption. In this paper, we describe the types of faults that occur in a system and where these faults originate. Then, fault-injection techniques, which are used to characterize the fault rate of a system-on-chip (SoC), are investigated to provide a guideline to SoC designers for the realization of resilient SoCs.
Introduction
Recently, the development of process technology has increased system performance, but the system failure probability has also significantly increased. It is important to consider that the system must be robust against failures when designing the circuits, in addition to the cost, performance, and power consumption. Thus, resilient design has become particularly prominent in system design to increase reliability. Many studies have been conducted for over half a century on fault tolerance and fault diagnosis to control the system in anticipation of inevitable defects.
Fault avoidance is a technique that addresses the faults of devices to prevent failure. This technique includes improving the reliability of the product through inspection and testing processes. However, a completely fault-free design and manufacturing process for complex devices such as processors is difficult to achieve. Further, these devices often encounter dangerous situations because of aging and internal and external defects of the hardware. Therefore, researchers have actively studied fault-tolerance techniques to ensure normal operation even though the devices may experience some faults. Fault-tolerance techniques have made much progress for nearly half a century. Among others, the redundancy scheme has been highlighted, which tolerates faults using additional resources. This scheme is simple to implement and provides high reliability. The redundancy scheme can be classified into four general types: hardware, software, information, and time redundancies.
To detect the faults in a system, the hardware redundancy technique uses replicated hardware, the software redundancy technique employs an additional program routine, the information redundancy technique inserts extra bits into the data to be transferred, and the time redundancy technique repetitively performs the same process and compares the results. These techniques involve additional costs and delays because of the additional resources; thus, designers should select one of these techniques after considering the tradeoffs. Structures and methods have been studied in recent years to reduce these additional resources.
System diagnosis is naturally important to handle a system failure caused by faults. Methods that test and evaluate the devices have also been developed following the development of fault-tolerance techniques. Fault-injection (FI) techniques have been widely used as a fault-testing plan. The FI techniques can be classified according to which device injects a fault into a target system. These techniques have their own advantages and disadvantages. Because most of the defects can be eliminated during the testing process, the proper FI technique should be selected in accordance with a particular design. In this paper, we describe what types of faults occur in a system and where the faults originate. Then, the FI techniques, which are used to characterize the fault rate of a system-on-chip (SoC), are investigated to provide a guideline to SoC designers for the design of resilient SoCs.
The rest of this paper is organized as follows. We first classify the types of faults in Section 2 and explain the causes that lead to faults in Section 3. Section 4 addresses the FI techniques, in which a discussion is included. Finally, we conclude this paper in Section 5.
Types of Faults
A fault can be classified into a hardware or a software fault according to where it occurs. We focus on hardware faults in this study, which greatly affect the device and system. A hardware fault is classified into a permanent, an intermittent, or a transient fault according to how long it exists in a device (see Figure 1). A permanent fault (stuck-at, stuck-open, and bridging faults) remains permanently in the circuit, a transient fault appears and disappears within a brief time, and an intermittent fault introduces repetitive broken data in a specific place because of hardware damage. Permanent and intermittent faults occur because of inaccurate specifications, implementation mistakes, or component defects. A transient fault usually occurs because of internal and external noise. The data errors that result from a hardware fault include hard and soft errors. A hard error causes data corruption because of hardware faults arising from permanent and intermittent faults. A soft error causes data corruption because of a disturbance in the environment, such as alpha particles or neutrons, and originates from transient faults. In contrast to a hard error, a soft error arises under conditions where the device is not damaged. A soft error can be divided into single and multiple bit flips. A single bit flip consists of one data flip, and multiple bit flips consist of several data flips. Further, a single bit flip can be categorized into a single event upset (SEU) or a single event transient (SET), depending on where it occurs. An SEU occurs in a storage element, e.g. in a latch or flip-flop, whereas an SET appears in combinational logic. Erroneous SEU values in a storage element can potentially be captured by the following sequential logic. An SET in the combinational logic leads to failures at a lower rate than an SEU because the errors are reduced by logical, temporal, or electrical masking. However, a higher cost is involved in correcting the error because the result of the operation is directly propagated as soon as the input data are entered.
The soft error rate (SER) is defined as the occurrence rate of soft errors in a device. The number of failures in time (FIT) or the mean time between failures (MTBF) is commonly used to express the SER. The main source of SER is the flip-flops in most embedded digital applications without a microprocessor [1].
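The two conventions are directly convertible: FIT counts failures per 10^9 device-hours, and MTBF is the reciprocal on that scale. A short sketch with purely illustrative numbers:

```python
# FIT = failures per 10^9 device-hours; MTBF is its reciprocal on that scale.
def fit_to_mtbf_hours(fit: float) -> float:
    return 1e9 / fit

print(fit_to_mtbf_hours(4000))          # 250000.0 hours for a 4,000-FIT chip
print(fit_to_mtbf_hours(4000) / 8760)   # ~28.5 years
```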
Causes of Hardware Faults
Understanding the types of faults and how they occur is essential for fault modeling and diagnosis. The majority of hardware faults result from the development of process technology. Following the development of process technology, the probability of encountering process variations or external noise sources (alpha particles or neutrons in cosmic rays) increases in the semiconductor manufacturing process, leading to soft errors. Similarly, VLSI circuits that operate at high operating speeds and low supply voltages are susceptible to process variations; thus, these circuits have a higher error probability owing to the switching delay of transistors. In addition, the defects per chip area in a VLSI process increase with the increasing number of on-chip transistors, which increases the probability of failure. Thus, process-technology improvements result in both hard and soft errors.
Cosmic-Ray Particles
Cosmic rays cause soft errors in the system. Most cosmic rays do not reach the Earth's surface. However, cosmic rays produce energetic secondary particles such as neutrons and protons by collision with nuclei in the Earth's atmosphere. A neutron by itself cannot interfere with the circuit; however, it can be absorbed by a nucleus and cause a "neutron capture" reaction that emits alpha particles. These alpha particles generate an incorrect value when they collide with the circuit. Neutrons also originate from nuclear-fission reactions or from the creation and destruction of radioactive nuclei. Alpha particles originate from various radioisotopes during radioactive decay and are detected in materials such as glasses, fillers, alumina, plastic, and even in the sea [2].
A study suggesting that cosmic rays could potentially affect devices was presented in 1962 [3]. Communication disruption due to cosmic rays actually occurred, and the cosmic-ray event rate was calculated in 1975 through experiments using a scanning electron microscope [4]. Devices have become more vulnerable to neutrons and alpha particles from cosmic rays because of the development of process technology [5]. The fact that circuits are more susceptible to atmospheric neutrons was confirmed by comparing the SER caused by neutrons for different scalings of the device size of CMOS transistors [6]. By checking the SER caused by alpha particles and radiation, it was observed that circuits are vulnerable to alpha particles when the operating voltage of the devices is lowered, in sequential logic, static combinational logic, and SRAM [7]. Reference [8] confirmed that the multi-bit error rate for 90-nm SRAM was slightly higher than that for 130-nm SRAM.
Noise Sources
Layman and Chamberlain demonstrated that the various noise sources that cause soft errors are thermal, shot, and 1/f noise [9]. Thermal noise is caused by heat when the charge carriers (electrons or holes) move erratically in the capacitor. Thermal noise affects the semiconductor threshold voltage and flips the original value in the logic, resulting in a soft error. Thermal noise can be modeled with the voltage or current [10]. Shot noise is generated when the carriers pass over the potential barrier in a semiconductor, and the number of carriers becomes irregular. Because the direction and speed of the electron motion are irregular, each carrier introduces a problem in the semiconductor. The 1/f noise is caused by conductance fluctuation, which is inversely proportional to the frequency. The 1/f noise in the internal components increases significantly in the low-frequency region, and the noise decreases in the high-frequency region. Thus, these additional noise sources attack the noise margins of the semiconductors and increase the SER.
Critical Charge
Critical charge is the minimum amount of charge required to change the state of a semiconductor. When enough critical charge is collected, the logic value is changed. With decreasing semiconductor size, the collected charge required to upset the logic also decreases, and the device becomes susceptible to soft errors. Similarly, the critical charge has been confirmed to decrease under lower operating voltages and smaller feature sizes [11]. Reference [12] confirmed that the SER is altered depending on several factors, including the critical charge. A device-level 3D simulation was performed to model the relationship between the bit error rate and the critical charge values in 90-nm SRAM [13].
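A common first-order approximation (a back-of-envelope sketch with hypothetical values, not a device-level model) relates the critical charge to the node capacitance and the supply voltage, which shows why both voltage and feature-size scaling reduce it:

```python
# Q_crit ~ C_node * V_dd (first-order approximation; illustrative values).
C_node = 1e-15            # node capacitance in farads (1 fF, assumed)
V_dd = 1.0                # supply voltage in volts (assumed)
Q_crit = C_node * V_dd    # ~ 1 fC
print(f"Q_crit = {Q_crit * 1e15:.2f} fC")
```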
Crosstalk
Crosstalk is electrical interference that occurs when the distance between two conductors is sufficiently small. Narrowing of the distance resulting from deep submicron technology causes electrical distortion and adversely affects reliability. The high-frequency operation of VLSI causes a skin effect propagated along the surface of a conductor [14]. This skin effect causes frequency-dependent interconnect resistance. The reliability problem of a circuit can easily be found in other places because of the increasing signal interference of the crosstalk with smaller transistor and interconnect dimensions [15]. Most designs encounter potential soft errors from the RC delay, noise interference, and crosstalk [16]. To investigate the crosstalk properties, coupled RLC parameter values on four different interconnects were measured for the 0.13-µm and 0.18-µm processes [17].
NBTI
Negative bias temperature instability (NBTI) is a type of aging. The time delay of a circuit increases in proportion to the transistor threshold voltage (Vth). NBTI leads to timing errors because the initial value of Vth for a PMOS varies with the negative bias and temperature of a circuit that has been used for a long time. This phenomenon was first observed in 1967 [18]. Reference [19] demonstrated that a longer exposure time to a negative voltage at the gate results in a larger fluctuation in the threshold voltage. Further, a larger change in Vth results in more occurrences of critical timing problems.
Schroder and Babcock presented many process conditions, such as oxide damage; the temperature; the oxide electric field; the presence of hydrogen, boron, nitrogen, or water; and the gate length, that affect the NBTI sensitivity [20]. The reliability under NBTI significantly decreases when the transistor operates at high temperature, has a small gate length, or has a large content of boron, nitrogen, hydrogen, or water. The fact that hydrogen increases the NBTI was proven in [21]. Similarly, the lifetime of a semiconductor is significantly reduced because the change in Vth differs depending on the boron content of the gate oxide and the thin gate length [22]. The fluctuation in Vth increases under NBTI stress in nitrided oxide compared with that in pure SiO2 [1]. Water can also affect Vth when the gate oxide layer is formed. As the size of CMOS devices is gradually reduced with the development of process technology, nitrided oxide is used in the gate instead of the existing SiO2 to reduce the gate insulator film thickness and improve the performance. However, the thin nitrided oxide is very sensitive to NBTI stress; thus, the PMOS transistor acquires defects more easily compared with one using the existing SiO2 [1].
Fault Injection (FI) Techniques
FI is adopted to verify the reliability of a system or to perform fault modeling. In this manner, we can identify the parts of the system sensitive to faults and the potential lack of fault tolerance to create a resilient design. The basic environment of the FI method includes the FI system and the target system (see Figure 2). The FI system interacts with the target system for fault generation, control, and fault analysis. The FI methods can be classified into four techniques as follows: hardware-based FI, software-based FI, simulation-based FI, and emulation-based FI. Hardware-based FI is the most realistic method; it makes the target system experience faults at a physical level and measures the occurrence of the failure (see Figure 3(a)). The circuit is tested using changes in the operating power or temperature or external shocks that cause transient errors. Moreover, this technique directly provides a stimulus at the pins or the sockets. The testing speed is fast owing to the real-time FI structure. By directly changing the environment, a wide range of circuits can be evaluated through these disturbances. However, its processes are difficult to monitor and control because we do not know the exact moment when a fault is injected by the disturbance. In addition, damage can be done to the target system because the actual circuit cannot be restored after testing [23]. A circuit was validated by using a pin-level FI tool (MESSALINE) through the derivation of experimental measurements such as the defective time distribution and size [24]. Wang tested the fault-tolerance capability of software against changes in the power supply and payload at a satellite's on-board computer [25]. He injected the faults through a cable and monitored the changes in the output port.
Laser injection schemes into a system are also available. The reliability of time-resolved ICs exposed to a pulsed laser was evaluated [26]. The SER was confirmed by calculating and normalizing the cumulative error histogram in accordance with the laser pulse delay of the circuits. Pin-level FI was conducted to verify a fault-tolerant multiprocessor system (FASST) [27]. FASST performed a fast fail-silent technique that analyzed the error detection coverage and latencies. A method that used a high-intensity laser in microcontrollers was proposed [28]. Its drawback was that the disturbance of the circuit could not be completely controlled in the experiment. Software-based FI causes a software fault by modifying the execution code of the actual running software in the system (see Figure 3(b)). Software-based FI is practical because the required hardware and software are actually used in the device, and additional hardware to inject a fault is not required. However, the method suffers from limitations in terms of the types of faults that can be injected by software. In addition, detailed information on the hardware and software is necessary to model and control the fault.
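The following is a toy sketch in the spirit of the software-based FI model described above: it emulates a single bit flip (the SEU fault model) at a random position of a 32-bit data word. Real tools instrument running binaries; this only illustrates the fault model, and all names are ours.

```python
# Emulate a single-bit-flip (SEU) fault in a 32-bit data word.
import random

def inject_single_bit_flip(word: int, width: int = 32) -> int:
    pos = random.randrange(width)
    return word ^ (1 << pos)

golden = 0xDEADBEEF
faulty = inject_single_bit_flip(golden)
print(f"golden={golden:#010x} faulty={faulty:#010x}")
```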
Wulf et al. injected faults by using a software-based FI tool for multi-core devices on a cache using MATLAB [29]. This tool can generate a cache error by randomly injecting faults into the data accessed by the load instructions. Roberto et al. suggested a method for selecting appropriate fault locations from an analysis of the circuit complexity based on 3.8 million experiments [30], which significantly and effectively reduced the fault load size and improved the performance. A dynamic software fault-injection system that targeted the Apache Web server was proposed using the PIN framework, a dynamic binary instrumentation tool from Intel [31].
To test the reliability of the server, this tool injected faults dynamically after recording the information of the fault locations. The work in [32] injected faults into microprocessors and the main memory circuits. Some of the FI methods were evaluated for software fault tolerance that detects and masks hardware errors, and the results were then compared. Several fault models were tested on a communication channel between the serial port driver and the OS kernel to evaluate the effect on the system according to software FI [33]. Through the tests, the results for the average execution time, implementation complexity, coverage, and injection efficiency were presented in detail.
Simulation-Based FI
Simulation-based FI injects a fault into the design and observes the failure using computer simulation tools (see Figure 3(c)). Simulation-based FI operates along with the actual workload in the software program and can be used in every process of design for function verification. The reliability can be verified simultaneously with functional verification of the design. Fault modeling and control are possible without damaging the real system. Moreover, simulation-based FI can change the data at any location thanks to its superior accessibility. Additionally, environment construction is cheap because additional hardware is not required. However, simulation-based FI suffers from the drawbacks of a long simulation setup process and simulation time.
A fault injector that injects board-level component faults was implemented for board-level built-in test (BIT) software [34]; it is suitable for testing the reliability of a BIT system because it was created to handle the lack of validation in BIT software. A SystemC hardware simulation model that uses embedded benchmark software was proposed to reduce the hardware resources [35]. This model supports a mixed-level simulation conducted at the electronic system level and RTL. Ruano et al. used a simulation-based FI platform that models soft errors to evaluate the reliability of a system [36]. This platform has low circuit costs and high controllability and can be used with both synthesizable and non-synthesizable models. Reference [37] revealed that injecting faults into all places in the RTL and gate-level designs is possible, supporting a C function to add new types of faults. Wang et al. tested a method that modifies the data of a processor using a full-system simulator-based FI tool (FSFI) at the system level [38]. The FSFI can check processor components such as the integer register files, ALU, and decoder.
Emulation-Based FI
Emulation-based FI injects faults into a design implemented in an FPGA (see Figure 3(d)). Emulation-based FI was proposed to overcome the long simulation time of simulation-based FI. Diagnosis can be processed quickly with real-time or partial reconfiguration. However, emulation-based FI is constrained by the precondition that the target design must be optimized for the FPGA before the experiment. Further, flexibly checking the response to the failure of the target design is difficult.
An FI method for any microprocessor implemented on an FPGA with an on-chip debugger (OCD) and a JTAG interface was implemented to complement the time bottleneck of an OCD built into a processor for debugging [39]. This implementation combined hardware and software FI in the FPGA design. The OCD-based method is a balanced technique in terms of
Figure 1. Block diagram of fault and error terminology focused on the hardware fault.

[tokens: 4,355.4 | created: 2016-06-01 | fields: Engineering, Computer Science]
A Selectable Sloppy Heap
We study the selection problem, namely that of computing the $i$th order statistic of $n$ given elements. Here we offer a data structure called \emph{selectable sloppy heap} handling a dynamic version in which upon request: (i)~a new element is inserted or (ii)~an element of a prescribed quantile group is deleted from the data structure. Each operation is executed in (ideal!) constant time---and is thus independent of $n$ (the number of elements stored in the data structure)---provided that the number of quantile groups is fixed. This is the first result of this kind accommodating both insertion and deletion in constant time. As such, our data structure outperforms the soft heap data structure of Chazelle (which only offers constant amortized complexity for a fixed error rate $0<\varepsilon \leq 1/2$) in applications such as dynamic percentile maintenance. The design demonstrates how slowing down a certain computation can speed up the data structure.
Introduction
The following problem was devised by Fredman about 25 years ago, for inclusion in homework assignments for an algorithms course. This paper both generalizes and strengthens that result.
A "very sloppy heap" (abbreviated vsh) is a data structure for performing the following operations on a set S: (i) insert and (ii) delete-small. The latter operation deletes (and returns) an element x which is among the ⌈n/2⌉ smallest elements in the set, where n is the current size of the set. Explain how to implement a vsh in constant amortized time per operation.
Together with sorting, selection is one of the most widely used procedures in computer algorithms. Given a sequence A of n numbers and an integer (selection) parameter 1 ≤ i ≤ n, the selection problem asks to find the ith smallest element in A. Sorting trivially solves the selection problem; however, a higher level of sophistication is required of a linear-time algorithm. A now classic approach for selection [6,18,25,36,39] from the 1970s is to use an element in A as a pivot to partition A into two smaller subsequences and recurse on one of them with a (possibly different) selection parameter i.
The time complexity of this kind of algorithm is sensitive to the pivots used. If a good pivot is used, many elements in A can be discarded, while if a bad pivot is used, the size of the problem may be reduced by only a constant in the worst case, leading to a quadratic worst-case running time. But carefully choosing a good pivot can be time consuming. Choosing the pivots randomly (and thus without much effort) yields a well-known randomized selection algorithm with expected linear running time; see e.g., [11, Ch. 9.2], [31, Ch. 13.5], or [34, Ch. 3.4]; however, its worst-case running time is quadratic in n.
The first deterministic linear-time selection algorithm Select is due to Blum et al. [6]; it is recursive in nature. By using the median of medians of small disjoint groups of the input array (of constant size at least 5), good pivots that reduce the size of the problem by a constant fraction, and thereby lead to O(n) time overall, can be chosen at low cost in each recursive invocation. More recently, suitable variants of Select with groups of 3 and 4, also running in O(n) time, have been put forward [10,40]. The selection problem, and computing the median in particular, are in close relation with the problem of finding the quantiles of a set, which we describe next.
Quantiles. The kth quantiles of an n-element set are the k − 1 order statistics that divide the sorted set in k equal-sized groups (to within 1); see, e.g., [11, p. 223]. It is known that the kth quantiles of a set can be computed by a recursive algorithm running in O(n log k) time. Such an algorithm can be modified, if needed, so that the k groups can be also output, say, each as a linked list, within the same overall time. For 2 ≤ i ≤ k − 1, the ith group of elements (bounded by the (i−1)th and the ith quantile) is referred to as the ith quantile group; the first quantile group consists of the elements less or equal to the first quantile, and the kth quantile group consists of the elements greater or equal to the (k − 1)th quantile.
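The following is a sketch of the recursive quantile-group computation just described: split around a middle quantile found by randomized quickselect, giving expected O(n log k) time (each recursion level does O(n) work over O(log k) levels). It assumes len(a) ≥ k, and all names are ours, not from the paper.

```python
import random

def _select(a, i):
    # Return the i-th smallest (0-indexed) element of the list a (quickselect).
    while True:
        pivot = random.choice(a)
        lo = [x for x in a if x < pivot]
        eq = [x for x in a if x == pivot]
        if i < len(lo):
            a = lo
        elif i < len(lo) + len(eq):
            return pivot
        else:
            i -= len(lo) + len(eq)
            a = [x for x in a if x > pivot]

def quantile_groups(a, k):
    # Split a into k groups (sizes equal to within 1) by recursing on the
    # middle quantile; expected O(n log k) time overall.
    if k == 1:
        return [a]
    k_left = k // 2
    cut = (len(a) * k_left) // k           # elements in the first k_left groups
    q = _select(a, cut - 1)
    left = [x for x in a if x <= q]
    right = [x for x in a if x > q]
    while len(left) > cut:                  # rebalance duplicates of q
        left.remove(q)
        right.append(q)
    return quantile_groups(left, k_left) + quantile_groups(right, k - k_left)

print([sorted(g) for g in quantile_groups(list(range(29, -1, -1)), 3)])
# [[0..9], [10..19], [20..29]]
```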
Our main result is the following; for the optimality of the dependence in k, see the first remark in Section 4. Theorem 1. For any fixed integer k, a data structure for dynamic sets exists accommodating each of the following operations in constant time (that depends on k): (i) Insert a new element and (ii) Delete (and return) an element of the ith quantile group of the current set, where 1 ≤ i ≤ k. The time per operation is O(log k), which is optimal in a comparison-based model even in an amortized setting.
Background and related problems. Since the selection problem is of primary importance, the interest in selection algorithms has remained high ever since; see for instance [2,4,5,12,14,15,16,18,20,21,22,23,27,29,35,37,39]. In particular, determining the comparison complexity for computing various order statistics including the median has lead to many exciting questions, some of which are still unanswered today; in this respect, Yao's hypothesis on selection [38,Ch. 4] remains an inspiring endeavor [15,35,36]. A comprehensive review of early developments in selection is provided by Knuth [32]. Computational results on the exact comparison complexity of finding the ith smallest out of n for small i, n, have been obtained in [21,24]. We also refer the reader to the dedicated book chapters on selection in [1,3,11,13,31] and the more recent articles [10,26,30].
The selection problem is also of interest in an online or dynamic setting, where elements are inserted or deleted. A balanced binary search tree on n distinct elements can be augmented with a size attribute for each node, thereby allowing the retrieval of an element of a given rank in O(log n) time [11,Ch. 14]. We say that the ith smallest element has rank i, where i = 1, . . . , n, with ties broken arbitrarily. Further, determining the rank of an element in the data structure can also be done in O(log n) time. Consequently, a dynamic order statistic operation (inserting a new element or deleting an element of a given rank) can be accomplished within the same time bound.
Priority queues (heaps in particular) are ubiquitous data structures; a typical operation set might include Insert, Delete-Min, Decrease-Key (and perhaps also Find-Min, Increase-Key, or other operations). See for instance [7] and the references therein for some new variants and recent developments.
A variant of a priority queue that allows dynamic maintenance of percentiles is the soft heap data structure due to Chazelle [8]; in addition to Create and Meld, the data structure accommodates Insert, Delete and Findmin (Delete x removes item x). Consider a sequence of m ≥ n operations after its creation, that includes n Insert operations. For any 0 < ε ≤ 1/2, a soft heap with error rate ε supports each operation in constant amortized time, except for Insert which takes O(log 1/ε) amortized time. The data structure ensures that at most εn of its items are corrupted (their keys have been artificially raised). Findmin returns the minimum current key (which might be corrupted). In contrast, our data structure uses O(log 1/ε) time per operation, where ε ∼ 1/k, and thereby outperforms the soft heap with respect to the worst-case time per update (insert or delete) operation.
Definition. Let k be a fixed positive integer. A "selectable sloppy heap" (abbreviated ssh) is a data structure for performing the following operations on a set S with n elements: (a) Insert x: a new element x is inserted.
(b) Delete i: this deletes (and returns) some element x which belongs to the ith quantile group of the current set, where 1 ≤ i ≤ k; if n < k, the deleted element is not subject to any requirement.
Outline of the paper. To explain the main challenges and introduce the main ideas, we first sketch several preliminary implementations of the data structure, meeting suboptimal (in k) benchmarks: (i) O(k log k) amortized time per operation for the first variant in Section 2; (ii) O(k log k) worst-case time per operation for the second variant in Section 2; (iii) O(log k) amortized time per operation for the third variant in Section 2. We then further refine these methods in Section 3 to obtain an optimal implementation of the data structure running in O(log k) worst-case time per operation. In particular, for constant k, the second variant in Section 2 and the (main) variant in Section 3 run in O(1) worst-case time per operation. We conclude in Section 4 with applications and technical remarks.
Comments and notations. Duplicate elements are easily handled by the design. Each operation request is associated with a discrete time step, i.e., the jth operation occurs during the jth time step, where j = 1, 2, . . . Without affecting the results, the floor and ceiling functions are omitted in the descriptions of the algorithms and their analyses. Let A be a set; we write x ≤ A if x ≤ a for every a ∈ A. The size of a bucket b, i.e., the number of elements in b, is denoted by s(b). If buckets b′ and b′′ merge into a new bucket b, we write b = b′ ∪ b′′.
Some ideas and preliminary solutions
A first variant with O(k log k) amortized time per operation. Let n denote the current number of elements in the data structure, with the elements being stored in a linear list (or an array). By default (if n is large enough) the algorithm proceeds in phases.
If n < 3k, and no phase is under way, proceed by brute force: for Insert, add the new element to the list; for Delete i, compute the elements in the ith quantile group and delete (return) one of them arbitrarily. Since k is constant, each operation takes O(3k), i.e., runs in constant time.
If n ≥ 3k, and no phase is under way, start a new phase: reorganize the data structure by computing the (3k)th quantiles of the set and the corresponding groups; store each group in a list (bucket) of size n/(3k). The next n/(3k) operations make up the current phase. Process each of these n/(3k) operations by using exclusively elements stored in these 3k buckets. For Insert, add the new element to an initially empty overflow bucket (an empty overflow bucket is created at each reorganization). For Delete i, remove any element from the bucket containing the element of rank (i − 1)n/k + n/(3k), i.e., from the middle third of the ith quantile group (out of the total k).
The reorganization takes O(n log k) time and is executed about every n/(3k) steps, where each operation counts as one step. The resulting amortized cost per operation is O((n log k)/(n/(3k))) = O(k log k), namely O(1) for constant k. The above idea is next refined so as to obtain this as a worst-case time bound per operation.
A second variant with O(k log k) worst-case time per operation. It suffices to consider the case of large n, namely n ≥ 3k. The algorithm proceeds in phases: each phase starts with a reorganization, namely computing the (3k)th quantiles and the corresponding quantile groups; see [11,Ch.15]. The time taken is O(n log (3k)) = O(n log k). There are n/(3k) Insert and Delete operations, i.e., n/(3k) steps following each reorganization until the results of the next reorganization become available: assume that the data structure holds n items when the current reorganization starts (n is redefined at the start of each reorganization) and there are old buckets to use until the reorganization is finalized. That is, after that many steps new buckets (however, with an old content) become available. Use these buckets for the next n/(3k) operations, and so on.
To transform the constant amortized time guarantee into a constant worst-case guarantee for each operation, spread the execution of the reorganization over the next n/(3k) operations, i.e., over the entire phase. The resulting time per operation is bounded from above by O((n log k)/(n/(3k))) = O(k log k). Since Delete is serviced from the existing buckets and Insert is serviced using the overflow bucket, each operation involves inserting or deleting one element from a list; thus for constant k, each operation takes O(1) overall time to process.
New buckets become available about every other n/(3k) operations; this is a rather imprecise estimate because the current number of elements, n, changes. Since a reorganization is spread out over multiple steps, the result becomes available with some delay, and moreover, its content is (partially) obsolete. To verify that the data structure operates as intended, one needs to check that the rank of a deleted element belongs to the required interval (quantile group); we omit the calculation details. The key observation is that any one operation can affect the rank of any element by at most 1: to be precise, only Insert or Delete of a smaller element can increase the rank of an element by 1 or decrease the rank of an element by 1, respectively.
A third variant with O(log k) amortized time per operation. We briefly describe an implementation achieving O(log k) amortized time per operation that is tailored on the idea of a B-tree; this variant is due to Fredman [19]. Use a balanced binary search tree with Θ(log k) levels storing O(k) splitting keys at the leaves. Each leaf comprises a bucket of Θ(n/k) items, with order maintained between buckets, but not within buckets. When an insertion causes a bucket to become too large, it is split into two buckets by performing a median selection operation. Small buckets (due to deletions) are merged. A bucket of size m, once split, will not be split again sooner than after another Ω(m) operations.
When the number of elements doubles (or halves), a tree reorganization is triggered that partitions the present items into 6k uniform sized buckets, so that the new common bucket size m is n/(6k) items. These event-triggered reorganizations ensure that buckets do not become too small unless they are target of deletions; similarly, buckets do not become too large unless they are target of insertions. Since the reorganization cost is O(n log k), and Ω(n) operation requests take place between successive reorganizations, this scheme yields O(log k) amortized time per operation, namely O(1) for constant k.
A variant with optimal O(log k) worst-case time per operation
A brief examination of the approach in the 3rd variant reveals two bottlenecks in achieving O(log k) worst-case time per operation: the median computation that comes with splitting large buckets and the tree reorganization that occurs when n doubles (or halves). We briefly indicate below how ideas from the 2nd and 3rd early variants are refined to obtain O(log k) worst-case time per operation; in particular, O(1) time for constant k. It is shown in the process how execution in parallel of several sequential procedures can lead to a speed up of the data structure.
A balanced BST for Θ(k) keys is used, subdividing the data into O(k) buckets. A size attribute is associated with each node reflecting the number of elements stored in the subtree rooted at the respective node. Modifications in the data structure at each operation are reflected in appropriate updates of the size attributes at the O(log k) nodes of the O(1) search paths involved, in O(log k) time overhead per operation.
If n is the current number of elements, each bucket holds at most n/(3k) elements, and so each of the k quantile groups contains at least one of the buckets entirely. Choosing one such bucket at the bottom of the search path in the BST for executing a Delete operation from the ith quantile group guarantees its correctness.
The n elements are kept in Θ(k) buckets of maximum size O(n/k) that form the Θ(k) leaves of a balanced binary search tree A. As in the third variant (in Section 2), each leaf holds a bucket, with order maintained between buckets, but not within buckets. Buckets that become too large are split in order to enforce a suitable upper limit on the bucket size. Median finding procedures along with other preparatory and follow-up procedures accompanying bucket splits are scheduled as background computation, as in the second variant (in Section 2).
Our data structure merges small buckets in order to keep the number of buckets under control and renounces the periodic tree reorganizations by introducing new elements of design: a round robin process and a priority queue jointly control the maximum bucket size and the number of buckets in the BST. These mechanisms are introduced to prevent buckets becoming too small or too large as an effect of changes in the total number of elements, n, and not necessarily as an effect of operations directed to them.
Outline and features. Let N := 12k. For illustrating the main ideas, assume now that n ≥ N . The buckets are linked in a doubly-linked linear list B, in key order; adding two links between the last and the first bucket yields a circular list C, referred to as the round robin list. We note that B and C are two views of the same data structure.
Each operation request translates to locating a suitable bucket for implementing the request. The circular list is traversed in a round robin fashion, so that the current round robin bucket in the list is also examined during the current operation request. The round robin process ensures that (i) the buckets do not exceed their maximum capacity, and (ii) certain "long-term" preparatory bucket-splitting procedures are run in the background over a succession of non-consecutive discrete time steps allocated to the same bucket.
Each bucket split entails a merge-test for the pair of adjacent buckets with the minimum sum of sizes. The process of merging adjacent buckets in B is controlled by a priority queue in the form of a binary min-heap H. If |B| = t, i.e., there are t buckets, H holds the t − 1 sums of sizes s(b) + s(b+), for buckets b ∈ B; here b+ denotes the bucket that follows b in B. A merge is made provided the minimum value at the top of H is below some threshold; then A, B and H are updated. Merging adjacent buckets ensures that the total number of buckets remains under control, regardless of which buckets are accessed by operation requests.
Elements of the design. We highlight two: (i) use of the priority queue H to keep the number of buckets under control, and (ii) running the procedures involved at different rates as needed to ensure that certain task deadlines and precedence constraints among them are met. The data structure maintains the following two invariants:
I1 Each bucket contains between 1 and n/(3k) elements; there is no limit from below imposed on the bucket size.
I2 The number of buckets is between 3k and N = 12k, as implied by the maximum bucket size, and a later argument based on the rules of operation (on merging adjacent buckets); see Action 2 and Lemma 1.
Recall that all Θ(k) buckets are linked in a circular round robin list: in key order, and with the last bucket linked to the first one. A pointer to the current round robin bucket (initialized arbitrarily, say, to the first bucket) is maintained. Each operation request advances this position in the list by one slot. Observe that every bucket becomes the round robin bucket about every Θ(k) discrete time steps. Further, each operation request leads via the search path in A to one of the buckets, referred to as the operation bucket. Executing one operation is done by a sequence of actions performed in the operation bucket and the round robin bucket; these are referred to as the current buckets. All these actions are therefore associated with the same discrete time step. Every bucket update (this includes creation or deletion of buckets) entails a corresponding update of the binary heap H in the (at most) two heap elements s(b) + s(b + ) that depend on it.
A bucket is declared large if its current size exceeds 9/10 of the maximum allowed, i.e., if the current size exceeds 9n/(30k). All other buckets are declared regular. Each operation may cause an update in the status (large or regular) of the round robin bucket and the operation bucket.
Action 1. Execute the requested operation: either add a new element to the respective bucket, or delete an arbitrary element from it. If the operation bucket becomes empty as a result of the current Delete operation, it is deleted, and the BST is correspondingly updated in O(log k) time. Status updates in the current two buckets are made, if needed. If a median finding procedure or a follow-up procedure is under way in the current operation bucket or the current round robin bucket (see details below), the next segment consisting of 500 (or fewer) elementary operations of this procedure is executed. It is worth noting that a bucket for which a partitioning procedure is under way, as described below, cannot be part of a pair of buckets to merge (i.e., passing the merge-test).
Action 3 (finalizing a split). Similar to the merge operations in Action 2, finalizing the split can be completed in O(log k) time: it essentially involves updating A, B and H. It will be shown subsequently that the size of any new bucket resulting from a split is in the range [4n/(30k), 6n/(30k)].
Besides Actions 1 − 3, there are actions associated with procedures running in the background, whose execution is spread out over a succession of non-consecutive discrete time steps allocated to the same bucket. The procedures are in preparation of splitting large buckets.
Splitting a large bucket. For simplicity of exposition, we assume that n = Ω(k), for a sufficiently large constant factor. If the current bucket b is large, i.e., s(b) ≥ 9n/(30k), and no procedure is active in the current bucket, let n_0 := n (the number of elements existent when the procedure is initiated); and place 8.8n_0/(30k) elements into a main part P_1 and the remaining s(b) − |P_1| elements into a secondary part P_2. Observe that 0.2n_0/(30k) ≤ |P_2| ≤ 0.4n_0/(30k). Any insertions and deletions from the current bucket until the split is finalized are performed using P_2. A balanced partition of the current bucket will be obtained in at most n_0/10 time steps.
Consider the following n_0/10 operation requests. By Lemma 1 below, the number of buckets is at most 12k at any time, and so at least (n_0/10)/(12k) = n_0/(120k) time steps are allocated to b in the round robin process by the n_0/10 time mark. Let t_1, where n_0/(120k) ≤ t_1 < n_0/10, mark the first occurrence when a total of n_0/(120k) discrete time steps have been allocated to b (as operation steps or round robin steps). As shown next, this number of steps suffices for finalizing the balanced partition and the split of b. The process calls two procedures, labeled (A) and (B): (A) Start a median finding procedure, i.e., for finding the two quantile groups Q_1, Q_2 of P_1: an element m ∈ Q_1 ∪ Q_2 = P_1 is found so that Q_1 ≤ m ≤ Q_2 and ||Q_1| − |Q_2|| ≤ 1. At the point when this procedure is launched, the computational steps are scheduled so that every time the bucket gets accessed (either as the operation bucket or as the round robin bucket), 500 computational steps for selecting the median element of P_1 take place. Assume for concreteness that median finding on an input set S takes at most 10|S| elementary operations; when applied to P_1, we have |P_1| ≤ 8.8n_0/(30k), and thus 88n_0/(30k) elementary operations suffice. At the rate of 500 per discrete time step, it follows that 88n_0/(15000k) ≤ n_0/(160k) time steps suffice for finding the median of P_1.
(B) After the median has been determined, a follow-up procedure is initiated in the same bucket. It aims at reducing and finally eliminating the leftover of P_2, so that in another n_0/(480k) discrete steps, a balanced partition of the current bucket is obtained. The follow-up procedure starts with the two quantile groups of the same size, 4.4n_0/(30k), as created by the median finding procedure, and compares each element of P_2 against m, properly placing it in one of the two groups. This procedure runs at the rate of 10 items per discrete time step accessing the current bucket (either as the operation bucket or as the round robin bucket), until finally the partitioning process is completed with all elements in the current bucket properly placed against the pivot m. Note that |P_2| ≤ 0.4n_0/(30k) + n_0/(480k) ≤ 0.5n_0/(30k) at any time; at the rate of 10 items per discrete time step, it follows that 0.5n_0/(300k) ≤ n_0/(480k) time steps suffice for completing the follow-up procedure.
The two procedures terminate within a total of at most n_0/(160k) + n_0/(480k) = n_0/(120k) discrete time steps, as required. The parameters are chosen so that the split takes place before the large bucket b becomes illegal.
Recall that 0.9n_0 ≤ n ≤ 1.1n_0; thus, in terms of the current number of elements, n, the split of a large bucket b produces two smaller buckets b′ and b′′ with
4n/(30k) ≤ s(b′), s(b′′) ≤ 6n/(30k). (1)
Overlapping partitioning phases in the same bucket are precluded by the fact that no fewer than 3n/(30k) data structure operations accessing a given fresh bucket can trigger the next launching of a partitioning phase for that bucket.
Remark. Let b be a new bucket produced in the current operation. If b is generated by a split operation, then 4n/(30k) ≤ s(b) ≤ 6n/(30k) by (1). If b is generated by a merge operation, then s(b) ≤ 5n/(30k) by the merge-test.
Analysis of merging buckets and maintaining the two invariants. To prove that the two invariants I1 and I2 are maintained, we need the following key fact. Lemma 1. The number of buckets is at most N = 12k at any time.
Proof. Let t denote the number of buckets after the current operation is executed, and let j = 1, 2, . . . denote the discrete time steps. Let B_1, . . . , B_t be the buckets after the current step, in key order. Write a_i = s(B_i), for i = 1, . . . , t. We proceed by induction on j and show that, if the number of buckets in A (and B) is at most N after each of the preceding N time steps, it remains at most N after the current time step. Observe that the number of buckets can only increase by one after a bucket split, and there can be at most two splits associated with a discrete time step. The induction basis is j ≤ N, and then indeed, we have t ≤ j ≤ N, as required.
For the induction step, assume that the number of buckets is at most N after the previous operation. If no bucket splits occur during the execution of the current operation, the number of buckets remains unchanged, and is thus still at most N after the current operation. If bucket splits occur, it suffices to show that the number of buckets is at most N after each split. Consider a split operation, and let σ := s(b) + s(b + ) be the minimum value at the top of the heap H after the split. There are two cases: Case 1. σ ≤ 5n/(30k), and thus the merge is executed. Consequently, the number of buckets after the split is still at most N , as required.
Case 2. σ > 5n/(30k), and thus no merge is executed. Since H is a min-heap, we have a_i + a_{i+1} > 5n/(30k) = n/(6k) for every i = 1, . . . , t − 1. Adding these t − 1 inequalities yields (t − 1) n/(6k) < Σ_{i=1}^{t−1} (a_i + a_{i+1}) ≤ 2 Σ_{i=1}^{t} a_i = 2n, or t ≤ 12k = N, as claimed, concluding the induction step.
Applications and technical remarks
1. An argument due to Fredman [19] shows that executing a sequence of n operations (from among Insert and Delete i, where 1 ≤ i ≤ k) requires Ω(n log k) time in the worst case, regardless of the implementation of the data structure. The argument relies on the information-theoretic lower bound [32, Ch. 5.3.1]. (For completeness, it is included in the Appendix.)
2. In addition to the two operations provided, Insert and Delete i, the following operation can also be accommodated at no increase in cost: Read i: this returns some element x which belongs to the ith quantile group of the current set, where 1 ≤ i ≤ k; the element remains part of the data structure.
3. The new data structure finds applications in settings with large dynamic sets where insertions and deletions need to be handled fast, and there is no need to be very precise with the ranks of the elements handled. One such application is approximate sorting. An array of n elements is said to be L-sorted if for every i ∈ [n], we have |rank(a_i) − i| ≤ L; see also [17,33]. As shown by the lower bound argument, the selectable sloppy heap can be used to L-sort n elements in O(n log(n/L)) time.
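A static sketch of this application, reusing quantile_groups from the earlier sketch: with k = ⌈n/L⌉ groups of size at most roughly L, concatenating the groups leaves every element within L of its true rank, in O(n log(n/L)) time; the function name is ours.

```python
import math

def l_sort(a, L):
    # Concatenate the ceil(n/L) quantile groups; elements stay within L of
    # their true rank even though each group is internally unsorted.
    k = max(1, math.ceil(len(a) / L))
    return [x for group in quantile_groups(a, k) for x in group]

print(l_sort([5, 3, 9, 1, 7, 2, 8, 4, 6, 0], L=5))  # each within 5 of its rank
```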
4. As mentioned in the introduction, the selection problem, and computing the median in particular, are in close relation with the problem of finding the quantiles of a set. A typical use of the median relies on its property of being both larger than or equal to and smaller than or equal to a constant fraction of the n elements. Following Yao [38], an element (think of a player, in the context of tournaments) is said to be (i, j)-mediocre if it is neither among the top i nor among the bottom j of a totally ordered set S. As such, an element from the middle third quantile group of an n-element set is (⌊n/3⌋, ⌊n/3⌋)-mediocre; or simply mediocre, for short. Repeatedly returning a mediocre element in a dynamic setting (with an initially empty set) can thus be accomplished via the new data structure, by setting k = 3 and then repeatedly executing Delete 2 or Read 2, as needed, at a minimal (constant) query cost. Similarly, setting k = 100 sets up the data structure for dynamic percentile maintenance in O(1) time per operation; to the best of our knowledge, this outperforms previously available data structures with respect to this application; e.g., the soft heap data structure of Chazelle [8] only offers constant amortized time per operation.
5. Here we continue our earlier discussion in Section 1 on using BSTs in the context of dynamic selection, taking into account the new data structure. Traditionally, search trees do not allow duplicate keys; it is however an easy matter to augment each node of such a tree with a multiplicity attribute associated with each distinct key. In particular, the multiplicity attribute is taken into account when computing the size attribute of a node. Now, having a balanced search tree augmented as described, insertion, deletion and search all take O(log n) time.
Our construction method can be used for any parameter k, with 2 ≤ k ≤ n. The data structure obtained in this way can be viewed as an approximate search tree. In particular, one can search for a given key and the approximate rank of a search key can be determined. To be precise, the following hold: (1) For any 2 ≤ k ≤ n, an approximate search tree on n items (duplicate keys allowed) can be constructed in O(n log k) time. (2) Search for a given key takes O(n/k+log k) time per operation; its approximate rank, i.e., the quantile group to which it belongs (out of k), can be reported in O(log k) time (regardless of the presence of the key!). It is worth noting that the approximate search tree we just described appears competitive when k is small and the search function is infrequently used. Then insertion and deletion of an unspecified key from a given quantile group (out of k) takes O(log k) time (the deleted element is revealed after the operation); e.g., if k = O(log n), these operations take O(log log n) time. On the other hand, the approximate search tree we just described is no real competitor for the exact solution previously discussed; indeed, no choice of k in our data structure would allow an improvement in the performance of all the basic three operations.
6. An alternative solution (to that outlined in Section 3) achieving O(log k) worst-case time per operation was conceived by Fredman [19] (after the author of the current paper communicated the main result to him). (i) His solution avoids the need to merge small buckets (in order to keep the number of buckets under control) by maintaining two running copies of the data structure and performing periodic tree reorganizations that create uniform-sized buckets. Buckets that become large are split using a mechanism similar to the one devised here in Section 3. While a complete description of this solution is not publicly available, the decision to avoid merging small buckets comes at a high price. The main reasons are the need to maintain multiple running copies of the data structure (with two copies under permanent construction, etc.), the subtle consistency issues that are generally hard to satisfy, and the likely reduced speed caused by the above items. (ii) Another, rather cumbersome, approach claimed to be a simplification can be found in his subsequent arXiv posting. As such, neither alternative solution ((i) or (ii) above) matches the elegance and simplicity of the one given here.

7. As mentioned earlier, the O(log 1/ε) amortized complexity of the soft heap is optimal; however, this does not hold for its worst-case complexity with regard to its Insert, Delete and Findmin operations. In contrast, the O(log 1/ε) worst-case complexity per operation of the selectable sloppy heap is optimal (where ε ∼ 1/k).
8. Interestingly enough, most of the applications envisioned by Chazelle [8] for the soft heap (four items from his list of five) can be handled by the selectable sloppy heap; these include dynamic maintenance of percentiles, linear-time selection, and two versions of approximate sorting. Observe that the complexity analysis for the soft heap, of the error rate in particular, is more complicated than that for the selectable sloppy heap; moreover, special effort is required to ensure that the space used by the soft heap is linear in the number of items present. In at least one application, approximate sorting, the analysis is more involved for the soft heap than for our structure. Needless to say, the soft heap only allows deletion of small items. As such, the new data structure presented here compares favorably to the soft heap with respect to insertion and deletion. Moreover, the constant amortized guarantee per operation is replaced by the stronger constant worst-case guarantee per operation in our case.
It is worth recalling that the soft heap was designed with a specific application in mind, minimum spanning trees. Given a connected graph G = (V, E), where |V| = n, |E| = m, with weighted edges, Chazelle [9] showed that a minimum spanning tree (MST) can be computed by a deterministic algorithm in O(mα(m, n)) time, where α is the (slowly growing) inverse of the Ackermann function. (A randomized linear-time algorithm was given by Karger et al. [28].) The question of whether the selectable sloppy heap can be used for MST computation is left open.
Annexin A8 Identifies a Subpopulation of Transiently Quiescent c-Kit Positive Luminal Progenitor Cells of the Ductal Mammary Epithelium
We have previously shown that Annexin A8 (ANXA8) is strongly associated with the basal-like subgroup of breast cancers, including BRCA1-associated breast cancers, and poor prognosis; while in the mouse mammary gland AnxA8 mRNA is expressed in low-proliferative isolated pubertal mouse mammary ductal epithelium and after enforced involution, but not in isolated highly proliferative terminal end buds (TEB) or during pregnancy. To better understand ANXA8’s association with this breast cancer subgroup we established ANXA8’s cellular distribution in the mammary gland and ANXA8’s effect on cell proliferation. We show that ANXA8 expression in the mouse mammary gland was strong during pre-puberty before the expansion of the rudimentary ductal network and was limited to a distinct subpopulation of ductal luminal epithelial cells but was not detected in TEB or in alveoli during pregnancy. Similarly, during late involution its expression was found in the surviving ductal epithelium, but not in the apoptotic alveoli. Double-immunofluorescence (IF) showed that ANXA8 positive (+ve) cells were ER-alpha negative (−ve) and mostly quiescent, as defined by lack of Ki67 expression during puberty and mid-pregnancy, but not terminally differentiated with ∼15% of ANXA8 +ve cells re-entering the cell cycle at the start of pregnancy (day 4.5). RT-PCR on RNA from FACS-sorted cells and double-IF showed that ANXA8+ve cells were a subpopulation of c-kit +ve luminal progenitor cells, which have recently been identified as the cells of origin of basal-like breast cancers. Over expression of ANXA8 in the mammary epithelial cell line Kim-2 led to a G0/G1 arrest and suppressed Ki67 expression, indicating cell cycle exit. Our data therefore identify ANXA8 as a potential mediator of quiescence in the normal mouse mammary ductal epithelium, while its expression in basal-like breast cancers may be linked to ANXA8’s association with their specific cells of origin.
Introduction
Annexins form a superfamily of calcium-dependent lipid-binding proteins broadly distributed throughout all eukaryotic phyla and even some bacteria and archaea. These proteins feature unique homologous repeats that contain the calcium and lipid binding sites. However, calcium-dependent lipid binding is not a universal feature of annexins, since some of them have partially or completely lost their type 2 calcium binding sites through evolutionary divergence [1]. This patterned structural diversity corresponds to a functional adaptation characteristic of individual subfamilies that ranges from membrane and cytoskeletal organisation to the regulation of membrane traffic and signalling. Annexins have also been shown to act as extracellular anti-inflammatory and anti-coagulant factors as cell surface proteins, and some have even been proposed to have nuclear roles [2][3][4][5]. They are further involved in phagocytosis as well as endo- and exocytosis (for reviews see [6][7][8][9]). Most initial studies have focused on their calcium-dependent membrane-binding properties, but these may be neither universal nor essential features for their action. Function-oriented studies have described annexins involved in cell growth and proliferation [10][11][12], and alterations of their expression have been associated with cancer subtypes and other diseases [13][14][15][16].
ANXA8 is one of the least characterised members of the annexin superfamily. ANXA8 was first described as an inhibitor of phospholipase A2 and as a blood coagulation factor (VAC-β) because of its structural similarity to VAC-α (ANXA5, lipocortin V) [17]. It was later found to be specifically over expressed in acute promyelocytic leukaemia (APL), where it was repressible by all-trans retinoic acid (ATRA) [18][19][20][21]. Deregulation of ANXA8 has since been found in several other malignancies, including infiltrating adenocarcinomas of the pancreas [22], cholangiocarcinoma [23], malignant pleural mesothelioma [24], melanoma [25], squamous carcinoma of the uterine cervix [26], esophageal adenocarcinoma and Barrett's metaplasia [27]. Perou et al. (2000) identified AnxA8 by microarray analysis as part of an RNA signature for a subgroup of breast cancers with poor prognosis that they called basal-like breast cancers because of their expression of the basal cell-associated cytokeratins (CK) 5 and 17 [28]. Our own work has previously established that ANXA8 protein is not detected in the majority of breast cancers but in a distinct subset of CK5-positive, oestrogen receptor (ER)α- and progesterone receptor (PgR)-negative breast cancers with poor prognosis, and in a high percentage of BRCA1-associated cancers [29], confirming the RNA profiles by Perou et al. [28] and Sorlie et al. [30].
ANXA8 has been linked to the formation of endosomes and epidermal growth factor receptor (EGFR) turnover in HeLa cells [31], and is required for efficient cell surface presentation of CD63 and P-selectin to allow leukocyte recruitment by activated endothelial cells [32]. Other studies identified ANXA8 as a target of the p53-activated DNA damage response after treatment of mouse embryonic fibroblasts with adriamycin/doxorubicin [33] or when p53 was over expressed in Saos2 cells [34]. However, its biological function in the mammary gland is still unknown.
We have previously shown that Anxa8 mRNA was up-regulated during mouse mammary gland involution [29], a multi-step process in which the alveolar epithelium regresses by programmed cell death to a near pre-pregnant morphology [27,32]. In the pubertal gland, Anxa8 mRNA was found in enzymatically isolated epithelial ducts but not in terminal end buds [29].
In general, Anxa8 mRNA abundance was highest during periods of widespread cell death or low proliferation. To get a better understanding of ANXA8's role during mammary gland development we aimed to determine its cellular distribution at different developmental time points, to assess its association with different epithelial subpopulations, and to study the effect of ANXA8 expression in vitro.
Here we show for the first time that ANXA8 is expressed in a distinct quiescent subpopulation of ERα−ve cells of the ductal mammary epithelium during puberty and early pregnancy, but not in proliferating TEB or alveoli. During late involution, ANXA8 was only detected in the surviving epithelium, but not in the apoptotic cells. qRT-PCR using mRNA from FACS-sorted cells showed that AnxA8 was strongly associated with c-kit+ve/ERα−ve luminal progenitor cells (CD45−, CD24+/high, Sca1−, CD49f−, c-kit+), and triple-IF staining associated ANXA8 expression with a transiently quiescent subpopulation of the ductal luminal epithelium. Over expression in the mammary epithelial cell line KIM-2 altered the cell morphology and removed these cells from the cell cycle. Our data therefore strongly link ANXA8 to a subpopulation of c-kit+ve/ERα−ve ductal luminal epithelial progenitor cells and link ANXA8 function with cellular quiescence in the mammary epithelium. As this cell population was recently identified as the likely cells of origin for basal-like breast cancers, ANXA8's expression in this cancer subgroup may be a consequence of their cells of origin and may thus serve as a useful diagnostic marker.
Ethics statement
All animal work was conducted under project licence numbers PPL 60/3712 and PPL 60/4181 in accordance with accepted standards of humane animal care, according to the UK Animals (Scientific Procedures) Act 1986 and the EU directive of 2010, in dedicated facilities proactive in environmental enrichment. Ethical approval was granted by the University of Glasgow.
Mammary gland preparation
The 4th (inguinal) mammary glands were dissected and used for immunohistochemical staining or RNA extraction as described previously [35]. Balb/C mice were used unless stated otherwise.
Cell Culture
KIM-2 cells were generated in the laboratory of C. Watson [36] and were maintained as previously described [36].
Anxa8 cloning into pRTS1
Anxa8 cDNA was cloned into the pRTS1 episomal vector [37], a generous gift from Prof Bornkamm, before generation of KIM-2 cell lines that express Anxa8 under the control of doxycycline. To generate the pRTS1:Anxa8 construct, the mouse Anxa8 cDNA was cut from the IMAGE clone 5322310 using the restriction enzymes XhoI and EcoRV and ligated into the pUC19-SfiI vector using the XhoI and EcoRV restriction sites. The resulting construct was digested with SfiI and the cDNA fragment ligated into the SfiI sites of the pRTS1 vector. Positive clones were confirmed by sequencing. KIM-2 cells were transfected using FuGene 6 (Roche Applied Science, Burgess Hill, UK) according to the manufacturer's instructions with the pRTS1:Anxa8 and pRTS1 constructs, and transfected cells were selected as pools using 250μg/ml hygromycin (Calbiochem, Merck KG, Darmstadt, Germany) to establish the Kim2A8 and Kim2RTS cell lines.
Growth assay
Cells were plated in 24 well plates in triplicate and allowed to grow overnight before the addition of 100ng/ml doxycycline (Sigma-Aldrich, Gillingham, UK) at day 0. After washing the cells twice in PBS, protein lysates were prepared every 24 hours in Triton X-100 lysis buffer (20mM Tris pH 7.4, 250mM NaCl, 1% Triton X-100) plus Complete Mini Inhibitor mix (Roche Applied Science) and Protein Phosphatase Inhibitor Cocktail 2 (Sigma-Aldrich), and stored at −20°C. The protein concentration was measured using a BCA Protein Assay (Pierce, Thermo Fisher Scientific Inc, Rockford, USA).
BrdU/EdU incorporation assay
Proliferation was measured using a Cell Proliferation ELISA, BrdU colourimetric kit (Roche Applied Science) in a 96 well format after 48 hours of doxycycline treatment (100ng/ml).
For in vivo EdU incorporation: C57Bl/6 mice were stud mated and allowed to lactate with standardized litter sizes of 5-6 for 7 days at which time forced involution was induced. The females were injected intra-peritoneally with 0.2ml of 5mg/ml EdU 2hr prior to sacrifice on day 4 after forced involution. For pre-pubertal samples EdU was injected in 3-week old females as above.
Colony formation assay
Between 250 and 300 cells were seeded in 10cm culture dishes in medium containing either no doxycycline or 100ng/ml doxycycline. Cells were allowed to adhere and grow for 2 weeks, fixed with methanol, air dried and stained using Giemsa's staining solution (BDH Laboratory Supplies, Merck Ltd., Lutterworth, UK). Pictures of the plates were taken using a digital camera and the number of colonies was quantified using ImageJ software.
Cell cycle analysis by flow cytometry
Cells were grown in 6-well plates and treated with 100ng/ml doxycycline for 48 hours, trypsinized with Trypsin-EDTA, and collected together with any floating cells in the culture media. The cells were resuspended in 0.5ml PBS and fixed in 5ml 100% ice-cold methanol while vortexing. Cells were incubated for two hours at 4°C. After brief centrifugation, the methanol was removed and cells incubated in 400μl PI solution (50μg/ml propidium iodide, 50μg/ml RNAse A in PBS) for 30min before analysis on a BD FACSCanto II Flow Cytometer. Cyflogic software was used for analysis.
Antibody production
The mouse Anxa8 coding region was amplified by PCR using the primers A8-5' (aatagaattcaatggcctggtggaaagcc) and A8-3' (cgatctcgagtcagaggtcagtgcccac) and the IMAGE clone 5322310 as template. The PCR fragment was digested with EcoRI and XhoI, cloned into the pET302NT His vector (Invitrogen, Paisley, UK) digested with the same restriction enzymes, and sequenced to verify that the 6xHis tag was in frame and the Anxa8 sequence was correct. This construct was used to produce ANXA8 protein in BL21 cells, and the protein was purified using TALON Metal Affinity Resin and Buffers (Clontech, Takara Bio Europe, Saint-Germain-en-Laye, France). After dialysis against PBS to remove imidazole, the purified protein was used to immunize two rabbits at EUROGENTEC (Fawley, Southampton, Hampshire, UK) using their standard protocol. The antiserum was affinity purified using recombinant His-tagged ANXA8 protein immobilized on a column generated using an AminoLink Plus Immobilisation Kit (Pierce). The antibodies were eluted from the column using 100mM glycine pH 2.5 and immediately neutralized by addition of 1/10 of the elution volume of 1M Tris. The specificity of the antibody was tested by western blot.
Immunofluorescence and Immunohistochemistry
Cells were grown on eight-well chamber slides for the indicated times and fixed in 4% paraformaldehyde for 20 min at RT. After extensive washes with PBS, cells were incubated for 10 min in 50mM ammonium chloride followed by incubation for 10 min in 20mM glycine. Cells were incubated for 30-45 min in blocking solution (2.5% horse serum in PBS, 0.3% Triton X-100) to prevent nonspecific binding. The primary and secondary antibodies were diluted in blocking solution and antibody incubations were carried out at RT for 45-60 min. Washes were done with 0.1% Triton X-100 in PBS. Cells were finally washed in PBS before mounting the slides using Prolong Gold Antifade Reagent with DAPI (Molecular Probes, Invitrogen). Images were taken using an Olympus IX51 inverted microscope using a F-View camera and Cell^P 2.5 software (Olympus UK Ltd, Southend-on-Sea, Essex, UK). ImageJ software was used for image analysis.
For immunofluorescence (IF) on paraffin-embedded tissue, the sections were dewaxed in xylene and rehydrated through an alcohol gradient. Antigen retrieval was performed in 10 mM EDTA pH 8.0; sections were treated with Image-iT FX (Molecular Probes) for 30 min at room temperature and blocked with 2.5% horse serum in TBS-0.01% Tween 20. Tissue sections were stained in the same way as cells except that TBS-Tween 20 was used instead of PBS. Rat anti-Ki67 (clone TEC3) staining was developed by sequential incubation with the biotinylated secondary antibody from the rat ABC Staining System (Santa Cruz Biotechnology, Santa Cruz, CA, USA) at 1:100 and Streptavidin Dylight 488 (Pierce) at 1:200.
For immunohistochemistry on paraffin-embedded tissues, sections were treated in the same way as for IF, without the Image-iT FX step, and the staining was developed using the ImmPRESS Peroxidase System (Vector Labs, Peterborough, UK).
For EdU staining, a Click-iT EdU Alexa Fluor 595 Imaging Kit was used as per manufacturer's instructions.
Quantitative RT-PCR on FACS-sorted cells

Freshly sorted normal cells were resuspended in RLT buffer (Qiagen, Crawley, West Sussex, UK) and stored at −80°C until required for RNA extraction. qPCR reactions were performed as previously described [40] using a TAQMAN Assays-on-Demand probe for Anxa8 (Mm00507926_m1). Actb (β-actin) was used as an endogenous control and results were calculated using the ΔΔCt method. Data were expressed as the fold difference in gene expression between the mean of three independently isolated cell preparations compared to control samples, with 95% confidence intervals; an illustrative sketch of the ΔΔCt computation follows.
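As a concrete illustration of the ΔΔCt calculation named above, the sketch below computes a fold difference from raw Ct values; the function name and the Ct numbers are invented for illustration only and are not data from this study.

```python
def fold_change(ct_target_sample, ct_ref_sample, ct_target_control, ct_ref_control):
    """Delta-delta-Ct method: fold difference in target-gene expression between
    a sample and a control, each normalised to a reference gene (here Actb),
    assuming ~100% PCR efficiency (one cycle = a factor of 2)."""
    delta_ct_sample = ct_target_sample - ct_ref_sample
    delta_ct_control = ct_target_control - ct_ref_control
    delta_delta_ct = delta_ct_sample - delta_ct_control
    return 2 ** (-delta_delta_ct)

# Invented Ct values giving roughly a 17-fold higher abundance in one
# population versus another (illustrative only):
print(round(fold_change(24.0, 18.0, 28.1, 18.0), 1))  # -> 17.1
```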
Quantitative RT-PCR on total mammary gland RNA

RNA from mammary glands was prepared using TRIZOL (Invitrogen) as described previously [35]. The RT reaction was carried out using 1μg of total RNA and Transcriptor reverse transcriptase (Roche Applied Science) following the guidelines of the supplier. For qPCR, the following sets of primers and probes (Universal Probe Library from Roche Applied Science) were used to amplify Anxa8 (ggaaaagcagcagacaggat, gagaactacccttcacgctgac, probe #31) and Krt18 (agatgacaccaacatcacaagg, tccagaccttggacttcctc, probe #78) as internal control. The qPCR was performed using 1μl of cDNA as template, LC480 QPCR Master Mix (Roche Applied Science) and the appropriate set of primers in a 20μl reaction in a LightCycler 480 Instrument (Roche Applied Science).
ANXA8 is expressed in a distinct subpopulation of luminal ductal epithelial cells
To obtain an indication of ANXA8's role during mammary gland development it was necessary to assess its cellular distribution at different developmental time points. Since no antibodies were commercially available that recognised mouse ANXA8, a polyclonal antibody was raised and affinity-purified against full-length mouse ANXA8 protein; it showed specific reactivity in western blots with mouse ANXA8 but not with other annexins. Immunohistochemistry (IHC) detected ANXA8 protein specifically in a distinct subset of ductal luminal epithelial cells during puberty, adulthood, and pregnancy, and to a lesser extent in the major ducts during lactation (Fig. 1, S1 Fig.), while no ANXA8 was detectable in proliferating TEB. ANXA8 was not expressed in the alveoli or in differentiated alveolar epithelium. After enforced involution, ANXA8 expression increased slowly, and after four days it was widely detected in major ducts and rarely in collapsed alveoli. After 10 days, ANXA8 was expressed in the majority of surviving ductal epithelial cells, consistent with the increased abundance of AnxA8 mRNA observed by qRT-PCR post-involution (S2 Fig.). In summary, ANXA8 expression was associated with a subpopulation of luminal ductal epithelial cells and with the surviving ductal epithelium during involution.

[Fig. 1 caption (partial): Mammary glands from lactating (day 7) and involuting mice (days 1, 2, 4, and 10) were stained for ANXA8 protein. Staining was detected in a distinct set of ductal luminal epithelial cells (arrows), while alveoli (arrowheads) did not stain for ANXA8.]
ANXA8 expression is absent in highly proliferative mammary epithelium

Analysis of data from a previous microarray study of pre-pubertal, pubertal and post-pubertal mouse mammary glands revealed that Anxa8 mRNA abundance was highest during pre-puberty and strongly reduced at the onset of puberty (Fig. 2(A)) [41], when the non-proliferative rudimentary ducts form proliferative TEB that grow out into the surrounding fat pad to establish the primary ductal mammary epithelial network. This reduction was confirmed by qRT-PCR using mRNA from 3-, 4-, 6-, and 12-week-old mice, normalised to the epithelial cell marker CK18 (Fig. 2(B)). IHC analysis showed again that ANXA8 was expressed in a distinct subpopulation of luminal epithelial cells of the pre-pubertal rudimentary epithelium (Fig. 2(C)), as well as in individual cells of the ductal luminal epithelium in pubertal glands, but never in TEB (Fig. 2(D)). In contrast, Ki67 expression was widespread in TEB and in proliferating alveoli during pregnancy but rare in major ducts (S3 Fig.). ANXA8's association with non-proliferative cells was further emphasised when 3-week-old mice were injected with EdU for two hours (S4 Fig. (A)). Mammary glands from three independent mice showed no co-staining for ANXA8 and EdU. Highly proliferative regions, possibly reflecting the onset of puberty, showed strong EdU staining but no ANXA8 positivity, while ANXA8+ve ducts showed little EdU positivity, with no overlap. Similar results were found in 4-day involuting glands, where strong ANXA8 staining but no EdU staining was observed in the surviving ductal epithelium (S4 Fig. (B)). Our results therefore associate ANXA8 with the low-proliferative rudimentary ductal epithelium, and show that its expression is switched off during pubertal outgrowth and proliferation.

[Fig. 1 caption, continued: ...early involuting epithelium, but in the major ducts and widely in the surviving epithelium during late involution. The black bar represents 100μm. doi:10.1371/journal.pone.0119718.g001]

[Fig. 2 caption: (A) Microarray results [41], using RNA extracted from 3-, 4-, 5-, 6- and 7-week-old CD1 mice, show a reduction in AnxA8 mRNA at the onset of puberty. Signal intensities for two independent probes targeting AnxA8 were normalised to cytokeratin 18 (Krt18) to eliminate changes due to differences in epithelial content. (B) qRT-PCR results for AnxA8 normalised to Krt18 expression from RNA extracted from mammary glands of 3-, 4-, 6-, and 12-week-old mice. (C-D) Immunohistochemical analysis of ANXA8 expression using the E2R6.2 antibody on mammary glands from 3- (C) and 6-week-old (D) mice, showing staining for ANXA8 in the pre-pubertal rudiment and in ducts, but not TEB, of pubertal mice. Negative control (−ve ctrl): no primary antibody. Bars represent 100μm.]
ANXA8 expressing epithelial cells are ERα−ve and transiently quiescent
Double-immunofluorescent (IF) labelling of pubertal mammary sections for ANXA8 and Ki67 established that over 99% of ANXA8+ve cells were quiescent, lacking expression of Ki67 (Fig. 3(A, B)) and of the licensing factor MCM3 (S5 Fig.). However, the proportion of Ki67+ve/ANXA8+ve cells increased significantly during early pregnancy, reaching ∼15% of all ANXA8+ve cells (compared to ∼20% in ANXA8−ve ductal cells), but decreased again to ∼5% by mid-pregnancy (day 12.5), while the proportion of cycling ANXA8−ve cells remained constantly high (∼19%; Fig. 3(B)). This demonstrated that ANXA8+ve cells were not all terminally differentiated, but were able to enter the cell cycle at the start of ductal budding, though ANXA8 was not detected in the newly formed epithelial structures.
Since it has previously been reported that ERα−ve cells of the mammary epithelium are the main proliferative compartment, the ERα-status of the ANXA8+ve cells was also established. Double-IF staining demonstrated that all ANXA8 expressing cells were in fact ERα−ve ( Fig. 3 (C, D)), showing that ANXA8 was associated with a transiently quiescent ERα−ve subpopulation.
AnxA8 mRNA is associated with c-kit+ve/ERα−ve luminal progenitor cells

ANXA8 is strongly expressed in BRCA1-associated breast cancers, and these cancers have recently been shown to originate from ERα−ve luminal progenitor cells [42]. Since ANXA8 showed strong association with ERα−ve cells in the mammary gland, it was hypothesised that ANXA8 was associated with the ERα−ve progenitor cell population. AnxA8 mRNA expression was therefore measured by qRT-PCR in RNA from mammary epithelial cells that had been sorted according to their expression of the cell surface proteins CD24, CD49f, Sca1, and c-kit: a) mammary stem cells (MaSC; CD24+/low, Sca1−, CD49f+/high, c-kit−), b) myoepithelial cells (CD24+/low, Sca1−, CD49f+/low, c-kit−), c) mature luminal ERα+ve cells (CD24+/high, Sca1+, CD49f−, c-kit−), and d) ERα−ve luminal progenitor cells (CD24+/high, Sca1−, CD49f−, c-kit+) [39]. While no AnxA8 mRNA was detectable in MaSC or myoepithelial cells, the luminal ERα−ve progenitor cell population had a 17-fold increased abundance compared to the differentiated luminal ERα+ve population (Fig. 4(A)). The strong association of ANXA8 and c-kit expression was further emphasised by IF (Fig. 4(B, C)). ANXA8 was co-expressed with c-kit in the luminal epithelium of mammary ducts, and localised to the cytoplasm as well as to the apical and, similar to c-kit, the lateral membranes (Fig. 4(C)). However, co-expression of c-kit and ANXA8 varied within and between sections. While all ANXA8+ve cells were c-kit+ve independent of developmental stage, ANXA8 positivity of c-kit+ve cells ranged from as little as 0% to 100%. During puberty, strong c-kit staining was detected in the inner body cells of the TEB, while ANXA8 could only be found in the ductal epithelium (S6 Fig.). During pregnancy, ANXA8 and c-kit expression were both restricted to the ductal epithelium, though limited c-kit staining was found in the newly formed ductal outgrowth. The percentage of ANXA8+ve/c-kit+ve cells was reduced during pregnancy from ∼62% to ∼34% (Fig. 4(B)), possibly reflecting the overall reduction in ANXA8+ve cells, as c-kit was still expressed in the majority of ductal luminal epithelial cells.
Triple-staining of mammary glands from pubertal and mid-pregnant mice further confirmed that ANXA8+ve/c-kit+ve cells were mostly Ki67−ve (Fig. 5). Our results therefore strongly suggest that ANXA8 is associated with a subpopulation of mostly quiescent c-kit+ve/ERα−ve ductal luminal progenitor cells.
ANXA8 over expression reduces proliferation in Kim-2 cells
Since ANXA8 expression was associated with low proliferation in the mammary gland and other tissues [43,44], in vitro studies were carried out to assess whether ANXA8 could directly affect cell proliferation. Mouse ANXA8 was over expressed in the mouse mammary epithelial cell line Kim-2, using an inducible episomal vector under the control of doxycycline (dox) (Kim2A8). Approximately 50% of the pooled cells expressed ANXA8 and EGFP through a bidirectional promoter when treated with 100ng/ml dox (S7 Fig.). Since only EGFP-positive cells expressed ANXA8, EGFP-positivity was used as a surrogate marker for ANXA8 expression in further experiments.
Microscopic analysis after dox-treatment showed that after two days the EGFP+ve cells were increased in size (Fig. 6 (A)) compared to EGFP−ve cells, and after six days showed a highly enlarged and flattened morphology (Fig. 6 (B)) with significantly enlarged nuclei ( Fig. 6 (C)). Although this morphology was reminiscent of senescent cells, the cells were negative for the senescence markers β-galactosidase (β-gal) and p16 (data not shown).
A cell growth assay of Kim2A8 cells showed that after three days of dox-treatment Kim2A8 cell growth was significantly reduced compared to control cells (Fig. 7 (A)). Since the decreased growth rate was associated with reduced BrdU incorporation (Fig. 7 (B)) our results showed that in this system ANXA8 over-expression was able to reduce cell proliferation.
ANXA8 over expression prevents colony formation of Kim-2 cells in vitro
To further characterise the effect of ANXA8 over expression on cell proliferation, a 2D colony formation assay was performed. Kim2A8 and control cells were seeded as single cell suspensions and treated with or without dox. Colonies were analysed by bright-field and fluorescence microscopy. After two weeks of dox-treatment, Kim2A8 cells formed significantly fewer colonies of more than 50 cells compared to untreated Kim2A8 cells or control cells (Fig. 7(C, D)). The large colonies that formed from the dox-treated Kim2A8 cells were largely EGFP−ve (with the occasional entrapped green cell) and hence did not express ANXA8. EGFP+ve cells remained either as single cells or in very small colonies of <10 cells, and again showed the enlarged morphology described above (S8 Fig.).
To establish whether ANXA8 over expressing cells were arrested in G1 or had entered G0, we measured Ki67 expression levels by IF staining (Fig. 8(B, C)) and western blot (Fig. 8(D)). IF staining showed that only ∼50% of EGFP-positive Kim2A8 cells expressed Ki67, while nearly all EGFP-negative cells, like the control cells, were Ki67-positive (∼95%; p<0.003). In western blots, untreated cells and dox-treated control cells showed similar levels of Ki67 protein, while over expression of ANXA8 for 6 days significantly reduced Ki67 protein levels by ∼50%. This demonstrated that ANXA8 over expression had taken KIM-2 cells out of the cell cycle.
Discussion
We previously described Anxa8 mRNA to be expressed in isolated mammary ducts and after enforced involution, but not during pregnancy and lactation when primary ducts branch and bud to form milk secreting alveoli [45]. The current study further defines ANXA8 expression in a mostly quiescent subpopulation of c-kit+ve/ERα−ve progenitor cells of the ductal epithelium, and identifies ANXA8 as a potential regulator of proliferation and/or quiescence in the mouse mammary ductal epithelium. No significant levels of AnxA8 expression were found in ERα+ve cells, or in differentiated alveoli of late pregnant or lactating mice (Figs. 1, and 3), showing that ANXA8 expression was not associated with terminal differentiation of the mammary epithelium. Instead, ANXA8 was only present in the major ducts, which bud during pregnancy to form alveoli, and was widespread in the ducts of a late involuting gland, when apoptosis had ceased and the primary mammary ductal system was regenerating.
Although a large proportion of c-kit+ve cells showed ANXA8 expression by double-IF staining, not all of them did, and as ANXA8 and Ki67 staining were largely exclusive it appears that ANXA8 may be characteristic of a mostly quiescent subpopulation of committed ERα−ve progenitor cells, which our triple-staining supports. However, rare Ki67+ve/ANXA8+ve cells could be detected (less than 1% of all ANXA8+ve cells) during puberty and this proportion increased at the start of pregnancy, but decreased again with prolonged pregnancy while Ki67 staining of ductal ANXA8−ve cells remained increased. Therefore, although ANXA8 expressing cells were mostly quiescent, these cells were not terminally differentiated. It is unclear whether the progeny of the ANXA8+ve cells contribute to the formation of side branches and alveoli. No ANXA8 staining was found in alveoli and c-kit staining was equally down-regulated. Though AnxA8 mRNA was recently detected in quiescent normal human mammary stem cells isolated from mammosphere cultures on the basis of PKH26-retention [46], we could not detect ANXA8 in mouse mammary stem cells. An association with very early mammary epithelial progenitor cells is supported by the recent finding that AnxA8 mRNA is expressed in the early developing mammary bud epithelium (E12.5) which showed limited proliferation and expressed c-kit [47], though in contrast to c-kit we were not able to detect ANXA8 protein by IHC at this time point (data not shown).
A recent finding that ANXA8 is part of the ADAM-17/AREG shedding complex and can modulate the shedding of pro-amphiregulin and other EGF family members on the cell surface [48], together with the finding that ANXA8 affects ligand-induced degradation of EGFR [31], raises the attractive possibility that ANXA8 may directly affect ligand availability for growth factor receptors and/or signalling, thereby controlling cell growth and/or differentiation. Since ADAM17 has also been shown to be a major sheddase for c-kit [49] and for growth factor ligands including kit-ligand [50], it is tempting to speculate that ANXA8 may directly affect c-kit signalling in the luminal progenitors of the mammary gland. It was also noticeable that the number of ANXA8 positive cells varied greatly between ductal areas within a gland and between glands at similar time points. Whether ANXA8+ve cells mark the sites of future secondary/tertiary branches is currently the subject of further investigation using a lineage tracing approach.

[Fig. 8 caption (partial): (B) Kim2A8 and Kim2RTS cells grown in chamber slides with or without 100ng/ml dox for six days were fixed and stained for Ki67 antigen. EGFP was used as a reporter of ANXA8 expression. (C) Graph showing the percentage of Ki67-positivity in the EGFP positive and negative populations of Kim2A8 cells grown with or without 100ng/ml dox. At least 1000 cells were analyzed in each population. (D) Western blot showing Ki67 and ANXA8 protein expression in cells after six days in culture. Actin was used as a loading control. Numbers show the relative intensities of ANXA8 and Ki67 bands respectively (normalised to actin), determined by measuring area pixel intensities using AIDA Image Analyzer software. The reduction of Ki67 levels (∼50%) is consistent with the reduced number of Ki67+ve dox-treated Kim2A8 cells seen in (B). doi:10.1371/journal.pone.0119718.g008]
Our current data are further consistent with other studies that have linked AnxA8 expression to reduced proliferative activity and/or quiescence. When quiescent NIH3T3 fibroblasts were driven into proliferation by transduction with an adenoviral E2F1 construct, Anxa8 was one of the most strongly down-regulated genes in two independent experiments [51]. Similar results for Anxa8 were obtained when NIH3T3 cells were transfected with Nanog, leading to increased proliferation and transformation and a reduction in AnxA8 mRNA [52]. In the fetal bovine growth plate, ANXA8 expression forms a gradient in which the highest expression is found in the low-proliferative hypertrophic zone [44], while in adult mouse stratified epithelia ANXA8 is expressed in supra-basal layers, suggesting that ANXA8 expression may be associated with partial differentiation [43], though our own studies identified ANXA8 in Ki67−ve cells of the basal layer (data not shown). Further, since ANXA8 expression was not detected in differentiating alveolar cells during pregnancy or lactation, it is highly unlikely to be associated with differentiation in the mammary gland.
Despite AnxA8 mRNA up-regulation early during involution, ANXA8 protein could not be detected in the early collapsing alveoli and was therefore unlikely to be involved in apoptosis, as our microarray profile might have suggested. Involution can be induced through forced weaning of the pups at the height of lactation, leading to widespread alveolar cell death and tissue remodeling, after which the mammary gland resembles a pre-pregnant-like mammary gland. Transcriptional microarray profiling identified Anxa8 mRNA as strongly increased 24 hours after enforced mammary gland involution, with sustained abundance for several days [29], though our qRT-PCR data now show a much slower increase. Similar increases can be found for c-kit and the kit ligand SCF (S9 Fig.), indicating that the involution recovery involves c-kit+ve progenitor cells and/or leads to a relative increase in c-kit+ve cells due to a preferential loss of differentiated alveolar cells. ANXA8 protein was not detected in the apoptotic alveoli 48 hours after enforced weaning (Fig. 1, S1 Fig.), but was detectable in major ducts and in most of the surviving epithelium after 10 days. Neither could ANXA8 be found in TEB during puberty, where apoptosis is prevalent during ductal lumen formation [53,54]. Our view that ANXA8 is not associated with cell death is further supported by our finding that ANXA8 over expression in Kim-2 cells did not induce cell death, as indicated by an unchanged sub-G1 fraction after ANXA8 over expression (Fig. 8). There was, though, some expression of ANXA8 in the collapsed epithelial structures 4 days after forced weaning, when most apoptosis has already ceased and tissue remodeling, with an immune response and suppressed inflammation, occurs [35,55,56]. It cannot be ruled out that ANXA8 expression in these structures may be associated with the PLA2-inhibitory activity described for many annexins, including ANXA8 [17], thereby supporting the inflammatory suppression described previously [35,56].
Several annexins, including annexins A1 [10,57], A2 [58,59] and A6 [60,61], have been found to modulate proliferation or to be directly involved in cell division, including annexin A11, which is part of, and necessary for, midbody formation during cytokinesis [5]. The inhibition of proliferation induced by ANXA1 and ANXA6 correlated with concomitant changes in the actin cytoskeleton and cell morphology. ANXA8 has previously been found to interact with F-actin in co-sedimentation assays and with PIP2, suggesting that ANXA8 may play a role in the regulation of actin/membrane interactions [62]. However, we did not detect any changes in the actin cytoskeletal structure in Kim2A8 cells (data not shown). The morphological changes we observed were similar to those reported for ANXA2 over-expression in MIO Müller cells [63], in which ANXA2 expression induced the cells to become flattened, but the increase in footprint size did not correlate with an increase in cellular volume when analysed by FACS. It has also been demonstrated that over expression of ANXA8 in SCK, MDA-MB231 and NIH3T3 cells can induce morphological changes towards a more epithelial-like morphology, and that these changes may be mediated through direct interaction of ANXA8 with FAK [23].
ANXA8's reduced expression in the majority of breast cancers is consistent with an anti-proliferative role of ANXA8. However, the finding that positive staining correlates with basal-like breast cancers that are of high grade and positive for Ki67 [29] is somewhat counterintuitive. Similar results have been described for the tumour suppressor protein p16/INK4, which is associated with cell cycle arrest and senescence but is strongly associated with basal-like breast cancers [64]. Further, the cell cycle protein cyclin E has been shown to induce cell cycle arrest via p27KIP accumulation in the mammary epithelial cell lines HC-11 and 184B5, though it increased proliferation in others [65,66], and is strongly expressed in basal-like breast cancers [67]. Since c-kit+ve/ERα−ve luminal progenitor cells have recently been shown to be the origin of basal-like breast cancers [42,68], and since ANXA8 is strongly associated with this subgroup, it is possible that ANXA8 expression in these cancers reflects ANXA8's close association with luminal progenitor cells, and that the pro-quiescence function of ANXA8 might be perturbed. A similar association of ANXA8 with committed human progenitor cells and cancer had previously been found in the haematopoietic system, where AnxA8 mRNA expression was detected in pro-myelocytes and was further up-regulated in APL [69], a leukaemia in which the c-kit+ve progenitor cell population is abnormal and expanded. AnxA8 was specifically up-regulated in APL but not in other myelocytic leukaemias [18,19,69]. Treatment with ATRA down-regulated ANXA8 expression and pushed cells into differentiation [19]. Though a causal link between ANXA8 down-regulation and differentiation of APL cells has not been tested, it is tempting to speculate that ANXA8 might be involved in progenitor cell maintenance. Further studies will reveal whether the same association exists between the ANXA8+ve cells of the terminal duct lobular unit [29] and c-kit in the human breast.
Given our results and the association of ANXA8 expression with ER−ve breast cancers it will be worth testing whether ANXA8 over-expression in ER+ve breast cancer cell lines can induce cellular quiescence or reduce proliferation. However, as the human genome contains at least two Anxa8 genes (Anxa8, Anxa8l1), which vary slightly in their encoding sequence, and in the absence of promoter expression studies, epigenetic analyses and human genotyping, it is impossible to know which one (if not both) of these genes contributes to the variations observed in gene expression in the normal and malignant breast. The existence of copy number and allelic population variants could further affect gene product "dosage" as well as the pharmacogenetic responsiveness of these genes [70]. Therefore, over-expression of each variant on its own and in combination in a variety of breast cancer cell lines will be necessary.
Conclusion
We have established for the first time that ANXA8 expression is associated with a subpopulation of transiently quiescent c-kit+ve/ERα−ve cells of the ductal epithelium and that ANXA8 over expression can induce quiescence in vitro. The mechanism(s) by which ANXA8 induces this G0-arrest is still unknown. Its expression is therefore strongly associated with a luminal epithelial progenitor cell population that is thought to be the origin of basal-like breast cancers, a subgroup of breast cancers with which ANXA8 is strongly associated. Further work will establish whether ANXA8 is functionally involved in progenitor cell quiescence and/or maintenance, and whether ANXA8 positive mammary epithelial cells may be the origin of ANXA8-expressing basal-like breast cancers.

[S8 Fig. caption: Kim2A8 cells were grown for two weeks in the presence of 100ng/ml dox as described in Fig. 7(C). Single cells or small colonies (<20 cells) of EGFP-positive Kim2A8 cells were detected after two weeks of growth. These cells showed a flat, large and round morphology. Images of typical colonies from Kim2A8 cells with or without dox treatment are shown. (TIF)]

[S9 Fig. caption: RNA expression of c-kit, SCF and AnxA8 during enforced involution. Microarray results from lactating (day 7) and involuting (days 1, 2, 3, 4, 20) mouse mammary glands from a previous study [35]. The graphs show the normalized average signal intensities for AnxA8, c-kit, and scf/kit ligand mRNAs ± standard error. (TIF)]
Layout Synthesis for Symmetrical Facades: Constraint-Based Support for Architects' Decision-Making
Introduction
The problem. Currently, buildings' energy consumption represents more than a third of the total energy consumption in developed countries [6,8,17]. One strategy for reducing this energy consumption lies in the thermal retrofit of buildings, achieved either by internal or external insulation [12]. Among several options [12], an external insulation may be based on covering the entire building with an envelope made of rectangular wooden panels [9,22]. However, some difficulties arise when targeting such retrofits at industrial scale, e.g. across a country. These difficulties include slow design based on by-hand configuration, human scheduling and craft assembly. Thus, it is essential to assist this massive retrofit of buildings with decision support systems [13]. Now, although a correct allocation of entities over a given surface can be found efficiently by a human, the same does not hold for finding all solutions or an optimal solution. In fact, finding an optimal solution among the set of possible solutions would require a significant amount of time if the wrong technique is used, due to the combinatorial properties of the problem. Simply stated, the problem is NP-hard. The definition is as follows. Given as input a rectangular facade surface and size limitations for rectangular panels, create a layout-plan solution to cover the entire facade. A layout-plan solution is an assignment of size (width and height) and position to each panel in such a way that all facade- and panel-related requirements are respected. Size limitations for panels result from manufacturing conditions and are translated into lower and upper bounds for panel width and height. The problem includes three characteristics never considered simultaneously: it deals with the allocation of an unfixed number of rectangular parameterizable panels that must not overlap, frames (existing windows and doors) must be overlapped by one and only one panel, and facades have specific areas providing certain load-bearing capabilities that allow panels to be attached. As far as we know, only one previous work by the authors, using a greedy approach [2], has been proposed to address this problem. That solution, however, generates only one valid solution for a given facade specification and does not involve optimality criteria.
Related work and contribution. Literature relevant to the problem can be found on layout synthesis [14] and rectangle packing [11]. The work in [23], where an apartment space is divided into a matrix in which a finite set of rooms with predefined sizes are allocated, inspires our solution. However, as it does not deal with an unfixed number of entities, it cannot be used to solve our problem. The same drawback is found in [4], where the authors present a constraint-based framework to tackle layout synthesis problems in a 2D reference plane. Although the geometrical entities have undetermined size, their number is known, and the framework is thus not appropriate for our problem. The two-dimensional packing literature also presents studies relevant to our problem [11]. Nevertheless, only a few studies tackle the problem of an undetermined number of variables (rectangles). In fact, such a scenario has been addressed using several approaches, such as new consistency methods [21], exploration techniques [3], or combinations of possibilistic [19] or weighted [7] constraint satisfaction to find solutions [10]. However, these kinds of methodologies are not implemented in most constraint programming environments, and thus additional expertise is required to use them. Regarding the geometrical constraint Geost [5], we consider it too complex for our needs, since we only deal with rectangular shapes. Also, given that our industrial application may evolve, we rely on our own constraint-based implementation.
The aim of this paper is to propose a first strategy for constraint-based layout synthesis over symmetrical facades. Then, we present a declarative model stating all constraints and the objective function, a process to generate compliant layout plans, and some experimental cases using a support system. The support system is able to generate different solutions for a given facade specification. The paper is divided as follows. In Section 2, the facade retrofit elements and our assumptions are introduced. In Section 3, the constraint satisfaction problem (CSP) definition of the problem is presented. In Section 4, the solving process and the construction procedure are introduced. Experimental cases, using a Java prototype [1] that relies on Choco [18] as the underlying solver, are provided in Section 5. Some conclusions are discussed in Section 6.

Facades. A facade (presented in Figure 1) is represented by a two-dimensional coordinate plane, with the origin of coordinates (0,0) at the bottom-left corner of the facade, and contains rectangular zones defining:

- The perimeter of the facade, with its size (height and width in meters).
- Frames (windows and doors), which play an important role as they are meant to be overlapped by one and only one panel. Frames are defined by an origin point (x,y) with respect to the origin of the facade, a width and a height (in meters).
- Supporting areas. As the layout problem must deal with a perpendicular space plan, gravity must be considered. It turns out that some areas of the facade have load-bearing capabilities that allow panels to be attached. Supporting areas have a well-defined origin point (x,y) with respect to the origin of the facade, a width and a height (in meters).

Rectangular panels. Panels (see Figure 2) are rectangular, of varying sizes, and may include different equipment (solar modules, window-holes, shutters, etc.). These panels are designed one at a time in the process of layout synthesis and manufactured in the factory prior to shipment and installation on the building site. Each panel has a well-defined:

- Size (height and width). The size is constrained by given lower and upper bounds, a consequence of working-site or manufacturing limitations.
- Thickness and insulation. The thermal performance of a given panel depends on several properties: size, thickness and insulation type. Note that the smaller the thickness of the panel, the better the quality of the insulation must be in order to reach the performance objectives.
- New frames (such as new doors and new windows). Given the internal structure of rectangular panels, new frames must respect a parameterizable minimum distance (∆) with respect to the panel's borders.
- Cost, depending mainly on size and attached equipment (in Euros).
- Thermal performance (in watts per square meter-kelvin, W·m⁻²·K⁻¹), depending on size, chosen thickness and insulation type.

Key limitations. As mentioned in the introduction, there are three key issues reflected from the industrial scenario, the unfixed number of panels being the most problematic one. Considering the internal structure of panels, partially overlapping a frame is forbidden, and frames' borders and panels' borders must be separated by a minimum distance denoted by ∆. Additionally, in order to attach panels, the corners of each panel must match a supporting area.
Assumptions. Our solution is conceived to tackle symmetrical facades, i.e., facades in which supporting areas are uniformly distributed over the surface and frames are symmetrically arranged, as shown in Figure 1(a). Two assumptions have been made for the present work. First, all supporting areas are strong enough that the problem only deals with the placement of panels' corners over supporting areas. In other words, there are unlimited load-bearing capabilities in the supporting areas and no capabilities in the remainder of the facade surface. Second, in order to use a simple yet intuitive objective function, we assume the panels' thickness to be constant and we consider only one type of panel insulation.
A CSP is described in terms of a set of variables V, a collection of potential values D for each variable, and a set of relations C over those variables, referred to as constraints [15,16]. A constraint is a relation representing partial information over the variables of the problem. A CSP solution is an assignment of values to each variable such that all constraints in C are satisfied. Thus, to give the constraint-based solution a clear focus and to formalize the facade-layout synthesis problem as a CSP, we have to define: decision variables describing the layout solution, and constraints describing the relations over the panels and the facade.
In addition, we present an objective function appropriate for the industrialization of the retrofit.
Constraint variables
We introduce the notation used in the model. Let F denote the set of frames and S the set of supporting areas. Let o_{e.d} and l_{e.d} denote the origin and size, respectively, of a given entity e in dimension d, with d ∈ {1, 2} (1 for the x-axis and 2 for the y-axis). For instance, o_{fr.1} denotes the origin on the horizontal axis and l_{fr.1} the width of frame fr. Additionally, lb_d and ub_d denote the size lower bound and size upper bound, respectively, in dimension d for all panels.
Intuitively, each panel is described by its origin point with respect to the facade origin and by its size. For convenience, let us assume that P is the set of panels composing the layout-plan solution. Then, each p ∈ P is defined by a pair ⟨o, l⟩, where o = (o_{p.1}, o_{p.2}) is the panel's origin and l = (l_{p.1}, l_{p.2}) its size.
Constraints
The following six constraints express the main relations among panels, and between panels and facade, that a layout solution must respect.
(a) Manufacturing and transportation limitations constrain a panel's size with given lower and upper bounds in one or both dimensions: lb_d ≤ l_{p.d} ≤ ub_d for every p ∈ P and d ∈ {1, 2}.
(b) Panels must not overlap: for two given panels p and q there is at least one dimension d in which their projections do not overlap, i.e., o_{p.d} + l_{p.d} ≤ o_{q.d} or o_{q.d} + l_{q.d} ≤ o_{p.d}.
(c) A given panel p must either reach the facade edge or leave enough space to fix another panel.
(d) Each frame on the facade must be completely overlapped by one and only one panel. Additionally, frames' borders and panels' borders must be separated by at least the minimum distance ∆.
(e) The entire facade surface must be covered with panels: $\sum_{i \in P} \prod_{d \in \{1,2\}} l_{i.d} = \prod_{d \in \{1,2\}} l_{fac.d}$.

(f) Panels' corners must be matched with supporting areas in order for the panels to be properly attached onto the facade. A verification sketch covering these constraints is given below.
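To make constraints (a), (b), (d), (e) and (f) concrete, here is a minimal verification sketch for a candidate layout. It is an assumption-laden illustration, not the authors' Choco model: the flat tuple representation, the function name `check_layout`, and the use of integer units are ours, and constraint (c) is omitted since its formal statement is not reproduced here.

```python
def check_layout(panels, facade, frames, supports, lb, ub, delta):
    """Check a candidate layout against constraints (a), (b), (d), (e), (f).

    Rectangles are (x, y, w, h) tuples in integer units; facade is (w, h);
    lb = (lb1, lb2) and ub = (ub1, ub2) are the panel size bounds."""
    fac_w, fac_h = facade
    # (a) size bounds in both dimensions
    for _, _, w, h in panels:
        if not (lb[0] <= w <= ub[0] and lb[1] <= h <= ub[1]):
            return False
    # panels must lie within the facade (needed for the area test below)
    for x, y, w, h in panels:
        if not (0 <= x and 0 <= y and x + w <= fac_w and y + h <= fac_h):
            return False
    # (b) pairwise non-overlap: projections disjoint in at least one dimension
    for i, (x1, y1, w1, h1) in enumerate(panels):
        for x2, y2, w2, h2 in panels[i + 1:]:
            if not (x1 + w1 <= x2 or x2 + w2 <= x1 or
                    y1 + h1 <= y2 or y2 + h2 <= y1):
                return False
    # (d) each frame fully inside exactly one panel, with margin delta
    for fx, fy, fw, fh in frames:
        covers = sum(1 for x, y, w, h in panels
                     if x + delta <= fx and fx + fw <= x + w - delta and
                        y + delta <= fy and fy + fh <= y + h - delta)
        if covers != 1:
            return False
    # (e) total panel area equals facade area (full coverage, given (b))
    if sum(w * h for _, _, w, h in panels) != fac_w * fac_h:
        return False
    # (f) all four corners of every panel lie on some supporting area
    def supported(px, py):
        return any(sx <= px <= sx + sw and sy <= py <= sy + sh
                   for sx, sy, sw, sh in supports)
    return all(supported(cx, cy)
               for x, y, w, h in panels
               for cx, cy in ((x, y), (x + w, y), (x, y + h), (x + w, y + h)))
```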
Objective function
In the industrial scenario, layout plans are ranked with respect to cost and thermal performance. To simplify computations, and assuming a constant thickness and a single insulation type, the cost and thermal performance are computed with respect to the panels' sizes only. Thus, the cost of a panel p is computed from the term (α − l_{p.1} − l_{p.2}), where α is a factor provided by the panels' manufacturer that depends on the panels' material and internal structure. Note that the term (α − l_{p.1} − l_{p.2}) decreases with the size of the panel, and thus manufacturing large panels is globally less costly than manufacturing small ones. Additionally, due to the thermal characteristics of the retrofit, the fewer the panel junctions the better: it is at panel junctions that the greatest thermal transfer occurs. In consequence, given the upper bound on panels' size, facades should be covered with panels as large as possible while respecting the architectural constraints, supporting areas, and manufacturing and working-site conditions. An optimization function that entails optimal cost and optimal thermal performance is therefore the maximization of the panels' areas: maximize $\sum_{p \in P} l_{p.1} \cdot l_{p.2}$.
The solving process
Dealing with an unfixed number of panels (i.e., of variables) is a prerequisite for solving the problem. Thus, the solution has been divided into two phases: first, find the structure of the layout plan by finding an appropriate number of panels to allocate (Section 4.1); second, launch constraint solving over the selected number of panels using the aforementioned declarative model (Section 4.2).
Providing structure to the plan
Given the lower and upper bounds for panel sizes and the facade size, we may compute the minimum and maximum number of panels that can be fixed on the surface. For instance, the theoretical minimum number of panels that can be fixed along a given dimension d is nb_d = l_{fac.d}/ub_d if l_{fac.d} mod ub_d = 0, and nb_d = ⌊l_{fac.d}/ub_d⌋ + 1 otherwise; that is, nb_d = ⌈l_{fac.d}/ub_d⌉. The minimum number of panels over the facade surface is the product of the minimum numbers of panels along each dimension; a small sketch follows.
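A small sketch of this computation, assuming integer units (e.g. centimeters, consistent with the integer domains used in Section 5); the facade dimensions below are illustrative.

```python
def min_panels(l_fac, ub):
    """Theoretical minimum number of panels along one dimension:
    l_fac / ub when ub divides l_fac exactly, floor(l_fac / ub) + 1
    otherwise (i.e., the ceiling of l_fac / ub)."""
    return l_fac // ub if l_fac % ub == 0 else l_fac // ub + 1

# Illustrative facade of 12.60 m x 10.91 m with ub1 = 7 m, ub2 = 3.5 m, in cm:
nb_h = min_panels(1260, 700)    # 2 panels minimum horizontally
nb_v = min_panels(1091, 350)    # 4 panels minimum vertically
print(nb_h, nb_v, nb_h * nb_v)  # 2 4 8 -> at least 8 panels overall
```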
Construction procedure
The main difficulty in solving the problem with Choco is that the number of panels is unknown. This is a difficulty because, like many other constraint programming environments [20], Choco requires a well-defined set of variables and constraints to execute its search. Our proposed procedure, presented in what follows, creates a matrix of panels to cover the facade. To do so, it uses the minimum number of panels that can be fixed over the facade surface to define the layout-plan structure, and then iteratively adds and constrains each panel with respect to the constraint knowledge and to the already placed panels. If the number of panels is not sufficient, the model is executed again with one additional panel.
Step 1: Compute the minimum number of panels that can be placed horizontally and vertically (nb_h, nb_v).

Step 2: Create an array of nb_h × nb_v constraint variables representing areas.

Step 3.9: Add the decision variables to the solution.

Step 4: Select a search strategy and apply search over the previous decision variables, maximizing the panels' areas.

An illustration of the procedure's behavior is shown in Figure 3. As noted in the procedure, the origin point (o_{p.1}, o_{p.2}) of the first panel in the layout is deterministically assigned to (0,0). Two reasons support this choice. First, there is always a supporting area at (0,0); otherwise the total facade surface could not be covered. Second, it avoids the use of symmetry-breaking constraints. In fact, as each panel is indistinguishable from the others, the procedure places each panel at the first available point next to the previously placed panels.
States from Fig. 3(b) to Fig. 3(f) present the results of applying the supporting-areas constraint (constraint (f)) and the frame-overlapping constraint (constraint (d)). From Fig. 3(g) to Fig. 3(k), the figure illustrates the relation between each panel and the previously placed panels that avoids overlapping. Finally, the state in Fig. 3(l) shows the complete layout plan. Note that each panel's width and height are set by the heuristic minDom_UB, which tries the maximum of their domains first.
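In outline, the two-phase procedure can be sketched as follows; this is our own Python-flavoured rendering (the actual system posts the model through Choco in Java), and make_model and solve are hypothetical placeholders:

```python
# Our outline of the two-phase procedure; make_model/solve stand in for
# posting the declarative constraints (a)-(f) and running the solver with
# an area-maximizing search strategy.
def build_layout(facade_w, facade_h, ub_w, ub_h):
    nb_h = -(-facade_w // ub_w)        # ceiling division: min panels per row
    nb_v = -(-facade_h // ub_h)        # min panels per column
    n = nb_h * nb_v                    # minimum layout-plan structure
    while True:
        model = make_model(n, facade_w, facade_h)  # post panel variables
        plan = solve(model, maximize="areas")      # and all constraints
        if plan is not None:
            return plan
        n += 1                         # not enough panels: add one and retry
```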
Experimental cases
The proposed procedure has been tested on several simulated facades and real facades. Figure 4 presents statistics for fictitious facades: execution time to first solution, number of solutions found within a time window of 10 seconds, and number of backtracks and explored nodes in the same window. As we are dealing with symmetrical facades, frames and supporting areas are uniformly distributed over the facade surface. Also, the facades have enough supporting areas that it is possible to attach an envelope, i.e., the problem has a solution. The panel configuration is: lb_1 = 0.9 m as width lower bound and ub_1 = 13.5 m as width upper bound; lb_2 = 0.9 m as height lower bound and ub_2 = 3.5 m as height upper bound. Additionally, as we are using integer domains, values are parsed to integers by means of simple arithmetic operations. From the graphs we deduce the following. As expected, the number of solutions found in 10 seconds decreases with the facade surface size. This is due to the increase in the number of relations as the number of panels increases. Additionally, the execution time depends on the relation between the facade size and the panels' upper bounds: when maximizing the area of each panel, the solver fails less, and thus backtracks less, when panels can be fixed using their size upper bounds. Recall that the same upper bounds have been used for all tests.
Nonetheless, regardless of the relation between facade size and panel size, the time to find a solution is competitive for the industrial scenario, where no real-time interaction is needed and the user (e.g., an architect) has enough time to run the system and select the appropriate panel-based envelope. The size of the facade in literal (a) of Figure 6 is l_{fac.1} = 12.598 m and l_{fac.2} = 10.907 m, and it has 16 frames (15 windows and 1 door).
Example 3. The layout plan in literal (b) is generated using panel upper bounds of ub_1 = 7 m and ub_2 = 3.5 m, with a structure of 4×2. The first solution is found in 0.023 s; 10 nodes were explored and 1 backtrack executed.
Example 4. Literal (c) shows a layout plan generated using ub_2 = 7 m and ub_1 = 3.5 m, so its matrix structure is 2×4 panels. The first solution is found in 1.071 s; 97,019 nodes were explored and 97,010 backtracks executed. As these results show, finding a compliant solution using vertical panels (i.e., height > width) takes more time, explores more nodes and backtracks more than using horizontal panels, because the possible attaching points (i.e., supporting areas) are more numerous. In fact, the presented facades have a horizontal supporting area approximately every 2.5 meters. Thus, using horizontally oriented panels on these facades permits only one possible supporting area per panel, whereas using vertical panels leads to several possible attaching points.
All the presented solutions were found by the Choco solver by invoking the findOptimalSolution method. By contrast, the system finds 2544 solutions when invoking the findAllOptimalSolutions method on the facade in literal (a) of Figure 5. Many of these solutions do not differ significantly, as they may differ by only one centimeter in a given panel. However, in order to assist architects' decision-making, it is important to present a reduced number of solutions and to leave to them, based on aesthetic aspects or other criteria, the choice of which solution to implement.
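One simple post-processing step for thinning such near-duplicate layouts (our suggestion, not part of the system described in the paper) is to keep a solution only if every panel differs from all previously kept solutions by more than a tolerance:

```python
# Our sketch: filter near-duplicate layout plans before presenting them.
# Each solution is a list of (x, y, w, h) tuples, one per panel, in a
# fixed panel order; tol is in millimetres (100 mm = 10 cm).
def distinct(solutions, tol=100):
    kept = []
    for sol in solutions:
        def far(ref):
            # largest coordinate difference between matching panels
            return max(abs(a - b) for p, q in zip(sol, ref)
                       for a, b in zip(p, q)) > tol
        if all(far(ref) for ref in kept):   # vacuously true for first sol
            kept.append(sol)
    return kept
```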
Conclusions
The reduction of energy consumption in buildings is recognized as an international key issue for the coming years. As the construction of new energy-efficient buildings is not enough to meet the demand, the retrofit of existing ones is a necessity. In addition, the conception and implementation of such retrofits must be supported by intelligent systems if efficient and optimal solutions are desired.
This work is part of a project that investigates the possibility of automated building retrofit based on rectangular panels and assisted by a support system. Within the project, a greedy solution has been proposed [2], but it is able to generate only one solution and applies no notion of optimality. We have presented our first approximation to solving the layout synthesis of symmetrical facades using a declarative approach, which includes: 1. a description of the (constraint) knowledge supporting this layout problem as a CSP; 2. a solution procedure that incorporates all of the identified constraints; and 3. an illustration of its behavior on an example from the pilot project site.
The constraint-based solution presented in this paper follows a two-phase approach to generate facade-layout plans: find the minimum number of panels to use, and then pass a declarative model to a constraint solver to generate layout plans. Layout solutions are always a matrix whose dimensions depend on the width and height of the facade and panels. Evidently, a matrix solution is a major drawback, given that it will generate valid solutions only for symmetrical facades. So, despite the benefits provided by constraint technology with respect to the number of solutions produced, efforts targeting asymmetrical facades appear to be an important follow-up of this work. A declarative solution for non-symmetrical facades is currently under development, as well as a methodology for presenting to architects a pertinent (with respect to aesthetic constraints) subset of all possible layout-plan solutions.
Constraint programming is well suited to this industrial problem because, on the one hand, its declarative view allows a clear knowledge representation and, on the other hand, building a prototype with an open constraint programming environment is much easier thanks to the pre-defined constraints, search strategies, and abstractions provided. We have shown that our solutions are consistent over symmetrical facades and can thus be used in the early stages of architectural design.
"Engineering",
"Computer Science"
] |
Understanding the Impact of High-Pressure Treatment on Physico-Chemical, Microstructural, and Microbiological Aspects of Pumpkin Cubes
In this study, the color, texture, starch–pectin content, total antioxidant capacity, microbial count, and microstructure of HPP-treated Violina pumpkin cubes were evaluated. Samples were treated at six different pressures (100 to 600 MPa; HPP100 to HPP600) for 3 min. Moisture, total soluble solids, and pH showed no significant differences between untreated (UNTR) and treated samples. Pumpkin tissue showed great structural modifications, such as changes in cell size and shape, cell wall damage, increased cell wall thickness, cell detachment and dehydration, and calcium ion deposition, mainly from HPP300 to HPP600. UNTR samples showed the highest values of maximum and minimum cell elongation and perimeter segment, and a more regular cell wall thickness, whereas HPP600 showed the lowest values for all these parameters. A noticeable difference was observed in HPP600 samples in terms of color (ΔE 11.3 ± 1.9) and hardness (87.4 ± 27.8 N) compared to the UNTR ones (194.9 ± 37.9 N), whereas treatments at other pressures changed the color and texture less markedly. HPP200 could ensure a higher amount of available starch and pectin, while HPP200 and HPP400 showed the highest total antioxidant capacity. High-pressure treatment from HPP400 to HPP600 gave the greatest destruction of microorganisms but negatively influenced the structural quality, as well as the texture and microstructure.
Introduction
Pumpkin belongs to the Cucurbitaceae family, and Cucurbita pepo L., Cucurbita maxima Duchesne and Cucurbita moschata Duchesne ex Poir. are the three most common species available worldwide [1]. These species are the most important in terms of quantity and spread in the world [2]. The importance of the pumpkin is also due to its high content of phytochemical compounds, such as polyphenols, carotenoids, antioxidants [3,4], and starch–pectin [5,6]. Pumpkin pectin has been studied because it exhibits some unique properties, such as the ability to form gels at lower concentrations than commercial citrus pectin [7]. In particular, the Violina rugosa squash cultivar, botanically classified as C. moschata Duchesne ex Poir., is a butternut squash heirloom variety. The fruits of this cultivar are 22-30 cm in length, about 2 kg in weight, and cylindrical in shape. Furthermore, Violina rugosa has a smooth, soft texture, a nutty flavor, and excellent storage capabilities, and it gets its name from its violin-like shape.
Fresh vegetables may undergo fast physiological deterioration, metabolic alterations, and microbiological degradation, which may compromise the product's qualitative attributes and safety. Consumption of contaminated fresh-cut vegetables has been linked to [...]

[...] and, on the same day, used for the texture and color analysis. The remaining six samples were subjected to HPP treatment.
The six samples were treated at the research institute Stazione Sperimentale Industria Conserve Alimentari (SSICA) using a 30 L Avure vertical machine (Model AV-S) at 20 °C, from 100 to 600 MPa for 3 min [22]. An indirect method for the generation of high isostatic pressure using cold water (4 °C) was used, and the temperature increase due to compression was not higher than 2-3 °C/100 MPa. The pressurization rate was about 100 MPa per 10 s. After the treatment, all HPP-treated samples were fixed in FAA solution for histological analysis, and the texture and color analyses were performed the next day. After treatment, all samples, including untreated ones, were stored at 4 °C for the other necessary evaluations. For each condition, 3 independent samples (in 3 independent bags) were collected.
pH, TSS, and Moisture
The pH was measured at 20 ± 1 °C using a pH meter (Elektronische Messgeräte GmbH & Co. KG, Berlin, Germany). The TSS (total soluble solids) was determined as °Brix at 20 ± 1 °C with a TDR095 table digital refractometer. The moisture content (g/100 g) of the pumpkin samples was evaluated by means of the gravimetric technique following the official method [29]. All readings were performed on 10 replicate samples.
Histological Analysis
The samples were preserved in FAA solution [30]. After 15 days, they were dehydrated with gradually increasing alcohol concentrations. The samples were embedded both in paraffin and in methacrylate resin (Heraeus Kulzer & Co., Wehrheim, Germany), and the resulting blocks were sectioned at 5 to 6 µm thickness for the paraffin blocks and 2 to 3 µm thickness for the resin blocks. The sections were cut with a Leitz 1512 semithin microtome (Leitz, Wetzlar, Germany). The sections were stained with toluidine blue (TBO) solution [30] for the evaluation of the general structural variation after each treatment, and potassium iodide solution [30] was used for the evaluation of the starch inclusions.
In addition, the sections were stained by the Von Kossa method (Bio-Optica Kit, Milano, Italy) to identify the presence of calcium inclusions in tissue sections. Von Kossa stains calcium inclusions black/pink and nuclei red. The fixed and stained sections were observed with a Leica DM 4000 optical microscope (Leica Imaging Systems Ltd., Wetzlar, Germany) equipped with a Leica DMC2900 digital camera (Leica Imaging Systems Ltd., Wetzlar, Germany). Image analysis was performed using the LAS v4.10.0 software (Leica Application Suite, Wetzlar, Germany), and the following parameters were analyzed: cell wall thickness, cell size and shape (µm), cell elongation (maximum-minimum), cell perimeter segment (µm), etc. For each treatment, three replicates were analyzed, and for each replicate, 10 different cells were analyzed.
Texture Analysis
Texture profiles were analyzed with a TA.XT2i texture analyzer equipped with a 35 mm diameter cylindrical aluminum probe, by means of a double compression with a test speed and post-test speed of 1 mm/s, up to 40% of the original sample height. The textural characteristics considered were: hardness (maximum peak force of the first compression cycle, N), cohesiveness (ratio of the positive force area during the second compression to that of the first compression, dimensionless), resilience (area during the withdrawal of the force divided by the area of the first force, dimensionless), and chewiness (product of hardness × cohesiveness × springiness, N) [31]. Ten replicates from each sample were analyzed at room temperature.
Extraction of Starch and Pectin
A wet-milling method [32] with some changes was used to extract the pumpkin starch. The milled pumpkin flesh (50 g) was steeped in 150 mL of 0.45% (w/w) Na2S2O5 solution at 4 °C overnight. The slurry was then filtered through a nylon screen (400 mesh). The filtrate was mixed and stirred with 150 mL of pure ethanol for 20 min. After stirring, the samples were centrifuged at 3388× g for 10 min, after which the starch fraction could be seen precipitated at the bottom of the centrifuge tubes. The upper layer was removed, and other impurities were scraped off using a spatula. Finally, the extracted starch was washed with water, centrifuged again (3388× g for 10 min) to collect it at the bottom, and then dried in an oven at 50 °C overnight. Analyses were carried out on one sample only.
The pumpkin pectin was extracted under acidic conditions [33] with slight modifications: 50 g of pumpkin pulp was suspended in 0.1 M HCl (500 mL) with stirring for 2 h at 65 °C and filtered through a nylon screen (200 mesh). The filtrate was cooled down to ambient temperature, mixed with three times its own volume of 96% ethanol, and left overnight (16 h) for pectin precipitation. After centrifugation at 3388× g for 10 min, the precipitated pectin was recovered and washed with acidified aqueous alcohol (10 mL HCl in 1 L of 70% v/v ethanol), washed again with pure ethanol, pressed, and finally dried in a current of warm air (40-50 °C). Analyses were carried out on one sample only.
Total Antioxidant Capacity (TAC)
The DPPH (2,2-diphenyl-1-picrylhydrazyl free radical) test was used to measure antioxidant capacity in accordance with Zhou et al. [18]. The pumpkin pulp was centrifuged at 15,750× g for 15 min at 4 °C. Then, 0.2 mL of the 10-fold diluted supernatant was mixed with 4.0 mL of a methanolic solution of DPPH (0.14 mmol/L). The solution's absorbance was assessed at 517 nm following a 70 min incubation period in the dark at room temperature. Analyses were carried out in duplicate.
The calibration curve, which was created by measuring the absorbance at 517 nm of Trolox methanolic solutions at various concentrations, was used to calculate the TEAC value (Trolox equivalent antioxidant capacity; mM Trolox/100 g) of the samples.
Microbiological Analysis
Decimal dilutions of 10 g pumpkin samples were prepared in sterile 0.1% (w/v) peptone solution. Aerobic total counts were measured on plate count agar (PCA) (Merck) at 30 °C for 72 h. Lactic acid bacteria were determined on Man, Rogosa, and Sharpe agar (MRS) at 30 °C for 48 h. Yeasts and molds were counted on yeast extract, dextrose, and chloramphenicol (YEDC) agar at 30 °C for 72 h. All microbial counts were reported as log colony-forming units (CFU) per g of sample weight (log CFU/g). Three repetitions were performed for each sample.
Statistical Analysis
Means and standard deviations were calculated with the SPSS statistical software (Version 26.0, SPSS Inc., Chicago, IL, USA). SPSS was used to verify significant differences between data by one-way analysis of variance (ANOVA), followed by Tukey's post hoc test at p < 0.05 to identify differences among samples.
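For reference, the same workflow can be reproduced outside SPSS; the sketch below (our own, with hypothetical replicate values) runs a one-way ANOVA and Tukey's post hoc test in Python:

```python
# Our sketch of the statistical workflow; the replicate values here are
# illustrative, not the study's data.
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

hardness = {"UNTR": [194.9, 190.1, 201.3],      # hypothetical replicates
            "HPP200": [180.2, 175.9, 183.0],
            "HPP600": [87.4, 92.1, 80.5]}
groups = list(hardness.values())
print(f_oneway(*groups))                         # one-way ANOVA
values = [v for g in groups for v in g]
labels = [k for k, g in hardness.items() for _ in g]
print(pairwise_tukeyhsd(values, labels, alpha=0.05))  # Tukey post hoc
```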
pH, TSS, and Moisture
Values of pH ranged from 6.15 to 6.58, total soluble solids ranged from 11.30 to 11.90 °Brix, and the moisture content ranged from 86.02 to 87.22 g/100 g (data not shown). The data obtained did not show statistically significant (p > 0.05) differences among samples. Zhou et al. (2014) [18] reported similar findings, concluding that there were no significant differences in pH and SSC (soluble solids content) following HPP treatment.
Histological Analysis
The microstructure of the pumpkin samples appeared to change after the HPP treatments. In the untreated samples (UNTR), the inner parenchyma (mesocarp) is composed of isodiametric cells with thin cell walls. The mesocarp is composed of thin-walled big and small cells with large intercellular spaces (is) (Figure 1). In the inner parenchyma, vascular bundles (vb) are present, surrounded by small parenchymatic cells. In the mesocarp, starch granules (S) are observed near the vascular bundles (vb), but in small quantities (Figure 2). The cells varied in shape between elongated and circular: their smallest diameter ranged from 57.2 to 72.2 µm and their largest diameter from 71.3 to 88.9 µm. The UNTR sample showed a more organized cell distribution, uniform size and shape, and a higher degree of cell-to-cell contact throughout the tissue (Figure 1A). The same was observed in HPP100 samples (data not shown). In HPP200, few changes were observed (Figure 1B), mainly related to the decrease in cell turgor (dehydration, d), and some cells showed signs of cell detachment (cd). Several authors have reported an increase in cell wall thickness (cwt, swelling), cell damage (cd), and dehydration (d) after pressure treatments on different fruits and vegetables [14,21,26,34-36]. As the intensity of the treatment increased, the structure of the tissues changed, but the modification appears mild up to HPP300 (Figure 1C); at higher pressures, the tissues are more affected by the treatment, with more marked modifications (Figure 1D-F).
Histological analysis indicated that in HPP400 samples the parenchyma showed evident damage compared to less intense treatments (i.e., UNTR, HPP100, HPP200, and HPP300). In this condition, the damage is evident throughout the tissue, regardless of cell size (Figure 1D). The HPP400 samples showed important changes; indeed, broken cells and cells with increased cell wall thickness (swelling cwt) were observed. In these cells, the membrane appeared destroyed, as observed in HPP300 samples (Figure 1C).
Another consequence of the treatment is the formation of gaps; a similar observation was reported by Hu et al. [37], and in carrots after the application of 300 and 400 MPa by Trejo Araya et al. [21]. Prestamo and Arroyo [25] noticed that in cauliflower and spinach leaves, the application of HPP at 400 MPa caused cellular structure changes and membrane folding. This effect was also observed in our study (Figure 1D). After treatment at HPP500 (Figure 1E), the parenchyma cells showed plasmolysis, gaps, and increased cell wall thickness (cwt). Several authors [14,37-39] hypothesized that cell separation is due to the breakage of chemical bonds between the pectic components of the middle lamellae of adjacent cells and/or to the hydrolysis of other cell wall components such as pectin, hemicelluloses, and cellulose.
HPP600 samples showed the greatest changes in structure (Figure 1F). The damage is comparable to that described for the samples subjected to lower pressures, but of greater intensity. The first consequence of the treatment was the change in cell shape, as observed by Knockaert et al. [10] in carrots (Figure 1F). Other changes were cell breakage and damage (cd); similar observations were reported by Xu and Han [23], Oliveira et al. [35], and Paciulli et al. [14,39]. Another impact concerned calcium ion (ci) deposition in the tissues. Calcium ions are a fundamental component of the cell walls; in fact, they are responsible for cell-cell adhesion. Calcium ions bind the pectic substances between two adjacent cells, providing compactness to plant tissues and organs. In our study, the evaluation of the presence of calcium ion inclusions is important as an indirect measure of cell separation in the mesocarp parenchymatic tissue. UNTR and HPP100 samples showed a scarce presence of calcium inclusions (Figure 3A,B), but from HPP200 onward the number of calcium inclusions (ci) increased (Figure 3C-F). The most marked calcium ion (ci) deposition was observed in HPP600 (Figure 3F), mainly due to the liberation of calcium from the middle lamella, where it is previously bound in the pectin network [12,24]. Our study reveals that as pressure increased, calcium ions were liberated as a result of cell separation, and their presence in the cells increased. This result is supported by the histological analysis, where it was possible to observe a greater separation of the cells in the samples treated at higher pressures (Figure 3C-F). Moreover, the use of the Von Kossa stain allowed us to evaluate even slight effects: the results show that slight calcium accumulations are observed even at HPP100 (Figure 3B), which means that the cell separation process begins at low pressures as well. In confirmation of our results, some authors asserted that the leaching of calcium ions also occurs due to PME activity [10,27], which consequently causes texture and microstructural changes [24,28]. Similar results were reported in [26] and by Trejo Araya et al. [21] on carrots. All authors concluded that after treatment with high pressures, the cells changed their shape: from regular and rounded they became irregular and elongated, with consequent changes in their geometric characteristics. In our study (Table 1), the UNTR samples showed the highest values of maximum and minimum cell diameter (88.9 and 72.2 µm, respectively), and with increasing pressure there is a more marked variation in cell shape and size. Samples treated at HPP500 and HPP600 exhibited very low maximum and minimum cell diameters (71.3 and 57.2 µm, respectively), due to tissue damage after the high-pressure treatments, as seen in Figure 1C,D. Another observation obtained by image analysis concerns the cell perimeter. As with cell size, the perimeter of the cells also changed with increasing pressure (Table 1). Our results confirm what Zhang et al. [40] observed in asparagus lettuce cells. A few papers [21,34,39] reported a variation in cell wall thickness, or swelling, after HPP treatments. In our study, the cell wall thickness of UNTR samples was uniform, with a thickness of 1.51 ± 0.12 µm (Table 1).
In Table 1 it is possible to observe a significant increase in the thickness of the cell wall in the treated samples only above 400 MPa, and the greatest values are observed at the highest pressure (600 MPa); this could be explained by the sequestration of cell liquids by cell wall components during pressurization and the consequent swelling-induced gelation [14,39].
Colorimetric Analysis
The color parameters (L*, a*, and b*) of the pumpkin samples are reported in Table 2. The UNTR sample had the highest values for all colorimetric parameters, with a great color difference compared to the HPP-treated ones (Table 2). Treatment with high pressures led to a decrease in the color of the pumpkin parenchyma; in fact, our results showed a reduction (p < 0.05) in L*, a*, and b* values for all treatments. A significant change was induced when pressures above HPP400 were applied, showing that, in our conditions, extra pressure may not be a good choice for vegetable processing. Similar observations were made by several authors [14,18,37]. Oey et al. [20] noted that the decrease in color intensity was caused by oxidation, which leads to a decrease in red and yellow. According to the authors, the changes in color also appear to be linked to enzymatic activity and the isomerization of β-carotene. The dynamics and modes of occurrence of the effects described above may depend on the condition of the raw material; indeed, in pumpkin puree, Contador et al. [11] found little impact on color after HPP at 400 and 600 MPa for 300 s. This suggests that high pressures must be applied in different ways and for different times depending on the state of the raw material.
The numerical value of ΔE (Table 2) can be used to categorize color differences into distinct categories. A noticeable difference was observed in the HPP600 samples (11.3 ± 1.9); this result indicates that the treatment at HPP600 is the one that most modifies the color of the pumpkin samples. The other treatments changed the color of the pumpkin samples, but less markedly (Table 2).
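For readers who wish to reproduce the ΔE values, the color difference is presumably the standard CIE76 Euclidean distance in L*a*b* space (an assumption on our part, as the paper does not give the formula):

```python
# CIE76 color difference (our assumption about the metric used in Table 2).
def delta_e(lab1, lab2):
    return sum((a - b) ** 2 for a, b in zip(lab1, lab2)) ** 0.5

# hypothetical L*, a*, b* triplets for an untreated and a treated sample
print(delta_e((60.1, 10.2, 45.3), (52.0, 6.1, 38.5)))
```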
Textural Parameter
The textural parameters of the pumpkin samples are reported in Table 3. The highest hardness values (194.9 ± 37.9 N) were obtained from the UNTR samples, as expected. After high-pressure treatment, the hardness of the samples tended to decrease (Table 3). The other textural characteristics, such as resilience, cohesiveness, springiness, and chewiness, also indicate better texture retention in UNTR than in HPP-treated samples (Table 3). Kato et al. [24] and Prestamo and Arroyo [25] stated that the softening of the texture and the decrease in hardness of plant tissue are caused by cell wall breakdown, cell rupture, degradation of pectin, and loss of turgor pressure induced by high pressure. A similar effect on the microstructure was noticed by Trejo Araya et al. [21] and Zhang et al. [40]. In this study, the histological analysis showed cell detachment (cd), middle lamella separation, and dehydration (d) (Figure 1B,D), which are related to the loss of firmness after high-pressure treatment. HPP100 and HPP200 samples presented substantially the same hardness values, which were, however, higher than those of the HPP300 and HPP400 samples, confirming the onset of texture reduction and structure disruption noticed in the histological analysis (Figure 1C). Zhang et al. [40] noted that moderate pressures (100-300 MPa) caused an initial texture loss in asparagus lettuce, probably due to the loss of turgor pressure and a loosened cell wall skeleton. On the contrary, Michel and Autio [42] observed no further significant hardness and firmness reduction in carrot at 300 MPa. Basak and Ramaswamy [17] observed 4% and 34% hardness loss by instantaneous pulse softening (IPS) at 100 MPa and 200 MPa/10 min, respectively, but no tissue recovery was observed at low pressures. Our results revealed that at lower pressures (HPP200-300) the sample hardness was not significantly affected, whereas it was affected at higher pressures (HPP500-600). As pressure increases, hardness decreases and the activity of PME is enhanced [22], which has a substantial impact on cell damage, breakdown of the cell wall structure, release of pectin and calcium, reduced cell-to-cell adhesion, and cell dehydration. Similar results were obtained on cherry tomato [43] and asparagus lettuce [27,44] at elevated pressures.
This outcome was more apparent for the HPP600 samples, probably on account of the greater structural degradation observed just after the treatment (Figure 1F). According to Zhou et al. [18], the hardness of pumpkin slices decreased by 47.4, 42.8, and 32.3% after treatments such as 450 MPa/15 min and 550 MPa/10 min. In another study, it was reported that fresh pumpkin processed by HPP at 200 MPa had better texture retention than pumpkin treated at 400 and 600 MPa [39], and that 600 MPa/5 min led to significant changes in the firmness of potato, cocoyam, and Peruvian carrot [35].
Regarding the resilience values, the UNTR sample presented the highest value (37.5 ± 8.1), while HPP600 and HPP500 presented the lowest values (21.6 ± 8.6), confirming the microstructural results. Cohesiveness values of the UNTR samples (0.6 ± 0.09) were significantly higher than those of all the HPP-treated ones. In accordance with Hu et al. [37], among the treated samples the HPP100 ones had the highest cohesiveness value (0.5 ± 0.04), due to the low tissue damage; cohesiveness then decreased, with no further significant differences among samples. After high-pressure processing, springiness is reduced, and there are no further variations between HPP200 and HPP600. Overall, pressure-treated pumpkin cubes showed (p < 0.05) loss of hardness, cohesiveness, resilience, springiness, and chewiness compared to UNTR (Table 3). A similar observation was made by Sun et al. [44] on carrot samples.
Starch and Pectin Availability
In Figure 4, the percentages of starch and pectin from fresh pumpkin flesh are reported. The percentage of starch ranged from 0.32 to 1.42 g/100 g. In the pumpkin parenchyma, the starch content (S) is very low; this is also confirmed by the histochemical observations (Figure 2), where it can be seen that the starch inclusions are few and present only near the vascular bundles (vb). These data are broadly in accordance with Yuan et al. [6], who observed that the starch content of three fresh pumpkin cultivars (Yinli, Heili and Miben) was 6.47, 1.05 and 3.13 g/100 g (fresh weight; FW), respectively. The percentage of pectin in the pumpkin samples ranged from 2.47 (UNTR) to 5.18 (HPP600) g/100 g. The data indicate that as pressure increases, the amount of pectin extracted increases. Our results are in accordance with Kato et al. [24] and Moelants et al. [45], who observed that pectin can be solubilized from cell walls when high pressure is applied. The same observations were reported by Sun et al. [27] on asparagus lettuce. Furthermore, recent studies reported that high pressure (600 MPa) can lead to strong gelling of pectin [38,46]. In fact, our results showed that after treatment the thickness of the cell wall increases (cwt) (Figure 1F and Table 1), probably due to pectin gelification. The content of starch and pectin varies with many factors, such as species, maturity, storage period, and the extraction method used.
Total Antioxidant Capacity (TAC)
In Figure 5 it is possible to observe the total antioxidant capacity (TAC) of the analyzed samples: UNTR showed a value of 2.9 mmol/100 g, while the highest TAC values were observed in the samples treated at HPP200 and HPP300 (Figure 5). Oey et al. [20] stated that the increase in antioxidant capacity after high pressure thermal treatment (HPTP) appears to be due to the better extractability of the antioxidant components from the plant matrix, because the composition of antioxidant molecules varies in each vegetable, and the stability of these components under high-pressure treatment determines the overall outcome. In addition, Paciulli et al. [39] observed that pumpkin samples of two different species increased in TAC after HPP treatment. The same authors observed that the response of the two species is different, even if the trend of the TAC is similar for both. This suggests that the variation of antioxidant capacity after different pressures depends on the plant matrix and the composition of the cellular components. In confirmation of this, the same authors [14] reported that the antioxidant activity was significantly reduced, by more than 70%, when HPP at 400 MPa for 5 min and at 600 MPa for 1 min was applied to zucchini slices. In our study, we observed that at the intermediate pressures HPP200 and HPP300, more antioxidant compounds were released from the disrupted pumpkin tissue. On the contrary, a reduced antioxidant capacity was observed at HPP400 and HPP600, probably as a result of fewer antioxidants being present or of the breakdown of antioxidant compounds.

Microbiological Analysis

Table 4 shows the results of the microbiological counts of UNTR and HPP-treated pumpkin samples. Pressure inhibits protein synthesis, denatures enzymes, and reduces lipid membrane fluidity in vegetative microbial cells [47]. The main purpose of this analysis was to assess the efficacy of the treatment in inactivating microorganisms. The starting microbial load of the UNTR samples was 3.24 log CFU/g on PCA, 2.64 log CFU/g on MRS, and 3.04 log CFU/g on YEDC. UNTR and HPP100 samples had comparable microbial loads on all tested media. The treatment effect begins to be appreciable from HPP200, although drastic reductions in microbial load are observed from HPP300 (1.23 log CFU/g on PCA, 1.64 log CFU/g on MRS, and 2.04 log CFU/g on YEDC). Finally, from HPP400 to HPP600, the microbial load was lower than 1 log CFU/g on all tested media. Zhou et al. [18] reported that 450-550 MPa achieved the inactivation of total aerobic bacteria. Generally, yeasts and molds are very sensitive to HPP. Chen et al. [48], Wang et al. [49], and Gao et al. [12] reported that HPP at 400 MPa reduced the populations of yeasts and molds below the detection limit in pomegranate juice, purple sweet potato nectar, and strawberry, respectively. Houška et al. [50] reported that in broccoli juice, a pressure of 500 MPa inactivates more than five log of the microbial population.
Conclusions
In this research, high-pressure treatment was applied to fresh pumpkin samples, and its effect on physico-chemical and microstructural parameters was evaluated. It was evident that the greatest microstructural changes in the vegetable cells occurred at higher pressures, especially from 300 to 600 MPa. The pumpkin microstructure was studied in terms of cell elongation, perimeter segment, and cell wall thickness, and it varied with the applied pressure. New evidence in the present study showed pectin conversion as pressure increased, with calcium ions playing a key role in pumpkin texture modifications. The colorimetric parameters decreased with pressure compared to the untreated samples, and a significant texture loss (p < 0.05) was observed at 500 and 600 MPa. Treatment at 200 MPa could ensure a higher availability of starch-pectin and antioxidant components.
A longer shelf life is expected from treatments at HPP400 to HPP600, indicating that the higher pressures are recommended to ensure microbial inactivation. Future studies should evaluate the kinetics of HPP on Violina squash to determine exactly the process time and temperature for the desired product quality.
"Environmental Science",
"Materials Science"
] |
Zero-sum squares in $\{-1, 1\}$-matrices with low discrepancy
Given a matrix $M = (a_{i,j})$, a square is a $2 \times 2$ submatrix with entries $a_{i,j}$, $a_{i, j+s}$, $a_{i+s, j}$, $a_{i+s, j +s}$ for some $s \geq 1$, and a zero-sum square is a square where the entries sum to $0$. Recently, Ar\'evalo, Montejano and Rold\'an-Pensado proved that all large $n \times n$ $\{-1,1\}$-matrices $M$ with discrepancy $|\sum a_{i,j}| \leq n$ contain a zero-sum square unless they are split. We improve this bound by showing that all large $n \times n$ $\{-1,1\}$-matrices $M$ with discrepancy at most $n^2/4$ are either split or contain a zero-sum square. Since zero-sum square free matrices with discrepancy at most $n^2/2$ are already known, this bound is asymptotically optimal.
Introduction
A square S in a matrix M = (a_{i,j}) is a 2 × 2 submatrix of the form
$$S = \begin{pmatrix} a_{i,j} & a_{i,j+s} \\ a_{i+s,j} & a_{i+s,j+s} \end{pmatrix}.$$
In 1996 Erickson [11] asked for the largest n such that there exists an n × n binary matrix M with no squares which have constant entries. An upper bound was first given by Axenovich and Manske [2] before the answer, 14, was determined by Bacher and Eliahou in [3].
Recently, Arévalo, Montejano and Roldán-Pensado [1] initiated the study of a zero-sum variant of Erickson's problem. Here we wish to avoid zero-sum squares, squares with entries that sum to 0.
Zero-sum problems have been well-studied since the Erdős-Ginsburg-Ziv Theorem in 1961 [10], which says that any set of 2n−1 integers must contain a set of n integers which sum to 0 modulo n. Much of the research has been on zero-sum problems in finite abelian groups (see the survey [12] for details), but problems have also been studied in other settings such as on graphs (see e.g. [5,6,7,9]). Of particular relevance is the work of Balister, Caro, Rousseau and Yuster in [4] on submatrices of integer valued matrices where the rows and columns sum to 0 mod p, and the work of Caro, Hansberg and Montejano on zero-sum subsequences in bounded sum {−1, 1}-sequences [8].
Given an n × m matrix M = (a_{i,j}), define the discrepancy of M as the sum of the entries, that is,
$$\operatorname{disc}(M) = \sum_{i=1}^{n} \sum_{j=1}^{m} a_{i,j}.$$
We say a square S is a zero-sum square if disc(S) = 0, or equivalently, a_{i,j} + a_{i,j+s} + a_{i+s,j} + a_{i+s,j+s} = 0.
We will be interested in {−1, 1}-matrices M which do not contain any zero-sum squares. Clearly, matrices with at most one −1 cannot contain a zero-sum square and, in general, there are many such matrices when the number of −1s is low. But what happens if there are a similar number of 1s and −1s? In particular, what happens if the matrix M is itself zero-sum?
An n × m {−1, 1}-matrix M = (a_{i,j}) is said to be t-split for some 0 ≤ t ≤ n + m − 1 if the entries on the first t anti-diagonals all equal 1 (i.e., a_{i,j} = 1 whenever i + j ≤ t + 1) and the remaining entries all equal −1. Note that when t = 0 the matrix consists entirely of −1 entries, and when t = n + m − 1 the matrix consists entirely of +1 entries. We say a matrix M is split if there is some t such that a t-split matrix N can be obtained from M by applying vertical and horizontal reflections. Split matrices are of particular interest since they can have low absolute discrepancy, yet they never contain a zero-sum square. However, it is not hard to check that an n × n split matrix cannot have discrepancy 0, and it may still be the case that a zero-sum matrix M must contain a zero-sum square.
This was confirmed by Arévalo, Montejano and Roldán-Pensado in [1]. In fact, they proved that, except when n ≤ 4, every n × n non-split {−1, 1}-matrix M with |disc(M)| ≤ n has a zero-sum square. They remark that it should be possible to extend their proof to give a bound of 2n, and they conjecture that the bound Cn should hold for any C > 0 when n is large enough relative to C.
Conjecture 1 (Conjecture 5 in [1]). For every C > 0 there is an integer N such that whenever n ≥ N the following holds: every n × n non-split {−1, 1}-matrix M with |disc(M)| ≤ Cn contains a zero-sum square.

Let f(n) be the absolute value of the minimum discrepancy of a non-split {−1, 1}-matrix with no zero-sum squares. Arévalo, Montejano and Roldán-Pensado proved that f(n) ≥ n + 1 for all n ≥ 5, and the conjecture would imply that f(n) = ω(n). We improve the lower bound on f to ⌊n²/4⌋ + 1 (for all n ≥ 5), showing that f = Ω(n²); this is our main result, Theorem 2.
The best known construction for a non-split matrix with no zero-sum squares has discrepancy close to n 2 /2, about twice the lower bound given here, and our computer experiments suggest that this construction is in fact optimal. Although the lower bound now only differs from the upper bound by a constant factor, closing the gap between the upper and lower bounds remains a very interesting problem and we discuss it further in Section 3.
To show that every zero-sum n × n {−1, 1}-matrix with n ≥ 5 contains a zero-sum square, Arévalo, Montejano and Roldán-Pensado prove that a small t′-split submatrix M′ determines many entries of the matrix M, and their proof leads to the following lemma. An example application is shown in Figure 1.
Lemma 3. [...] and suppose t ≤ n. The submatrix [...]

Furthermore, both a_{i,j} = 1 and a_{j,i} = 1 whenever T < j ≤ T + t − 2 and one of the following holds: [...]. Note that we can apply this lemma even when it is a reflection of M′ which is t-split; we just need to suitably reflect M and potentially multiply by −1, and then undo these operations at the end. The matrix N will always contain at least one of a_{1,1}, a_{1,n}, a_{n,1} and a_{n,n}, and if N contains two, then M is split.
We will also make use of the following observation. This will be used in conjunction with the above lemma to guarantee the existence of some additional 1s, which allows us to show a particular submatrix has positive discrepancy.
Observation 4. Let M be an n × n {−1, 1}-matrix with no zero-sum squares, and suppose that a_{i,i} = 1 for every i ∈ [n]. Then, for every pair i < j, at least one of a_{i,j} and a_{j,i} is 1; otherwise the square with entries a_{i,i}, a_{i,j}, a_{j,i} and a_{j,j} would sum to 0. In particular, M contains at least n(n + 1)/2 entries equal to 1, and so disc(M) ≥ n.

Figure 1: The entries known from applying Lemma 3. The yellow squares represent −1s and the blue squares represent 1s. The submatrix M′ is shown in a darker shade.

The final lemma we will use to prove Theorem 2 is a variation on Claim 11 from [1]. The main difference between Lemma 5 and the result used by Arévalo, Montejano and Roldán-Pensado is that we will always find a square submatrix, which simplifies the proof of Theorem 2.

Lemma 5. Let n ≥ 8 and let M be an n × n {−1, 1}-matrix with |disc(M)| ≤ n²/4. Then M contains an n′ × n′ submatrix M′ with (n − 1)/2 ≤ n′ ≤ (n + 1)/2 and |disc(M′)| ≤ (n′)²/4.
Proof. We only prove this in the case where n is odd, as the case where n is even is similar, though simpler. Partition the matrix M into 9 regions as follows. Let the four (n − 1)/2 × (n − 1)/2 submatrices containing a_{1,1}, a_{1,n}, a_{n,n} and a_{n,1} be A_1, ..., A_4 respectively. Let the (n − 1)/2 × 1 submatrix between A_1 and A_2 be B_1, and define B_2, B_3 and B_4 similarly. Finally, let the central entry be B_5. The partition is shown in Figure 2a.
As these regions partition the matrix M, we have
$$\operatorname{disc}(M) = \sum_{i=1}^{4} \operatorname{disc}(A_i) + \sum_{j=1}^{5} \operatorname{disc}(B_j). \qquad (1)$$
Let the overlapping (n + 1)/2 × (n + 1)/2 submatrices containing a_{1,1}, a_{1,n}, a_{n,n} and a_{n,1} be A′_1, ..., A′_4 (as indicated in Figure 2b). The submatrices B_1, ..., B_4 each appear twice in the A′_i and B_5 appears four times and, by subtracting these overlapping regions, we obtain a second equation for disc(M):
$$\operatorname{disc}(M) = \sum_{i=1}^{4} \operatorname{disc}(A'_i) - \sum_{j=1}^{4} \operatorname{disc}(B_j) - 3\operatorname{disc}(B_5). \qquad (2)$$
If |disc(A_i)| ≤ (n − 1)²/16 or |disc(A′_i)| ≤ (n + 1)²/16 for some i, taking A_i or A′_i respectively, we are done, so we may assume that this is not the case. First, suppose that disc(A_i) > (n − 1)²/16 and disc(A′_i) > (n + 1)²/16 for all i = 1, 2, 3, 4. Since n − 1 is even and disc(A_i) ∈ Z, we must have disc(A_i) ≥ (n − 1)²/16 + 1/4, and similarly, disc(A′_i) ≥ (n + 1)²/16 + 1/4. Adding the equations (1) and (2) we get the bound
$$2\operatorname{disc}(M) \geq 4\left(\frac{(n-1)^2}{16} + \frac{(n+1)^2}{16}\right) + 2 - 2\operatorname{disc}(B_5) = \frac{n^2+1}{2} + 2 - 2\operatorname{disc}(B_5),$$
which, as disc(M) ≤ n²/4, reduces to disc(B_5) ≥ 5/4. This gives a contradiction since B_5 is a single entry. Similarly we get a contradiction if, for every i, both disc(A_i) < −(n − 1)²/16 and disc(A′_i) < −(n + 1)²/16. This only leaves the case where two of the 8 submatrices have different signs. If disc(A′_i) > (n + 1)²/16 then, since A′_i consists of A_i together with at most n further entries, for n ≥ 8 we have
$$\operatorname{disc}(A_i) > \frac{(n+1)^2}{16} - n \geq -\frac{(n-1)^2}{16},$$
and either |disc(A_i)| ≤ (n − 1)²/16, a contradiction, or disc(A_i) > 0. By repeating the argument when disc(A′_i) is negative, it follows that A_i and A′_i have the same sign for every i. In particular, two of the A′_i must have different signs, and we can apply an interpolation argument as in [1].
Armed with the above results, we are now ready to prove our main result, but let us first give a sketch of the proof which avoids the calculations in the main proof.
Sketch proof of Theorem 2. Assume we have an n × n {−1, 1}-matrix M with no zero-sum squares and with |disc(M)| ≤ n²/4. We will prove the result by induction, so we assume that the result is true for 5 ≤ n′ < n.
Applying Lemma 5 gives a submatrix M′ with low discrepancy. Since M′ also contains no zero-sum squares, we know by the induction hypothesis that it is split. Applying Lemma 3 then gives a lot of the entries of M and, in particular, a submatrix N with high discrepancy. Since we are assuming that M has low discrepancy, the remainder M \ N of M not in N must either have low discrepancy or negative discrepancy. In both cases we will find B, a submatrix of M with low discrepancy. When the discrepancy of M \ N is low, we use an argument similar to the proof of Lemma 5, and when the discrepancy of M \ N is negative, we find a positive submatrix using Observation 4 and then use an interpolation argument.
By the induction hypothesis, B must also be split and we can apply Lemma 3 to find many entries of M. By looking at specific a i,j , we will show that the two applications of Lemma 3 contradict each other.
We now give the full proof of Theorem 2, complete with all the calculations. To start the induction, we must check the cases n < 30, which is done using a computer. The problem is encoded as a SAT problem using PySAT [13] and checked for satisfiability with the CaDiCaL solver. The code to do this is attached to the arXiv submission.
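To illustrate the shape of such an encoding, the following PySAT sketch (our reconstruction, not the code attached to the arXiv submission) forbids zero-sum squares with six clauses per square and bounds the discrepancy with cardinality constraints; the solver name varies between PySAT versions:

```python
# Our sketch of a SAT encoding: variable v(i,j) is true iff a_{i,j} = +1.
# A zero-sum square means exactly two of the four corners are +1, so we
# block each of the 6 such patterns with one clause.
from itertools import combinations
from pysat.card import CardEnc, EncType
from pysat.formula import CNF
from pysat.solvers import Solver

def encode(n, D):
    v = lambda i, j: i * n + j + 1                  # 1-based DIMACS ids
    cnf = CNF()
    for i in range(n):
        for j in range(n):
            for s in range(1, n - max(i, j)):
                c = [v(i, j), v(i, j + s), v(i + s, j), v(i + s, j + s)]
                for two in combinations(range(4), 2):
                    # forbid: corners in `two` are +1, the other two are -1
                    cnf.append([-c[k] if k in two else c[k] for k in range(4)])
    # |disc(M)| <= D  <=>  (n^2 - D)/2 <= #(+1 entries) <= (n^2 + D)/2
    lits = [v(i, j) for i in range(n) for j in range(n)]
    lo, hi = (n * n - D + 1) // 2, (n * n + D) // 2
    low = CardEnc.atleast(lits=lits, bound=lo, top_id=n * n,
                          encoding=EncType.seqcounter)
    cnf.extend(low.clauses)
    high = CardEnc.atmost(lits=lits, bound=hi, top_id=low.nv,
                          encoding=EncType.seqcounter)
    cnf.extend(high.clauses)
    return cnf

# Splitness is cheap to test on a model, so split matrices can simply be
# filtered out of the enumerated solutions rather than encoded directly.
with Solver(name='cadical153', bootstrap_with=encode(8, 16)) as s:
    print(s.solve())        # older PySAT versions use name='cadical'
```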
Proof of Theorem 2. We will use induction on n. A computer search gives the result for all n < 30, so we can assume that n ≥ 30 and that the result holds for all 5 ≤ n ′ < n.
Suppose, towards a contradiction, that M is an n × n matrix with no zero-sum squares and |disc(M)| ≤ n²/4. By Lemma 5, we can find an n′ × n′ submatrix M′ = M[p : p + n′ − 1, q : q + n′ − 1] with (n − 1)/2 ≤ n′ ≤ (n + 1)/2 and |disc(M′)| ≤ (n′)²/4. By the induction hypothesis and our assumption that M doesn't contain a zero-sum square, the matrix M′ must be split. By reflecting M and switching −1 and 1 as necessary, we can assume that the submatrix M′ is t′-split for some t′, and that t := t′ + p + q − 2 ≤ n.
We will want to apply Lemma 3, for which we need to check that
$$t(t+1) \geq \tfrac{3}{4}(n')^2. \qquad (3)$$
If t + ⌊t/2⌋ ≥ n, the matrix M is t-split and we are done, so we can assume that this is not the case, and that t ≤ 2n/3. We will also need the following bound on 2t + ⌊t/2⌋ − 2, which follows almost immediately from (3).

Claim 1. 2t + ⌊t/2⌋ − 2 ≥ n.
Proof. Substituting n′ ≥ (n − 1)/2 into (3) gives the following bound on t:
$$t \geq \frac{\sqrt{3}}{4}(n-1) - \frac{1}{2}.$$
We now lower bound ⌊t/2⌋ by (t − 1)/2 to find
$$2t + \lfloor t/2 \rfloor - 2 \geq 2t + \frac{t-5}{2} = \frac{5t-5}{2}.$$
The right hand side grows like (√75/8)n asymptotically, which is faster than n, so the claim is certainly true for large enough n. In fact, [...]

Let k = ⌈5n/6⌉ and let N = M[1 : k, 1 : k] be the k × k submatrix in the top left corner which contains a_{1,1}. We will apply Lemma 3 and Observation 4 to guarantee lots of 1s in N, and therefore ensure N has large discrepancy. This will mean that the rest of M which is not in N must have low discrepancy, and we can find another split submatrix B.
Claim 2. There is an (n − k) × (n − k) submatrix B of M with |disc(B)| ≤ (n − k)²/4.

Proof.
Consider the 11 (n − k) × (n − k) disjoint submatrices B_1, ..., B_11 of M arranged as shown in Figure 3. The submatrix B_1 contains a_{n,n} and sits in the bottom right of M, while the others lie along the bottom and right-hand edges of M. If one of the B_i satisfies |disc(B_i)| ≤ (n − k)²/4, we are done by taking this submatrix as B, so suppose this is not the case.
As disc(B_1) > 0, if disc(B_i) < 0 for any i ≠ 1, we can use an interpolation argument as in Lemma 5 to find the claimed matrix. The argument only requires an inequality which holds whenever n − k > 4.
We must now be in the case where disc(B_i) > (n − k)²/4 for every i. The bulk of the work in this case is bounding the discrepancy of the matrix N, and then the discrepancy of M. There are 2n(n − k) − 12(n − k)² ≤ 10(n − k) entries of M in the gaps between the B_i; in other words, there are at most 10(n − k) entries a_{i,j} which are contained neither in N nor in one of the B_i. In particular, we have
$$\operatorname{disc}(M) \geq \operatorname{disc}(N) + \operatorname{disc}(B_1) + \cdots + \operatorname{disc}(B_{11}) - 10(n-k) > \operatorname{disc}(N) + \frac{11(n-k)^2}{4} - 10(n-k). \qquad (4)$$
Let s = min{k, t + ⌊t/2⌋}, so that M[1 : s, 1 : s] is t-split, and let r = k − s be the number of remaining rows in N. Let a_1, ..., a_4 be the numbers of 1s in N guaranteed by Lemma 3, and let a_5 be the number of additional 1s guaranteed by also applying Observation 4. This guarantees that at least one of a_{i,j} and a_{j,i} is 1 for all s + 1 ≤ i, j ≤ k, and a_5 ≥ r(r − 1)/2.
We have the following bounds.
Let us first consider the case where s = k, so that N is t-split. In this case a_2 = ··· = a_5 = 0, and we can easily write down the discrepancy of N as k² − t(t + 1). Since k ≥ 5n/6, we get the bound disc(N) ≥ 25n²/36 − t(t + 1).
Substituting this into (4) and using the bounds (n − 5)/6 ≤ n − k ≤ n/6, we get
$$\operatorname{disc}(M) > \frac{25n^2}{36} - t(t+1) + \frac{11(n-5)^2}{144} - \frac{5n}{3}.$$
For n ≥ 4, the right-hand side is greater than n²/4 whenever
$$t(t+1) < \frac{4n^2}{9} + \frac{11(n-5)^2}{144} - \frac{5n}{3}.$$
Since we have assumed t ≤ 2n/3, we get a contradiction for all sufficiently large n; in fact, we get a contradiction for all n ≥ 40. The remaining cases need to be checked using exact values of the floor and ceiling functions, which we do with the help of a computer. Now we consider the case where s = t + ⌊t/2⌋, which is very similar, although more complicated. To be in this case, we must have t + ⌊t/2⌋ ≤ k, which implies (3t − 1)/2 ≤ ⌈5n/6⌉ and t ≤ (5n + 8)/9 ≈ 0.556n.
We have the bounds [...] and so, for n ≥ 44, disc(M) > n²/4. This again leaves a few cases, which we check with the help of a computer.
Given a submatrix B as in the above claim, we apply the induction hypothesis, noting that n − k ≥ 5 since n ≥ 30, to find that B is split. Let C be the split submatrix obtained from applying Lemma 3 to B, and let C be ℓ-split up to rotation. Note that ℓ ≥ 3 as n − k ≥ 5 and |disc(B)| ≤ (n − k)²/4, and we can assume ℓ ≤ 2n/3 as M is not split.
Hence, C contains exactly one of a_{1,1}, a_{1,n}, a_{n,1} and a_{n,n}, and we will split into cases based on which one it contains. We will also sometimes need to consider whether the entry is 1 or −1, but in all cases we will find a contradiction.
From Lemma 3 applied to M′ and Claim 1, we already know some of the entries, and we highlight the important ones in the following claim.

Claim 3. [...]
Suppose the submatrix C contains a_{1,1}, so it sits in the top-left corner. Since M[1 : t + ⌊t/2⌋, 1 : t + ⌊t/2⌋] is t-split, C must also be t-split. As C was found by applying Lemma 3 to B, it must contain a −1 from B. Hence, t ≥ 5n/6, which is a contradiction as we assumed that t ≤ 2n/3.
Some illustrative examples of these three cases are shown in Figure 4.
The case where C contains a_{n,1} is done in the same way, with the rows and columns swapped.
This leaves the case where C contains a_{n,n}. Since ℓ ≥ 3, if the entry a_{n,n} equals −1, then so does the entry a_{n−1,n−1}, and this contradicts Claim 3. If instead a_{n,n} = 1, we consider the entry a_{i,i} where i = n + 1 − ⌈(ℓ + 2)/2⌉, which must be −1. However, since ℓ ≤ 2n/3, [...] and a_{i,i} = 1 by Claim 3. This final contradiction is shown in Figure 5.

Figure 4: The three cases when C contains a_{1,n} and a_{1,n} = 1. The yellow squares represent some of the a_{i,j} which are known to be −1 from Claim 3 and the blue squares those which are 1. The square which gives the contradiction is marked with a cross.

Figure 5: The case where C contains a_{n,n} and a_{n,n} = 1. The square marked with a cross gives a contradiction.
We remark that it should be possible to improve the bound n²/4 using a similar proof, provided one can check a large enough base case. Indeed, we believe that all the steps in the above proof hold when the bound is increased to n²/3, but only when n is large enough. For example, Claim 1 fails for n = 127 and our proof of Claim 2 fails for n = 67. Checking base cases this large is far beyond the reach of our computer check, and some new ideas would be needed here.
Open problems
The main open problem is to determine the correct lower bound for the (absolute value of the) discrepancy of a non-split {−1, 1}-matrix with no zero-sum squares. We have improved the lower bound to ⌊n²/4⌋ + 1, but this does not appear to be optimal.
The best known construction is the following example by Arévalo, Montejano and Roldán-Pensado [1]. Let M = (a_{i,j}) be given by
$$a_{i,j} = \begin{cases} -1 & \text{if } i \text{ and } j \text{ are odd,} \\ 1 & \text{otherwise.} \end{cases}$$
This has discrepancy n²/2 when n is even and (n − 1)²/2 − 1 when n is odd.
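Both properties of this construction are easy to verify by brute force; the following sketch (our own, using NumPy) checks the discrepancy and the absence of zero-sum squares for small n:

```python
# Brute-force verification of the construction above (our code).
import numpy as np

def construction(n):
    a = np.ones((n, n), dtype=int)
    a[::2, ::2] = -1            # 1-indexed odd i and j -> 0-indexed even
    return a

def has_zero_sum_square(a):
    n = a.shape[0]
    for s in range(1, n):
        # sums of all 2x2 squares with offset s, computed in one shot
        q = a[:-s, :-s] + a[:-s, s:] + a[s:, :-s] + a[s:, s:]
        if (q == 0).any():
            return True
    return False

for n in (9, 10, 11, 12):
    m = construction(n)
    print(n, m.sum(), has_zero_sum_square(m))  # discrepancy, always False
```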
With the help of a computer we have verified that this construction is best possible when 9 ≤ n ≤ 32, and we conjecture that this holds for all n ≥ 9. In fact, our computer search shows that the above example is the unique non-split matrix avoiding zero-sum squares with minimum (in magnitude) discrepancy, up to reflections and multiplying by −1. An example when n = 9 is given in Figure 6. We note that the condition n ≥ 9 is necessary, as shown by the 8 × 8 matrix with discrepancy 30 given in Figure 7. Arévalo, Montejano and Roldán-Pensado prove their result for both n × n and n × (n + 1) matrices, and computational experiments suggest that Theorem 2 holds for n × (n + 1) matrices as well. More generally, what is the best lower bound for a general n × m matrix when n and m are large?
"Mathematics"
] |
Quantitative Measurement of the Exposure Response of Digital Cameras
Digital cameras span a large range in price and performance. Consumers often focus mainly on the resolution in pixels when shopping for a camera. Of equal importance are the quality of the optics and the exposure response. Digital cameras generally have a linear exposure response, but the amount of noise and the dynamic range vary. It is difficult to obtain quantitative information on these parameters to make an informed assessment. This work explores and demonstrates first-principles methods to measure the exposure response, to make meaningful comparisons between different camera models. It also shows how to make the most of a particular camera by measuring its noise level and dynamic range, to understand the limits of its usable ISO amplification. The methods only require a computer and free software to download images and extract their RGB pixel values. The analysis, based on the RGB values, uses standard spreadsheet software. The procedures are therefore accessible to anyone with a digital camera and computer, and will help to reduce speculation in comparing cameras and help consumers make an informed decision.
The ratio of the focal length to the aperture diameter is called the aperture value A_V (also called the f-number or f-stop). The relationship between E_V, A_V, and the time value is given by
E_V = log₂(A_V²/T_V). (1)
A camera's shutter opens to let in light for a controlled period called the time value T_V (also called the exposure time or shutter speed). L_V, A_V, and T_V together control the light energy falling on the sensor per unit area, called the radiant exposure H (also called the luminous exposure or photometric exposure):
H ∝ L_V · T_V/A_V². (2)
The radiant exposure should remain constant for different A_V and T_V values as long as the ratio T_V/A_V² is kept constant. This is called the reciprocity principle. The camera amplifies H by a factor proportional to the ISO setting, and stores it as a binary number, the camera exposure C, for each pixel. Thus we expect
C ∝ H (3)
and
C ∝ ISO. (4)
For the 8-bit jpeg images used in this work, C can have values between 0 and (2⁸ − 1) = 255. Further background on photographic parameters can be found in [1] [2] [3] and [4].
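As a numerical illustration of the reciprocity principle in Equation (2) (our example, using standard rounded f-stops, which is why the ratios agree only approximately):

```python
# Our example: exposures with equal T_V / A_V^2 give (nearly) equal H.
# Marked f-stops are rounded (5.6 ~ 8/sqrt(2), 11 ~ 8*sqrt(2)), so the
# ratios match only approximately.
pairs = [(1/8, 8.0), (1/16, 5.6), (1/4, 11.0)]   # (T_V in s, A_V as f-number)
for t, a in pairs:
    print(f"T_V = 1/{round(1/t)} s, A_V = f/{a}: T_V/A_V^2 = {t / a**2:.6f}")
```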
Digital Imaging
Digital cameras achieve imaging by having a mosaic of pixels, each of which contains a photo circuit that produces a voltage representing the light energy received by that pixel. We will concentrate here on the CMOS (complementary metal oxide semiconductor) type of sensor, which is currently the most common type and has replaced the CCD (charge-coupled device) in most consumer cameras. Figure 1 shows an example of a simplified functional diagram for what happens in a single pixel's photo circuit leading to its camera exposure value being stored in the camera's memory.
Before exposure, the upper switch (which is also a transistor) closes momentarily to charge the gate electrode of a MOSFET (metal oxide semiconductor field effect transistor), which behaves like one plate of a capacitor. The reverse-biased photodiode is almost non-conductive when no light shines on it, so the charge sits on the MOSFET gate until exposure begins. Then, photoelectrons are excited and make the diode conductive, causing charge to leak away and reducing the gate voltage. The total charge that leaks away, and hence the lowering of the gate voltage, is proportional to the light energy that falls on the pixel. Before the exposure, the gate at full voltage pinches off the MOSFET, cutting off the current through it. As the gate voltage V_G falls with exposure, the MOSFET begins to conduct, and the current through the resistor and its voltage V_R increase. Thus V_R is proportional to the light energy collected by the pixel. This is now amplified (ISO amplification), digitized, and stored along with the values from the other pixels that make up the image.
For determining color, each pixel has sub-pixels for each of the three primary colors: red (R), green (G), and blue (B), resulting in a set of three camera exposure values C_R, C_G, and C_B for each pixel. Ideally, C ∝ H, but there is an upper limit H_max above which C stops increasing. This happens when all the charge stored on the MOSFET's gate (Figure 1) leaks away. Then V_G drops to its minimum value and V_R saturates at its maximum value. There is also a minimum value H_min below which noise and dark-current leakage through the photodiode start to cover up differences in H.
Methods
Two DSLR (digital single lens reflex) cameras were used in these experiments: a Canon Rebel XSi 12-mega-pixel consumer-level DSLR (with a 55 mm lens) and a Canon 6D 20-mega-pixel professional DSLR (with an 85 mm lens). The cameras were mounted on a tripod pointing straight at a white piece of paper illuminated by a uniform source of incandescent lighting. The camera lens was focused at infinity to blur the image and further even out the illuminance over the entire view. At the beginning, an image of the paper was taken and set as the custom white balance. As the starting point for some experiments, the meter in the camera was used to determine the "correct" exposure, which appears to be near the middle of the DR with a camera exposure value of C ≈ 255/2.
After downloading the images from each camera as jpeg files, free software called GetRGB [7] was used to extract the camera exposure values for each of the three colors (RGB) from each pixel. The mean and standard deviation for each color type over all pixels were calculated using spreadsheet software. The mean indicated the average camera exposure and the standard deviation indicated the noise. An example of the first few rows of a sample spreadsheet is shown in Figure 2.
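The same per-channel statistics can be reproduced without GetRGB or a spreadsheet; the sketch below (using the widely available Pillow and NumPy packages, with a hypothetical file name) computes the mean camera exposure and noise for each color channel of a jpeg.

```python
# Sketch: per-channel mean (average camera exposure) and standard
# deviation (noise) of an 8-bit jpeg, mirroring the GetRGB + spreadsheet step.
# "test_frame.jpg" is a placeholder file name.
import numpy as np
from PIL import Image

img = np.asarray(Image.open("test_frame.jpg").convert("RGB"), dtype=float)
for i, channel in enumerate("RGB"):
    c = img[:, :, i]
    print(f"{channel}: mean C = {c.mean():6.2f}, sigma_C = {c.std():5.2f}")
```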
Four experiments were conducted on both cameras as explained below.
Reciprocity
For testing reciprocity, the ISO was set to 100 and the time value was set to 1/8 s, which is in the middle of the 18 stops of T V available on both cameras. Using this time value, a picture was taken at the aperture value that resulted in a correct exposure as indicated by the camera's meter. Keeping the ISO fixed, pictures were taken at different combinations of aperture and time values while keeping the exposure correct as indicated by the camera (which tries to satisfy Equation (2)). The actual camera exposure values obtained from the images were then compared to see if they were in fact constant for all the exposures.
Camera Exposure Dependency on ISO
For testing how the camera exposure depends on ISO, the ISO was set to 100, the aperture value was set to the largest setting (smallest hole) and the time value to 2.5 stops below the correct exposure to allow room for growth and to be able to see a larger range of camera exposures achieved by changes in ISO. Pictures were taken while increasing the ISO from 100 to 1600. The measured camera exposures were then plotted versus ISO to see how linearly the ISO amplifier works (Equation (4)).
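To quantify how linearly the ISO amplifier works (Equation (4)), the measured camera exposures can be fitted against ISO; a minimal sketch with hypothetical example data follows.

```python
# Sketch: linear fit of camera exposure C versus ISO (Eq. 4).
# The data points below are hypothetical placeholders for measured values.
import numpy as np

iso = np.array([100, 200, 400, 800, 1600], dtype=float)
c_meas = np.array([20, 41, 79, 158, 240], dtype=float)  # placeholder readings

slope, intercept = np.polyfit(iso, c_meas, 1)
residuals = c_meas - (slope * iso + intercept)
print(f"C ~ {slope:.3f} * ISO + {intercept:.2f}, rms deviation = {residuals.std():.2f}")
# Points saturating near C = 255 should be excluded from the fit in practice.
```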
Exposure Response Curve
The exposure-response curves were measured at the highest ISO value of 1600 (common to both cameras), where noise becomes more apparent. T_V was first set to the middle value of 1/8 s and A_V was adjusted so that a correct exposure was indicated by the camera's meter. Pictures were then taken over a range of T_V values while keeping A_V fixed (thereby varying H according to Equation (2)) to see the exposure response curve (Equation (3)).
Dark Exposure and Noise
If the same amount of light energy falls on two pixels (i.e., they receive the same radiant exposure H), then the camera should record the same camera exposure C for these pixels. However, the presence of noise will cause C to depend irregularly on H. This noise was measured by taking the standard deviation in C over all pixels that received the same H. The noise is most noticeable when H = 0, i.e., for a dark exposure. These dark exposures were taken in a dark room, with the lens cap on, while keeping A_V at its highest setting (A_V = 38 for the Rebel and A_V = 22 for the 6D), so that no light energy reached the sensor. The standard deviation in C for each color was calculated to determine the level of noise.
Data and Analysis
Figure 3 shows results of the reciprocity test as per the procedure described above. Both cameras have excellent reciprocity (constant camera exposure for a variety of A_V and T_V combinations that satisfy Equation (2)). Notice that the white balance is well maintained as well (no separation of R, G, and B data points). In this respect, the professional 6D camera shows no advantage over the Rebel. Figure 4 shows results of the dependence of the camera exposure on ISO.
Here the radiant exposure (energy captured) is kept constant by keeping T_V and A_V fixed. The ISO setting changes the gain after the sensor. As can be seen, there is an almost linear relationship, and the two cameras have similar performances. The tops of the graphs start to saturate as they approach 255, which is the maximum value that C can have. Figure 5 shows the complete exposure response (dependence of the camera exposure C on the radiant exposure H). The limits H_min and H_max (the latter set by saturation, the former by the noise level whose measurement is described below) are used to define the usable dynamic range. The much lower noise level of the 6D gives it a more expanded DR = 9.4 compared to the Rebel's DR = 7.8. Thus the 6D will be particularly superior when shooting in low light, because of its low noise level, and when shooting scenes that have a large luminance (scene brightness) range. Figure 6 shows the noise plotted against the time value. As explained in the methods section, these are dark exposures, so that no light energy fell on the sensor. However, even without light, the photodiode conducts because of thermal energy. This is referred to as the dark current, and it leads to a camera exposure that is proportional to the time value. Two other sources of noise are read noise in the sensor and the circuit noise that comes after the ISO amplifier. These lead to randomness in the C values between different pixels. Quantitatively, the noise was taken to be the standard deviation σ_C in camera exposure values for different pixels. Of these three components of noise, the dark current and read noise, coming before the ISO amplifier, will appear to increase with ISO.
Of those two, only the dark current component will depend sensitively on time value and it will have a smaller contribution at short time values. There can also be a few defective "hot pixels" that have fixed positions and will be part of the pre-ISO noise. A general discussion of noise can be found in [8] and [9]. The plotted σ C was the average of the σ C values for the three colors.
From the graphs, it appears that the Canon 6D has very little sensor read noise (almost no dependence on ISO at short time values), and its noise level is a factor of 5 lower than that of the Canon Rebel. At ISO = 100, dark current noise starts to increase beyond T_V ≈ 100 s for both cameras. Circuit noise after the ISO amplifier (dominant at low ISO and T_V) is about twice as high in the Rebel as in the 6D. Thus the two cameras should perform similarly for pictures taken at low ISO, short T_V < 30 s, and bright scenes. The advantage of the more advanced 6D camera becomes apparent for pictures taken at high ISO in low light.
Summary and Conclusion
This work explored and established simple first-principles methods to quantitatively evaluate digital cameras, taking the concrete example of two Canon DSLR cameras. The study measured the entire exposure response from H_min to H_max, tested reciprocity and ISO gain, and obtained estimates of the noise and dynamic range. Furthermore, it was able to separate the various components of noise: dark current, sensor read noise, and post-ISO circuit noise. The procedures require no special equipment and use free or readily available software, allowing any consumer to quantitatively compare digital cameras and find the best combination of settings of A_V, T_V, and ISO for a particular camera and photographic situation. It was not possible to find comparable information on these or other cameras published in the literature or on the internet, including Canon's own https://www.usa.canon.com/ website. This work, therefore, provides new information and an objective approach to evaluate other digital cameras. An interesting future experiment would be to measure the noise at various temperatures.
The dark current increases with temperature; hence a more advanced DSLR might be needed if shooting is mostly done in a warmer climate, especially at high values of ISO and T_V.
| 3,064.6 | 2019-09-05 | ["Engineering", "Physics", "Computer Science"] |
Anti-miRNA103/107 encapsulated in transferrin-conjugated lipid nanoparticles crosses blood-brain barrier and reduces brain ischemic damage
MicroRNAs (miRNAs), by post-transcriptionally regulating the expression of genes involved in the stroke response, represent important effectors in stroke pathophysiology. Recently, the 103/107 miRNA family emerged as a possible therapeutic target in stroke, as it controls the expression of sodium calcium exchanger 1, a plasma membrane transporter that plays a fundamental role in stroke pathophysiology. Although the neuroprotective properties of this and other miRNAs are promising, several pharmacokinetic drawbacks remain to be faced for the development of a translatable therapy based on small RNAs in CNS diseases. In the present study, to overcome these limitations, the anti-miRNA103/107 was encapsulated in specific preparations of lipid nanoparticles (LNPs), and their effectiveness was evaluated both in an in vitro model of hypoxia, represented by primary neuronal cortical cultures exposed to oxygen and glucose deprivation followed by reoxygenation, and in an in vivo model of stroke obtained in rats exposed to transient occlusion of the middle cerebral artery. The results of the present study demonstrated that the encapsulation of anti-miRNA103/107 in transferrin-conjugated PEG-stabilized LNPs allowed blood-brain barrier crossing and significantly reduced brain ischemic damage. The present achievements pave the way for the exploitation of a systemic intravenous miRNA delivery strategy in stroke therapy.
INTRODUCTION
Despite the advancement of knowledge in the field of therapy for neurodegenerative diseases and cerebral ischemia, the results so far obtained are not completely satisfactory. Indeed, despite the fact that cerebral ischemia represents the second cause of death and the leading cause of disability in the world, tissue plasminogen activator (rTPA) remains the only therapeutic option, and its use is limited to 3%-4% of patients because of its very narrow therapeutic time window. 1 Considering these premises, there is an urgent need to develop innovative therapeutic strategies capable of overcoming the limits of the available treatment. In this regard, a class of small non-coding RNAs, microRNA or miRNA molecules, is emerging as a promising therapeutic option in stroke and other neurological disorders. 2 In fact, these molecules can regulate the expression of target proteins involved in the pathogenesis of the entire spectrum of known disorders, including neoplastic, cardiovascular, infectious, degenerative, and inflammatory-autoimmune diseases. 3 What makes miRNAs particularly attractive as potential therapeutic agents is their ability to target more than 20 different messenger RNAs at the same time, either preventing their translation or promoting their degradation, thus regulating the activity of numerous proteins. 4,5 This latter aspect is particularly relevant for pathologies characterized by the activation of several pathways that cannot be targeted by a single drug.
[8][9][10][11] The use of the specific anti-miRNA, able to prevent stroke-induced NCX1 downregulation, determined a significant reduction of the infarct lesion in ischemic rats. 12 Although the neuroprotective properties of this and other miRNAs are promising, several pharmacokinetic drawbacks remain to be faced for the development of a translatable therapy based on small RNAs in CNS diseases, such as (1) stability after systemic administration, (2) blood-brain barrier (BBB) crossing, (3) achievement of the ischemic brain region, and (4) uptake into the target cells.
To overcome these limitations, the anti-miRNA103/107 was encapsulated in specific preparations of lipid nanoparticles (LNPs), including transferrin-conjugated stabilized lipid nanoparticles, and their effectiveness was evaluated both in an in vitro model of hypoxia represented by primary neuronal cortical cultures exposed to oxygen and glucose deprivation (OGD) followed by reoxygenation, and in an in vivo model of stroke obtained in rats exposed to transient occlusion of the middle cerebral artery (tMCAO).
The development of this delivery approach of 103/107 anti-miRNA capable of specifically reaching the CNS following systemic administration may therefore constitute an innovative therapeutic strategy for the treatment of ischemic brain disease both in terms of drug molecular structure and in terms of pharmaceutical formulation.
Characterization of LNPs encapsulating anti-miR-103/107
Different lipid nanoparticle (LNP) formulations were prepared (Table 1) and characterized as described in the materials and methods section. As reported in Table 2, LNPs prepared without anti-miR103/107 had a mean diameter of about 147.7 nm and a negative zeta potential (ζ) of −16.8 mV. The encapsulation of anti-miR103/107, as well as the conjugation of transferrin (Tf) on the LNP surface, led to an increase of the size up to 173.4 nm in the case of LNPs 3. All the formulations were characterized by a narrow size distribution (polydispersity index [PI] < 0.2) and a negative ζ (from about −11.6 to −29.4 mV). The use of different lipid molar ratios did not significantly affect the particle size; on the other hand, a higher percentage of the ionizable lipid, namely DODAP, led to an increase of the zeta potential (Table 2). In all cases, the encapsulation efficiency (EE%) of anti-miR103/107 into the LNPs was very high, especially when a higher percentage of DODAP was used, e.g., LNPs 2 and LNPs 4.
LNPs did not exhibit toxicity in primary rat cortical neurons
In vitro experiments were performed in primary rat cortical neurons to demonstrate the absence of toxicity. To this aim, rat cortical neurons were exposed to different LNP preparations, with an initial concentration of 0.7 mg/mL lipids and 0.4 mg/mL anti-miRNA, in basal conditions, after 1:1,000, 1:100, and 1:10 v/v dilution, and their effects on mitochondrial redox activity were observed after 24 h of treatment. The results reported in Figure S1 demonstrated the absence of toxicity of empty LNPs 1-4, since no changes in mitochondrial oxidative capacity were observed in basal conditions.
LNPs 1, 3, and 4 encapsulating anti-miRNA103/107 counteracted the impairment of mitochondrial redox activity in cortical neurons exposed to OGD/reoxygenation
In order to demonstrate the effectiveness of LNPs in exerting neuroprotection, primary cortical neurons were exposed to OGD followed by reoxygenation (OGD/REOXY) in the presence or absence of LNPs 1-4. The results obtained demonstrated that LNP treatment was able to prevent the impairment in mitochondrial redox ability occurring in cortical neurons during OGD/REOXY, and that this effect was more evident for cortical neurons exposed to OGD/REOXY in the presence of LNPs 1, 3, and 4 (Figure 1), whereas LNP 2 did not show any effect.
To clarify whether these effects might be related to the ability of LNPs to interfere with NCX1 expression and activity through the release of anti-miRNA103/107 into neurons, further experiments were performed using cortical neurons exposed for 6 h to LNPs 1-4, either empty or pre-loaded with anti-miRNA103/107. As reported in Figure 2A, the expression of neuronal anti-miRNA did not change following exposure to empty LNPs. By contrast, after treatment with anti-miRNA103/107-loaded LNPs, the expression of neuronal anti-miRNA significantly and robustly increased (Figure 2B). The effect of the four formulations of LNPs containing anti-miRNA103/107 (70 mg/mL of lipids) was assessed on the progression of ischemic damage. Several experimental approaches were evaluated in terms of duration of treatment and frequency of administration, and the most significant results were obtained when the animals underwent four repeated intravenous (i.v.) infusions: 18 and 24 h before stroke induction, and 1 h and 5 h after ischemia. As shown in Figure 3, the use of LNP 3 and LNP 4 and, to a lesser extent, LNP 1, encapsulating anti-miRNA103/107, was able to induce a significant reduction of ischemic volume. In order to verify whether LNPs may per se have a neuroprotective role and reduce the ischemic volume, experiments were also carried out in animals receiving LNPs loaded with scrambled miRNA. In this experimental subgroup, LNPs were used without dilution from the starting stock. The data obtained did not show any difference in ischemic volume when the effect of LNPs loaded with scrambled miRNA at dose 1/1 (n = 4) was compared with the ischemic volume calculated in animals receiving vehicle (n = 3) (Figure S2).
Treatment with LNP 3 and 4 encapsulating anti-miRNA103/107 prevented NCX1 downregulation in rat cerebral cortex after tMCAO
First, we observed an increase in anti-miRNA103/107 levels in the temporoparietal cortex of rats treated with LNPs 1-4 loaded with anti-miRNA103/107 (Figure 4A). Then, we assessed NCX1 levels in terms of mRNA and protein content to evaluate whether treatment with LNPs might abolish the NCX1 downregulation mediated by the cerebral miR-103/107 rise during ischemia in the temporoparietal brain cortex of ischemic rats. Indeed, following treatment of ischemic rats with LNPs 1, 3, and 4 encapsulating anti-miRNA103/107, the decrease of ncx1 mRNA induced by tMCAO was prevented (Figure 4C). Similarly, western blot analysis showed that the downregulation of NCX1 protein induced by tMCAO and observed in the temporoparietal brain cortex of ischemic rats was significantly prevented after LNP 1, 3, and 4 treatments (Figure 4B).
Anti-miRNA-103/107 plasma levels changed over time after i.v. administration of LNP-anti-miRNA103/107 in non-ischemic animals
In order to verify the persistence in circulation of the nanoencapsulated anti-miRNA administered intravenously, plasma samples were collected from untreated animals (control group) and from animals receiving LNP-anti-miRNA as previously described. Real-time PCR analysis showed that anti-miRNA103/107 levels remained considerably high up to 96 h after the last administration. At the longer time interval of 7 days, anti-miRNA plasma levels returned to basal (Figure 5A).
Furthermore, to verify that LNP 3 loaded with anti-miRNA103/107 reached the brain, anti-miR 103a was labeled with rhodamine at the 5′ end and encapsulated in nanoparticles.
The rhodamine fluorescence intensity was quantified in several organs of animals treated with 5′-Rho-anti-miR103a-3′-loaded LNP 3, as shown in Figure 5B. Interestingly, the fluorescence intensity was higher in brain than in heart, liver, and muscle, thus supporting the hypothesis that transferrin-conjugated LNPs preferentially localized in the brain.
DISCUSSION
The results of the present study demonstrated for the first time that the encapsulation of anti-miRNA103/107 in transferrin-conjugated stabilized LNPs allowed the BBB crossing and significantly reduced brain ischemic damage.
In particular, here we gave the basis for the development of a new therapeutic strategy for the treatment of cerebral ischemia based on the use of anti-miRNA103/107 incorporated in lipid nanocarriers, which can be intravenously administered. The main element of originality consisted in the proposal of a drug formulation with an innovative pharmacodynamic mechanism to treat a disease for which, at present, there are no other strategies, except for the fibrinolytic agent rTPA or intra-arterial thrombectomy. It should also be stressed that the use of a lipid-based nanocarrier, already used in the treatment of hereditary transthyretin-mediated amyloidosis or for vaccination against COVID-19, is proposed here for the delivery of anti-miRNA into the CNS, representing a strategy capable of generating technological improvements not only in the ischemic cerebral pathology under study, but also in numerous other unexplored neurodegenerative diseases. [14][15][16] Therefore, an i.v.-administrable formulation able to deliver therapeutic RNA across the BBB to reach the brain regions involved in the pathological process represents a significant advancement toward the development of novel therapeutic strategies.
It should be underlined that drug delivery systems (DDSs) for BBB crossing in stroke and other neurological disorders are a subject of intense and diverse studies. 17 Indeed, numerous papers have reported specific, safe, and effective targeted delivery of DDSs carrying diverse therapeutic agents to cerebrovascular targets after intravascular injection in animals. 17 An intense and promising field of research is represented by the use of nanoparticles modified with antibodies targeting proteins expressed on the endothelium. In this regard, it has been reported that modified urokinase fused with ligands (antibodies and scFv) binding to the endothelial cell surface determinant PECAM-1 accumulates in the brain, lyses thrombi, and alleviates thrombotic stroke in a mouse model. 18 Another important outcome has been obtained with DDSs using ligands binding to VCAM-1, which, being relatively more selective, appears on the surface of the pathologically altered BBB endothelium, thus showing remarkable uptake in animal brain injury models and allowing magnetic resonance imaging of the pathology. 19 More recently, it has been reported that targeting CAMs, especially VCAM, with mAbs and nanocarriers represents a promising direction for innovative stroke therapies. 20 In our study, we propose a new approach favoring BBB crossing and brain accumulation by using transferrin on the LNP surface. We showed that LNPs represent an additional promising approach to design new miRNA-based therapies for the treatment of ischemia. LNPs are the most investigated platform for RNA delivery, with many formulations in clinical trials and three products approved by the Food and Drug Administration, 21 thus offering guarantees for the future scale-up and large-scale production of the formulation in GMP grade. Here, LNPs with different compositions, namely different contents of ionizable lipid and conjugation with transferrin, have been tested. All the formulations, used at three different concentrations, showed no toxicity on cortical neurons. Neuroprotection in primary cortical neurons exposed to OGD/REOXY was also evident using LNPs encapsulating anti-miRNA103/107, but only in the case of LNPs 1, 3, and 4. Actually, among all the LNP formulations tested in the present study, LNP 2 is characterized by a higher content of ionizable lipid compared with LNP 1; thus, while the ionizable lipid is the key component for RNA uptake into cells as well as for lysosomal escape, its concentration must be finely tuned for optimal transfection efficiency. Interestingly, in the case of transferrin-targeted LNPs, both formulations, LNP 3 and 4, provided high levels of neuroprotection, suggesting that the presence of transferrin on the surface changes the mechanism of cell uptake, leading to high neuroprotection independently of the ionizable lipid content.
In vivo studies demonstrated that animals receiving LNPs encapsulating anti-miRNA103/107 showed an ischemic volume significantly lower than that of untreated animals. Interestingly, the highest reduction in ischemic volume, obtained with LNP 3 and 4, could be ascribed to the presence of transferrin on the LNP surface, which could facilitate BBB crossing. This peculiar feature may explain why LNP 2 loaded with anti-miRNA103/107, which lacks transferrin on its surface, was ineffective in vivo, although effective in vitro. However, the reduced ischemic volume found, to a lesser extent, also in the case of the transferrin-untargeted LNP 1 suggests a partial alteration of the BBB in this experimental animal model.
Finally, it should be underlined that plasma levels of anti-miR-103/107 remained elevated up to 96 h after LNP injection, thus increasing the translational potential of this therapeutic approach. This is not surprising, because LNPs are designed to be long-circulating nanoparticles due to the presence of polyethylene glycol on the nanoparticle surface. In particular, PEGylation of nanoparticles avoids their opsonization, leading to a longer blood circulation and to a wide distribution in tissues. Notably, none of the LNP preparations showed cytotoxic effects and, more importantly, the neuronal expression of anti-miRNA and of miR-103/107 itself did not undergo changes following the exposure of neurons to the four different empty LNP preparations. By contrast, a significant reduction in miR-103/107 levels was observed in neurons exposed to anti-miRNA LNPs, thus confirming the occurrence of the internalization process. Notably, following the systemic administration of LNPs 1-4 to ischemic rats, a quantitative increase of anti-miRNA103/107 was observed, and the latter was associated with a reduction of miRNA-103/107 in the temporoparietal cortex of rats. As expected, the administration of LNPs containing anti-miRNA103/107 was able to prevent the reduction of the sodium calcium exchanger, NCX1, thus promoting neuronal survival in ischemic conditions. 12 While the study demonstrated that LNPs encapsulating anti-miRNA103/107 could represent a novel and powerful weapon against ischemic damage, further studies should allow for setting up the optimal therapeutic posology to design a future therapeutic strategy. Another limitation of our approach is represented by the lack of specificity for the ischemic region, as occurs with DDSs equipped with antibodies against penumbra-related selective targets. 20 In summary, the present study confirms the important role of the miRNA-103/107 family as a therapeutic target in stroke and validates a potentially exploitable pharmacological strategy, not only for the administration of drugs that reach the CNS and are therefore useful for the treatment of ischemic or neurodegenerative pathology, but also for any other pathology in which it becomes extremely important to direct the drug to the target organ, opening new perspectives of pharmacological innovation in different sectors of medical pathology.
In the case of LNP-anti-miRNA103/107 decorated with Tf, a lipid composition of DSPC/CHOL/DODAP/PEG2000-Cer16/DSPE-PEG2000-Mal (see Table 1) was used. Then, the lipid ethanol solution was added to the buffer solution under stirring; the resulting suspension was sized with a thermobarrel extruder system (Northern Lipids Inc., Vancouver, BC, Canada) maintained at 65 °C, repeatedly passing the suspension through polycarbonate membranes with decreasing pore sizes (Nucleopore Track Membrane, 25 mm, Whatman, Brentford, UK). The preparation was then dialyzed (3.5 kDa cutoff) against 20 mM citrate buffer (pH 4.0) for 1 h to remove the excess ethanol, and against HBS (20 mM HEPES, 145 mM NaCl, pH 7.4) for 12-18 h to remove the citrate buffer and to neutralize the LNP surface charge. For Tf-LNPs, Tf was first thiolated using 2-iminothiolane (Traut's reagent). Briefly, Tf was dissolved in 0.1 M Na-borate buffer, pH 8, followed by the addition of Traut's reagent (1:40 mol/mol). After a 60-min incubation at room temperature, the excess reagent was removed by molecular exclusion chromatography on a Sepharose G-25 column. Thereafter, the thiolated Tf was incubated with preformed LNPs-miR-103/107 (DSPC/CHOL/DODAP/PEG2000-Cer16/DSPE-PEG2000-Mal) overnight at 25 °C. The unconjugated Tf and non-encapsulated miRNA were removed by ultracentrifugation at 80,000 rpm at 4 °C for 40 min (Optima Max E, Beckman Coulter, USA; rotor TLA 120.2). Blank LNPs were also prepared and used as controls. All LNP formulations were prepared in triplicate.
LNP characterization
Size, PI, and surface charge of LNPs
The mean diameter, PI, and zeta potential (ζ) of LNPs were determined with a Zetasizer Ultra (Malvern Panalytical, United Kingdom). Results were averaged over three measurements from independent batches.
Lipid dosage in LNPs
The amount of phospholipids in the LNP-anti-miR-103/107 was determined by the Stewart assay. 23 Briefly, an aliquot of the LNPs was added to a two-phase system consisting of an aqueous ammonium ferrothiocyanate solution (0.1 N) and chloroform. Each tube was vortexed and then centrifuged for 5 min. The chloroform phase was collected, and the concentration of DSPC was obtained by measuring the absorbance at 485 nm with an ultraviolet-visible spectrophotometer (UV VIS 1204; Shimadzu Corporation, Kyoto, Japan). The concentration of the total lipid content was calculated assuming a constant ratio between the lipids.
Anti-miR-103/107 encapsulation in LNPs
The amount of anti-miRNA103/107 encapsulated into the LNPs was determined spectrophotometrically. Briefly, an aliquot of the formulation was dissolved in methanol (1:100 v/v) and samples were centrifuged for 30 min at 13,000 rpm (MIKRO 20; Hettich, Tuttlingen, Germany). The supernatants were analyzed by UV spectrophotometry (UV-1800) at a wavelength of 260 nm. The amount of anti-miRNA loaded into the nanocarriers was expressed as the anti-miRNA encapsulation efficiency (EE%), calculated as the percentage ratio between the anti-miRNA actual loading (mg of anti-miRNA/mg of total lipids) and the anti-miRNA theoretical loading in the formulation. For each formulation, results were calculated as the mean of the measurements obtained from three different batches (n = 3).
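A minimal sketch of the EE% calculation described above, using hypothetical loading values:

```python
# Sketch: encapsulation efficiency (EE%) as the percentage ratio of actual
# to theoretical anti-miRNA loading. Numbers are hypothetical placeholders.
def encapsulation_efficiency(actual_loading, theoretical_loading):
    """Both loadings in the same units, e.g. mg anti-miRNA per mg total lipids."""
    return 100.0 * actual_loading / theoretical_loading

batches = [0.0185, 0.0192, 0.0179]   # actual loading per batch (placeholder values)
theoretical = 0.020                  # theoretical loading (placeholder value)
ee = [encapsulation_efficiency(b, theoretical) for b in batches]
print(f"EE% = {sum(ee) / len(ee):.1f} (mean of n = {len(ee)} batches)")
```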
Rat primary cortical neurons
Rat primary cortical neurons were prepared from 17-day-old Wistar rat embryos (Charles River). 24 Briefly, rats were first anesthetized and then decapitated to minimize pain and distress. Dissection and dissociation were performed in Ca²⁺/Mg²⁺-free phosphate-buffered saline (PBS) containing glucose (30 mM). Tissues were incubated with papain for 10 min at 37 °C and dissociated by trituration in Earle's Balanced Salt Solution containing DNase (0.16 U/mL), bovine serum albumin (10 mg/mL), and ovomucoid (10 mg/mL). Neurons were plated in plastic Petri dishes (Falcon Becton-Dickinson) pre-coated with poly-D-lysine (20 µg/mL) and were grown in MEM/F12 (Life Technologies) containing glucose, 5% deactivated fetal bovine serum and 5% horse serum (Life Technologies), glutamine (2 mM), penicillin (50 U/mL), and streptomycin (50 µg/mL) (Invitrogen). Within 48 h of plating, cytosine arabinoside (ara-C) (10 µM) was added to prevent non-neuronal cell growth. Neurons were cultured at 37 °C in a humidified 5% CO₂ atmosphere and used after 7-10 days of culture. Cell density was 5 × 10⁶ cells per 60-mm dish for qRT-PCR analysis and 15 × 10⁶ cells per 100-mm dish for western blot analysis.
In vivo experimental groups
Ninety male Sprague-Dawley rats (Charles River Laboratories, Calco, Varese, Italy) weighing 200 to 250 g were housed under diurnal lighting conditions (12-h darkness/light). About 20% of the animals used were excluded from the experimental groups, due to the absence of ischemic lesions (10%) or to mortality related to the experimental procedure (10%). Animals were randomly allocated to the different experimental groups, and the treatment with LNP formulations was performed blindly, since each formulation was labeled with a number by a researcher different from the one performing the treatment. Furthermore, the collected samples were identified with a numeric code, and all the postmortem experiments were performed by a researcher blinded to the applied treatment.
The key to break the blinding was provided only after the analysis was concluded. Finally, the sample size was determined a priori with G*Power software. Experiments were performed according to international guidelines for animal research. The experimental protocol was approved by the Animal Care Committee of the "Federico II" University of Naples.
Transient focal ischemia and evaluation of infarct volume
Transient focal ischemia was induced as previously described. 25 In brief, occlusion of the middle cerebral artery (MCA) was performed in male rats anesthetized with a mixture of oxygen and sevoflurane at 3.5% (Medical Oxygen Concentrator LFY-I-5A). A 5-0 surgical monofilament nylon suture (Doccol, Sharon, MA) was inserted from the external carotid artery into the internal carotid artery and advanced into the circle of Willis up to the branching point of the MCA, thereby occluding the MCA. Achievement of ischemia was confirmed by monitoring regional cerebral blood flow in the area of the right MCA. Cerebral blood flow was monitored through a disposable microtip fiberoptic probe (diameter 0.5 mm) connected through a Master Probe to a laser Doppler computerized main unit (PF5001; Perimed, Järfälla, Sweden) and analyzed using PSW Perisoft 2.5. 26 Animals not showing a cerebral blood flow reduction of at least 70% were excluded from the experimental group, as were animals that died after ischemia induction. Rectal temperature was maintained at 37 ± 0.5 °C with a thermostatically controlled heating pad and lamp. All surgical procedures were performed under an operating stereomicroscope in a blind manner.
Animals were killed with a sevoflurane overdose 24 h after ischemia. Brains were quickly removed, sectioned coronally at 1-mm intervals, and stained by immersion in the vital dye 2,3,5-triphenyltetrazolium chloride (2%). The infarct volume was calculated by summing the infarct areas of all sections and multiplying the total by the slice thickness. 26 To avoid edema affecting the infarct volume value, the infarct volume was expressed as a percentage of ischemic damage by dividing the infarct volume by the total ipsilateral hemispheric volume.
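A minimal sketch of that volume calculation, assuming per-slice infarct and hemisphere areas (hypothetical values) measured from the stained sections:

```python
# Sketch: infarct volume as a percentage of the ipsilateral hemisphere.
# Areas (mm^2) per 1-mm coronal slice are hypothetical placeholders.
import numpy as np

slice_thickness_mm = 1.0
infarct_areas = np.array([4.2, 9.8, 14.5, 12.1, 6.3])        # mm^2 per slice
hemisphere_areas = np.array([52.0, 55.4, 57.1, 54.8, 50.9])  # mm^2 per slice

infarct_volume = infarct_areas.sum() * slice_thickness_mm
hemisphere_volume = hemisphere_areas.sum() * slice_thickness_mm
print(f"Ischemic damage = {100 * infarct_volume / hemisphere_volume:.1f}% of hemisphere")
```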
Experimental in vivo drug administration
LNP nanovectors, empty or carrying the selected anti-miRNA, were administered to rats through the tail vein. For each experimental group, i.v. administration of 100 µL of the different LNPs, diluted to the appropriate concentration by adding saline solution (0.9% NaCl, pH 7.4), was carried out as a bolus, using a 1-mL insulin syringe (BD Plastics). Ischemic control animals received only saline administration. The animals received four administrations: 24 and 18 h before ischemia, and 1 and 5 h after tMCAO induction.
LNPs loaded with rhodamine-labeled anti-miRNA were injected into non-ischemic animals four times, at the same time intervals and in the same amount as the non-labeled LNPs administered to ischemic animals.
Anti-miRNA levels were evaluated in the temporoparietal cortex of rats injected with anti-miRNA, 24 h after the last injection. Anti-miRNA levels were also measured in plasma obtained from blood (500 µL) withdrawn from the tail vein at different time intervals after LNP injection: 3, 24, 48, 96, and 168 h. To dilate the blood vessels, rat tails were immersed in water preheated to 38 °C for 40-50 s. Blood was collected into purple-top EDTA tubes and centrifuged (2,000 rpm) at 4 °C for 20 min. After centrifugation, plasma was collected into 1.5-mL Eppendorf tubes labeled with a tracking number and "plasma."
Fluorescence intensity quantification
Animals were anesthetized and transcardially perfused with saline solution containing 0.01 mL of heparin (10 U/mL in 0.1 M PBS), followed by 60 mL of 4% paraformaldehyde. The brain, liver, and heart were rapidly removed on ice, postfixed overnight at 4 °C, and cryoprotected in 30% sucrose in 0.1 M phosphate buffer (PB) with 0.02% sodium azide for 24 h at 4 °C. The gastrocnemius was frozen in liquid nitrogen and stored at −80 °C. The organs were then sectioned frozen on a sliding cryostat at 10-µm thickness. Subsequently, after mounting on slides, the sections were observed directly under a confocal microscope (Zeiss LSM 700).
Quantification of fluorescence intensity on tissue sections at the level of the brain cortex, liver, heart, and gastrocnemius muscle was done in terms of pixel intensity values using the NIH Image software. Briefly, digital images were taken with a ×20 objective, and identical laser power settings and exposure times were applied to all photographs from each experimental set. Images were first thresholded to identify the positive signal; subsequently, the pixels expressing rhodamine co-localizing with fluorescein were identified. Results were expressed in arbitrary units. n = 3 animals per group.
Western blot analysis
Samples from cortical neurons and rat ischemic brain regions were homogenized in a lysis buffer (50 mmol/L Tris-HCl, pH 7.5, 100 mmol/L NaCl, 1% Triton X-100) containing protease and phosphatase inhibitors. After centrifugation at 12,000 × g at 4 °C for 15 min, the supernatants were collected. Protein concentration was estimated using the Bradford method with a spectrophotometer (Eppendorf). Then, 80-100 µg of protein lysate was mixed with Laemmli sample buffer and boiled at 95 °C for 5 min. The samples were resolved by sodium dodecyl sulfate polyacrylamide gel electrophoresis and transferred to nitrocellulose membranes. Blots were probed with antibodies against NCX1 (Swant, rabbit polyclonal, 1:1,000) and α-tubulin (Abcam, mouse monoclonal, 1:10,000), diluted in Tris-buffered saline with Tween (TBS-T) containing 1% bovine serum albumin, overnight at 4 °C. They were then detected using horseradish peroxidase-conjugated secondary antibodies (mouse and rabbit, Cell Signaling; 60 min at room temperature in 5% non-fat milk) and an enhanced chemiluminescence kit (Amersham Pharmacia Biotech, NJ, USA).
OGD and REOXY
Cortical neurons were first exposed to OGD for 3 h and then to reoxygenation for 24 h. 27,28 In brief, the culture medium was replaced with a hypoxia medium, previously saturated with 95% N₂ and 5% CO₂ for 20 min, containing NaCl 116 mM, KCl 5.4 mM, MgSO₄ 0.8 mM, NaHCO₃ 26.2 mM, NaH₂PO₄ 1 mM, CaCl₂ 1.8 mM, glycine 0.01 mM, and 0.001% w/v phenol red. Hypoxic conditions were maintained in a hypoxia chamber (37 °C, 95% N₂/5% CO₂ atmosphere). These experimental conditions induced a 30% decrease of pO₂ in the medium. Oxygen and glucose deprivation was stopped by placing the cells in regular culture medium saturated with a mixture of 95% O₂ and 5% CO₂ for 10 min. Reoxygenation was achieved by returning the neurons to normoxic conditions (37 °C in a humidified 5% CO₂ atmosphere) for 24 h.
Determination of mitochondrial oxidative activity
Mitochondrial function was assessed by measuring the level of mitochondrial dehydrogenase activity using the reduction of 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) as substrate. 28 The assay is based on the ability of living mitochondria to convert dissolved MTT into insoluble formazan. Briefly, after treatments, the medium was removed and the cells were incubated in 500 µL of MTT solution (0.5 mg/mL) for 1 h in a humidified 5% CO₂ incubator at 37 °C. The incubation was then stopped by removing the medium and adding 1 mL of DMSO to solubilize the formazan. The absorbance was read at 540 nm. Data were expressed as the percentage of mitochondrial redox activity compared with untreated cultures.
Statistical analysis
Values were expressed as means ± standard error of the mean (SEM). Real-time PCR results were expressed as fold change (2^−ΔΔCt) relative to the control group set to 1, following the instructions provided by the literature. 24 Briefly, the difference between the Ct values of the gene of interest and the internal control (ΔCt) was calculated for both the control sample and the target sample. Then, the difference between the ΔCt of the target sample and that of the control sample (ΔΔCt) was calculated. The fold change in gene expression of the target sample compared with the control sample was calculated as 2^−ΔΔCt. Statistical analysis was performed with GraphPad Prism 5.0 (GraphPad Software, Inc., San Diego, CA), using one-way analysis of variance followed by the Newman-Keuls post-test for more than two groups. To compare two groups, an unpaired t test was used. Statistical significance was accepted at the 95% confidence level (p < 0.05).
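The 2^−ΔΔCt calculation is straightforward to reproduce; below is a minimal sketch with hypothetical Ct values.

```python
# Sketch: relative expression by the 2^(-ddCt) method.
# Ct values are hypothetical placeholders; U6 snRNA is the internal control.
def fold_change(ct_gene_target, ct_ref_target, ct_gene_control, ct_ref_control):
    d_ct_target = ct_gene_target - ct_ref_target      # dCt, target sample
    d_ct_control = ct_gene_control - ct_ref_control   # dCt, control sample
    dd_ct = d_ct_target - d_ct_control                # ddCt
    return 2 ** (-dd_ct)

fc = fold_change(ct_gene_target=24.1, ct_ref_target=18.0,
                 ct_gene_control=27.6, ct_ref_control=18.2)
print(f"Fold change vs. control = {fc:.2f}")  # control group is set to 1
```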
Statistical analysis for the in vitro experiments was performed using one-way analysis of variance followed by the Newman-Keuls test. The MTT experiments were performed in triplicate, and the values were expressed as mean percentage ± SEM.
Figure 1. Effect of anti-miRNA103/107-loaded LNP formulations on mitochondrial redox activity in cortical neurons exposed to OGD/reoxygenation. The values for each column represent the mean percentage ± SEM. *p < 0.05 vs. control neurons; **p < 0.05 vs. untreated OGD/REOXY-exposed neurons.
Figure 2. Expression analysis by real-time PCR of anti-miRNA103/107 levels in cortical neurons treated with empty (eLNP) and anti-miR-103/107-loaded LNPs. Anti-microRNA levels are expressed as fold change of relative expression levels over the control group, represented by untreated cortical neurons. Each column represents the mean ± SEM. Results of anti-microRNA expression were normalized with respect to U6 snRNA as internal control. n = 4-5 samples per group. *p < 0.05 vs. control group.
Figure 3. In vivo effect of anti-miRNA103/107-loaded LNPs on ischemic volume of rats subjected to tMCAO followed by 24 h of reperfusion. Ischemic damage was assessed in rats subjected to tMCAO and treated with four bolus injections of LNPs through the caudal vein, 24 and 18 h before tMCAO induction and 1 and 5 h after reperfusion. The vehicle group represents animals treated with tMCAO and saline solution. Each column represents the mean ± SEM. *p < 0.05 vs. vehicle group.
Figure 4. Evaluation of NCX1 expression in samples of ipsilateral cortex from ischemic rats treated with anti-miRNA103/107-loaded LNPs. (A) Expression analysis by real-time PCR of anti-miRNA103/107 levels in the temporoparietal cortex of rats treated with anti-miR-103/107-loaded LNPs 1-4, 24 h after i.v. injection. Results were normalized with respect to U6 snRNA as internal control. n = 3 samples per group. (B) NCX1 protein levels are expressed as percentage vs. the sham-operated controls. Each column represents the mean ± SEM. Results of NCX1 expression were normalized with respect to α-tubulin. On the top, representative blots of NCX1 and α-tubulin signals in the sham and ischemic animals euthanized at 24 h from reperfusion. n = 4 samples per group. *p < 0.05 vs. sham-operated control group. #p < 0.05 vs. vehicle group. (C) ncx1 mRNA levels are expressed as fold change of relative expression levels over the sham-operated group. Each column represents the mean ± SEM. Results of ncx1 expression were normalized with respect to GAPDH as internal control. n = 3 samples per group. *p < 0.05 vs. control group.
Figure 5. Plasma and organ distribution of anti-miR-103/107-loaded LNP 3 after i.v. injection. (A) Expression analysis by real-time PCR of circulating anti-miRNA103/107 levels in blood of rats exposed to tMCAO and treated with anti-miR-103/107-loaded LNP 3 at different time intervals from reperfusion. Anti-microRNA levels are expressed as fold change of relative expression levels over the vehicle group, represented by ischemic rats treated with saline solution. Results of anti-microRNA expression were normalized with respect to U6 snRNA as internal control. (B) Rhodamine/fluorescein fluorescence levels in different organs after i.v. injection of fluorescent LNP 3. Quantification of fluorescence intensity on tissue sections at the level of the brain cortex, liver, heart, and gastrocnemius muscle is expressed in terms of pixel intensity values. Pixels expressing rhodamine co-localizing with fluorescein were identified. Results are expressed in arbitrary units. Each column represents the mean ± SEM. n = 3-4 samples per group. *p < 0.05 vs. other experimental groups.
Table 1. LNP-anti-miR103/107 composition
[...] Real-Time PCR System (AB Applied Biosystems). cDNA samples were amplified simultaneously in triplicate in one assay run, following the protocol for TaqMan assays: 50 °C for 2 min, 95 °C for 10 min, and 40 cycles of amplification at 95 °C for 15 s and 60 °C for 1 min. All reactions were run in triplicate. Results were analyzed and exported with the 7500 Fast System SDS Software. The TaqMan probes used were the following: miRNA assay anti-miRNA103/107-3p (ID: 121115_mat); miRNA control assay U6 snRNA (ID: 001973);
| 7,380.8 | 2024-02-01 | ["Medicine", "Chemistry"] |
Current fluctuations in nanopores: the effects of electrostatic and hydrodynamic interactions
Using nonequilibrium Langevin dynamics simulations of an electrolyte with explicit solvent particles, we investigate the effect of hydrodynamic interactions on the power spectrum of ionic nanopore currents. At low frequency, we find a power-law dependence of the power spectral density, with an exponent depending on the ion density. Surprisingly, however, the exponent is not affected by the presence of the neutral solvent particles. We conclude that hydrodynamic interactions do not affect the shape of the power spectrum in the frequency range studied.
Hydrodynamic interactions have a strong influence on the dynamics of Brownian particles suspended in a solvent, producing self-organized states, nonlinear dynamics, and synchronization [1][2][3][4]. Hydrodynamic interactions between objects decay slowly. Similar to the electrostatic potential, the strength of the hydrodynamic interactions in bulk is inversely proportional to the distance between the particles [5]. Moreover, the effects of hydrodynamic interactions are extremely sensitive to geometric confinement. Density perturbations in a fluid between stationary confining walls give rise to a long-time tail in the velocity autocorrelation function of colloidal particles [6]. Experiments show that hydrodynamic interactions even become independent of the distance between particles inside small pores [7]. These hydrodynamic effects have a pronounced effect on the dynamics of larger molecules, such as DNA, translocating through a nanopore [8,9]. Whereas the effect of hydrodynamic interactions on colloidal particles and polymer dynamics has attracted a lot of attention over the past decades, the effect of hydrodynamic interactions on ion dynamics remains largely unexplored.
The combination of experimental measurement and molecular modeling of the power spectral density constitutes a promising technique to study ion motion in unprecedented detail [10]. For example, the power spectrum can be used to study the microscopic properties of nanofluidic systems, such as the adsorption of molecules on the walls of a nanometer-scale cavity [11]. Recently, we showed that ion correlations at high particle density produce a power-law spectrum at low frequency, with an exponent depending on the ion density [12]. Experiments show that the power spectrum of the ionic nanopore current S(ω), with ω = 2πf being the angular frequency, typically follows a power law S(ω) ∝ 1/ω^α with α ≈ 1, which is referred to as pink noise, or 1/f noise [13][14][15]. The appearance of pink noise in nanopore current measurements is ubiquitous; it is found in a variety of systems, from protein channels and flexible synthetic pores [16][17][18], to solid-state conical pores [19]. The molecular origin of the low-frequency pink noise has been debated for decades [20][21][22]. However, theoretical analysis of the power spectrum including the multi-body interactions between the ions and the effect of hydrodynamics remains challenging. Therefore, although hydrodynamic interactions are usually present in experimental studies, their effects on the frequency dependence of the power spectral density are unknown. The situation has changed with the recent advance of fast and versatile molecular simulation techniques, which now allow a systematic computational investigation.
In this manuscript, we present a Langevin dynamics simulation study of the ionic current through a nanometer-scale pore filled with an electrolyte, using the Espresso molecular dynamics package [23]. The electrolyte is modeled by ions in an explicit solvent. For the solvent, we use a coarse-grained description of neutral, nonpolar Lennard-Jones particles. To systematically study the effect of hydrodynamic interactions, we vary the density of both the ions and the solvent particles independently. We calculate the power spectral density of the ion current and compare the results with simulations without solvent and with a linearized mean-field theory of ion currents without hydrodynamic interactions. Whereas an increase in the ion density directly causes a power-law behavior of the power spectrum at low frequency, introducing hydrodynamic interactions by increasing the solvent density does not have the same effect.
Simulation model
We use the simulation package Espresso [23] to set up Langevin dynamics simulations of a nanopore filled with a mixture of monovalent positive and negative ions and neutral solvent particles (Fig. 1). The Langevin equation for particle i is expressed as

m_i dv_i/dt = −γ v_i + ξ_i(t) + F_i,  (1)

where v_i is the velocity, ξ_i(t) is the stochastic force satisfying ⟨ξ_i(t) · ξ_j(t′)⟩ = 6 γ k_B T δ_ij δ(t − t′), and F_i is an external force applied to the particle. The thermal energy equals k_B T, and we use γ = 1 k_B T τ/Å² and m_i = 1 k_B T τ²/Å² for all particles. By using an equal and arbitrary mass for all particles, m_i is incorporated in the time scale τ. Short-ranged interactions between pairs of particles are modeled by a Weeks-Chandler-Andersen (shifted Lennard-Jones) potential,

V_ij(r_ij) = 4 ε_ij [(σ_ij/r_ij)¹² − (σ_ij/r_ij)⁶] + ε_ij for r_ij ≤ 2^{1/6} σ_ij, and V_ij(r_ij) = 0 otherwise,  (2)

where r_ij denotes the distance between particles i and j, σ_ij the interaction diameter, and ε_ij the interaction strength; the Lennard-Jones interaction is thus truncated at r_ij = 2^{1/6} σ_ij for all combinations of i, j. Electrostatic interactions between the charges Q_i and Q_j, in units of the elementary charge e, are given by the Coulomb potential V_C(r_ij) = k_B T l_B Q_i Q_j/r_ij, with the Bjerrum length l_B = e²/(4π ε ε₀ k_B T). The electric field on the ions is represented by a force applied inside the pore in the x direction, which is the direction along the length L of the pore (Fig. 1):

F_i = Q_i e E x̂.  (3)

The electric field is varied between E = 0.3 and 1.6 k_B T/(eÅ). The simulations are performed in a cylindrical nanopore with a radius ranging from 19 Å to 30 Å, permeating a rigid membrane with a width of W = 96 Å and a length L = 48 Å (Fig. 1). We use a range of increasing solvent densities, all well below the bulk freezing density of the Weeks-Chandler-Andersen fluid model. Compared to the molecular density of water, the maximum density corresponds to a coarse-grained force field where each particle represents approximately 6 water molecules. A smooth surface induces crystalline order in the fluid, over a range depending on the molecular properties of the liquid [24]. For a fluid consisting of identical Lennard-Jones spheres, the induced order propagates over a distance larger than our simulation box. Therefore, to prevent crystallization of the Lennard-Jones fluid, we perturb the uniform membrane surface by randomly removing half of the particles from the outer layer. The membrane particles are frozen, and for the membrane-ion, membrane-solvent, ion-solvent, ion-ion, and solvent-solvent interactions we use ε_ij = 2 k_B T and σ_ij = 4.7 Å. For the membrane and solvent particles we use Q_i = 0, and for the ions we use Q_i = ±1. The ion concentration is varied between simulations.

When the motion of particle i perturbs the surrounding solvent, the hydrodynamic signal diffuses at a rate governed by the kinematic viscosity ν. For hydrodynamic interactions to occur, this viscous momentum must diffuse much faster than the particle itself. The relation is governed by the Schmidt number Sc = ν/D, with D being the diffusion coefficient of the solvent particles. To verify that the coarse-grained solvent particles produce hydrodynamic interactions in the strongly confined environment of the nanopore, we simulate a pressure difference across the length of the channel by applying a constant force to all particles inside a pore filled with pure solvent. We calculate the fluid velocity as a function of the radial coordinate, averaged across the length of the channel.
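As an illustration of the equation of motion above, here is a minimal, self-contained Euler-Maruyama integration step for Eq. (1) in reduced units (k_B T = γ = m = 1); it is a sketch of the scheme, not the Espresso implementation itself.

```python
# Sketch: one Euler-Maruyama step of the Langevin equation
# m dv/dt = -gamma*v + xi(t) + F, with <xi(t) xi(t')> ~ 2*gamma*kT*delta(t-t')
# per component. Reduced units (kT = gamma = m = 1); F is an external force.
import numpy as np

rng = np.random.default_rng(1)

def langevin_step(x, v, F, dt, gamma=1.0, kT=1.0, m=1.0):
    noise = rng.normal(0.0, np.sqrt(2 * gamma * kT / dt), size=v.shape)
    a = (-gamma * v + noise + F) / m
    v_new = v + a * dt
    x_new = x + v_new * dt
    return x_new, v_new

# Example: 100 free particles in 3D under a uniform force along x.
x = rng.uniform(0, 48, size=(100, 3))
v = np.zeros((100, 3))
F = np.zeros((100, 3)); F[:, 0] = 0.8
for _ in range(1000):
    x, v = langevin_step(x, v, F, dt=0.01)
print("mean drift velocity along x:", v[:, 0].mean())  # ~ F/gamma at steady state
```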
The flow in a cylindrical channel forms a Hagen-Poiseuille profile with a finite slip length b,

v(r) = ∇p (R² + 2bR − r²)/(4 m_i C_s ν),  (4)

with R being the radius of the pore and ∇p = F C_s being the pressure gradient across the length of the pore, which in our case derives from the uniform applied force F = 0.8 k_B T/Å on the solvent particles inside the pore, which have mass m_i = 1 k_B T τ²/Å² and number density C_s. We show the velocity profile in Fig. 2(a) for the lowest solvent density C_s = 5.5 · 10⁻⁴ Å⁻³. The fit of Eq. 4 yields ν = 900 Å²/τ, which in combination with D = 1 Å²/τ yields Sc = 900. As the Schmidt number at higher solvent densities is even higher, all our simulations satisfy the conditions for hydrodynamic interactions.
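The kinematic viscosity and slip length can be extracted by fitting Eq. (4) to the measured velocity profile; a sketch using scipy's curve_fit with synthetic data follows (the generated profile is a stand-in for simulation output).

```python
# Sketch: fit a Hagen-Poiseuille profile with slip, Eq. (4), to extract nu and b.
# v(r) = grad_p * (R**2 + 2*b*R - r**2) / (4 * m * C_s * nu)
import numpy as np
from scipy.optimize import curve_fit

R, m, C_s, F = 19.0, 1.0, 5.5e-4, 0.8   # pore radius (Angstrom), reduced units
grad_p = F * C_s                        # pressure gradient from the body force

def v_profile(r, nu, b):
    return grad_p * (R**2 + 2 * b * R - r**2) / (4 * m * C_s * nu)

r = np.linspace(0, R, 20)               # synthetic "measured" profile + noise
v_data = v_profile(r, 900.0, 2.0) + np.random.default_rng(0).normal(0, 1e-4, r.size)

(nu_fit, b_fit), _ = curve_fit(v_profile, r, v_data, p0=(500.0, 1.0))
print(f"nu = {nu_fit:.0f} A^2/tau, slip length b = {b_fit:.1f} A, Sc = {nu_fit/1.0:.0f}")
```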
Apart from propagation by viscous momentum diffusion, hydrodynamic interactions are transmitted by sound wave propagation. In an incompressible fluid, the sound velocity is infinite, and viscous momentum diffusion is solely responsible for the time evolution of the hydrodynamic interactions. The compressibility of our model solvent is finite, however, depending on the solvent density C_s, which might have implications for the hydrodynamic interactions [25]. We calculate the isothermal compressibility from the pressure p as a function of solvent density C_s in separate bulk simulations using κ_T = (d ln(C_s)/dp)_T, see Fig. 2(b). The compressibility varies over two orders of magnitude as we change the solvent density. Nevertheless, the compressibility of water, equal to κ_T = 2 Å³/(k_B T), is still a factor of 5 below the compressibility of our highest-density solution. We quantify the effect of the compressibility by calculating the sound velocity u_s = √(γ/(m_i C_s κ_T)), with γ being the heat capacity ratio, which is of order γ ∼ 1 in a liquid. The sound velocity increases drastically when we change the solvent density in our simulations, from u_s = 3 Å/τ at the lowest density to u_s = 239 Å/τ at the highest density. The importance of compressibility effects is estimated from the Mach number Ma = √(k_B T/m_i)/u_s, where the thermal velocity √(k_B T/m_i) is used as the typical velocity of the particles. As Ma is well below 1 in all our simulations, the compressibility is not expected to have a large effect on the hydrodynamic properties [25].
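The quoted numbers are straightforward to verify; a small sketch computing the sound velocity and Mach number from the stated quantities (reduced units as in the text; the κ_T values are placeholders chosen to reproduce the quoted low-density u_s):

```python
# Sketch: sound velocity u_s = sqrt(gamma / (m * C_s * kappa_T)) and
# Mach number Ma = sqrt(kT/m) / u_s, in the reduced units of the text.
import math

kT, m, gamma = 1.0, 1.0, 1.0
cases = {           # solvent density (1/A^3) -> isothermal compressibility (A^3/kT)
    5.5e-4: 200.0,  # placeholder kappa_T reproducing u_s ~ 3 A/tau
    5.5e-3: 3.2,    # placeholder value for a higher density
}
for C_s, kappa_T in cases.items():
    u_s = math.sqrt(gamma / (m * C_s * kappa_T))
    Ma = math.sqrt(kT / m) / u_s
    print(f"C_s = {C_s:.1e}: u_s = {u_s:5.1f} A/tau, Ma = {Ma:.2f}")
```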
The stochastic force ξ in the Langevin dynamics simulations provides a truncation length beyond which ξ exceeds the force due to hydrodynamic interactions [25]. Quantification is complicated, because the truncation length depends on the magnitude of the force from which the hydrodynamic interactions originate. As the interparticle forces in the system reach very high values, however, a part of the long-ranged hydrodynamic interactions will be preserved.
Linearized mean-field theory
We derive a theoretical description of the noise spectrum of the ionic current following our previous analysis [12]. The expression for the power spectral density S(ω) is derived for monovalent ions in implicit water; the derivation therefore does not include the effect of hydrodynamic interactions. Comparison with the simulation results allows us to study the effect that the hydrodynamic interactions in the simulations have on the power spectral density. Ion-ion correlations, which are responsible for the low-frequency power-law increase of the power spectrum at high ion density [12], are also absent from this theoretical model. We consider a system consisting of a cylindrical nanopore of length L and radius R connecting two reservoirs (Fig. 1), and calculate the flux density J_±(x, t) of positive and negative ions inside the nanopore, with x denoting the position in three dimensions and t denoting the time. The ion concentrations C_±(x, t) are governed by the continuity equation,

∂C_±/∂t = −∇ · J_±.  (5)

The corresponding flux densities J_±(x, t) are given by the Nernst-Planck equation,

J_± = −D_± ∇C_± ± (e D_±/(k_B T)) C_± E + η_±,  (6)

where E(x, t) is the applied electric field, e denotes the elementary charge, and η_±(x, t) denotes the thermal noise that accounts for fluctuations in the environment, most importantly the effect of the implicit water on the ion dynamics. From here, we switch to index notation, where α, β, and γ correspond to the three components of our coordinate system. To simplify the notation, we assume D_+ = D_− = D and η_+ = η_− = η, and abbreviate μ = eD/(k_B T). After applying a standard Fourier transform to Eqs. 5 and 6, we find

J̃_±α = (−i D q_α ± μ E_α) q_β J̃_±β/ω + η̃_α,  (7)

with the tilde denoting the Fourier transform, q being the wave vector, and ω being the frequency. Rewriting Eq. 7 leads to

M_αβ J̃_±β = η̃_α,  (8)

where M_αβ denotes the matrix

M_αβ = δ_αβ + (i D q_α ∓ μ E_α) q_β/ω.  (9)

Combining Eqs. 8 and 9 and solving for J̃_±γ, we find

J̃_±γ = η̃_γ − (i D q_γ ∓ μ E_γ) q_β η̃_β/(ω det(M)),  (10)

with det(M) = 1 + (i D q² ∓ μ q_β E_β)/ω denoting the determinant of M. Within the geometry of the pore, there is one parallel (∥) direction, and two equivalent perpendicular (⊥1, ⊥2) directions, see Fig. 1(a). The electric field is nonzero only in the parallel direction, E = (0, 0, E_∥). Therefore, the flux in the parallel direction becomes

J̃_±∥ = η̃_∥ − (i D q_∥ ∓ μ E_∥)(q_∥ η̃_∥ + q_⊥1 η̃_⊥1 + q_⊥2 η̃_⊥2)/(ω + i D q² ∓ μ q_∥ E_∥),  (11)

with q_⊥1 and q_⊥2 being the two independent wave vectors in the plane of the membrane. As the random force is applied to every individual particle, the power spectrum of the thermal noise in our implicit-solvent model is proportional to the ion concentration inside the pore,

⟨η̃_α(q, ω) η̃*_β(q′, ω′)⟩ = 2 D C_V δ_αβ (2π)⁴ δ³(q − q′) δ(ω − ω′),  (12)

with C_V = N/(πR²L) being the average number of ions per unit volume in the pore, which is proportional to the bulk ion concentration C_i, but depends nontrivially on the radius R, the length L, the electric field, and the interionic interaction potential. Introducing the short-hand notation a_± = (i D q_∥ ∓ μ E_∥)/(ω + i D q² ∓ μ q_∥ E_∥), we derive from Eqs. 11-12 the spectral density of the current-density fluctuations,

S_J(q, ω) = 2 D C_V Σ_{s=±} [ |1 − a_s q_∥|² + |a_s|² (q² − q_∥²) ],  (13)

with q² = q_⊥1² + q_⊥2² + q_∥². The two-sided power spectral density S(ω) of the current I(t) defined on the domain 0 < t < T is given by the limit T → ∞ of

S(ω) = ⟨|Ĩ(ω)|²⟩/T,  (14)

which can be written as

S(ω) = (1/T) ∫₀^T dt ∫₀^T dt′ ⟨I(t) I(t′)⟩ e^{iω(t−t′)}.  (15)

We rewrite I(t) as the integral of the current density J_+(x, t) − J_−(x, t) at a given position in the direction of x over the lateral surface area A of the pore,

I(t) = e ∫_A d²x_⊥ [J_{+∥}(x, t) − J_{−∥}(x, t)].  (16)

Some mathematical manipulation yields, in the limit T → ∞,

S(ω) = e² ∫ d³q/(2π)³ |Ã(q_⊥)|² S_J(q, ω),  (17)

where the wave-vector integrals are cut off at large q by the small-scale cut-off length Λ, introduced because of the finite particle size. The Fourier-transformed area function in Eq. 17 is given by

Ã(q_⊥) = ∫_A d²x_⊥ e^{−i q_⊥ · x_⊥}.  (18)

The integral in Eq. 18 is performed over the lateral area A of the pore, which is approximately circular. However, because our cylindrical direct space does not map exactly to a cylindrical reciprocal space, we use two different approximations to calculate the integral (Fig. 1(c)).
First, integrating over a square of sides 2R gives

Ã(q_⊥1, q_⊥2) = (2 sin(q_⊥1 R)/q_⊥1)(2 sin(q_⊥2 R)/q_⊥2).   (19)

Alternatively, integrating over a circle of radius R gives

Ã(q_⊥) = 2πR J₁(q_⊥ R)/q_⊥,   (20)

with x_⊥ = √(x_⊥1² + x_⊥2²) and θ = arctan(x_⊥2/x_⊥1) being the cylindrical coordinates, q_⊥ = √(q_⊥1² + q_⊥2²) and φ = arctan(q_⊥2/q_⊥1) being the polar coordinates in reciprocal space, and J₁ being the first-order Bessel function of the first kind. The primary difference between Eqs. 19 and 20 is the amplitude of the calculated noise spectrum [12]. Contrary to the circular area, however, the square area can be mapped directly to reciprocal space, enabling a straightforward evaluation of Eq. 17. Therefore, we use Eq. 19 for all the curves in the present paper. Together with Eqs. 13 and 19, Eq. 17 is solved numerically to get the linearized mean-field prediction of S(ω). We use a fixed ion concentration C_i = 5.5 × 10⁻⁴ Å⁻³, radius R = 25 Å, and an applied electric field E_∥ = 1.6 k_B T/(e Å).
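To make the numerical procedure concrete, a minimal sketch follows; it is an illustration under stated assumptions, not the paper's exact implementation. The matrix M is assumed to be the one obtained by inserting the Fourier-transformed Nernst-Planck flux into the continuity equation, M_αβ = δ_αβ + (iD q_α ∓ μE_∥ δ_α∥) q_β/ω with mobility μ = eD/(k_B T); because this is a rank-one update of the identity, its inverse follows from the Sherman-Morrison formula. The white-noise amplitude (2DC_V per component), the grid resolution, and the overall normalisation are all illustrative choices.

```python
import numpy as np

# Parameters quoted in the text; all prefactors are illustrative assumptions.
D = 1.0         # diffusion coefficient [A^2/tau]
R = 25.0        # pore radius [A]
Lam = 2.5       # small-scale cutoff length [A]
C_V = 5.5e-4    # average ion number density in the pore [A^-3]
muE = 1.6 * D   # drift velocity mu*E_par = (e D / k_B T) E_par [A/tau]

def area_ft_square(q1, q2):
    """Eq. 19: Fourier transform of the square lateral area of side 2R."""
    return 4.0 * R * R * np.sinc(q1 * R / np.pi) * np.sinc(q2 * R / np.pi)

def psd(omega, nq=64):
    """Sketch of Eq. 17: integrate |A(q_perp)|^2 * |flux response|^2 * noise
    over the positive q octant (times 8 by symmetry once both species are summed)."""
    qmax = 2.0 * np.pi / Lam                 # cutoff from the finite particle size
    q1 = np.linspace(1e-3, qmax, nq)[:, None, None]
    q2 = np.linspace(1e-3, qmax, nq)[None, :, None]
    qp = np.linspace(1e-3, qmax, nq)[None, None, :]
    q_sq = q1 ** 2 + q2 ** 2 + qp ** 2
    noise = 2.0 * D * C_V                    # assumed white-noise amplitude
    resp = 0.0
    for s in (+1.0, -1.0):                   # positive and negative ions
        den = omega + 1j * D * q_sq - s * muE * qp
        v = 1j * D * qp - s * muE            # parallel component of v_alpha
        # Sherman-Morrison row of M^{-1}: T_beta = delta_{par,beta} - v q_beta / den
        resp = resp + (np.abs(1.0 - v * qp / den) ** 2
                       + np.abs(v * q1 / den) ** 2
                       + np.abs(v * q2 / den) ** 2)
    integrand = area_ft_square(q1, q2) ** 2 * noise * resp
    return 8.0 * float(integrand.sum()) * (qmax / nq) ** 3 / (2.0 * np.pi) ** 3

# Example: sweep frequencies to trace out the shape of the spectrum.
omegas = np.logspace(-3, 1, 20)
spectrum = [psd(w) for w in omegas]
```

Only the ratios of the drift term, the diffusion coefficient, and the cutoff affect the shape of the resulting curve; absolute amplitudes depend on the assumed prefactors.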
Results and discussion
We calculate the power spectral density of the ion current in simulations with three different solvent densities C s (Fig. 3). At low solvent density, the curves exhibit a transition around ω = 0.1/τ , similar to the implicit solvent case [12]. With increasing solvent density, the transition becomes less pronounced due to an increase in the high-frequency noise level. Surprisingly, the increasing solvent density does not induce any alteration of the power spectral density at low frequency, even for a tenfold increase in solvent density.
We verify the effect of increasing ion concentration on the power spectral density in the presence of hydrodynamic interactions (Fig. 4). The amplitude of the noise increases with increasing ion concentration, and the transition frequency shifts to slightly lower values. Most striking, however, is the change of the behavior at low frequency. The power spectral density exhibits a power law, with an exponent that increases sharply with increasing ion concentration. These results are similar to the results found in simulations with implicit solvent [12]. However, as the curves in Fig. 4 extend to higher ion concentrations than treated previously, the new results show that the increase in the exponent of the power law continues, reaching a = 0.4 at an ion concentration of C_i = 2 × 10⁻³ Å⁻³.

To test the effect of the hydrodynamic interactions, we fit the linearized mean-field theory, which does not take hydrodynamic interactions into account, to the curves in Fig. 4. Apart from the low-frequency power-law dependence, which is caused by ion-ion correlations [12], the simulated curves are well described by the implicit-solvent model. Remarkably, it is not necessary to take hydrodynamic interactions into account to describe the power spectrum of the ionic current through an electrolyte-filled pore.

[Fig. 4 caption: The solid colored lines show the simulation results, and the shaded regions represent the standard deviation obtained when applying a block-averaging method to improve the readability. The dashed lines represent the fits derived from the linearized mean-field theory (Eqs. 13, 17 and 19), with parameters taken from the simulations: applied electric field E_∥ = 1.6 k_B T/(e Å), pore radius R = 25 Å, diffusion coefficient D = 1 Å²/τ, and small-scale cutoff length Λ = 2.5 Å, of the order of the ion size, for all curves. The solid black lines indicate the fits with S ∼ 1/ω^a.]
At low frequency, we fit the exponent a of the power law S(ω) ∼ ω⁻ᵃ for the curves shown in Figs. 3 and 4. We fit the noise spectra for log₁₀ ω < −1.8 and discard the lowest-frequency data points because of their statistical uncertainty. The exponent is shown in Fig. 5 as a function of ion concentration C_i at fixed solvent concentration (top panel) and as a function of solvent concentration C_s at fixed ion concentration (bottom panel). Whereas the exponent increases sharply as a function of the ion density, increasing the solvent concentration has no effect. Because the charge is the only difference between an ion and a solvent particle, we conclude that electrostatic interactions cause the increasing exponent. Hydrodynamic interactions, despite having a similarly long-ranged spatial dependence, do not have the same effect.
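A minimal sketch of this fitting step is shown below; the array names and the number of discarded low-frequency points are illustrative choices.

```python
import numpy as np

def fit_low_freq_exponent(omega, S, log10_cut=-1.8, n_discard=2):
    """Fit S(omega) ~ omega^{-a} on the low-frequency part of a spectrum."""
    keep = np.log10(omega) < log10_cut
    lw = np.log10(omega[keep][n_discard:])   # drop the most uncertain points
    ls = np.log10(S[keep][n_discard:])
    slope, _ = np.polyfit(lw, ls, 1)         # straight line in log-log space
    return -slope                            # exponent a
```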
We perform an extra simulation without solvent particles (C s = 0), and compare the power spectra of simulations with and without explicit solvent directly in Fig. 6. Clearly, the curves have the same frequency dependence over the entire frequency range, confirming the results of the preceding sections.
Finally, we study the dependence of the power spectral density on the pore radius R and the applied electric field E_∥. In Fig. 7, we show that the linearized mean-field theory, derived for implicit solvent, captures the dependence on the pore radius and the electric field without further fit parameters, for all values of R and E_∥ studied.

[Fig. 6 caption: The power spectral density S(ω) of the current through a pore (R = 25 Å) filled with only ions (C_i = 5.5 × 10⁻⁴ Å⁻³, implicit solvent) and with ions and explicit solvent (C_i = C_s = 5.5 × 10⁻⁴ Å⁻³). The electric field is set to E_∥ = 1.6 k_B T/(e Å).]
Summary and conclusions
We present a systematic numerical investigation of the effects of hydrodynamic and electrostatic interactions on the power spectral density of ionic currents in nanopores using an explicit coarse-grained solvent. We find that an increase in ion concentration at fixed solvent density leads to a power-law behavior at low frequency, with an exponent increasing with ion density. The power-law frequency dependence of the power spectrum is in line with our previous findings in simulations with implicit water, where the nonzero exponent was shown to be caused by ion-ion correlations. The exponent reaches a = 0.4 at an ion concentration of C_i = 2 × 10⁻³ Å⁻³. Hydrodynamic interactions influence the power spectral density at high frequency. In particular, the transition in the power spectral density becomes less pronounced with increasing solvent density. At low frequency, however, the hydrodynamic interactions have no effect, which is surprising in view of the large influence of hydrodynamic interactions on the dynamics of colloids and polymers under confinement. Note, however, that the solvent used in the present study has a higher compressibility than water, and that the Langevin noise provides a truncation distance, which might influence the hydrodynamic interactions. The linearized mean-field theory without hydrodynamics, which was derived in our previous work [12], can be used to describe simulation results with hydrodynamic interactions equally well. Instead, inclusion of electrostatic ion-ion correlations is paramount to describe the low-frequency power-law behavior as a function of ion density. Although a direct comparison with experimental results is not yet feasible, we show that the combination of simulations and analytical work provides a promising framework for the systematic investigation of experimental noise spectra.

D.J.B. acknowledges funding from the Glasstone Benefaction and Linacre College, Oxford. R.G. would like to acknowledge support from HFSP (RGP0061/2013).
"Physics"
] |
A DEEP LEARNING ARCHITECTURE FOR BATCH-MODE FULLY AUTOMATED FIELD BOUNDARY DETECTION
: The accurate split of large areas of land into discrete fields is a crucial step for several agriculture-related remote sensing pipelines. This work aims to fully automate this tedious and resource-demanding process using a state-of-the-art deep learning algorithm with only a single Sentinel-2 image as input. Mask R-CNN, which has forged its success upon instance segmentation of objects from everyday life, is adapted to the field boundary detection problem. Such a model automatically generates closed geometries without any heavy post-processing. When tested with satellite imagery from Denmark, this tailored model correctly predicts field boundaries with an overall accuracy of 0.79. Besides, it demonstrates robust knowledge generalisation, with positive results over different geographies: it reaches an overall accuracy of 0.71 when used over areas in France.
INTRODUCTION
An accurate knowledge of field boundaries is a requirement for many actors in agriculture. Amongst many applications, it is a prerequisite input for farmers to on-board fields on farm management software services, it improves the accuracy of crop type classification (Peña-Barragán et al., 2011; De Wit, Clevers, 2004), and it is used by government agencies to monitor subsidies and farming practices.
Typically, the collection of these geographical data is obtained by manual labelling of aerial or satellite imagery. This slow, repetitive, and error-prone acquisition hinders scalability. It prevents the batch-mode boundary delineation in large areas. Consequently, the scientific community has been exploring solutions to accurately and reliably generate field boundaries in a large-scale manner, without intensive user involvement.
The first challenge in the attempt to automate field boundary detection is the inherent subjectivity of their definition. For example, the Land Parcel Identification System (LPIS) 1 lists four different parcel types, with corresponding types of field boundaries. Any automated approach is therefore tied to a more or less arbitrary choice of the field boundary definition that it follows.
Despite the definition limitations, field boundary detection has been investigated for several decades. Early automated field boundary detection techniques relied on some form of edge detection through the use of traditional computer vision (Rydberg, Borgefors, 2001; Yan, Roy, 2014). Lately, this domain has benefited from the proliferation of deep learning (Waldner, Diakogiannis, 2019). In spite of the increased accuracy, these techniques still suffer from challenges in generating a single closed polygon for each field (instead of incomplete and noisy curves), large computational cost, and lack of generalisation. As a matter of fact, the sparse collections of automatically-generated field boundary sets are often limited to a single geography, and involve post-processing to remove omission and commission errors.
This work introduces the first step towards the systematic processing of satellite imagery for batch-mode field boundary detection. This is mainly achieved by transferring the state-of-the-art instance segmentation algorithm Mask R-CNN to this domain of knowledge (He et al., 2017). This task requires the careful tuning of the architecture hyper-parameters as well as adjustments and modifications that increase its accuracy. Additionally, a novel tailored measure for field boundary detection evaluation is suggested. The experimental setup for such an approach includes large volumes of data from multiple geographies. Please note that Rydberg and Borgefors's field boundary definition is followed in this work (Rydberg, Borgefors, 2001), which defines field boundaries as changes of crop types or discontinuities of natural features.
The rest of the article is structured as follows. Section 2 gives an overview of the existing approaches to accurately delineate field boundaries. Subsequently, section 3 discusses in detail the suggested architecture, before experimental results confirm the validity of this approach (section 4). Section 5 concludes this work.
RELATED WORK
The rich literature of field boundary detection algorithms can be broadly categorised into traditional computer vision techniques and machine learning approaches.
The first algorithms investigating field boundary detection were typically built upon some form of edge detection. The main hypothesis is that the transition between fields is characterised by sharp changes in pixel values (Ji, 1996; Rydberg, Borgefors, 2001); hence, field boundaries would be a subset of image edges. Edge detection commonly involves the computation of multi-directional gradients through kernel convolutions (North et al., 2019; Graesser, Ramankutty, 2017). Identifying image edges that actually represent field boundaries requires the use of region-based knowledge (Yan, Roy, 2014). For instance, North et al. compute the standard deviation for each pixel within a window moving across the image channel (North et al., 2019). Similarly, Graesser and Ramankutty consider small tiles from a satellite image to normalise the gradients locally. For each tile, an adaptive threshold is set to extract the boundaries (Graesser, Ramankutty, 2017).
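As a rough, self-contained illustration of these classical ingredients, and not a reproduction of any cited method, the sketch below combines multi-directional gradients with a moving-window standard deviation and a simple global threshold; the window size and threshold are placeholder values.

```python
import numpy as np
from scipy import ndimage

def classical_edge_candidates(band, win=7, thresh=0.2):
    """Gradient magnitude weighted by local std-dev as crude boundary evidence."""
    gx = ndimage.sobel(band, axis=0, output=float)    # multi-directional gradients
    gy = ndimage.sobel(band, axis=1, output=float)
    grad = np.hypot(gx, gy)
    # Region-based knowledge: std-dev within a moving window (cf. North et al.)
    mean = ndimage.uniform_filter(band, win)
    mean_sq = ndimage.uniform_filter(band * band, win)
    local_std = np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))
    score = grad * local_std
    return score > thresh * score.max()               # simple threshold on evidence
```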
Both convolution operations and region-based information, which have been widely used as part of edge detection techniques, are a common feature of deep learning architectures. The application of deep learning to various remote sensing topics has already proved to be successful (Zhu et al., 2017). Lately, deep learning techniques (or hybrid techniques combining deep learning with edge detection) for field boundary detection have been published. Crommelinck et al. introduced a hybrid method (Crommelinck et al., 2019) in which candidate pixels are identified before a convolutional neural network classifies tiles, centered on these candidates, that actually contain boundaries.
Semantic segmentation approaches, in which each pixel is assigned a label depending on whether or not it belongs to a boundary, offer better granularity and may be used directly without an edge detection stage (Masoud et al., 2020). Recently, U-Net (Ronneberger et al., 2015) architectures have become popular in field boundary detection. García-Pedrero et al. employ a U-Net architecture to segment images into three classes: field, buffered boundary, and background. The boundaries are then computed from the contour of the first class (García-Pedrero et al., 2019). Likewise, Waldner and Diakogiannis adapt a U-Net model to generate not only segmented images, but also predictions of distances to the field boundaries (Waldner, Diakogiannis, 2019). In a post-processing step, they use a watershed algorithm to increase the accuracy (Beucher, Meyer, 1990).
Post-processing semantic segmentation results using computer vision (commonly, some geometric rules or watershed algorithms) is not uncommon in the relevant literature, because most of the introduced architectures generate intermediate results that require merging sub-fields into one field, splitting larger areas into fields, or both. This process hinders the scalability of such solutions, since it often requires tuning ad-hoc parameters on a case-by-case basis. Besides, accurate post-processing is far from trivial, especially if the intermediate output presents a disconnected set of predicted boundary pixels (i.e. non-adjacent pixels).
Moreover, many techniques make use of time-series images (North et al., 2019; Graesser, Ramankutty, 2017). However, working with time series of satellite imagery introduces the significant challenge of cloud coverage. A fully automated time-series-based pipeline should include a module for identifying and removing clouds, as well as replacing their values, commonly through interpolation. This additional complexity has been found not to lead to a substantially increased accuracy (Waldner, Diakogiannis, 2019), while also impeding the solution's scalability.
Finally, some works have recently made use of very high resolution data acquired from unmanned aerial vehicles (Persello, Bruzzone, 2009; Crommelinck et al., 2019). It is straightforward that a resolution hundreds of times finer than the one achieved from satellites would increase the potential of field boundary detection techniques. However, this comes with a cost in availability and operations, which makes such approaches suitable only for small-scale applications. In this work we present a technique which is envisaged as the first step towards a systematic field boundary detection pipeline. This technique, designed to remove challenges related to scalability as well as to reduce the ad-hoc parameters that require manual tuning, is described in detail in the next section. Fig. 1 presents the workflow of the introduced technique, which is based on Mask R-CNN (He et al., 2017). Mask R-CNN is a deep learning architecture which has recently achieved exceptional performance in several instance segmentation setups. However, Mask R-CNN has been introduced in a totally different context, using images that are not relevant to Earth observation (e.g. the COCO dataset (He et al., 2017), which aggregates a large amount of common objects from everyday scenes (Lin et al., 2014)). Our hypothesis is that transferring this technique to field boundary detection requires a number of adjustments in several parts of the pipeline. In the rest of the section, we describe the model architecture that needs to be implemented with bespoke adjustments for the field boundary detection problem.
Data curation and pre-processing
The large-scale labelled dataset required for the training of our model can be sourced from several existing agricultural parcel registers, which are commonly maintained by governmental agencies in the form of annual records. However, these data have limited accuracy and would have an adverse effect on the algorithm's prediction quality if used without denoising. Existing problems include (1) erroneous entries caused by inaccurate semi-automatic approaches used for their creation, (2) differences between the field boundary definition used by us and by the dataset, and (3) corrections made along the year to the initial field geometry, which cause overlapping field boundaries or duplicate instances.
As a result, the first step of the training is cleaning the dataset. Apart from trimming off small parcels, irrelevant geometries are removed using the Schwartzberg compactness score (Schwartzberg, 1965). More specifically, after enforcing non-overlapping fields, we discard any entry that is smaller than 1.5 ha or whose Schwartzberg compactness score is lower than 0.15. The Schwartzberg compactness score expresses the ratio between the perimeter P of the field and the circumference 2√(πA) of a circle that would have the same area A (Eq. 1).
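A minimal sketch of this filtering step follows, with one caveat: as stated, the ratio P/(2√(πA)) is at least 1 for any shape, so a discard threshold of 0.15 only makes sense for the reciprocal form 2√(πA)/P, which equals 1 for a circle and approaches 0 for very elongated shapes. The sketch therefore uses the reciprocal, and the shapely-based helper names are illustrative.

```python
import math
from shapely.geometry import Polygon

def compactness(poly: Polygon) -> float:
    """Reciprocal Schwartzberg score: 1 for a circle, -> 0 for elongated shapes."""
    return 2.0 * math.sqrt(math.pi * poly.area) / poly.length

def keep_parcel(poly: Polygon, min_area_ha=1.5, min_compactness=0.15) -> bool:
    """Keep entries of at least 1.5 ha that are compact enough."""
    area_ha = poly.area / 10_000.0          # assumes a metric CRS (m^2 -> ha)
    return area_ha >= min_area_ha and compactness(poly) >= min_compactness
```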
Subsequently, the labelled dataset is created by matching the ground truth with available satellite imagery. In this work Sentinel-2 is used, but it should be noted that the proposed method is satellite-agnostic, with the only limitation being that the satellite includes a Near-Infrared (NIR) band. The NIR band is used to generate a 4-band input. The first three bands are the red, green, and blue of the true color image. The fourth band corresponds to the NDVI computed from the red and near-infrared bands (Eq. 2: NDVI = (NIR − Red)/(NIR + Red)). NDVI is used because of its high information value in agriculture applications, as well as its non-linearity. Being a non-linear combination of two spectral bands, it brings additional information that a deep learning network could struggle to learn from the bands separately.
In a second step, each band is standardised by subtracting the mean value and dividing by the standard deviation, before the satellite imagery is split into 256 × 256 pixel tiles. Each tile is matched to the ground truth, and the pre-processing ends with the generation of corresponding binary masks. Figure 1b illustrates the outcome of these pre-processing steps.
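A minimal sketch of this pre-processing chain is given below; the helper name, the scene-wide statistics, and the non-overlapping tiling are illustrative assumptions.

```python
import numpy as np

def build_input(red, green, blue, nir, tile=256):
    """Stack R, G, B and NDVI, standardise each band, and cut 256x256 tiles."""
    ndvi = (nir - red) / np.clip(nir + red, 1e-6, None)      # Eq. 2
    stack = np.stack([red, green, blue, ndvi], axis=-1).astype(np.float32)
    mean = stack.reshape(-1, 4).mean(axis=0)
    std = stack.reshape(-1, 4).std(axis=0) + 1e-8
    stack = (stack - mean) / std                             # per-band standardisation
    h, w, _ = stack.shape
    return [stack[i:i + tile, j:j + tile]
            for i in range(0, h - tile + 1, tile)
            for j in range(0, w - tile + 1, tile)]
```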
Mask R-CNN
This section summarises the architecture of Mask R-CNN. For more information, the reader is referred to the original publication (He et al., 2017).
In general, Mask R-CNN is a model that is designed to identify and classify areas of interest that belong to one or more object classes (Fig. 1c).
In Mask R-CNN, the backbone, which is based on traditional convolutional neural networks like ResNet (He et al., 2016), generates features from the image. These features, computed through convolutional operations, may be understood as primitive representations of visual concepts, like shapes or edges. At the end of the backbone lie two parallel branches. The first branch, called the region proposal network (RPN), draws areas of interest that may contain a relevant object, in our case a field. These areas of interest take rectangular shapes whose dimensions are set as parameters of the model. The second branch extracts the aforementioned features within these candidate areas. The selected features are then passed to the heads stage.
The heads stage, which consists of fully convolutional networks smaller than the backbone, refines and classifies each area of interest. This stage generates binary masks for each possible object class. In the case of field boundary detection, only one object class is examined (fields); therefore, this single segmentation mask defines the estimated field boundaries.
All parts of the network are trained together through backpropagation (He et al., 2017). The loss of the model is a linear combination of three losses. These quantify (a) classification accuracy, i.e. assigning the correct class to an object, where the class set also includes the null class (corresponding to the background); (b) segmentation accuracy, i.e. identifying the correct contour of an object; and (c) instantiation accuracy, i.e. estimating the correct bounding box framing each object in an image.
In this work, we use the Matterport Mask R-CNN implementation (Abdulla, 2017), with a ResNet 101 as backbone.
Model Adjustment for Field Boundary
The default implementation of Mask R-CNN has been developed for use cases that significantly differ from field boundary detection using satellite imagery. In order to adjust Mask R-CNN for field boundary detection we have carefully re-examined the tuning of its hyperparameters. Two main issues have been found with the default Mask R-CNN.
Firstly, the number of areas of interest generated from the RPN network is too low for field boundary detection. The default value of 100 areas is smaller than the number of fields appearing in many images of field boundary detection datasets. Therefore, we have increased this value to 200 areas of interest.
Secondly, and perhaps most importantly, fields exhibit a large variation of sizes and shapes. For example, pedestrians in a surveillance setup (a typical use of Mask R-CNN) are expected to have medium variation in their size and even smaller variation in their shape. On the other hand, (a) satellite images may include fields that vary from 1.5 ha to hundreds of ha, and (b) the range of field shapes is even larger, since it includes very elongated rectangles, square fields, circular fields, multi-line polygons, etc. Therefore, (a) we have modified the possible side sizes of candidate regions to 8, 16, 32, 64, or 128 pixels (the default set is {32, 64, 128, 256, 512}), and (b) we have augmented the ratios between width and height of the bounding boxes to {0.1, 0.5, 1, 2, 4} (the default set is {0.5, 1, 2}).
The training is achieved with batches of 4 images using an NVIDIA TITAN RTX GPU with 24GB of dedicated memory.
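With the Matterport implementation, these adjustments amount to overriding a handful of configuration attributes, as sketched below. This is indicative rather than the authors' exact configuration: in particular, which attribute caps the number of areas of interest (here the ground-truth and detection instance limits) is an assumption, and the 4-band input additionally requires a matching MEAN_PIXEL.

```python
from mrcnn.config import Config

class FieldBoundaryConfig(Config):
    NAME = "field_boundary"
    NUM_CLASSES = 1 + 1                  # background + field
    IMAGES_PER_GPU = 4                   # batches of 4 images
    IMAGE_MIN_DIM = IMAGE_MAX_DIM = 256  # 256 x 256 tiles
    IMAGE_CHANNEL_COUNT = 4              # R, G, B, NDVI
    MEAN_PIXEL = [0.0, 0.0, 0.0, 0.0]    # bands already standardised
    # Smaller anchors for fields ranging from 1.5 ha to hundreds of ha
    RPN_ANCHOR_SCALES = (8, 16, 32, 64, 128)   # default: (32, ..., 512)
    # Wider aspect-ratio range for very elongated parcels
    RPN_ANCHOR_RATIOS = [0.1, 0.5, 1, 2, 4]    # default: [0.5, 1, 2]
    # Allow up to 200 instances per image (assumed caps, defaults are 100)
    MAX_GT_INSTANCES = 200
    DETECTION_MAX_INSTANCES = 200

config = FieldBoundaryConfig()
```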
Post-processing
As already mentioned, one of the benefits of this architecture is that it requires only trivial post-processing. By default the architecture produces closed polygon masks within the bounding box of the predictions, which can be straightforwardly used to extract each field boundary by estimating the contour of the output mask. The bottom-right panel of Fig. 1d shows an example of the final output. Vectorised polygons can also be obtained by reprojection of the predicted geometries.
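A sketch of this contour extraction using scikit-image and shapely (an illustrative choice of libraries) is shown below; the mapping from pixel coordinates back to geographic coordinates is omitted.

```python
import numpy as np
from skimage import measure
from shapely.geometry import Polygon

def masks_to_polygons(masks: np.ndarray):
    """Turn each predicted binary mask (H x W x N) into a closed polygon."""
    polygons = []
    for k in range(masks.shape[-1]):
        contours = measure.find_contours(masks[..., k].astype(float), 0.5)
        if not contours:
            continue
        largest = max(contours, key=len)            # outer field contour
        if len(largest) >= 3:
            polygons.append(Polygon(largest[:, ::-1]))  # (row, col) -> (x, y)
    return polygons
```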
EVALUATION
In this section the validity of our main assumptions is tested. Apart from evaluating the accuracy of the adjusted pipeline, comparing it with the default one, a second goal is to examine its generalisation capability. A technique that aims to systematise field boundary detection should be transferable across different geographies. For that reason, we have included in our dataset two agricultural areas, from Denmark and France, respectively.
After describing the study areas and the associated datasets, we present the different measures used to assess the prediction accuracy. This includes adapting the precision and recall measures to this problem. Subsequently, we conduct the core evaluation separately for each area. Finally, we examine the generalisation capability of the model by evaluating its performance over an area different from the one it was trained on.
Study Area and Materials
Within the LPIS framework, most member states of the European Union make publicly available datasets of agricultural parcels, which inform on their crop types and geometries. This source has been used before in the relevant literature (García-Pedrero et al., 2019). This work also uses it, more specifically the French and Danish datasets for the years 2018 2 and 2019 3 , respectively. In the case of Denmark most of the dataset was used, while for France the data were reduced to areas with high agricultural production. The total ground truth set consisted of 250,126 fields in Denmark and 395,969 fields in France.
As explained in the previous section, labelling is conducted through Sentinel-2 imagery, which was downloaded from the Copernicus Access Hub 4 . Apart from selecting imagery of the relevant year, a global cloud coverage lower than 1% was imposed to reduce cloud artefacts. Additionally, while we select only one satellite scene to cover each area in the dataset, the scenes span a large period (Fig. 2) in order to provide the dataset with a richer variety of field aspects.
These criteria result in a selection of 4 Sentinel-2 images over Denmark and 3 over France. Each image is 10980 × 10980 pixels at 10 m resolution. Figure 2 gives the zone and dates of the selected satellite imagery. Following the pre-processing described in section 3.1, we generate for each country a dataset of images of shape 256 × 256 × 4 and their corresponding ground truth. The data were finally split into training, test, and validation sets accounting for 80%, 10%, and 10% of the images, respectively. For this configuration, using an NVIDIA TITAN RTX GPU with 24GB of dedicated memory, training over 10 epochs takes about 3.5 hours, while inference takes at most a few seconds per image.
Evaluation Measures
The accuracy of the predictions is assessed via several metrics introduced in (Persello, Bruzzone, 2009). The overall accuracy gives an estimation of how well the pixels are classified. It is computed following equation 3, OA = (TPpx + TNpx)/(TPpx + TNpx + FPpx + FNpx), where TPpx, TNpx, FPpx, and FNpx are respectively the true positive, true negative, false positive, and false negative rates for the pixel-wise classification (boundary or background).
It is important to highlight that this measurement does not convey how well the boundary is outlined, since (a) it punishes a false positive close to the boundary equally with a false positive in the center of the field, and (b) it punishes a false negative in a small field, which may reduce how distinct the field is, equally with a false negative in a larger field, where it has a small effect on the result quality.
For this reason, we suggest a new measure, which redefines the concept of true or false positives and negatives. In this measure, for each field of the ground truth Fgt, the predicted field Fp that has the biggest overlap is identified. The pixels issued from this intersection are counted as true positives. The remaining pixels of Fgt are counted as false negatives, because they have not been detected as belonging to Fp (even if they overlap with another field F'p in the predicted mask). The remaining pixels of Fp are counted as false positives, because they do not fall within Fgt. If the list of ground truth fields is exhausted, all remaining pixels in the predicted fields are also counted as false positives. Conversely, if the list of predicted fields is exhausted first, all remaining pixels in the ground truth are counted as false negatives.
This algorithm estimates the TP_f, FP_f, and FN_f rates, from which recall, precision, and f1-score can be defined. The f1-score is defined by equation 4 as the harmonic mean of precision and recall, f1 = 2 · precision · recall / (precision + recall).
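As an illustration, a sketch implementing this matching on label images (one integer id per field, 0 for background) follows; the greedy, largest-overlap-first pairing used to enforce the one-to-one correspondence is an assumption about details the text leaves open.

```python
import numpy as np

def field_matching_counts(gt: np.ndarray, pred: np.ndarray):
    """TP_f, FP_f, FN_f from one-to-one best-overlap matching of field masks."""
    tp = fp = fn = 0
    used = set()
    for g in np.unique(gt[gt > 0]):
        g_mask = gt == g
        overlaps = pred[g_mask]
        ids, counts = np.unique(overlaps[overlaps > 0], return_counts=True)
        order = np.argsort(-counts)                 # biggest overlap first
        match = next((ids[i] for i in order if ids[i] not in used), None)
        if match is None:
            fn += int(g_mask.sum())                 # ground-truth field missed
            continue
        used.add(match)
        inter = int((g_mask & (pred == match)).sum())
        tp += inter                                 # intersection pixels
        fn += int(g_mask.sum()) - inter             # rest of the GT field
        fp += int((pred == match).sum()) - inter    # rest of the predicted field
    # Predicted fields never matched count entirely as false positives
    for p in np.unique(pred[pred > 0]):
        if p not in used:
            fp += int((pred == p).sum())
    return tp, fp, fn
```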
In general, this definition is more strict than the commonly used accuracy measures, since it requires a one-to-one correspondence. To gain insight into the distinct types of field-specific errors (over-segmentation and under-segmentation errors), we also compute the fragmentation error (e_fg) and the under-segmentation error (e_us), defined respectively by equations 5 and 6. GT and P represent the sets of ground truth and predicted masks, respectively; |·| denotes the cardinality of a set, while A denotes the area in pixels of a given mask. F_p* corresponds to the prediction mask that has the largest overlap with a given ground truth.
The fragmentation error is preferred over the over-segmentation error defined in (Persello, Bruzzone, 2009) as it accounts for all overlapping fields.
Validation of the Tailored Configuration of the Mask R-CNN
In order to verify that the tailored adjustments made to the default implementation (section 3.3) actually improve the predictions of field boundaries, we compare the predictions of the two configurations on the Danish and French datasets. Figure 3 shows the evolution of the loss for the training and test datasets for both geographies. The test loss seems to converge with no evidence of over-fitting. The lower values of the loss indicate that the adjustments suggested in this work improve the accuracy of Mask R-CNN in a field boundary detection context.
Moreover, table 1 confirms the superiority of the suggested configuration in the prediction of field boundaries. For the Danish dataset all accuracy measurements are improved with the bespoke configuration; the f1-score is raised by 7.3 percent. For the French dataset, there is a decrease in the precision value with a corresponding improvement of the recall, which results in a 5.2 percent increase of the f1-score. It is worth noting that for both Mask R-CNN variations the fragmentation error remains almost zero. This may be a result of the Mask R-CNN architecture, which typically merges adjacent areas of interest that belong to the same class. Moreover, over-segmentation only occurs for ground truth fields of significantly large area. For such fields, the model might predict a few instances whose number is nevertheless negligible compared to the size of the reference field, hence a steadily low fragmentation error.
Cross Area Prediction
The bespoke Mask R-CNN has shown evidence of improved predictions in the same geography where it has been trained. Further evidence for the last point is given by the surprising result of increased overall accuracy of the model transferred from Denmark to France in comparison to the model trained on the French dataset itself. This counter-intuitive result may be partially explained by the fact that the French dataset was noisier than the Danish one. Fig. 4j shows a few parcels from the French dataset that clearly appear to be fields (correctly predicted by the model), while they are not labelled in the ground truth. This is promising for the additional use of such a technique for identifying errors in commonly available large-scale datasets. Besides, the French dataset contains many more fields than the Danish one. This greater number of fields may present a diversity that makes it more difficult for the model to learn. A larger number of epochs might be necessary for the model to train on the French dataset in order to achieve performances similar to those on the Danish dataset, for which generalisation seems easier.
Finally, figure 4 provides prediction examples made with the bespoke Mask R-CNN trained with the Danish dataset. It shows that most of the fields are detected and fairly outlined. Importantly, urban areas, water bodies, and to a lesser extent forests are correctly ignored by the model. Hence, one can imagine predicting a whole satellite image with the model without any processing required to remove non-agricultural areas.

Table 2. Prediction accuracy assessment of Mask R-CNN for one geography when the model is trained with data from another area.
CONCLUSION AND FUTURE WORK
The major contribution of the present article is the introduction of a new pipeline based on Mask R-CNN for the delineation of field boundaries over large areas. A tailored version of this instance segmentation model has shown good accuracy over Danish and French regions. Trained with a larger and richer dataset, it could enable the full automation of agricultural parcel delineation for further applications such as crop type classification. More modifications to the core architecture as well as to the pre-processing stage could further improve the pipeline performance.
"Computer Science",
"Environmental Science"
] |
Collecting and utilising crowdsourced data for numerical weather prediction: Propositions from the meeting held in Copenhagen, 4–5 December 2018
In December 2018, the Danish Meteorological Institute organised an international meeting on the subject of crowdsourced data in numerical weather prediction (NWP) and weather forecasting. The meeting, spanning 2 days, gathered experts on crowdsourced data from meteorological institutes and universities in Europe and the United States. Scientific presentations highlighted a vast array of possibilities and progress being made globally. Subjects included data from vehicles, smartphones, and private weather stations. Two groups were created to discuss open questions regarding the collection and use of crowdsourced data from different observing platforms. Common challenges were identified and potential solutions were discussed. While most of the work presented was preliminary, the results shared suggested that crowdsourced observations have the potential to enhance NWP. A common platform for sharing expertise, data, and results would help crowdsourced data realise this potential.
| INTRODUCTION
Within the atmospheric sciences, "crowdsourced" data is a relatively new term. While the term crowdsourcing was initially defined by Howe (2006) as outsourcing an act to the general public, this definition is no longer restricted to traditional tasks being outsourced. Today, crowdsourcing is more than outsourcing data collection to the general public. Instead, crowdsourcing embraces new data sources, data storage, quality control and utilisation, which requires standard methods and a common terminology.
Direct and indirect observations from non-conventional sources are being investigated for use in the atmospheric sciences. Examples of data sources include Personal Weather Stations (PWSs) (Bell et al., 2013, 2015; Clark et al., 2018), smartphones (Kim et al., 2015; McNicholas and Mass, 2018; Price et al., 2018; Hintz et al., 2019), vehicles (Anderson et al., 2012; Mahoney and O'Sullivan, 2013) and communication networks (Zinevich et al., 2009). Muller et al. (2015) provided a comprehensive review of "crowdsourcing" efforts in the atmospheric sciences. Since this review was published, new advancements have been made with crowdsourced datasets. Some of the most recent advancements include the collection and quality control of atmospheric pressure observations from smartphones (Kim et al., 2015, 2016; Madaus and Mass, 2017; McNicholas and Mass, 2018; Price et al., 2018; Hintz et al., 2019). Kim et al. (2016) were the first to apply machine-learning methods to bias-correct smartphone pressure observations (SPOs). McNicholas and Mass (2018) demonstrated an efficient machine-learning approach to SPO bias correction that benefited from non-meteorological smartphone sensor data. Clark et al. (2018) examined the use of PWSs and made considerable progress in the quality control and use of such data. Examples of successful assimilation of such observations into operational Numerical Weather Prediction (NWP) models are currently few and far between. The integration of observations from PWSs into the NOAA Meteorological Assimilation Data Ingest System (NCEP, 2019) dataset is an early example. Also, in the U.S., the utility of PWSs has been of increasing interest for forecasts of severe convection (Madaus et al., 2014; Carlaw et al., 2015; Sobash and Stensrud, 2015; Gasperoni et al., 2018).
A meeting on the use of crowdsourced data in NWP and weather forecasting was held in Copenhagen on 4-5 December 2018 at the Danish Meteorological Institute (DMI), with two main purposes: first, to gather experts within the topic of crowdsourcing and create a network of people working on the subject, and second, to discuss common issues encountered with crowdsourced data and how these can be addressed. Researchers from both universities and meteorological institutes attended the meeting, whose experience spanned a variety of subjects, including SPOs, PWSs, vehicular data, and citizen weather reports. The first day was allocated to presentations from the participants, followed by plenary discussion. The second day was allocated to discussions, starting with a sketch of ongoing activities at institutions and universities. Two working groups were created that reviewed current research topics for various data sources and data formats. The purpose of this article is to document the propositions and recommendations from the meeting and to inform peers of ongoing activities.
| SCIENTIFIC PRESENTATIONS
C. McNicholas (University of Washington) discussed how measurements of atmospheric pressure could be efficiently retrieved from smartphones and subsequently bias-corrected. Results from a testbed Android app, uWx, revealed that inaccuracies in smartphone location and sensor internal filtering contributed to poor data quality. Correcting these issues facilitated the retrieval of pressure change without the need for post-processing or quality control. Using a machine-learning approach, smartphone pressures were bias-corrected to account for large uncertainties in smartphone elevation (McNicholas and Mass, 2018). For each smartphone, a random forest was trained on auxiliary sensor/GPS data to predict and correct pressure errors. On average, bias correction reduced pressure errors by ~80%. During post-processing, fewer than 20% of smartphone pressures were discarded. In a real-world case study, bias-corrected smartphone pressures improved analyses and 1-hour forecasts of altimeter setting, 2-minute temperature, and 2-minute dewpoint.
K. S. Hintz (DMI) first presented a study on wind measurements from smartphones, in which the surface roughness length was estimated from the measured horizontal turbulence. In another work, more than 6 million SPOs were collected over 7 weeks through a software development kit installed in a third-party mobile app. These observations were quality-controlled and assimilated with 3D-Var in the DMI HARMONIE NWP system (Yang et al., 2017). A decrease of bias and no change in root mean squared error were found for a simulation period of nearly 2 months. Examples showing that raw observations can depict current weather were given.
X. Yang (DMI) presented the construction idea behind the operational COntinuous Mesoscale Ensemble Prediction System (COMEPS) (Yang et al., 2017a, 2017b) at DMI, used for routine weather forecasts, which generates a 2.5 km grid resolution, 25-member, Rapid Update Cycle (RUC) like EPS forecast with an hourly update using time lagging. Currently, a prototype ensemble nowcasting system applying the COMEPS approach is in development, targeting sub-hourly cycling of the high-resolution nowcasting system and assimilating high-frequency observation data such as radar and crowdsourced data. One of the novel system components in COMEPS is the time-lagged 3D-Var analysis on overlapping observation windows, which appears especially beneficial in nowcasting applications with variational data assimilation, as the setup has better potential to address observation error correlation in time and space, as well as the issue of model spin-up in connection with frequent assimilation cycling.
A. Cress (Deutscher Wetterdienst, DWD) presented the activities of DWD concerning crowdsourced data applications and their use in the local DWD data assimilation system. Within the Fleet Weather Maps project FloWKar, a collaboration between DWD and the German car manufacturer AUDI AG has been established to investigate to what extent future environmental observations from vehicle sensors can be combined with existing data sources to improve nowcasting and warnings, and therefore contribute to the security of future autonomous driving. A complete real-time weather conceptual framework has been established, focusing on the flow and processing of high-resolution measurements and weather products and the development of corresponding forecasts. A fast data exchange is followed by quality control according to weather service standards and smart aggregation strategies, integrating all available data into a real-time weather map. Aiming for fast weather forecasting, a data assimilation cycle with a 5-minute update rate is necessary; therefore, an ultra-rapid data assimilation method is proposed. A real-world application employs the high-resolution project observations in a 5-minute assimilation cycle for the regional operational weather model COSMO-D2, focusing on optimising the model performance near the surface and its predictions along road sections in Germany, where the current observation network is not dense enough. First results, comparing car measurements, nearby weather stations, and model analyses and forecasts, were presented.
E. Mallet and S. Al Ali (Météo-France) first gave a brief overview of crowdsourcing activities at Météo-France. Those activities focus on the use of human observations from "expert" non-professional observers and "citizen" observers, and automated observations collected from PWSs, agricultural networks, and connected vehicles. The presentation then focused on two ongoing projects. (1) The first action concerns the crowdsourcing module in Météo-France's mobile application that allows mobile users to report the observed weather and to post pictures of the sky. The module provides, without access restriction, a simple entry of almost twenty phenomena to the users, who in turn select the observed phenomena and report the observed weather condition on a regular basis. Based on this module, more than 10,000 observations are collected daily, and more than 40,000 in high-risk situations. Visualisation of crowdsourced data is already available to forecasters, and the next step is to feed it to operational databases in order to expand its possible uses. (2) The second action concerns the potential use of vehicle observations for meteorological applications, which is the subject of a partnership between Météo-France and Continental. The aim is to infer weather (precipitation and low visibility) and road conditions (dry, wet, slick) at a particular location and time, through the analysis of vehicle data elements (temperature, wiper and headlight statuses, velocity, and the activation of ABS and ESP systems). The experimental campaign started in November 2016 and is still ongoing. The fleet consists of hundreds of vehicles, transmitting data through a connected dongle. Data filtering and quality-checking routines were developed, and vehicle observations were evaluated against meteorological data. Machine-learning classification algorithms were developed, using data from meteorological observation merging products as references for hydrometeor discrimination and visibility. The preliminary results were promising and also showed the need to combine multiple parameters in order to successfully derive weather observations.

K. O'Boyle (Met Office) presented how the Met Office views crowdsourcing as distinct from citizen science (see section 4). There is a long history of citizen science at the Met Office. The Weather Observations Website (WOW) (wow.metoffice.gov.uk) is the Met Office citizen science portal. WOW has global reach, and is a platform for anybody to submit, share, and display their weather observations, either manually or by connecting a PWS using APIs. WOW data is being trialled in nowcasting applications, but is not yet assimilated into NWP. Investigations into other opportunistic observations are ongoing at the Met Office, including collecting data from vehicles.
M. Clark (Met Office) presented on an automated quality control and gridding process for citizen science data. There has been a focus on Met Office WOW data from PWSs to create high-resolution surface analyses. Parameter values from each WOW site are constrained to have the same long-term mean as neighbouring official sites, but are otherwise allowed to vary freely, as it is assumed that shorter-term, temporary deviations are the signature of genuine small-scale features which are worth retaining in the analysis. A series of case studies has shown that there is value in this approach.
S.L. Dance (University of Reading) gave an overview of the DARE: Data Assimilation for the REsilient City project. This is a UK Engineering and Physical Sciences Research Council (EPSRC) Senior Fellowship in Digital Technology for Living with Environmental Change. The vision for the project is to use "datasets of opportunity", such as CCTV and vehicle observations, alongside scientific observing networks, such as satellite data (Mason et al., 2018), to improve predictions of urban natural hazards such as flooding and high-impact weather. There are many potential benefits of such data, including the availability of large numbers of inexpensive observations in areas where there are people but there may be few sources of scientific observation data. For example, (1) air traffic management reports have the potential to provide observations of temperature inversions in the boundary layer (Mirza et al., 2016, 2019); (2) in many locations around the world, the population has access to smartphones, but ground-based scientific observations are sparse. Furthermore, there are a number of issues in collecting 'datasets of opportunity' for use in assimilation. These include the need for metadata such as time and location in order to carry out the assimilation, versus data protection for the data provider, who may be a private individual. Other issues include data ownership, intermittency, heterogeneity, data provenance, and large data volumes. In order to use such observations in data assimilation, there needs to be an understanding of natural variability in urban areas (where many of these data originate) and the variability that can be resolved by a prediction model (e.g., Waller et al., 2014; Janjić et al., 2017). This was discussed further in the next talk.

J. A. Waller (University of Reading) presented on the potential to measure temperatures in urban areas using vehicles. Issues related to the assimilation of crowdsourced data were discussed; in particular, the need to understand the data inhomogeneity and the natural variability of observations in urban areas in order to understand the observation uncertainties. Collaborative work with the UK Met Office is assessing the potential of temperature observations recorded by vehicles. The preliminary findings showed that the data collection method was not reliable for collecting large temperature data sets. Furthermore, for the initial data sets collected, it was shown that temperature measurements had a negative correlation with the speed of the vehicle. It was concluded that a new data collection technique was required, and a more detailed study was vital before the benefits of assimilating vehicle temperatures could be assessed.
D. Blaauboer (KNMI and EUMETNET) briefly presented the KNMI activities in the domain of crowdsourcing. These include participation in the WOW project of the UK Met Office, application of car data (temperature sensors, wiper data), smartphone data, a damage-reporting app (for the public to report weather impacts), and wind data from hot air balloons. EUMETNET, the grouping of 31 European national meteorological services, recognised the emerging availability and application opportunities of crowdsourced data and the Internet of Things among many of its members. Therefore EUMETNET has organised a few dedicated workshops on this subject with the aim of bringing experts in this field together, fostering networking, and possibly creating a platform or programme in the near future to develop common applications to the benefit of all.
M. Dahoui (ECMWF) presented an overview of the importance of in-situ data in global NWP. It was shown that there are data gaps in the surface observations received at ECMWF, and the potential and challenges of using crowdsourced data to fill these gaps were described. Also, it was stressed that crowdsourced data can be important for verification purposes. A denser network is useful to detect small-scale features and rapid changes of the atmosphere, so such observations also have the potential to improve forecast verification, leading to a better understanding of model performance. The usage of crowdsourced observations is, however, very challenging. It was suggested that data collection and pre-processing need a collaborative effort between NWP centres, through coordination by the WMO, the industry, and the private sector, to improve and unify standards and to agree on best practices. A common and shared use of operationally managed data hubs (such as the Met Office Weather Observations Website) is a cost-effective solution to manage the diversity of data sources and formats. A good understanding of the error characteristics of the observations is necessary to allow proper data selection and error specification. This requires a comprehensive and standardised description of metadata. Quality control, bias correction, and blacklist management require unique identification of a reporting station, which makes anonymous reports of less interest to NWP data assimilation unless technological solutions are available to anonymously identify the data or perform most of the quality control and bias correction near the data origin. Legal aspects related to privacy and data usage are also essential to clarify before the operational use of such observations.
| OVERVIEW OF ACTIVITIES
During the meeting, it became clear that there are many activities ongoing, with opportunities for collaboration. Table 1 lists activities, status, and considerations for the participating institutions, together with ZAMG and Met Norway, who agreed to share their current activities. It is seen that work with data from private weather stations in particular is an active field of research at many institutions.
| CHALLENGES AND SOLUTIONS
The presentations and discussions identified several common challenges, and some solutions were proposed during the discussion sessions. These follow below:

I. Terminology is not agreed upon in the community. A common vocabulary needs to be established to facilitate future collaboration. The term "crowdsourced data" is used differently within the community, and there is no agreement on what this term covers and what it does not. Often crowdsourced data is used as a collective term covering, for example, citizen-science and third-party data, which is how the term will be treated in this report, though with a recognition that a more precise definition is desirable. i. The Met Office suggested a terminology that clearly separates citizen-science data and crowdsourced data, and also attempts to define associated terms: a. Citizen-science data: Information obtained from a group of people who are invited to participate in a data collection process. b. Crowdsourced data: Information derived from a group of people without their explicit involvement in the data collection process.
c. Opportunistic data: Information derived from non-meteorological sensors or weather sensitivities. d. Third-party data: Data collected by a third-party organisation using meteorological sensors. However, some similarities are expected between third-party data and the other groups. For example, PWS observations might be classified as both third-party data and crowdsourced data. ii. ECMWF proposed four main categories of 'crowdsourced' data: private and third party, automated amateur weather stations, smart connected devices (mobile phones and vehicles), and human reporting of the current weather, relating each of these to the ease of utility in NWP. In the terminology proposed by the Met Office (i), there is a clear separation between citizen-science data and 'crowdsourced' data, whereas in the ECMWF proposal (ii) the term 'crowdsourced' data is a collective term. It is recommended that authors define their usage of these terms.

[Table 1. Overview of ongoing and considered activities at each participating institute, and institutes that were not present but approved to be included.]

II. Obtaining useful crowdsourced data may involve collaboration with commercial entities, such as manufacturers of PWSs or vehicles. In some cases, collaborations of this kind mark a step change in the way universities and meteorological institutes have previously operated. For professional use, crowdsourced data needs to be as unprocessed as possible when received. Working in collaboration with manufacturers may enable this. Some of the workshop participants have built successful collaborations with commercial entities, taking a "virtuous circle" approach, whereby data is provided by a manufacturer, and in return the meteorological institution provides forecast data or quality-controlled observational data. It is crucial that intellectual property rights and data ownership are clear and agreed upon before starting collaborations.

III. Law-based restrictions on the storage of personal data lead to a need to de-personalise crowdsourced data, which can lead to "black boxes". Metadata can be used to help characterise the error of crowdsourced observations, and for bias correction, but the legal constraints regarding privacy and personal data can limit the collection of such metadata. Hence, metadata versus privacy is one issue that must be considered when collecting observations. DMI have invested in legal expertise and are open to sharing the information obtained with the community. This is mainly related to the European GDPR regulation (European Union, 2018).

IV. New data sources can potentially produce more observations than current NWP models can realistically handle. New methods, such as those suggested by Dr. X. Yang (DMI), will need to be considered. Tendencies of parameters are not commonly assimilated into NWP; a change in approach may be required to extract maximum value from crowdsourced observations.
i. It was discussed that data streaming could be a way of handling the volume of observations in the future, such that, in operational systems, observations that come in are utilised and then thrown away. This may seem somewhat provocative to some, as the NWP community is used to storing data for an extended time. However, it was agreed that data streaming could perhaps be the only currently realistic solution to overcome issues with data volume. Also, near-real-time communication could perhaps be easier to implement with a streaming approach.
ii. The scales of crowdsourced observations, any reference network, and NWP models will all be different. To make them comparable, methods to deal with multiscale comparisons are required, for example filtering or superobbing.

Further, other themes seemed to be well established. There was general agreement that crowdsourced data can provide useful observations in areas otherwise devoid of observations. It was discussed whether stationary platforms (e.g., PWSs) are easier to implement in existing systems than moving platforms (e.g., vehicles, SPOs). In general, stationary platforms are believed to be easier to bias-correct than moving platforms. Also, new data sources should be seen as a supplement to conventional observation networks rather than a replacement, as trusted observations are required as a reference for new data sources. A nested platform of reference may be a good way of organising networks in the future, for example, SYNOPs used as a reference for the quality control of PWS data, and PWSs then used as a denser reference dataset for observations from mobile platforms.
| CONCLUSIONS AND RECOMMENDATIONS
Much of the work presented at the workshop was at an early, exploratory stage, and many questions remain unanswered. However, a general set of conclusions was drawn from the discussion. Crowdsourced observations are potentially useful for NWP, and are undoubtedly useful for verification and forecasting. Use of crowdsourced observations in nowcasting or post-processing is perceived to be easier and less demanding than in NWP data assimilation. There is still much work to do before crowdsourced observations can be widely ingested into NWP models.
It was agreed that there is a sliding scale between 'crowdsourced' or "passive" data collection, where an individual's involvement is limited, and "citizen science" or "active" data collection, where the individual is explicitly involved. It is generally thought that the lower the degree of interaction required of the participant, the higher the volume of data that can be collected. It is not clear whether either of the two is of superior quality.
Further, the following recommendations are made. An organised community of those involved in crowdsourcing activities would be beneficial. EUMETNET would provide a good forum for this; however, such a forum should not be restricted to European countries. This forum could be a simple, independent platform accessible via a website. Regarding vocabulary, it would be beneficial for the community to agree on common terminology related to crowdsourcing. To realise the full potential of crowdsourced data for NWP, issues of data quality, privacy, and availability will need to be addressed. Data quality could be enhanced by prioritising the collection of accurate metadata. Privacy issues should be addressed to determine if, how, and when unique identifiers can be retrieved for quality-control purposes. Lastly, efforts to expand crowdsourced datasets by disseminating data operationally and working with private industry should be encouraged.
"Environmental Science",
"Computer Science"
] |
A New Carboxylation Reaction: THE VITAMIN K-DEPENDENT INCORPORATION OF H¹⁴CO₃⁻ INTO PROTHROMBIN
The bovine plasma zymogen prothrombin contains a number of γ-carboxyglutamic acid residues which are not found in an abnormal prothrombin produced when cattle are given the vitamin K antagonist dicoumarol.
These modified glutamic acid residues appear to be formed post-translationally by a reaction which requires vitamin K. It has been shown that postmitochondrial supernates from vitamin K-deficient rats incorporate added H¹⁴CO₃⁻ into microsomal proteins upon the addition of vitamin K. This incorporation is dependent upon the presence of the prothrombin precursor in the microsomal preparations, and upon factors which are present in the postmicrosomal supernatant.
Most of the radioactive protein which can be obtained from the microsomal pellet by extraction with 0.25% Triton X-100 has been identified as prothrombin, and it can be shown that all of the radioactivity is in the amino-terminal activation fragment of prothrombin. This portion of the protein has previously been shown to contain the γ-carboxyglutamic acid residues. Hydrolysis of the purified radioactive prothrombin resulted in a loss of 50% of the radioactivity, and subsequent chromatography of the amino acid hydrolyzate demonstrated that the remaining radioactivity was entirely in glutamic acid. These results are consistent with the hypothesis that all of the H¹⁴CO₃⁻ was incorporated into the carboxyl groups of γ-carboxyglutamic acid residues.
Vitamin K is required for the synthesis of four blood-clotting zymogens: prothrombin, factor X, factor IX, and factor VII. The vitamin appears to function post-translationally (1) by modifying a precursor protein. This precursor has been identified (2) in microsomal preparations from anticoagulant-treated rats, and has now been isolated and partially characterized (3, 4). Although it is inactive in prothrombin bioassay systems, this precursor is activated to thrombin by several snake venoms (4), suggesting that the vitamin K-dependent modification is required for the physiological activation of prothrombin rather than for the activity of the thrombin generated. The liver precursor is in many ways similar to the biologically inactive form of prothrombin (abnormal prothrombin) which appears in bovine plasma following administration of the vitamin K antagonist dicoumarol. Unlike prothrombin, the abnormal prothrombin does not bind Ca²⁺ ions (5, 6), and this defect is presumably responsible for its failure to activate in the bioassay. As it was possible to isolate (7) a low molecular weight calcium-binding peptide from normal, but not abnormal, prothrombin, it appeared that the vitamin-dependent alteration involved a chemical modification of a specific region of the polypeptide chain. The chemical difference between the abnormal and normal prothrombin has been shown by Stenflo et al. (8) to be the presence of a number of γ-carboxyglutamic acid residues in normal prothrombin but not in abnormal prothrombin. This residue has also been identified by Nelsestuen et al. (9) and the characterization has been confirmed by Magnusson et al. (10). These observations suggest that vitamin K functions as part of the metabolic system responsible for the γ-carboxylation of specific glutamic acid residues of the liver prothrombin precursor.
We have recently described (11) an in vitro system which converts the rat liver microsomal precursor protein to biologically active prothrombin in response to the addition of vitamin K. This system should serve to test the hypothesis that the vitamin K-dependent, post-translational modification of the precursor involves the carboxylation of glutamic acid residues.
MATERIALS AND METHODS
Treatment of Animals-Male 250-g rats of the Holtzman strain were housed in coprophagy-preventing cages (12) and fed a diet low in vitamin K. Livers from these rats were homogenized to obtain a postmitochondrial supernatant, which was incubated under the conditions previously described (11) for the in vitro synthesis of prothrombin.
Cycloheximide (100 µg/ml) and H¹⁴CO₃⁻ (5 µCi/ml of 59.5 mCi/mmol NaH¹⁴CO₃ (Amersham/Searle)) were included in the incubation medium, and prothrombin synthesis was initiated by the addition of vitamin K₁ (20 µg/ml). After incubation for 15 min at 37°, the suspension was cooled, and the microsomes were removed by centrifugation at 105,000 x g for 60 min. The microsomal pellet was extracted with calcium-free Krebs-Ringer bicarbonate buffer containing 0.015 M potassium oxalate and 0.25% Triton X-100, and the unsolubilized debris was removed by centrifugation as described above. The Triton extract was adsorbed with BaSO₄ (25 mg/ml). The BaSO₄ was removed by centrifugation and this pellet was washed and eluted as described earlier (14).
Determination of Radioactivity-Bovine serum albumin (2 mg) was added to 0.2 ml of the Triton X-100 microsomal extract or 0.2 ml of the BaSO₄-adsorbed extract, and the proteins were precipitated by the addition of 5 ml of 10% trichloroacetic acid. The BaSO₄ eluate (0.1 ml) was precipitated after the addition of 4 mg of albumin. The precipitates were held at 4° for 30 min and then collected by centrifugation at 3000 x g for 20 min. The supernatant was discarded, and the pellet was dissolved in 1 ml of 0.2 M Na₂CO₃ and reprecipitated with 5 ml of 10% trichloroacetic acid. After 30 min at 4°, the suspension was centrifuged as before, the supernatant was discarded, and the pellet was dissolved in 1 ml of NCS (Amersham/Searle) before transferring the sample to 10 ml of Econofluor (New England Nuclear). The distribution of radioactivity in sodium dodecyl sulfate electrophoretic gels was determined following combustion of the dried gel slices. Radioactivity was determined in a liquid scintillation spectrometer with a counting efficiency of 82% for the protein samples and 22% for the combusted gels. To determine the location of the radioactivity in purified ¹⁴C-labeled prothrombin, the protein was dialyzed against 2 mM potassium phosphate buffer, pH 5.8, concentrated to dryness, dissolved in 6 N HCl, and hydrolyzed at 105° in vacuo for 22 hours. The hydrolyzate was concentrated to dryness and dissolved in 500 µl of pH 2.2 sample diluting buffer, and 400 µl of the sample was applied to a Beckman model 120C amino acid analyzer equipped with a column of Beckman UR30 resin. One-minute fractions (180) of column eluate were collected in glass counting vials at a flow rate of 68 ml/hour. Aquasol (10 ml) was added to each vial and radioactivity was determined in a liquid scintillation spectrometer at a ¹⁴C efficiency of 35% and a ³H efficiency of 14% for the double-label samples, and a ¹⁴C efficiency of 68% for the single-label samples. DL-[2-³H]glutamic acid (Amersham/Searle) and [1-¹⁴C]norleucine (New England Nuclear) were added prior to acid hydrolysis to unambiguously define the elution position of glutamic acid.
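As a point of orientation on these efficiency figures (the counts below are invented for illustration, not data from the paper): a liquid scintillation counter reports counts per minute (cpm), which are converted to absolute disintegrations per minute (dpm) by dividing by the counting efficiency:

```latex
% cpm-to-dpm conversion; the example numbers are hypothetical.
\mathrm{dpm} = \frac{\mathrm{cpm}}{\text{efficiency}},
\qquad \frac{8{,}200\ \mathrm{cpm}}{0.82} = 10{,}000\ \mathrm{dpm}
```

The same 10,000 dpm counted in a combusted gel slice at 22% efficiency would register only 2,200 cpm, which is why sample-type-specific efficiencies are quoted.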
Isolation of Clotting Factor and Bioassay-Factor Xa and factor V were prepared as described previously (15). Phospholipid was fraction II prepared as described by Folch (16). Prothrombin was assayed by the two-stage method of Ware and Seegers as modified by Shapiro and Waugh (17). Factor X was assayed by the one-stage method of Bachman et al. (18).
Correlation of Prothrombin Synthesis and Carboxylation-The possible existence of a vitamin K-dependent protein carboxylation was investigated by incubating (11) postmitochondrial supernatants prepared from livers of vitamin K-deficient rats in the presence of vitamin K, in the absence of vitamin K, and in the presence of both vitamin K and a vitamin K antagonist, chloro-K. During these incubations, de novo protein synthesis was inhibited with cycloheximide. The data (Table I) indicated that vitamin K was required for optimal H¹⁴CO₃⁻ incorporation into the Triton X-100-extractable microsomal proteins as well as for prothrombin synthesis. Furthermore, chloro-K, which inhibited prothrombin synthesis, also inhibited H¹⁴CO₃⁻ incorporation. Since the only generally recognized requirement for vitamin K is the synthesis of the four blood-clotting proteins, much of the protein-bound radioactivity should have been incorporated into these proteins. BaSO₄ is a specific adsorbent for the vitamin K-dependent clotting factors, and adsorption of the Triton extract with BaSO₄ indicated that a high percentage of the radioactivity was incorporated into the BaSO₄-adsorbable proteins. Adsorption of the extract with BaSO₄ removed 56% of the radioactive protein from the Triton X-100 microsomal extract, and elution from the BaSO₄ resulted in an increase in specific activity from 65 dpm/A₂₈₀ in the microsomal extract to 44,390 dpm/A₂₈₀ in the BaSO₄ eluate. These results suggest a highly specific incorporation of bicarbonate into the vitamin K-dependent clotting proteins. The nature of the non-BaSO₄-adsorbable proteins has not yet been determined.
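Taking the quoted specific activities at face value, the purification achieved by BaSO₄ adsorption and elution can be expressed as a simple ratio:

```latex
% Fold-enrichment in specific activity upon BaSO4 adsorption/elution.
\frac{44{,}390\ \mathrm{dpm}/A_{280}}{65\ \mathrm{dpm}/A_{280}} \approx 683
```

A roughly 680-fold rise in specific activity, alongside the adsorption of 56% of the radioactive protein, supports the claim that incorporation was concentrated in the BaSO₄-adsorbable (vitamin K-dependent) fraction.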
If bicarbonate was being incorporated primarily into protein precursors of the vitamin K-dependent clotting factors, then conditions which prevent synthesis of the active clotting factors from these precursors should also prevent H¹⁴CO₃⁻ incorporation into protein. Previous experiments in our laboratory have indicated that microsomal prothrombin synthesis requires some factor(s) present in the soluble portion of the cell. When microsomes were prepared from the postmitochondrial supernate and resuspended in buffered sucrose without the addition of cytosol (Table II, A), both H¹⁴CO₃⁻ incorporation and prothrombin synthesis were inhibited. If the protein carboxylation observed does represent the specific postribosomal incorporation of H¹⁴CO₃⁻ into the vitamin K-dependent clotting factors, it should require the presence of the precursors of these proteins. Microsomes from normal, vitamin K-sufficient rats contain very little prothrombin precursor when compared to microsomes from vitamin K-deficient rats, and because of the low precursor level, microsomes from these rats would be expected to incorporate less H¹⁴CO₃⁻ than microsomes from the vitamin K-deficient rats. When the extent of in vitro carboxylation was compared in systems derived from normal or vitamin K-deficient rats (Table II, B), the data indicated that the system prepared from normal rat livers incorporated less bicarbonate than that prepared from vitamin K-deficient rats. Although the administration of Warfarin in vivo blocks prothrombin synthesis, the in vitro synthesis of prothrombin in liver microsomes is not significantly inhibited by Warfarin (11). However, as indicated in Table II, B, postmitochondrial supernatants derived from Warfarin-treated rats form less prothrombin in vitro than do supernates from vitamin K-deficient rats. A further correlation between prothrombin synthesis and carboxylation is provided by the observation (Table II, B) that H¹⁴CO₃⁻ incorporation as well as prothrombin synthesis is reduced in systems derived from Warfarin-treated rats when compared to systems derived from vitamin K-deficient rats. These data (Table II, B) also indicate that prothrombin synthesis under these conditions may have been inhibited more than H¹⁴CO₃⁻ incorporation into the Triton extract, and suggest that there may be enhanced carboxylation of some protein other than prothrombin in the presence of Warfarin. The amount of radioactivity incorporated into the cytosol was investigated as well as that incorporated into the microsomal pellet. Approximately the same amount of radioactivity was incorporated into the cytosol proteins as into the microsomal extract from the vitamin K-treated system. The amount of radioactivity in this fraction was, however, not dependent on the presence or absence of the vitamin in the incubation medium.
Identification of Prothrombin as Radioactive Protein-The degree of correlation between the radioactive proteins of the BaSO₄ eluate and the vitamin K-dependent clotting proteins was examined by ion exchange chromatography (Fig. 1). Most of the protein in the eluate eluted before any of the radioactivity, but a small amount of protein eluted at the same position as prothrombin, which was detected by both two-stage activity and venom activation.
Prothrombin activity and radioactivity appeared to co-chromatograph except for a reproducible shoulder of radioactivity on the trailing edge of the prothrombin peak. This fraction of the radioactivity elutes in the position expected for rat factor X, but bioassay failed to detect any factor X activity. The similar chromatographic behavior of the prothrombin activity and the radioactivity, and the small amount of protein in this region of the chromatogram, suggest that much of the radioactivity has been incorporated into prothrombin.
The properties of the radioactive protein which eluted from the QAE (quaternary aminoethyl) column with prothrombin were studied further by sodium dodecyl sulfate gel electrophoresis. In this electrophoretic system, rat prothrombin has an apparent molecular weight of 85,000. When the chromatographically purified radioactive protein was subjected to electrophoresis under these conditions, most of the radioactivity was associated with the gel slice corresponding to this molecular weight (Fig. 2). Activation of prothrombin with factor Xa leads to formation of thrombin and two large activation peptides (fragment 1, Mr = 23,000 and fragment 2, Mr = 13,000) (15, 20). All of the γ-carboxyglutamic acid residues of prothrombin are reported to reside in the amino-terminal activation peptide, fragment 1 (10). Therefore, if the H¹⁴CO₃⁻ was specifically incorporated into the γ-carboxyl groups of γ-carboxyglutamic acid residues, the label should be subjected to random decarboxylation by acid hydrolysis, and a 50% loss of the label should be observed. The data in Table III demonstrate that about 53% of the radioactivity associated with the prothrombin isolated from the in vitro system was lost following hydrolysis in 6 N HCl. Further, ion exchange chromatography of this hydrolysate revealed the presence of only one ¹⁴C peak in the eluate from the amino acid analyzer (Fig. 3), which corresponded to the elution position of glutamic acid, the decarboxylation product of γ-carboxyglutamic acid. Addition of [³H]glutamic acid and [¹⁴C]norleucine to the sample before hydrolysis resulted in co-elution of ³H and ¹⁴C in the glutamic acid position of the chromatogram, which was verified by its position relative to the standard norleucine.
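The 50% prediction is worth spelling out, since it carries the argument (this gloss is ours, not text from the paper): the γ-carbon of γ-carboxyglutamate bears two carboxyl groups, only one of which derives from the added H¹⁴CO₃⁻, and acid hydrolysis is taken to remove one of the two at random:

```latex
% Expected label retention if acid hydrolysis removes one of the two
% gamma-carboxyl groups at random and only one carries the 14C label.
P(\text{label retained}) = 1 - P(\text{labeled carboxyl removed})
 = 1 - \tfrac{1}{2} = 0.5
```

The observed ~53% loss (Table III) is thus in good agreement with specific incorporation into the γ-carboxyl position.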
DISCUSSION
The data presented above are consistent with the hypothesis that the vitamin K-dependent step in prothrombin synthesis involves the carboxylation of glutamyl residues in a liver prothrombin precursor protein. There was a rapid, vitamin K-dependent incorporation of H¹⁴CO₃⁻ into protein, even when de novo protein synthesis was blocked. A significant amount of this radioactivity was associated with the vitamin K-dependent clotting factors, primarily prothrombin, and the radioactivity incorporated into prothrombin was located exclusively in the NH₂-terminal activation fragment (fragment 1) of prothrombin.
Acid hydrolysis of the in vitro labeled prothrombin resulted in a loss of 50% of the radioactivity, and the remaining radioactivity was associated with glutamic acid residues, which would be consistent with the presence of radioactivity in the carboxyl groups of γ-carboxyglutamyl residues.
"Chemistry",
"Medicine"
] |
Biomechanical cues as master regulators of hematopoietic stem cell fate
Hematopoietic stem cells (HSCs) perceive both soluble signals and biomechanical inputs from their microenvironment and from the cells themselves. Emerging as critical regulators of the blood program, biomechanical cues such as extracellular matrix stiffness, fluid mechanical stress, confined adhesiveness, and cell-intrinsic forces modulate multiple capacities of HSCs through mechanotransduction. In recent years, research has furthered the scientific community's perception of mechano-based signaling networks in the regulation of several cellular processes. However, the underlying molecular details of the biomechanical regulatory paradigm in HSCs remain poorly elucidated, and researchers still lack the ability to produce bona fide HSCs ex vivo for clinical use. This review presents an overview of the mechanical control of both embryonic and adult HSCs, discusses recent insights into the mechanisms of mechanosensing and mechanotransduction, and highlights the application of mechanical cues aimed at HSC expansion or differentiation.
Introduction
Hematopoietic stem cells (HSCs) are a rare cell population sitting at the top of the hematopoietic hierarchy. They are responsible for producing the full complement of blood and immune cells in the body within unique microenvironments known as niches [1]. Like other stem cells, HSCs possess the highly complex and tightly controlled 'SMART' physiological features of self-renewal, maturation (differentiation), apoptosis, resting mode (quiescence), and trafficking (migration) for maintaining hematopoietic homeostasis in vivo [2]. The origin and maturation of definitive HSCs during development involve a series of successive processes that are robustly templated in time and space with micrometer accuracy [3][4][5]. In detail, HSCs bud off from the ventral floor of the dorsal aorta (DA) within the aorta-gonad-mesonephros (AGM) region and from the umbilical and vitelline arteries via a carefully choreographed and highly conserved process termed endothelial-to-hematopoietic transition (EHT) [6,7]. Afterwards, they migrate to and develop in sequential anatomical sites of hematopoiesis, including the caudal hematopoietic tissue (CHT) in zebrafish or the fetal liver (FL) in mammals and the bilateral thymus, and eventually populate the kidney or the bone marrow (BM) postnatally.
As the dominant site of hematopoiesis in adulthood, BM offers a favorable microenvironment for HSC homeostasis and/or progenitor maturation [5]. A diagram in Fig. 1 illustrates the origin and development of definitive HSCs within different sites during embryogenesis. The nature of the local HSC niche varies along countless spatial and temporal transitions, providing multiple supportive soluble signals and mechanical cues associated with HSC fate decisions and additional fine-tuning of HSC heterogeneity [5,[8][9][10].
At the molecular level, it is considered that the transcription factor Tal1 plays an indispensable role in acquiring the identity of hemogenic endothelium (HE), whereas Runx1 serves as a master regulator of EHT and definitive HSC specification [6,[11][12][13][14]. Both in zebrafish and mice, arterial identity is a prerequisite for aortic HE, as definitive HE originates from arterial progenitors [15][16][17][18]. Many mutants with arterial specification defects (e.g., EfnB2-/- mice) also displayed definitive hematopoietic defects [19], while activation of the arterial program in HE promotes definitive hematopoiesis from human pluripotent stem cells (hPSCs) [20]. Then, the subsequent activation of Runx1 mediates the suppression of arterial genes and the upregulation of hematopoietic genes in HE, allowing a further commitment towards the hematopoietic fate. If Runx1 is inactivated, the pre-existing arterial programme in the HE cannot be repressed, resulting in HE remaining integrated within the DA and failing to undergo the EHT process [15].
Replicating this developmental process should promote the generation of bona fide hematopoietic stem/progenitor cells (HSPCs) with BM reconstitution capabilities in vitro. Therefore, it is of considerable importance to holistically understand the precise mechanisms for instructing the HSC program from both developmentally and clinically relevant perspectives. Researchers have been fascinated by the fundamentals of the biochemical regulatory paradigm for decades and have only lately paid extensive attention to biomechanical signals, owing to their equally pivotal impact on the phenotypic specification and functional outputs of HSCs. No single article can comprehensively cover all of HSC regulatory signaling. This review focuses on the mechanistic principles of HSCs at distinct stages of their ontogeny, including embryonic and adult HSCs, predominantly drawing on data from studies on zebrafish, mice, and pluripotent stem cells (PSCs), which may provide some insights into future research.
Mechanics in the regulation of endothelium biology - beyond the endothelium
Hematopoietic and endothelial lineages have long been considered to be closely related, and both have been demonstrated by quite a few in vitro studies to derive from the same bipotential mesodermal precursor called the hemangioblast. Nevertheless, the presence or absence of the hemangioblast in vivo remains fiercely debated and keeps polarizing the field of hematopoiesis [11,21]. It has been postulated that the generation of definitive hematopoietic cells in vitro from embryonic stem cells (ESCs) is orchestrated in a stepwise pattern, characterized by discrete developmental stages including the emergence of hemangioblast committed to a DA fate and the formation of a HE intermediate, with EHT being the culmination of these consecutive programming events [11]. Visually, lineage tracing experiments authenticated the endothelial origin of definitive hematopoiesis [6,22]. Further, human endothelial cells rigorously isolated from distinct hematopoietic tissues including aorta, yolk sac, embryonic liver, and fetal BM all exhibited blood-forming potential when cultured ex vivo [23].
[Fig. 1 Embryonic development of HSCs in different sites. Intra-aortic clusters giving rise to hematopoietic cells emerge from the ventral wall of the dorsal aorta, from where they enter the circulation, migrate to and colonize sequential sites including the CHT (red box) or the fetal liver, the bilateral thymus (blue box), and finally the kidney (black arrow) or the BM.]
For these reasons and more, HE and HSCs can reasonably be regarded as a specialized subpopulation of endothelium that retains endothelial characteristics to some extent. Endothelial cells possess the capability of sensing and discriminating distinct types of mechanical stimuli and responding with unique biological outputs [24,25]. Therefore, it is tempting to speculate that a number of molecules and proteins involved in endothelial mechanotransduction may also participate in controlling HSC fate. Hence, taking the vascular niche into consideration and appreciating the mechanotransduction process from the angle of the endothelium will be of great reference value for understanding how mechanical signals regulate HSC biology.
Mechanics in the regulation of HSC fate determination
HSCs residing in specific niches inevitably perceive a variety of mechanical stimuli such as tensile strain, hydrostatic pressure, fluid shear stress, and even mechanical unloading in microgravity [9]. While subject to mechanical stresses from external loads, cells reciprocally generate and exert intrinsic forces on the extracellular matrix (ECM) and neighboring cells [26]. Hence, forces generated intracellularly and applied extracellularly are not two entirely separate entities but form a tensegrity model, coexisting and influencing each other; their interwoven effects control hematopoiesis and maintain the homeostasis of HSCs in an organism [26,27].
Intrinsic forces
Intrinsic forces are produced intracellularly in an adenosine triphosphate (ATP)-dependent process by cross-bridging interactions between actin fibers and myosin filaments [26,28]. The effects of matrix biophysical features on the fate decisions (viability, morphology, proliferation, lineage commitment) of cultured primary murine Lin⁻Sca-1⁺c-Kit⁺ (LSK) cells can be selectively eliminated by disrupting the interplay between actomyosin contractility and integrin activation [29], which is in line with an earlier study highlighting the pivotal role of actin contractility in HSC adhesion to extracellular matrices and matrix sensing [30]. Gene expression analysis of human peripheral blood (PB) CD34⁺ HSPCs using cDNA arrays demonstrated the expression of several mechanobiological elements such as alpha-actinin, dynein, and dynamin [31][32][33][34]. Yet, the precise functions of these genes in HSCs remain to be elucidated.
Additionally, cell-intrinsic forces are tightly associated with cellular function, and the biomechanical properties of an individual cell inevitably involve the organization of actin cytoskeletal networks and related regulatory cascades [35]. A label-free microfluidic technique taking differences in cell stiffness as a sorting biomarker can efficiently enrich high-purity live cells because dead cells are generally stiffer than live ones; its significant practical superiority was validated by increasing the purity of viable nucleated cells from samples of thawed cord blood (CB) cells [36]. Compared with mature blood cells, normal BM HSCs appear to be more rigid and less compliant in terms of morphology, accounting for their stable retention within the marrow niche and little mobilization into the circulation [37,38]. Moreover, cell contractile forces generated by nonmuscle myosin II (NMII), with myosin IIB (NMIIB) being the major one in human hematopoiesis among the three mammalian isoforms (myosin IIA (NMIIA), NMIIB, and myosin IIC (NMIIC)), contribute to the polarized motility and asymmetric division of adult HSCs underpinning self-renewal and differentiation, which is consistent with earlier reported contributions of NMIIB to the differentiation of megakaryocytes and the asymmetry of erythroid enucleation [39][40][41]. With NMIIA conferring survival and NMIIB driving differentiation, a programmatic switch of NMII isoforms from B-and-A to only A occurs, corresponding to the differentiation trajectory of HSCs [39].
The nucleus is a mechanosensitive organelle that is semipermeable to transcription factors regulated by cytoplasmic biomechanical signaling, and cellular mechanics is therefore revealed to be highly dependent on the nucleus [42,43]. Levels of lamins, the intermediate filament proteins responsible for the assembly of nuclear structure, seem to control the nuclear tension of hematopoietic cells, resulting in differences in trafficking into the bloodstream through the endothelial barrier [37]. Lamins are involved in HSC differentiation as well [44]. Moreover, intrinsic forces can be physically propagated directly to the nucleus through lamin A/C (LMNA), a component of the nuclear lamina coupling the linker of nucleoskeleton and cytoskeleton (LINC) complex to chromatin via lamina-chromatin interactions, so as to modify chromatin structure and control epigenetic transcription [45]. In leukemia cells, the absence of cytoskeletal mechanical tension and the subsequent weak adhesion to BM niches contribute to chemoresistance and residual disease persistence, promoting leukemia progression and/or relapse [46]. Similarly, in Ptpn21-deleted HSCs, cytoskeletal instability attenuated the quiescence and hematopoietic reconstitution capabilities of HSCs, which could be overcome by restoring cellular mechanics [38]. In the following year, the same research team found that recipient mice inoculated with MLL-AF9-Ptpn21-/- leukemic cells exhibited shortened survival, increased leukemic burden, and more severe leukemic cell infiltration compared with MLL-AF9-Ptpn21+/+ cell recipients. Further data suggested that these phenotypes were independent of any impact on cell signaling but probably resulted from cell mechanical alterations (decreased cellular mechanical rigidity and increased cell deformability) in Ptpn21-deleted leukemic cells, strongly implying a biomechanical regulatory role of Ptpn21 in leukemic development and progression [47]. These lines of evidence suggest that in the adult system, cell-intrinsic force plays a vital role in the regulation of HSC morphology, HSC differentiation, and the HSC response to extracellular mechanical stimuli. An important concept arising from recent work on cell mechanics is that long-lived cytoskeletal structures may even act as epigenetic determinants, being delivered to and profoundly affecting the behavior of subsequent generations of cells [35]. During embryonic hematopoiesis, intrinsic force is a likely factor in modulating the cell deformation, motility, and migration of HSCs which underlie their spatio-temporal transitions through distinct anatomical sites. By means of real-time imaging combined with transgenic reporter lines, researchers can clearly visualize the dynamic EHT procedure as morphologically flat HE bends and contracts along the direction of blood flow until its departure from the aortic floor, developing into spherical hematopoietic cells [6,7,22]. Investigations into the unique biomechanical traits of zebrafish showed that the EHT process is facilitated by the assembly of rings of actin and myosin proteins into anisotropic contractile circumferential actomyosin around the emerging stem cells [48]. Poullet et al. observed morphological alterations of the DA endothelium and its collective migration from the sides down towards the aortic floor prior to HSPC extrusion, compensating for the surface reduction caused by emerging HSPCs and hence ensuring overall aortic integrity.
Likewise, the actomyosin contractility around the emerging cells drives the final phase of EHT, which refers precisely to their individualization from the aorta into the sub-aortic region [49]. Slight cell deformation was also observed in the intravasation of CD41-GFP low multipotent hematopoietic precursors from the AGM into the posterior cardinal vein (PCV), further signifying the existence of intrinsic forces in this dynamic process [50].
Based on the above-mentioned findings, it is highly feasible that HSCs can not only interpret changes in mechanical inputs from outside as variations in the presentation of intrinsic forces, but also directly harness intrinsic forces as tools for manipulating their fate. Nevertheless, the role and mechanism of cell-intrinsic forces in both adult and embryonic HSCs have not yet been studied in great detail.
Niche geometry
The intricate construction of HSC niches (e.g., BM) serves as a three-dimensional (3D) architectural scaffold and provides a variety of biophysical cues for HSCs and HSC-related accessory cells [26,51,52]. To better support and expand HSCs, 3D scaffolds for BM biomimicry have emerged as a preferable approach that fulfills key mechanical requirements of native niches otherwise obscured in conventional 2D culture systems [53][54][55][56]. An optimal 3D scaffold for HSC support usually presents the following topographical features, namely adequate surface area for cell attachment, high porosity for cell migration and nutrient delivery as well as alterability in scaffold structure for control of cell interactions [57]. A great amount of research has reported that human umbilical cord blood (UCB) HSCs expand much more robustly in 3D scaffolds than in 2D conditions [58][59][60][61][62]. Murine ESCs (mESCs) exhibited increased survival and proliferation and enhanced differentiation into hematopoietic cells when cultured on electrospun 3D polycaprolactone (PCL) nanofiber in comparison to gelatin-coated tissue culture plates [63].
In the BM compartment, HSPCs interact functionally with niche cells, often referred to as mesenchymal stem cells (MSCs) and/or derivatives thereof [64]. Substrate geometrical features (e.g., nanofiber diameter, pore size, and density) have multifaceted effects on these BM niche cells. For instance, human BM stromal cells (hBMSCs) cultured on a rough surface [arithmetic average roughness (Ra) 11.30 ± 0.43 µm] are more prone to differentiating into osteocytes than those on a smooth surface (Ra 0.05 ± 0.01 µm), with higher secretion of the osteogenic-related protein Laminin-5 (Ln-5) and stronger activation of Ln-5-binding integrins [65]. On a microgrooved bearing surface partially mimicking the physiological reticulated microenvironment, mouse BM-derived MSCs showed a twofold to threefold increase in cell proliferation and expressed higher levels of pluripotency-related markers versus a standard 2D culture [66]. Within a certain diameter range (74-148 nm), the ability of TiO₂ nanotubes to promote the osteogenic differentiation of MSCs strengthened with increasing nanosize [67]. Micro/nano hierarchical structures generated by different nanotopographies (nanoneedle, nanosheet, and nanorod) and micropatterns of different sizes (4 µm, 12 µm, and 36 µm) gave rise to significant differences in the osteogenic differentiation potential of hBMSCs and the angiogenesis of human umbilical vein endothelial cells (HUVECs) through macrophage immunomodulation [68]. Substrate fiber orientation, random or aligned, is also a key factor directing stem cell fate [69].
It seems that a similar geometrical machinery conferring cell fate commitment operates in ex vivo cultures of HSPCs: among polyethersulfone (PES)-based substrates modified by different chemical treatments (amino, hydroxyl, and carboxyl), aminated PES nanofiber meshes (fiber diameter of 529 nm) exerted the most powerful positive effect on the adhesion and expansion of human UCB HSPCs [58]. In addition, aminated nanofibers with different spacers, by which amino groups were conjugated to the nanofiber surface, also resulted in significant differences in the adhesion and expansion of cryopreserved human UCB HSPCs [59].
As for embryonic niches, in addition to multiple biochemical cues that have been reviewed in other excellent papers [70,71], physical interactions between HSCs and stromal cells are part of the AGM microenvironment. Explant studies confirmed that tissues located ventrally, but not dorsally, to the DA promote AGM HSC activity in light of instructive hedgehog signaling from ventral tissues, suggesting that positional information in the AGM compartment plays an important role in the development of functional HSCs and progenitors [72,73]. Similarly, the abilities of AGM endothelial cells to support HSCs differ based upon their distinct locations, as ventral subregion-derived populations support both HSC maintenance and differentiation while urogenital subregion-derived populations facilitate HSC maintenance yet fail to induce HSC activity [73][74][75], implying that cooperative regulation among biochemical signals and physical adhesions is required for embryonic HSC activity.
Biomechanical properties of the ECM
Like ECM diversity within other tissues, the structural and physical properties of HSC niches such as stiffness, matrix ligand type, and spatial distribution of adhesive ligands present significant anatomical variations due to their heterogeneities [76]. Taking the BM microenvironment as an example, the endosteal region is replete with abundant fibronectin (FN) and seems to be comparatively stiff (Young's modulus of 40-50 kPa), while the perivascular space presents a high content of laminin and thus is reported to be softer (3 kPa) [64,77]. The central medullary region, mainly composed of adipocytes and fatty marrow, is more compliant (3 kPa) [64]. Analysis of the intact BM of porcine models indicated that the marrow is viscoelastic, with gradations in effective Young's modulus ranging from 0.25 to 24.7 kPa [78]. The spatial variations of the BM niche in cellular and ECM components and in soluble, physical, and biomechanical factors are intertwined to create functional niches [1,79] (Fig. 2).
The cellular morphology of HSCs is closely associated with matrix stiffness: HSCs remain largely round on soft substrates but spread more on stiffer ones [29]. Moreover, HSCs react to stiffer substrates with increased cell adhesion and motility, which can promote the exit of HSCs from the niche [82]. Colony-forming unit (CFU) assays showed that more multipotent CFUs (CFU-EM and CFU-GEMM) were generated on stiff (> 44 kPa) relative to soft (3.7 kPa) FN-coated substrates, although the effect of matrix ligand cues cannot be totally excluded. Matrix ligand type had a selective but significant impact on the lineage specification of HSCs, as HSCs cultured on FN-, collagen-, and laminin-coated substrates displayed markedly different commitments to the myeloid lineage [29]. An analogous inclination was found in ex vivo cultures of mouse BM-derived hematopoietic progenitor cells (HPCs) (LSK) with different matrix stiffnesses (shear storage modulus of 50-800 Pa), as Chitteti et al. advocated that higher matrix stiffness facilitated the clonogenicity of LSK cells whereas lower matrix stiffness seemed to be more related to cell proliferation and differentiation [83]. Moreover, substrate elasticity greatly influences HSPC expansion [30]. Nanopatterns of cell ligands on the matrix are also important physical factors affecting HSC actions, as the nanometer-scale spacing between the integrin ligands of the matrix was found to be correlated with HSPC adhesion and subsequent cell signaling transduction [84].
Biomechanical forces
Adult HSCs residing within the BM or trafficking into peripheral vessels, and embryonic HSCs colonizing discrete anatomical regions at different stages of development, are poised to experience biophysical forces [27,85]. In particular, one of the most important findings concerns the role of blood flow in definitive hematopoiesis [86,87]. Ever since the initiation of the heartbeat and the immediate establishment of blood circulation, vascular endothelial cells and hematopoietic cells are constantly subjected to hemodynamic forces [87][88][89]. In general, pulsatile blood flow through a vessel generates three types of fluid mechanical forces: shear stress, circumferential strain, and hydrostatic pressure. Shear stress is the frictional force tangential to endothelial cells (ECs), while circumferential strain refers to the force perpendicular to the flow direction [90,91].
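For a rough quantitative handle on shear stress, the wall value for steady laminar (Poiseuille) flow through a cylindrical vessel is a standard first-order estimate; embryonic aortic flow is pulsatile, so this is only an approximation:

```latex
% Wall shear stress for steady laminar flow in a cylindrical vessel
% (Hagen-Poiseuille): \mu = dynamic viscosity, Q = volumetric flow rate,
% R = vessel radius. Unit conversion: 1 dyne/cm^2 = 0.1 Pa.
\tau_w = \frac{4\,\mu\,Q}{\pi R^{3}}
```

The cubic dependence on R means that small changes in vessel caliber alter wall shear stress strongly; the dyne/cm² values quoted below convert to SI units as 1 dyne/cm² = 0.1 Pa.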
With regard to embryonic HSCs, previous studies have revealed not only that hematopoietic precursors and mechano-responsive vascular endothelium are of developmental and anatomical relevance [92,93], but also that the onset of blood flow and the appearance of hematopoietic cells are temporally connected, suggesting the possibility of blood flow within the "vascular niche" acting as a local modifier of HSC development [94]. As expected, this original assumption was confirmed in seminal works in zebrafish and mice, with shear stress the most widely investigated of the three types of fluid mechanical forces [48,86,87,89,[95][96][97][98]. At about the same time, the studies of North et al. and Adamo et al. provided the first experimental evidence that blood flow drives HSC formation [86,87]. Subsequently, Wang et al. reported parallel results in zebrafish [97]. Zebrafish and mouse embryos lacking blood circulation both exhibited a significant reduction in the number of HSCs and severe defects in definitive hematopoiesis [86,87,97], which, in one case, could be rescued by exposure to shear stress [86]. Moreover, the application of external wall shear stress (WSS) in ESC cultures and murine embryos induced hematopoietic commitment and enhanced the expansion of hematopoietic progenitors [99]. Most recently, it has been revealed that the impact of the circumferential strain component of blood flow on HSC development is important as well and seems to be conserved between zebrafish and humans [95].
Blood flow also contributes to the organization of contractile circumferential actomyosin during EHT and to HSPC homing towards the BM [48,100]. In addition to the luminal hemodynamic forces, the DA is constantly exposed to outer compressive stresses exerted by the surrounding tissues over the course of embryonic development [101]. The FL is also highly vascularized, yet little is known about the patterns of mechanical forces in this hematopoietic organ that support HSPC expansion [102].
Adult HSCs sheltered in the BM may not be exposed to blood flow directly. However, BM niche cells do experience fluid flow and could affect HSCs via paracrine signaling. For example, endothelial cells and pericytes might be impacted by the relatively sluggish fluid flow in the blood vessels that feed the medullary cavities of bones throughout the skeleton, further regulating the cycling and quiescence of HSCs [80]. Fluid flow in the lacunar-canalicular network of the bone around osteocytes produces shear stresses of 6-50 dyne/cm², which is implicated in conveying nutrients and signaling elements to osteocytes as well as in their mechanical activation [85]. Moreover, it has been reported in mice that the minimal fraction of HSCs circulating in the blood can experience shear stress exceeding 600 dyne/cm² in some regions of the aortic walls, a value much higher than the magnitude experienced by cells in humans [102]. Mechanical loading, on the other hand, is required for proper BM HSC differentiation. Hematopoietic disorders have been widely reported in humans during exposure to microgravity (spaceflight, for instance), including leukocyte proliferation, reduced number and activity of T-lymphocytes and natural killer (NK) cells, and megakaryocyte loss and erythrocyte retention in the BM compartment [103][104][105][106], possibly because of the abnormal differentiation potential of BM HSCs under conditions of reduced gravitational mechanical loading [106].
[Fig. 2 The logistic model of discrete niches in the BM. Adult HSCs are naturally localized in the three-dimensional microenvironment of BM, which is organized into distinct cellular niches, mainly including the endosteal and the perivascular niches. The endosteal niche, adjacent to the endosteum of the trabecular bone, contributes to maintaining quiescent HSCs, while the perivascular (more specifically, arteriolar and sinusoidal) niche, mainly composed of blood vessels and perivascular stromal cells, is more involved in activating the cell cycle and initiating cell proliferation and differentiation. However, quiescent HSCs associate specifically with small arterioles in the endosteal space, and the most primitive and long-term HSCs are maintained in the perisinusoidal niche [79][80][81]. Note the presence of biochemical, biophysical and biomechanical factors crosstalking with each other in the in vivo environment. LepR, leptin receptor-expressing; CAR cell, cxcl12-abundant reticular cell; MSC, mesenchymal stromal cell; SCF, stem cell factor; FGF1, fibroblast growth factor 1; TGFβ, transforming growth factor β; NG2, neuron-glial 2; OSM, oncostatin M.]
Mechanosensors and mechanotransduction
On the basis of the above-mentioned research findings, HSCs are somehow endowed with the ability to detect and discriminate a variety of mechanical constituents and to convert them into different cell actions. However, no consensus has yet been reached on how the sensing apparatus and the transduction of mechanical signals operate in HSCs. At present, the discovery of numerous promising hints and the proposal of several plausible hypotheses may contribute to unveiling the mystery of how mechanical signals shape the behavior and function of HSCs.
Cilium
As a hair-like, microtubule-based sensory organelle projecting from the apical membrane, primary cilia widely exist in mammalian cells such as ECs, embryonic hematopoietic progenitors, and almost all human blood and BM cells (97-99%) [107][108][109]. Cilia had long been considered a mere vestigial evolutionary remnant until growing evidence proved that they indeed function as a key signaling point decoding miscellaneous mechanical and chemical stimuli in the microenvironment [108,110]. Notably, calcium channels and receptors are abundant in the ciliary membrane, further indicating that cilia are a communication hub for signal transduction [111,112]. Human MSCs can sense their mechanical environment through primary cilia, which are required for their osteogenic response and controlled proliferation [113]. Disruption of this mechano-sensory organelle crippled the pro-osteogenic effect of mechanical signals [114]. This is important because both osteoblastic cells and MSCs are essential components of the BM niches that support HSCs [115]. Besides, fluid-flow sensing in vascular endothelial cells depending on primary cilia was shown to regulate the biosynthesis of nitric oxide (NO) [116,117], a well-known essential modulator of HSC functions [118]. As BM HSCs are often found in close proximity to the vasculature [119], whether mechanical stimulation of NO production through primary cilia is sufficient to further influence the outputs of HSCs remains unclear but is an intriguing possible mechanism for the regulation of HSC fate.
As for developmental HSPCs, a canonical "9 + 0" axoneme 3D ultrastructure of primary cilia was directly visualized in the zebrafish vascular endothelium of the AGM region, and a primary cilia-dependent Notch signaling axis was found to be required for HE specification. Embryos with dysfunctional primary cilia exhibited severe HSPC defects that could be prevented by overexpression of the Notch intracellular domain (NICD) [108]. Furthermore, the Notch target gene ephrinB2a was downregulated in blood flow-deficient embryos [87,97]. In combination with previous evidence showing that endothelial primary cilia can mediate blood flow sensing in zebrafish vascular development [107] and the above-mentioned studies demonstrating that blood flow is a positive regulator of HSPC development, it is reasonable to hypothesize that primary cilia may act as mechanotransducers relaying signals from blood flow to embryonic HSPCs. These findings strongly suggest an affirmative role of cilia in mechanosensing and mechanotransduction, although this remains to be definitively demonstrated in HSCs.
Nitric oxide
As a diffusible gaseous transmitter generated from arginine by NO synthases (NOS), NO exerts a broad effect on cellular biological activities including mechanotransduction [87]. When encountering blood flow, vascular endothelial cells are capable of producing NO, which functions as an essential vasodilator in the regulation of vascular tone [117]. Both fluid shear stress and vertical mechanical stretch can trigger the rapid production of NO [120][121][122]. Three NOS genes are found in the mammalian genome, encoding the neuronal, endothelial, and inducible NOS isoforms (nNOS, eNOS, and iNOS, respectively), each of which can be reliably detected in mouse BM [118]. Aleksinskaya et al. provided the first experimental confirmation of free NO radicals in rodent BM using NO spin trapping and electron paramagnetic resonance spectroscopy; eNOS is the dominant source of basal NO (66%), and the iNOS isoform also accounts for a significant proportion (23%) [123]. Moreover, various biological steps in adult HSCs/HSPCs are regulated by NO signaling, and the modulation depends on HSC source because opposite effects can be observed when HSCs of different origins are compared. For example, NO induced BM HSC proliferation and myeloid differentiation and reduced their capacity for long-term reconstitution [124] but promoted the homing and engraftment of CB HSCs [125]. Therefore, the exploitation of NO-releasing agents or the pharmacological activation of NO-dependent intracellular pathways to bolster the number or activity of HSPCs has been suggested as a promising strategy for therapeutic applications [125]. However, another study demonstrated that iNOS-deficient mice were easily mobilized and their BM-derived mononuclear cells were endowed with intensified homing and engraftment [126]. NO depletion in human PB HSCs led to their shift from differentiation to proliferation [127].
Physiological functions involving NO have also been described in developing HSCs. As a process resembling embryonic HSC budding, NO-induced endothelial podokinesis plays a permissive role in mediating the execution of the vascular endothelial growth factor (VEGF)-guided program of directional endothelial cell movement by interfering with cell-ECM adhesive interactions [128]. With regard to definitive hematopoiesis, researchers hypothesized that NO produced locally by endothelial cells must also affect HSC formation, since the DA is the requisite de novo site for HSC emergence [129,130]. As expected, a wide spectrum of independent studies has successively corroborated that NO signaling is required in the vascular niche for blood flow-dependent HSC generation during embryogenesis. Ectopic NO strengthens hematopoiesis and mitigates hematopoietic defects resulting from cardiac dysfunction, while abrogating NO signaling diminishes the prohematopoietic effects of blood flow [86,87,97]. Parallel results were reported in phospholipase C gamma 1 mutants (plcg1-/-), in which the perturbations of arterial specification and HSPC formation caused by the absence of blood circulation could be prevented by ginger-induced robust NO upregulation [131]. In zebrafish embryos, NO synthase (nNOS and iNOS) appears to be directly activated by klf2a, which is required for blood flow-dependent HSC maintenance [97]. Further, NO was found to lie downstream of Runx1, based on the ability of NO antagonists to significantly attenuate the expansion of hematopoietic progenitors induced by shear stress without affecting the upregulation of Runx1 [86].
To sum up, in embryonic HSCs/HSPCs, NO often serves as a definite downstream effector of blood flow that mediates its positive effect on HSPC formation. Its impact on adult HSCs/HSPCs, however, remains largely obscure owing to quite contradictory conclusions in the literature, which we think might result from the distinct HSC types analyzed: NO is a locally acting signaling molecule, and HSCs of different origins (BM HSCs, CB HSCs, and PB HSCs) occupy totally different niches.
Mechanically gated ion channel
Plenty of flow-modifying agents that regulate the formation of AGM HSCs are indeed well-known modulators of ion channels. Among these, the Ca²⁺-channel blocker nifedipine and the Na⁺/K⁺ flux modulator glycoside digoxin both enhance HSC formation, while BayK8644, a potent Ca²⁺-channel activator, diminishes the number of HSCs [87]. As a major class of ion channels, cationic stretch-activated channels (SACs) are capable of sensing mechanical forces with high sensitivity and a wide dynamic range and are permeable to calcium (Ca²⁺), a significant second messenger implicated in cell fate decisions [132,133]. Among SACs, Piezo channels can be triggered by NMII-dependent intracellular traction forces in response to mechanics such as substrate elasticity and matrix topography, governing the mechanosensitive fate of stem cells [134,135]. Calcium flux has been documented in AGM-derived cells exposed to WSS, stimulating intracellular calcium signaling that directly potentiates the production of prostaglandin E2 (PGE2) responsible for hematopoietic potential modulation, in agreement with the identification of calcium signaling as the second most enriched pathway in WSS versus static cultures of AGM-derived cells [89]. The deformation of primary cilia, such as bending induced by mechanical stimuli, causes intracellular Ca²⁺ oscillations followed by modifications in calcium signaling cascades [110]. Moreover, it is well appreciated that eNOS binds calmodulin, whose activity is regulated by Ca²⁺ [136]. Many events sensitizing eNOS to Ca²⁺ can stimulate the release of NO [137].
Crosstalk with known signaling pathways controlling HSC development
In addition to unique mechanosensation and mechanotransduction machinery, the activation of multifarious well-established developmental HSC regulatory signaling pathways has been identified in endothelial cells exposed to force, suggesting their potential interplay within HE and/or HSCs [138,139]. These classic regulatory transcriptional programs include PGE2, Wnt, Hedgehog, Notch, bone morphogenetic protein (BMP), and VEGF signaling. Expectedly, ingenuity pathway analysis showed that prostaglandin, Wnt (especially noncanonical Wnt), and Notch signaling all manifest varying degrees of upregulation after exposure to WSS, and elevated PGE2 production mediated by calcium flux is evident in the AGM [89]. PGE2 production can be induced by WSS to control the expansion of hematopoietic populations in the developing embryo. BMP was identified as a downstream target of the shear stress-protein kinase A (PKA)-cAMP response element-binding protein (CREB) pathway for promoting HSC emergence [98]. The rescue of arteriogenesis and hematopoiesis by ginger treatment in the aforementioned VEGF pathway mutants plcg1-/-, which display complete disruption of blood flow, is BMP- and Notch-dependent [131]. Shear stress promotes the formation of a mechanical sensor complex composed of VEGF receptor 2 (VEGFR2, also called FLK1), vascular endothelial (VE)-cadherin, and β-catenin [140]. These wide-ranging possibilities of complicated crosstalk under mechanical conditions provide a cooperative control system working in concert to dictate HSC fate.
Cell-cell and cell-ECM adhesions
Cells are not just single entities with innate cytoarchitecture and cytoskeleton; they also connect to adjacent cells and the ECM via cell-cell and cell-ECM adhesions [26]. Likewise, both in embryonic and adult HSCs, interactions between HSCs and the ECM/supportive cells are essential microenvironmental constituents of various HSC niches closely relevant to HSC fate [74,[141][142][143]. Here, we emphasize some key adhesive molecules and structures that are known to play a crucial role in HSCs and/or the mechanotransduction process.
During development, the engagement of Notch signaling in HE specification requires posterior lateral plate mesoderm (PLPM) cells migrating over the somite boundary and their close physical contact with somitic cells via ITGB1-mediated adhesion to FN [144,145]. Within the FL hematopoietic compartment, some HSPCs were observed in both the luminal and parenchymal aspects of sinusoidal endothelial cell ECM comprised of laminin and FN, interacting with sinusoidal endothelial cells through the endothelial protein C receptor (EPCR) [146]. Time-lapse live imaging of HSPCs in the zebrafish embryo also revealed striking physical anchorage of HSPCs to perivascular endothelial cells in the CHT niche microenvironment, which further orients their mitotic divisions [143]. Imaging of the mouse BM uncovered that most quiescent HSCs are adjacent to arterioles [146]. Malignant HPCs such as leukemia blasts often exhibit aberrant adhesive structures and signaling, and targeting adhesion signaling is considered a potential strategy for rational antileukemia therapy [147][148][149].
These junctions are often sites of mechanical convergence capable of sensing and conveying physical forces, of which cadherin-mediated cell-cell adhesions and focal adhesions (FAs) are the most remarkable [150,151]. Notably, VE-cadherin-expressing cells represent a primitive HSC population [152,153]. Moreover, EHT has been regarded as a partial epithelial-to-mesenchymal transition (EMT) process featuring a transitional phenotype of post-EHT cells situated in the intra-aortic clusters, with decreased VE-cadherin-mediated endothelial cell-cell and MMP2-mediated cell-ECM interactions [154,155]. Biomechanical forces derived from blood flow have been reported to promote the dissociation of post-EHT HSPC clusters into individual HSPCs [87]. Previous findings highlighted the junctional stability of cadherins under mechanical force and their ability to sense variations in substrate rigidity [153,156,157]. In VE-cadherin-null endothelial cells, several flow-responsive actions were absent [140]. Platelet endothelial cell adhesion molecule-1 (PECAM-1), another mechano-responsive receptor localized to cell-cell junctions in endothelial cells, becomes activated under fluid shear stress, transmits mechanical signals to VE-cadherin and activates VEGFR2 [158]. CD44-mediated cell rolling interactions underpinning lymphocyte trafficking and HPC homing were strengthened by tensile mechanical force through a rapid allosteric transition of CD44 to a high-affinity state [159]. Similarly, HSPC homing was directed by the rolling adhesion of HSPCs to the endothelium via selectins under shear stress. Adhesion reorganization within HSPCs was observed during this process, corroborating previous studies demonstrating that mechanical stimuli positively regulate the composition and kinetics of adhesive junctions [100,160-162].
Cell-ECM interactions enable cells to sense and react autonomously to the mechanical cues of their context. Also present in HSPCs, cell-ECM contacts are mechanical sensors and/or mechanotransducers of matrix elasticity [30]. The intracellular transmission of mechanical stimuli such as topography and forces is attributed to cell-ECM attachment. In brief, mechanical stimuli activate integrins at cell-ECM adhesions, and intrinsic forces are used by cells to form mature FAs, a common load-bearing anchorage site, which act as mechanosensitive rheostats driving single-cell mechanical homeostasis [163-165]. Given the nonspecific character of mechanical inputs, postulated mechanisms of mechanotransduction include, but are not limited to, the specific types of integrin receptors expressed by cells and of ligands present in the ECM [26,159]. As described for anchorage-dependent cells, it has also been shown for HSCs that multiple ECM proteins such as FN, laminin and collagen provide structural form and modulate their behavior, possibly through integrins [64,166-168], since it is well established that these ECM proteins share the common integrin-recognition motif RGD (Arg-Gly-Asp) [85]. Many substrates do elicit mechanosensitive responses in HSCs through integrins [39,82,169], which may be a likely mechanism for the regulation of HSCs by distinct matrix stiffnesses. Notably, integrin αIIb (CD41), expressed at low levels in emerging HSCs in the AGM region during development, is specifically used as a nascent HSC marker [170,171]. Within the FL hematopoietic niche, the interaction between HSPCs and the ECM depends on β1-integrin on the HSPCs binding to vitronectin and FN generated by hepatoblasts [142].
In addition, the recruitment of FA kinase (FAK) to FAs is required to relay external mechanical information into cells. The ratio of phosphorylated to total FAK, which reflects FAK activity, increases dramatically with substrate stiffness, and blocking FAK activity in myoblasts abrogated stretch-induced alignment and differentiation [172,173]. FAK plays a predominant role in the activation of the Rho family of small GTPases (Rho-GTPases) [174], which are mechanotransducers responsible for relaying signals from blood flow to YAP in embryonic HSPC production [95]. Moreover, cell-ECM interactions act bidirectionally in mechanotransduction: matrix stiffness or loading influences cell behavior, while cells in turn exert traction forces on the ECM and secrete ECM remodeling proteins such as matrix components or proteases that strengthen or degrade the ECM and enhance or cleave adhesive interactions [175,176]. The HSC niche is particularly dependent on ECM remodeling proteins to control HSC quiescence, mobilization and hematopoiesis [154,177,178]. All in all, although the role of cell-cell and cell-ECM interactions in HSCs has been studied extensively, a detailed picture of adhesion force transduction is still lacking.
Mechano-responsive transcription factors
Mechanical effects can be converted into changes in gene and protein expression through mechano-responsive transcription factors (TFs). Yes-associated protein (YAP) and transcriptional coactivator with PDZ-binding motif (TAZ) are two well-appreciated mechanics-induced transcriptional coactivators [179]. In most cases, YAP/TAZ activity is restricted to cells experiencing mechanical stresses [180]. YAP activation can be triggered by different types of mechanical cues, such as substrate stiffness, cyclic stretch and shear stress, which facilitate its translocation from the cytoplasm to the nucleus [95,179,181]. YAP is also required for mechano-morphogenetic processes by controlling actomyosin-mediated tissue tension [182,183]. Goode and colleagues observed a precisely timed nuclear localization of YAP in the endothelium just before the EHT process during murine hematopoietic development; further functional validation identified the TEAD/YAP interaction as a stage-specific regulator necessary for early hematopoietic specification [184]. Most recently, it has been shown that YAP activation and the upregulation of YAP target genes are sensitive to cyclic stretch and inform HE commitment towards an HSPC fate, confirming for the first time a connection between biomechanical cues and YAP in determining HSC fate [95].
KLF2 is another crucial mechano-activated TF, whose expression mirrors the onset of fluid shear forces in the developing mouse embryo [185] and which responds immediately to shear stress in mouse ESC-derived CD41+ Flk-1+ cells and during ESC differentiation towards hematopoietic and endothelial potential [86,99]. Likewise, expression of its zebrafish ortholog, klf2a, is dramatically reduced or even absent in vasculature lacking blood circulation, and klf2a is a pivotal mediator of blood flow-induced HSC production [97].
As a member of the basic leucine zipper (bZIP) TF family, cAMP response element-binding protein (CREB) can also be activated by miscellaneous mechanical loadings [186-188] and triggers the endothelial and hematopoietic differentiation of ESCs via its recruitment to the Etv2 promoter [189]. Moreover, it has recently been described as abundantly expressed in the AGM, acting as a downstream effector of fluid shear stress that affects the EHT process and HSC emergence through a PKA-CREB-BMP signaling pathway [98]. Roughly in agreement with these data, Ncx1 heartbeat mutants and static AGM cultures exhibited a significant reduction in CREB phosphorylation, and a prostaglandin E2 (PGE2)-cAMP-PKA signaling axis mediated the effects of WSS on definitive HSCs [89].
Cytoskeleton
As a cellular interconnected scaffold composed of three main polymers, actin filaments, intermediate filaments and microtubules, the cytoskeleton forms intricate structural networks with different architectures [35]. The primary machinery of cellular force-sensing and force-generation is reported to reside in changes of the cytoskeleton [35,190,191]. Cell-intrinsic forces are generated through either the concerted polymerization of actin filaments or their sliding along the bipolar filaments of NMII [190,192]. Suppression of NMII activity attenuated the expansion of LSK cells conferred by matrix elasticity [30]. Moreover, the extent of cytoskeletal contractility is proportional to the degree of adhesion strength [193,194]. The cytoskeleton therefore has important implications for mechanical transmission as a communication hub between cells and the external physical microenvironment [179,180,195]. When external mechanical stimuli are exerted directly on the cytoskeleton or on transmembrane adhesive receptors connected to it, cytoskeletal remodeling and rearrangement of cytoskeletal tension occur under the action of a myriad of actin-binding proteins [196,197]. Endothelium bearing steady laminar shear stress exhibits a typical phenotype of "apical stress fibers", robust actin- and myosin-containing filaments enriched at the apical cell membrane as a result of cytoskeleton-related gene activation [24]. A similar "apical constriction" event also occurs in EHT cells and may rely on the activity of actomyosin recruited at the circumferential actin belt controlled by myosin regulatory light chain 9 (Myl9) [48].
Apart from structural changes of the cytoskeleton, researchers have long observed force-induced upregulation of cytoskeletal proteins [198]. In addition, the cytoskeleton was identified as a key regulatory input for YAP/TAZ, since YAP/TAZ activity is abrogated by Cofilin, CapZ, Gelsolin and other F-actin-capping/severing proteins, whereas cortical actin bundling in response to shear stress enhances YAP nuclear translocation [180,181]. A study on mouse kidney development also revealed that the Rho-GTPase CDC42, a well-established promoter of F-actin polymerization, contributes to the nuclear retention of YAP [199], and Rho-GTPases mediate blood flow-induced YAP activation and HSPC production in zebrafish embryos and in vitro [95]. Defective cytoskeletal architecture in HE cells or cells undergoing EHT following the abrogation of blood flow resulted in severe impairment of their morphodynamics and thus higher susceptibility to cell death, which may explain the significant decrease in the number of HSPCs previously reported upon obliteration of blood flow [4,48,200].
The key molecular mechanisms involved in the regulation of HSC behavior under mechanical conditions, highlighted above, are illustrated in Fig. 3.
In-vitro mechanical mimicry platforms for HSC support
Research aiming to recapitulate the developmental processes of HSCs in vitro relies heavily on the establishment of robust and reproducible methods, one of which is the effective biochemical and biomechanical simulation of in vivo milieus. However, common HSC-supportive strategies are confined to biochemical exposure, such as the delivery of various growth factors and cytokines, with only meager attention paid to the effects of biomechanical cues; systems of this kind enhance HSPC proliferation but induce substantial differentiation [201,202]. To circumvent this limitation, approaches from mechanical engineering and materials science, in which biomechanical cues such as substrate elasticity/rigidity, micropatterns, nanotopography and externally applied forces are deliberately designed, have proven more conducive to achieving sustained expansion of HSCs while preserving their stemness and multipotency [56,203]. Among them, specially fabricated 3D nanofibers are the most frequently reported in the literature [56,203-205]. These are often hierarchically structured scaffolds built from different biomaterials, including natural polymers such as fibrin, collagen [205], tropoelastin [30] and silk fibroin [60]; synthetic polymers such as PCL (an extensively used scaffold material in tissue engineering), hydroxyapatite (HA, a dominant component of bone), polyethylene terephthalate (PET), PES, hydrogels, polyurethane (PU) and poly-L-lactic acid (PLLA); ceramics; and hybrids of these. Electrospun nanofiber ceramics have lately been reported as bone-mimicking materials [206]. Selected studies carried out in the last 5 years on the expansion and differentiation of HSCs utilizing synthetic nanofibers are compiled and highlighted in Tables 1 and 2. All these 3D nanofiber platforms have been adopted as analogs of the BM niche. By altering the ratio of distinct constituents, crosslinking levels, concentrations, bonds with other components, and the density of cell adhesion ligands, biomaterials can be mechanically tuned to generate nanofibers equivalent to those of the natural ECM in size, structure and elasticity [207].

Fig. 3 Schematic representation of molecular mechanisms translating biomechanical cues into the cellular genetic program in HSCs. Biomechanical inputs from external loads directly stimulate mechanosensors such as mechanically gated ion channels, adhesion receptor-ligand bonds, the cytoskeleton and primary cilia. PECAM-1, VE-cadherin and VEGFR2 together constitute a mechanosensory complex. Intrinsic forces are generated under environmental mechanical constraints, are transmitted to neighboring cells through junctional interfaces, and consequently elicit cellular mechanoresponses. For example, blood flow governs the heterogeneous organization of circumferential actomyosin and actin/junction contacts characterizing HSC emergence via a Myl9-dependent mechanism. Intrinsic forces can also pass directly to the nucleus through LMNA, affecting chromatin structure and thereby controlling epigenetic processes. Biomechanical cues cooperate with biochemical signals: for instance, the expression and/or activation of mechano-responsive TFs including YAP, KLF2 and CREB induced by fluid forces drives HSC programming. Interactions between these mechanical players are common; for example, PGE2 and NO production are downstream events of the calcium flux triggered by mechanical stress that is involved in hematopoietic regulation.
Besides, the application of fluid shear stress mimicking cardiovascular forces within the AGM niche is another manageable means to attain optimal mechanical conditions, and it can readily be incorporated into scalable bioreactors. Flow experiments exposing cells (differentiating ESCs, embryonic HE or HSCs) to a defined pattern of laminar shear stress can be performed with a 2D adherent parallel-plate configuration or with a microfluidic platform [96,99,208]. Different mechanical parameters, such as varying modes (pulsatile or continuous) and magnitudes (1-10 dyn/cm2) of fluid shear stress, can be investigated predictably and systematically; a shear stress of 5 dyn/cm2 roughly matches the physiological magnitude experienced by cells in the dorsal aorta of E10.5 mouse embryos [86]. Using stretchable micropost array (SμPA) cytometry, Weng et al. applied defined static equibiaxial stretches of varying degree to seeded cells [163]. Alginate beads with encapsulated osteoblasts have been used to mimic the low 3D stiffness of the BM niche [209]. The application of intermittent hydrostatic pressure (IHP) is also a widely used approach to simulate the BM mechanical microenvironment [69,209,210].
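For orientation, the wall shear stress delivered by such a parallel-plate chamber is commonly estimated with the standard Poiseuille-flow formula τ = 6μQ/(wh²). The sketch below is our own illustration with assumed chamber dimensions, not values taken from the cited studies.

```cpp
#include <cstdio>

// Wall shear stress in a parallel-plate flow chamber (standard estimate):
//   tau = 6 * mu * Q / (w * h^2)
// All values below are illustrative assumptions, not from the cited studies.
int main() {
    double mu = 0.01;    // medium viscosity [dyn*s/cm^2], close to water
    double Q  = 0.05;    // volumetric flow rate [cm^3/s]
    double w  = 1.0;     // chamber width  [cm]
    double h  = 0.0245;  // chamber height [cm]
    double tau = 6.0 * mu * Q / (w * h * h);                  // [dyn/cm^2]
    std::printf("wall shear stress = %.2f dyn/cm^2\n", tau);  // ~5 dyn/cm^2
    return 0;
}
```

Tuning Q (or h) then sweeps the 1-10 dyn/cm2 range mentioned above.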
Conclusion and perspective
The acquisition of sufficient functional HSCs in vitro remains a holy grail in the field of hematopoiesis and holds great promise for clinical-scale therapeutics. The identification of specific cues during HSC development and a fundamental grasp of the underlying mechanisms will help decipher the key requirements for prolonged proliferation and pluripotency of HSCs. Significantly, the incorporation of biomechanical signals substantially extends the scientific community's current understanding of the types of signals that regulate HSCs. This manuscript attempts to provide an integrated picture of the biophysical processes and probable mechanisms driving the phenotypic and functional outputs of HSCs. We have also outlined the practical aspects of mechanical signals, focusing on mechanical mimicry platforms and the application of mechanical forces to derive or expand HSCs in vitro.
Although certain mechanosensors and mechano-activated signaling pathways have been uncovered in several cell types and processes, critical questions remain unanswered: how natural fluctuations in the pattern, magnitude and duration of mechanical stimuli produce divergent genetic/epigenetic outcomes in HSCs; how to precisely define the mechanical components presented in synthetic niches; and, given the evidence that biomechanical activation can successfully specify HSCs in vitro, how long mechanical effects last and whether they are retained in transplant recipients. All of these are interesting avenues for future research. In addition, the aforementioned molecules and structures involved in cellular mechanotransduction and its downstream events are highly interconnected rather than mutually exclusive. For example, the primary cilium is actually a microtubular extension of the cytoskeleton, whose basal body region is a microtubule-organizing center [221]; cilium bending induces cytoskeletal deformation and membrane stretching at the base of the primary cilium [222], initiating extracellular Ca2+ influx through calcium channels in the ciliary membrane [223].
Cell junction receptors and associated proteins generally represent a mechanical linkage between the ECM and the actin cytoskeleton. In the face of mechanical stimuli, adhesion complexes actively facilitate cytoskeleton assembly, and stress fibers made of F-actin support the maturation and stabilization of focal adhesions. The integrin-FA-F-actin axis therefore works as an integrated whole in mechanotransduction [46,196,197,224-226]. In this respect, the regulation of biomechanical signals is a multifactorial event that can hardly be replicated by manipulating a single signaling axis, which may be why results obtained from synthetic niches are often challenging to interpret. Given the complexity of these mechanical interactions, deeper study is required not only to reveal an integrated characterization of the various mechanical regulatory networks in a temporal and spatial manner at the cellular level, but also to dissect the relative contribution of individual signals in the related processes.
In conclusion, we have summarized how biomechanical cues offer robust and instructive principles for HSC fate determination. Nevertheless, the study of mechanical signals as regulators of HSC biology is still in its infancy, and the identification of several known mechanosensitive and mechanotransductive components suggests that more await discovery. Fortunately, it is not naive to assume that researchers are moving closer to a full appreciation of the mechanical regulatory paradigm within HSCs, thanks to burgeoning experimental techniques ranging from high-spatiotemporal-resolution imaging to CRISPR/Cas9 gene editing.

Table 2 State-of-the-art nanofiber technology-based niche models for HSC differentiation
Conflict of interest
The authors declare that they have no conflict of interest.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
"Biology",
"Engineering"
] |
Reliable Computation of Robust Response Tori on the Verge of Breakdown
We prove the existence and local uniqueness of invariant tori on the verge of breakdown for two systems: the quasi-periodically driven logistic map and the quasi-periodically forced standard map. These systems exemplify two scenarios: the Heagy–Hammel route for the creation of strange nonchaotic attractors and the nonsmooth bifurcation of saddle invariant tori. Our proofs are computer-assisted and are based on a tailored version of the Newton–Kantorovich theorem. The proofs cannot be performed using classical perturbation theory because the two scenarios are very far from the perturbative regime, and fundamental hypotheses such as reducibility or hyperbolicity either do not hold or are very close to failing. Our proofs are based on a reliable computation of the invariant tori and a careful study of their dynamical properties, leading to the rigorous validation of the numerical results with our novel computational techniques.
Introduction.
The goal of this paper is to present a new methodology to provide rigorous proofs of the existence and local uniqueness of (fiberwise hyperbolic) invariant tori in quasi-periodic systems, even in cases in which the available perturbative theory does not apply. The methodology is suitable for computer-assisted proofs and consists in checking the hypotheses of a validation result based on the Newton-Kantorovich theorem [27]. As an application of the methodology, we prove the existence and local uniqueness of invariant tori on the verge of breakdown in two scenarios: the Heagy-Hammel route to strange nonchaotic attractors (SNA) [32] in a quasi-periodically driven logistic map and the breakdown of saddle tori [26] in a quasi-periodically forced standard map.
Organization of the paper. In this introductory section we present an overview of the paper, including the rigorous validating results of existence of invariant tori in several examples and a brief discussion of the methodology. In section 2 we summarize the theoretical framework necessary for the computer-assisted proofs and present a validation algorithm. In section 3 we present the Fourier models for the rigorous manipulation of Fourier approximations (see also Appendix A) and the implementation of the validation algorithm. Sections 4 and 5 report the proofs of the rigorous validations, which have to be carried out after accurate numerical computations.
1.1. Robust response tori in quasi-periodic systems. The long-term behavior of a dynamical system is organized by its invariant objects. Hence, it is important to identify the robust invariant objects that persist under perturbations of the system. In applications, we can produce numerical approximations of these objects and we may wish to turn the nonrigorous calculations into theorems. Hence, the question is to establish whether a numerical approximation persists as an invariant object of the dynamical system and to provide rigorous error bounds.
In this paper, we address this question for a particular class of dynamical systems and invariant objects. The systems we consider are quasi-periodically forced, that is, coupled with an irrational rotation, and the invariant objects are invariant tori carrying this irrational rotation. These tori are the response to the quasi-periodic forcing and are geometrically described as graphs of the state variables over the coupled angles describing the quasi-periodic motion [59]. Since it has been known for a long time that persistence of invariant manifolds is closely related to the concept of normal hyperbolicity [21,34,50,57], here we consider the analogous concept, tailored for skew products over rotations. Roughly speaking, an invariant torus is fiberwise hyperbolic if the linearized dynamics on the normal bundle is exponentially dichotomous, that is, the normal bundle splits into stable and unstable bundles on which the dynamics is uniformly contracting and expanding, respectively. Notice that the tangent dynamics is dominated by the normal dynamics, since the former presents zero Lyapunov exponents. This implies that fiberwise hyperbolic invariant tori are robust and are as smooth as the system [28].
Most of the results regarding the existence of invariant objects in the literature are in fact perturbative [21,34,50,57] and provide rather pessimistic estimates of the persistence of the invariant objects when applied to concrete examples. In this paper we adopt the functional framework described in [27], which leads to an a posteriori result based on the Newton-Kantorovich theorem. Hence, the rigorous validation of numerical computations consists in checking the hypotheses of this theorem. Notably, the applicability of Newton's method for computing response tori is related with the property of fiberwise hyperbolicity. The methodology is suitable for validating invariant tori that are very close to breakdown.
Reliable computations on the verge of breakdown.
The transition from regular to irregular motion is a difficult mathematical problem which arises in various fields, such as solid state physics, chemical reaction dynamics, climate dynamics, and neuroscience. In systems under quasi-periodic forcing, the transition can be understood as the phenomenon of breakdown of response invariant tori. A main problem is providing rigorous bounds of the parameters for which smooth invariant tori do exist, close to the estimated thresholds of breakdown, since we may need rigorous delimitations of the boundaries between the regular motion and the irregular motion. In this paper we report the application of computer-assisted proofs of the existence of invariant tori in two scenarios: the Heagy-Hammel route to an SNA [32] and the nonsmooth breakdown of saddle invariant tori [26].
Rigorous validations in the Heagy-Hammel route.
In quasi-periodic dissipative systems, it has been observed that an attracting smooth torus may nonsmoothly bifurcate into an attracting object of complicated geometry (not even continuous) but still carrying a nonchaotic (in fact quasi-periodic) dynamics, the SNA. The discovery of this extremely interesting behavior [25] (see also [33]) produced an explosion of numerical and experimental studies reporting mechanisms for the birth of SNA which still resonates today (see, e.g., [22,55] and references therein). Further theoretical studies with rigorous explanations and mathematical proofs of some of these mechanisms have been considered in the mathematical literature (see, e.g., [6,7,30,35,40,60,61]), and all of them involve the collision of invariant tori. The Heagy-Hammel route [32] falls into this category.
In the Heagy-Hammel route, a period 2 attracting torus (born in a period doubling bifurcation) collides with its companion repelling torus, producing an SNA. In this transition, the repelling torus is preserved, while the period 2 attracting torus is destroyed. This situation has been observed in numerical experiments on the following quasi-periodically driven logistic map:

(1.1)    z̄ = a z (1 − z) (1 + D cos(2πθ)),    θ̄ = θ + ω,

where ω = (√5 − 1)/2 and a and D are parameters. In the following, we fix D = 0.1 and let a > 0 vary. Numerical experiments (see subsection 4.1 for further details) show that for a ∈ ]a_p, a_c[, with a_p ≈ 3.141875 and a_c ≈ 3.271383, there is a period 2 attracting torus. See Figure 1. This period 2 attracting torus is born in a period doubling bifurcation at a = a_p, and it is destroyed at a = a_c when colliding with the repelling torus. The repelling torus survives the collision and in fact exists for a ∈ ]a_p, +∞[.
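As an illustration of the kind of (nonrigorous) numerical experiment behind Figure 1, the following sketch iterates (1.1) in double precision and estimates the Lyapunov multiplier of the attractor; the parameter value a = 3.2 is our own choice inside ]a_p, a_c[.

```cpp
#include <cmath>
#include <cstdio>

// Nonrigorous sketch: iterate the driven logistic map (1.1) and estimate the
// Lyapunov multiplier of the attractor, exp(average of log|dF/dz|).
int main() {
    const double twopi = 6.283185307179586;
    const double omega = 0.5 * (std::sqrt(5.0) - 1.0);
    const double a = 3.2, D = 0.1;          // illustrative choice in ]a_p, a_c[
    double z = 0.3, theta = 0.0;
    for (long i = 0; i < 100000; ++i) {     // transient: settle on the attractor
        z = a * z * (1.0 - z) * (1.0 + D * std::cos(twopi * theta));
        theta = std::fmod(theta + omega, 1.0);
    }
    const long N = 1000000;
    double sumlog = 0.0;
    for (long i = 0; i < N; ++i) {
        double dz = a * (1.0 - 2.0 * z) * (1.0 + D * std::cos(twopi * theta));
        sumlog += std::log(std::fabs(dz));  // log of the transfer matrix
        z = a * z * (1.0 - z) * (1.0 + D * std::cos(twopi * theta));
        theta = std::fmod(theta + omega, 1.0);
    }
    std::printf("Lyapunov multiplier ~ %.6f\n", std::exp(sumlog / N));
    return 0;
}
```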
In this scenario, a key role is played by the noninvertibility of (1.1). Interestingly, the role of noninvertibility in global bifurcations was already noted in [1,2]. In the present example, the linearized dynamics of the period 2 attracting torus can be either reducible (which means that it is invertible) or nonreducible. The linearized dynamics degenerates when the torus crosses the critical curve {z = 1/2}, at a_r ≈ 3.17496. When the parameter a approaches the threshold a_c, the unstable dynamics around the repelling torus becomes more apparent on the period 2 attracting torus, since both objects approach each other. At the threshold a_c, the closure of the SNA must contain some repelling orbits [60]. Hence, even though the normal dynamics around the period 2 attracting torus is attracting on average (the Lyapunov exponent is negative), it may be locally expanding. This local expansivity and the degeneracy of the linear dynamics are a major drawback in the rigorous validation of the period 2 attracting curve for a close to a_c.
The following proposition asserts that the computations of invariant tori in Figure 1 are reliable. In particular, the period 2 attracting torus exists up to a relative distance which is less than 7.3 · 10^−4 of the (numerically) estimated value of the breakdown. This proposition is proved in section 4. (ii) For every parameter a = 3.265, 3.268, 3.269 there exists a locally unique period 2 invariant attracting curve.
Rigorous validations on the verge of a hyperbolicity breakdown.
We present reliable computations and validations of saddle invariant tori and their invariant stable and unstable bundles in quasi-periodically forced systems. We consider the case in which saddle tori break nonsmoothly (in a sort of nonsmooth Hamiltonian saddle-node bifurcation). The phenomenon of breakdown of saddle invariant tori in quasi-periodic conservative systems is poorly understood and, as far as we know, only two numerical studies have reported mechanisms and conjectured a theoretical framework [26,29]. The phenomenon consists in the nonuniform approach of the stable and unstable bundles, deteriorating as the parameters approach a certain threshold. In other words, the projectivizations of the invariant bundles show the typical collision mechanism of creation of SNA observed, e.g., in the Harper map [30,41]. Moreover, the corresponding Lyapunov multipliers stay away from 1. As suggested in [29,26], the nonuniform collision of the bundles leads to a lack of uniform hyperbolicity of the torus and to its breakdown when the parameters cross the threshold.
The model we consider in the present paper is the quasi-periodically forced standard map, where we fix ω = (√5 − 1)/2, and κ, ε are parameters. In the following, we fix κ = 1.3 and let ε vary. For ε = 0 there exists an invariant saddle torus. Numerical experiments (see subsection 5.1 for further details) suggest that there is a limiting value ε_c ≈ 1.2352755, the critical parameter value, where the saddle torus breaks up: its invariant stable and unstable bundles collide in a nonsmooth manner while the maximal Lyapunov multiplier remains far from 1.
In the nonsmooth breakdown scenario, for κ = 1.3, we prove the existence of the saddle torus up to a bound that is at a relative distance of less than 4.3 · 10^−7 from the estimated threshold of breakdown. This is part of the following proposition. (ii) For the range of parameters ε ∈ [0, 1.167434] there exists a continuous family of invariant saddle curves.
In this bifurcation the computation of invariant tori and their stable and unstable bundles is difficult, since these objects are highly deteriorated and close to breakdown, and simple iteration algorithms do not apply. Notice that we have validated Figures 2(c) and 2(d), in which the bundles present SNA-like behavior.
The methodology.
Our methodology represents an advance on the results, numerical algorithms, and experiments presented in [27,28,29], which are the inspiration for this paper. In these references, the dynamical characterization of the condition of invariance of a torus leads to a functional equation that fits in the framework of the Newton-Kantorovich theorem [38]; see Theorem 1 in [27] (Theorem 2.8 in the present paper). We emphasize that the nondegeneracy conditions of the Newton-Kantorovich theorem correspond to hyperbolicity properties of the approximate invariant tori. Therefore, starting with an approximate solution of the invariance equation and an approximation of the hyperbolicity properties (i.e., of the stable and unstable bundles), one can use rigorous interval arithmetic [53,39] to verify the hypotheses of the constructive existence theorem (see Theorem 2.8), which consists in checking several a posteriori bounds. The verification of these bounds leads to the proof of the existence (and local uniqueness) of a true solution of the invariance equation and hence of the true invariant torus and its stable and unstable bundles. In the situations explained in this paper, in which tori are about to break, having accurate and efficient numerical methods is essential to be able to produce approximations that pass the validation test. In this paper we use Fourier methods [29] and rational approximation of frequencies (computing periodic orbits of approximate periodically forced systems) to obtain these approximate solutions. These numerical methods are tailored for the specific class of invariant tori we consider. (See, e.g., [8,9] for general numerical methods to compute normally hyperbolic invariant manifolds.) An alternative topological approach for validating the existence of invariant sets of normally hyperbolic type has been considered in [11], which is based on the method of covering relations [65]. These methods work for more general dynamical systems but cannot be used to prove the (local) uniqueness of the invariant sets. Moreover, the properties of normal hyperbolicity are checked by using cone conditions, which are extremely difficult to verify for the examples considered here. Our functional-analysis flavor of computer-assisted proofs in dynamical systems problems using Newton-like methods in fact has a long history that goes back to the proof of the Feigenbaum conjecture in unimodal maps [45,46], the proof of the universality of the period-doubling cascade for area-preserving maps [20], and the proof of the existence of critical invariant tori in Hamiltonian systems [43]. See also the inspiring chapter 7 in [17] and the review [44]. Also, the recent papers [23,4] deal with rigorous computer-assisted validations of zeros of nonlinear functionals defined in infinite dimensional Banach spaces in the context of evolution equations. In these references, the linearizations of the corresponding functionals are compact operators, which have a relatively simple spectrum, making it relatively easy to verify the nondegeneracy conditions required to apply a Newton-like method. In contrast, the operators arising in the problems presented here are noncompact and their spectra are sets of annuli centered at 0 [51,15] (the inner annulus could be a disk if the
dynamical system is nonreversible), which makes the (direct) computation of spectra relatively difficult. Fortunately, checking the applicability of Newton's method is equivalent to checking that 1 is not in the spectrum, and this is just rephrasing the condition of hyperbolicity. In this paper, a natural Banach space for use in the parameterizations of tori and their bundles is the space of continuous periodic functions. A suitable method for rigorously enclosing continuous periodic functions is to use (truncated) Fourier series, bounding the size of the truncated tails. (Other manifolds or other dynamics could require other Banach spaces and other types of approximations.) In particular, the Fourier model we use to manage Fourier series rigorously is a trigonometric polynomial with interval coefficients plus an interval error. We emphasize that suitable Fourier and Lindstedt (Fourier-Taylor) models are ubiquitous in computer-assisted proofs in KAM theory and renormalization theory [13,19,18,42,43,49]. Our validation algorithms for proving the existence of invariant tori were implemented using our own C++ library to manipulate Fourier models together with the rigorous interval library FILIB++; see [48]. All the validations presented here were tested with several types of computers working under several operating systems, although we report only the results obtained with a machine with an Intel Core2 Quad CPU Q9550 at 2.83 GHz working under Debian, using one of the processors.
1.4. Notation. R^n denotes the n-dimensional real space, and e_1, ..., e_n are the unit vectors that form its standard basis. For i = 1, ..., n and v ∈ R^n, π_i v is the ith component of the vector v. L(R^n; R^k) is the space of linear maps from R^n to R^k, identified with the set of k × n matrices. The space of endomorphisms of R^n, identified with the set of square n × n matrices, is L(R^n) = L(R^n; R^n), and GL(R^n) is its subgroup of automorphisms, i.e., the group of invertible n × n matrices. I_n represents the n × n identity matrix. L(R^n, R^m; R^k) denotes the set of bilinear maps from R^n × R^m to R^k. We consider the induced norms in the spaces of linear maps and bilinear maps previously mentioned (for instance, those induced by the maximum norm). T = R/Z denotes the one-dimensional torus. The translation on the torus of frequency ω ∈ R is the map t_ω : T → T defined as t_ω(θ) = θ + ω.
R^n × T denotes a trivial bundle over T (with projection π : R^n × T → T). We assume that this bundle is endowed with a Finslered norm, i.e., a norm | · |_θ on each fiber R^n × {θ} that depends continuously on θ. We typically omit the subindex θ when the fiber is understood (or if the norm does not depend on θ). A strip in the bundle is a set D ⊂ R^n × T. For a vector space Z and a Finslered norm in the trivial bundle Z × T over T, we identify the space of continuous sections of the bundle with the set of continuous functions σ : T → Z, C^0(T; Z), endowed with the supremum norm ‖σ‖ = sup_{θ∈T} |σ(θ)|_θ.
A validation algorithm for robust invariant tori.
In this section we review some definitions and results on fiberwise hyperbolic invariant tori (FHIT) for skew products over rotations. In particular, we state Theorem 1 in [27] on existence and local uniqueness of FHIT, which is the basis for our validation algorithms of existence and local uniqueness of FHIT.
FHIT. A (discrete) quasi-periodic system (with frequency ω) is a skew product over the rotation t_ω,

(2.1)    z̄ = F(z, θ),    θ̄ = θ + ω,

where F : R^n × T → R^n is continuous. Throughout this paper, we assume that F is C^2 with respect to z. The bundle map (2.1) induces a graph transform functional on continuous maps K : T → R^n. Notice that the graph of a continuous map K : T → R^n is a torus that is a copy of the base T, and it is invariant under the skew product (F, t_ω) iff

F(K(θ), θ) = K(θ + ω)   for all θ ∈ T.

We will slightly abuse notation and refer to K as a torus.
The linearized dynamics around a torus K is given by the vector bundle map (M_K, t_ω), where M_K(θ) = D_z F(K(θ), θ). We also refer to (M_K, t_ω) as the cocycle induced by (F, t_ω) and K. Notice that DF(K) = M_K, the transfer operator associated with the cocycle. We will suppress the dependence on K when it is clear from the context. The relation between the dynamical properties of cocycles and the spectral properties of the associated transfer operators has been intensively studied in the literature; see, e.g., [51,58,34,50,47,16]. These are important to describe the dynamical and functional properties of invariant objects and dynamical systems. We now define (both dynamically and functionally) the main geometric object of this paper. Definition 2.1. A FHIT of the system (2.1) is an invariant torus K : T → R^n such that its corresponding cocycle (M_K, t_ω) is uniformly hyperbolic, that is, there exists a continuous decomposition of the vector bundle R^n × T into a Whitney sum S ⊕ U of two invariant bundles S and U, such that M restricted to U is invertible, and there exist constants C > 0 and 0 < λ < 1 measuring the uniform contraction of the dynamics on S and the uniform expansion of the dynamics on U. Definition 2.2. A FHIT of the system (2.1) is an invariant torus K : T → R^n such that its corresponding transfer operator M_K is hyperbolic, i.e., its spectrum has empty intersection with the unit circle {λ ∈ C : |λ| = 1}.
It turns out that both definitions are equivalent. The stable bundle S and the unstable bundle U of a cocycle (M_K, t_ω) are constructed from the spectral projections associated to the transfer operator M_K (and the spectral gap at the unit circle). The width of the spectral gap is given by the hyperbolicity constant λ that measures the hyperbolicity of the cocycle. The uniformity of the hyperbolicity property around the torus is given by the uniformity constant C, which is related to the norms of the spectral projections (and hence to the shapes of the bundles). If the spectrum of M_K is inside the unit circle, then we say that the torus K is an attractor and U is the zero bundle. If the spectrum of M_K is outside the unit circle, then the torus K is a repeller and S is the zero bundle. Otherwise we say that the torus K is a saddle.
Remark 2.3. We emphasize that there is a bootstrap in the regularity of a FHIT: even though in Definition 2.1 we assume that it is continuous, it is in fact as smooth as the system [28].

Remark 2.4. Most of the above also works, with slight modifications, for invariant graphs of skew products over homeomorphisms. But fiberwise hyperbolic invariant graphs are in general less regular than the system.

Remark 2.5. Since ω is irrational, the spectrum of the transfer operator M is a set of annuli centered at 0, and the inner annulus is a disk if the corresponding cocycle is noninvertible. This is also true in the generality of invariant graphs of skew products over homeomorphisms with a dense set of aperiodic points [51].

Remark 2.6. Transfer operators are bounded but noncompact operators. This fact makes the computation of their spectrum difficult [36].
In our methods of computation and validation of invariant tori, the crux is that a sufficient condition to apply Newton's method to solve the invariance equation from an approximate solution K_0 is the invertibility of the bounded linear operator DF(K_0) − I = M_{K_0} − I, and this is implied by (in fact equivalent to, since the rotation ω is irrational) the hyperbolicity condition. Moreover, as a consequence of the implicit function theorem, FHIT are robust (persist) under perturbations of the system [28].
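To make the link explicit, here is a standard estimate (our own summary, consistent with Remark 2.9 below). In an adapted norm in which the stable block satisfies ‖M_s‖ ≤ λ < 1 and the unstable block satisfies ‖M_u^{−1}‖ ≤ λ < 1, both Neumann series converge:

$$ (M_s - I)^{-1} = -\sum_{n \ge 0} M_s^{\,n}, \qquad (M_u - I)^{-1} = M_u^{-1} \sum_{n \ge 0} M_u^{-n}, $$

so M_{K_0} − I is invertible, with ‖(M_{K_0} − I)^{−1}‖ bounded, up to the condition number C of the adapted frame, by C/(1 − λ), which is the hyperbolicity constant appearing in Remark 2.9 below.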
Reducibility.
The standing hypothesis in our methodology is that the torus K is fiberwise hyperbolic and the corresponding stable and unstable bundles are trivial (i.e., given by global frames). Hence, we can define a matrix-valued map P : T → GL(R^n), whose first n_s columns P_s parametrize the stable bundle S (of rank n_s) and whose last n_u columns P_u parametrize the unstable bundle U (of rank n_u). Since the bundles S and U are invariant,

(2.4)    M_K(θ) P(θ) = P(θ + ω) Λ(θ),

where Λ = diag(Λ_s, Λ_u), with Λ_s : T → L(R^{n_s}) and Λ_u : T → GL(R^{n_u}). In other words, a main assumption is that the cocycle (M_K, t_ω) is reducible to a block diagonal cocycle (Λ, t_ω). Λ_s and Λ_u give the dynamics on the stable and unstable bundles, respectively. Remark 2.7. Nontriviality of rank 1 bundles can easily be overcome with the double covering trick. See [29] for examples of computation of invariant tori with nonorientable bundles.
Using suitable adapted norms, one can bound the norms of the block diagonal cocycle, say ‖Λ_s‖ ≤ λ and ‖Λ_u^{−1}‖ ≤ λ with λ < 1. That is, the condition number C = ‖P‖ ‖P^{−1}‖ of the adapted frame is the uniformity constant of the hyperbolic splitting.
An important situation in which the linearized dynamics is very simple and estimates of the hyperbolicity constants can, in principle, be easily obtained is when the cocycle (M_K, t_ω) is reducible to a constant cocycle (Λ_0, t_ω) (possibly using the double covering trick). The linearized dynamics is then equivalent to iterating the constant matrix Λ_0, but the problem is obtaining the suitable adapted frame. We emphasize that invertible rank 1 cocycles are reducible to constants, under Diophantine conditions on the frequency ω, while noninvertible rank 1 cocycles are not; see section 4. There are many other situations in which cocycles fail to be reducible (see, e.g., [33]), implying a complex behavior of the linearized dynamics [29,26].
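For a scalar illustration of why invertibility matters (an informal summary of a classical computation): given a continuous positive cocycle M : T → R, one looks for a frame P(θ) = e^{p(θ)} reducing it to the constant Λ_0 = exp(∫_T log M(θ) dθ). The reducibility relation P(θ + ω)^{−1} M(θ) P(θ) = Λ_0 becomes the cohomological equation

$$ p(\theta + \omega) - p(\theta) = \log M(\theta) - \log \Lambda_0, $$

solved in Fourier space by \hat p_k = \widehat{(\log M)}_k / (e^{2\pi i k \omega} - 1) for k ≠ 0, with the small divisors controlled by a Diophantine condition on ω. A sign-changing invertible M is treated via the double covering, but if M vanishes somewhere (the noninvertible case), log M is unbounded and the scheme breaks down, in line with the nonreducibility discussed above.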
A validation theorem.
In the previous subsection we grasped the relation between hyperbolicity and the applicability of Newton's method. From Theorem 1, p. 12, in [27], Newton's method for finding FHIT converges quadratically, provided that the initial approximations of the torus and its invariant bundles are fairly accurate. The following is a reformulation of that theorem (see [27] for the proof), which is the theoretical core of the validations done in this paper.
Theorem 2.8. Let R^n × T be the Finslered trivial bundle over T. Assume we are given (1.1) a continuous map K : T → R^n, parameterizing a torus; (1.2) two continuous matrix-valued maps P_1, P_2 : T → L(R^n), giving an adapted frame and an approximation of its inverse; (1.3) a continuous block-diagonal matrix-valued map Λ = diag(Λ_s, Λ_u) approximating the linearized dynamics; and a radius r > 0, together with a posteriori bounds ρ, σ, τ, λ, b and the derived quantities h, r_0, r_1 (defined as in [27]) on the strip D_{P_1}(K, r); and assume that (3.3) h < 1/2 and (4.3) r_0 ≤ r. Then, there exists a unique continuous map K_* : T → R^n parameterizing a FHIT in the strip D_{P_1}(K, r_1), and K_* is contained in D_{P_1}(K, r_0). Moreover, there exist a continuous matrix-valued map P_* : T → GL(R^n) and a continuous block-diagonal matrix-valued map Λ_* codifying the invariant bundles of K_* and the dynamics on them.

The main idea behind Theorem 2.8 is to consider an adapted frame, given by the matrix-valued map P_1, in which the hyperbolicity properties are checked. This frame P_1 encodes the approximations of the stable and unstable bundles. In more detail, the conditions to be checked in Theorem 2.8 state the following: (1) From the initial data K, P_1 (and P_2) one constructs an adapted system of coordinates (v, θ) on a neighborhood of the torus, a strip D_{P_1}(K, r). (2) One bounds the quality of this adapted frame; this depends on the error bound of the approximate inverse P_2, the quality of the conjugacy to the block diagonal cocycle (Λ, t_ω), and the hyperbolicity property viewed in the adapted frame. (3) The approximate invariance of the torus is estimated by ρ in (3.1). If ρ is small enough, then there will be a true invariant torus nearby. The first step is checking (3.3). (4) Points (3.3) and (4.3) check the Newton-Kantorovich hypothesis for the validation of the existence and local uniqueness (in D_{P_1}(K, r_1)) of a FHIT K_* nearby K. An upper bound of the distance between K_* and K, in the P_1 adapted frame, is given by r_0. (5) Checking point (5.3) leads to a validation of the invariant bundles codified in P_*, providing rigorous upper bounds of the distance between the adapted frames P_1 and P_* (and hence, between the approximate and true invariant bundles) and estimates of the hyperbolicity properties.

Remark 2.9. One can perform the bounds of Theorem 2.8 in the original coordinates, that is, compute ρ_0 and b_0 such that (1.5) holds for each (z, θ) in the strip. These estimates lead to (crude) bounds of ρ and b. In particular, inequality (3.2) in Theorem 2.8 is then rephrased in terms of ρ_0, b_0 and the condition number C = ‖P_2‖ ‖P_1‖ of the adapted frame. Notice that h grows with the square of the hyperbolicity constant C/(1 − λ). Hence, the weaker the hyperbolicity properties, the much harder it is to pass the Newton-Kantorovich test.

Remark 2.10. Theorem 2.8 is stated using C^0 norms. One can state a similar theorem using norms with higher regularities (e.g., C^r, Sobolev, analytic). In this paper we have only considered (and implemented) validations using C^0 norms. Hence, although the FHIT K_* is as smooth as the skew product and the bundles are as smooth as its differential, we only measure the distance of the invariant objects to the approximately invariant objects using C^0 norms. We plan to come back to this problem in the future.
Remark 2.11. Theorem 2.8 works, with minor changes, if T is replaced by a general compact metric space and t_ω : T → T is replaced by any homeomorphism.
Implementation of the validation algorithm.
In this section we explain implementation issues of computer validations of FHIT in skew products over rotations, based on Theorem 2.8. Since the base manifold of the skew product is a torus, and the base dynamics is a rotation, we use Fourier polynomials to approximate the periodic functions that model the components of the approximate invariant tori and bundles given as input data for the algorithm (and this is the reason for assuming triviality of the bundles).
Theorem 2.8 assumes a Finslered norm. In our present implementation, we have considered the sup norm on each fiber. Hence, instead of considering adapted (Finslered) norms, we consider suitable adapted frames. The core of the implementation is a set of routines to rigorously manage periodic functions and enclose them in Fourier polynomials plus error intervals. These are what we refer to as Fourier models; they are briefly introduced in subsection 3.1 and in Appendix A.
The validating computer program has to verify, from an approximately invariant torus and approximately invariant stable and unstable bundles (e.g., computed numerically or using perturbative arguments), all the hypotheses of Theorem 2.8. Notice that the checking has to be done only once. Since we will apply the computer programs in situations in which tori are about to break (see sections 4 and 5), we prioritize the accuracy of the computations over their speed.
Fourier models.
Here we detail the definition of Fourier models, assuming the reader is familiar with interval computations [52,62,64]. In what follows, when we refer to an interval we mean a compact interval J = [a, b], whose modulus is |J| = max(|a|, |b|). Following the standard conventions in the literature, the result of an operation with intervals is an interval that encloses the exact result; this is what one can guarantee when implementing interval operations on a computer.
Given an interval J, the image of J under a Fourier model Ĝ is defined as Ĝ(J) = G(J) + R(Ĝ), where G(J) is the interval image of J under the trigonometric polynomial with interval coefficients G, and R(Ĝ) is the error interval enclosing the truncated tail. That is, Ĝ encloses every continuous periodic function given by a trigonometric polynomial with coefficients in the interval coefficients of G plus a remainder bounded by R(Ĝ). The computer implementation of Ĝ(J) obtains an enclosure E of the result, i.e., Ĝ(J) ⊂ E. In order to avoid large overestimations, especially in cases in which the functions f ∈ Ĝ behave wildly, we subdivide the interval J and compute the enclosures of the subdivisions.
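As an illustration of the data structure, here is a simplified sketch: plain doubles stand in for a validated interval type, so rounding is not controlled (the actual implementation uses FILIB++ intervals with outward rounding).

```cpp
#include <cmath>
#include <vector>
#include <algorithm>

// Sketch of a Fourier model: trigonometric polynomial with interval
// coefficients plus an interval error term enclosing the discarded tail.
struct Interval {
    double lo, hi;
    Interval operator+(const Interval& o) const { return {lo + o.lo, hi + o.hi}; }
    Interval widen(double d) const { return {lo - d, hi + d}; }
    Interval hull(const Interval& o) const {
        return {std::min(lo, o.lo), std::max(hi, o.hi)};
    }
};

struct FourierModel {
    std::vector<Interval> c, s;  // coefficients of cos/sin(2*pi*k*t), k = 0..m
    Interval err;                // enclosure of the tail

    // Enclosure of f(J) for every f in the model and every t in J = [a,b]:
    // mean-value form around midpoints, refined by subdividing J.
    Interval eval(double a, double b, int subdiv = 16) const {
        const double twopi = 6.283185307179586;
        double L = 0.0;          // Lipschitz bound of the polynomial part
        for (std::size_t k = 0; k < c.size(); ++k)
            L += twopi * k * (std::max(std::fabs(c[k].lo), std::fabs(c[k].hi)) +
                              std::max(std::fabs(s[k].lo), std::fabs(s[k].hi)));
        Interval out{1e300, -1e300};
        for (int j = 0; j < subdiv; ++j) {
            double lo = a + (b - a) * j / subdiv;
            double hi = a + (b - a) * (j + 1) / subdiv;
            double mid = 0.5 * (lo + hi);
            Interval box = err;  // polynomial value at mid, interval coefficients
            for (std::size_t k = 0; k < c.size(); ++k) {
                double ck = std::cos(twopi * k * mid), sk = std::sin(twopi * k * mid);
                box = box + Interval{std::min(c[k].lo * ck, c[k].hi * ck),
                                     std::max(c[k].lo * ck, c[k].hi * ck)}
                          + Interval{std::min(s[k].lo * sk, s[k].hi * sk),
                                     std::max(s[k].lo * sk, s[k].hi * sk)};
            }
            out = out.hull(box.widen(L * (hi - lo) * 0.5));  // mean-value padding
        }
        return out;
    }
};
```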
Validation of FHIT.
Here we show how Theorem 2.8 can be implemented, via Fourier models, in order to validate some initial data as a good approximation of a FHIT and its invariant subbundles for a given continuous skew product (F, t_ω) on R^n × T, with F of class C^2 with respect to z. The Finslered norm we consider in R^n × T is the sup norm on each fiber.
We assume that we can effectively compute enclosures of the components of the compositions of F(z, θ), D_z F(z, θ) and D^2_z F(z, θ) with Fourier models. That is, we can substitute z by a (vector) Fourier model K if for each θ ∈ [0, 1], K(θ) ⊂ D_θ (a fact that can be rigorously checked using interval arithmetic).
The input data of the validation algorithm are as follows: (0) Compute, e.g., numerically or using perturbative arguments, the trigonometric polynomial approximations of an invariant torus (K), the adapted frame (P_1) and its inverse (P_2), and the dynamics on the invariant bundles (Λ = diag(Λ_s, Λ_u)). References [27,29] include algorithms and numerical computations of invariant tori and their bundles.
The order m of the approximations depends on the decay of the coefficients of the Fourier expansions (and hence on the quality of the initial data). We take m in such a way that the size of the discarded term is below a given threshold (say, 10^−6). The validation algorithm mimics the statement of Theorem 2.8. Here are the steps: (1) From the input data, derive the Fourier models K̂, P̂_1, P̂_2, Λ̂ = (Λ̂_s, Λ̂_u).
(2)-(4) Compute rigorous upper bounds ρ, σ, τ, λ and b, derive h and r_0, and check whether r_0 ≤ r. If not, the torus is not validated and the algorithm stops. After validating these steps, the torus is validated, meaning that there is a unique invariant torus K_* in the strip D_{P_1}(K, r_1), where r_1 is a lower bound of (1 − λ − σ − τ)(1 + √(1 − 2h)) b^{−1} and of r. Moreover, the torus K_* is contained in the strip D_{P_1}(K, r_0). Remark 3.3. Bound b (and subsequently h) depends on the radius r of the strip. This choice has consequences for the estimates of the error radius r_0, which should not be greater than r, and the uniqueness radius r_1. In our actual implementation, the choice is to take 2(1 − λ − σ − τ)^{−1} ρ ≤ r (hence r is not given, but computed!), which ensures (if h < 1/2) that r_0 ≤ r. By tuning r one can improve r_0 and r_1.
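Schematically, steps (2)-(4) amount to the following check (our own sketch: the formula for h below is only indicative, the exact expressions for b and h are those of Theorem 2.8, and in the real code every input is a rigorous interval bound rather than a double):

```cpp
#include <cmath>

// Newton-Kantorovich test, schematic version.  Inputs: rho (invariance
// error), sigma (error of the approximate inverse P2), tau (error of the
// block-diagonalization), lambda (contraction/expansion bound), b (second
// derivative bound on the strip).  Outputs: existence radius r0 and
// uniqueness radius r1, matching the expressions quoted in the text.
bool kantorovich_test(double rho, double sigma, double tau, double lambda,
                      double b, double& r0, double& r1) {
    double gap = 1.0 - lambda - sigma - tau;    // effective hyperbolicity gap
    if (gap <= 0.0) return false;               // hyperbolicity not verified
    double h = b * rho / (gap * gap);           // indicative form; see Thm 2.8
    if (h >= 0.5) return false;                 // condition (3.3) fails
    r0 = (1.0 - std::sqrt(1.0 - 2.0 * h)) * gap / b;
    r1 = (1.0 + std::sqrt(1.0 - 2.0 * h)) * gap / b;
    return true;                                // caller still checks r0 <= r
}
```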
The final step of the validation algorithm is checking the invariance of the adapted frame, as follows: (5) Compute the upper bound of μ using σ, τ, λ, b, r_0. Check if μ < 1/4. If not, the adapted frame is not validated, and the algorithm stops. Otherwise, there is an invariant frame P_*, codifying the stable and unstable bundles of K_*. The upper bounds r_P and r_Λ are rigorous estimates of the distances between the approximate and true adapted frame and hyperbolic dynamics, respectively.
Validation of a family of FHIT.
Here we show the procedure to validate the existence of a family of FHIT of a one-parameter family of skew products (F_s, t_ω), s ∈ [a, b]. Consider the interval [a, b] = I ∪ J, where I and J are closed intervals, and let K_i, P_{1,i}, P_{2,i}, and Λ_i for i = I, J be the initial data of the validation algorithm for the (interval) skew products (F_i, t_ω). In order to check that the corresponding validated tori belong to the same family we proceed as follows: (0) Apply the validation algorithm explained in subsection 3.2 to the (interval) skew products (F_i, t_ω), i = I, J. Besides the Fourier models corresponding to the initial data, K̂_i, P̂_{1,i}, P̂_{2,i}, and Λ̂_i, the validation algorithm produces bounds ρ_i, σ_i, τ_i, r_{0,i}, r_{1,i}, h_i.
(1) Construct from P̂_{2,J} a Fourier model Ê_{I,J} measuring, in the adapted frame, the distance between the tori validated on I and on J, and check that it is smaller than the corresponding uniqueness radius. If this holds, the two initial data approximate the same family of FHIT; if not, this family has not been validated.
Example 1:
Computer validations for noninvertible skew products. In this section we report computer validations of existence of invariant tori for a noninvertible map, the quasi-periodically driven logistic map (1.1), proving Proposition 1.1. Special emphasis is placed on the validation of nonreducible tori for values close to their breakdown. Note that in this context, the concept of nonreducible torus is equivalent to the noninvertibility of the transfer matrix along the torus.
Recall from (1.1) that the quasi-periodically driven logistic map is the skew product z̄ = a z (1 − z)(1 + D cos(2πθ)), θ̄ = θ + ω, where we fix ω = (√5 − 1)/2 and a and D are parameters. In this section, we fix D = 0.1 and let a > 0 vary. The critical curve, where the derivative of F with respect to z vanishes, is C = {z = 1/2}. We start with a numerical exploration of the model; then we face the computational problems produced by noninvertibility and by the deterioration of uniform hyperbolicity; and, after careful numerical computations of the input data for the validation algorithm, we apply the algorithm to validate the existence of period 2 attracting tori for parameter values very close to the threshold.
Numerical exploration. Figure 3(a) shows the bifurcation diagram of the invariant objects, while Figure 3(b) shows the corresponding Lyapunov multipliers. A particularly simple case is the zero-curve x_a(θ) = 0, for which the Lyapunov multiplier can be analytically computed (see, e.g., [37]): Λ_0(a) = a (1 + √(1 − D²))/2. Hence, for D = 0.1, the zero-curve K_0 is attracting if a < a_t and repelling if a > a_t, and at a_t = 2(1 + √0.99)^{−1} there is a transcritical bifurcation. Now, let us explain the other invariant curves and their bifurcations, labeled in Figure 3(b). This is done through a numerical exploration. (A) a ∈ (0, a_t). There is a reducible repelling curve K_1(a), and its cocycle is reducible to a constant Λ_1(a) ∈ ]1, +∞[. In fact, as a → 0, K_1(a) ∼ (a − 1)/a and hence goes to −∞, and Λ_1(a) → 2. As a → a_t this curve tends (uniformly) to the zero curve K_0, and Λ_1(a) → 1. At a = a_t = 2(1 + √0.99)^{−1} there is a transcritical bifurcation (with K_0). (B) a ∈ (a_t, a_{r,1}). K_1(a) is a reducible attracting curve, and its cocycle is reducible to a positive constant Λ_1(a) ∈ ]0, 1[. At a = a_{r,1} ≈ 1.854419, the curve K_1(a) is tangent to the critical curve C.
(C) a ∈ (a_{r,1}, a_{r,2}). K_1(a) is a nonreducible attracting curve, since its transfer matrix vanishes at the points in which K_1(a) intersects the critical curve C. At a = a_{r,2} ≈ 2.406952, the curve K_1(a) is again tangent to the critical curve C.
(E) a ∈ (a_p, a_c). K_1(a) is a reducible repelling curve, and its cocycle is reducible to a negative constant Λ_1(a) ∈ ]−∞, −1[. There is also a period 2 attracting curve K_2(a) (see Figures 4(a) and 4(b) for the corresponding Lyapunov multipliers). At a = a_c ≈ 3.271383, the period 2 attracting curve collides in a nonsmooth way with the repelling curve, bifurcating into an SNA.
(F) a ∈ [a_c, ∞). The reducible repelling curve K_1 survives after the collision and exists for all these values. There is also a strange attractor, a geometrically complex attracting object that comes from the destruction of K_2.
We have focused our study on region (E), very far from the perturbative regime; the transition there is known as the Heagy-Hammel fractalization route to SNA. Figures 5(a) and 5(b) show these invariant objects before and after the collision at a = a_c. Remarkably, the main ingredient in this route is the loss of reducibility that the period 2 attracting curve suffers at a = a_r ≈ 3.17496, related to the nonsmooth collision with the repelling curve at a = a_c ≈ 3.271383.
We have validated both the repelling curve and the period 2 attracting curve in region (E). Of course, our techniques can also be applied to analyze cases (A), (B), (C), and (D). In the smooth bifurcations at a = a_t (between (A) and (B)) and a = a_p (between (D) and (E)) the invariant tori are reducible. The attracting torus in region (C) is not reducible, but it is far from destruction. Hence, the study in region (E) is representative: close to a = a_p it is an example of a system close to a smooth (reducible) bifurcation, and close to a = a_c it is an example of a system close to a nonsmooth (nonreducible) bifurcation; the same methodology applies to the other cases.
Remark 4.1. Nonreducibility region (C) separates region (B), in which the torus is attracting with a well-defined positive eigenvalue, and region (D), in which the torus is attracting with a well-defined negative eigenvalue. In region (C) we cannot define an eigenvalue, since the cocycle is not reducible to constant. But we can define a Lyapunov multiplier (which in the reducible case is the absolute value of the eigenvalue), which in region (C) does not cross 0. Hence, the sign of the eigenvalue jumps (from + to −) across region (C). Interestingly, this phenomenon of a jump of the sign has been observed in other quasi-periodic systems [29,26].
Numerical computation of the initial data.
In this section, we describe how to compute the initial data K, P_1, P_2, Λ for attracting curves of the noninvertible one-dimensional skew product (F, t_ω). Similar methods can be applied for repelling curves by using a right inverse of the map (i.e., one of the branches of the inverse of (F, t_ω)). We will also present methods to deal with noninvertible transfer matrices.
The approximately invariant torus K can be computed using the simple iteration algorithm, since the invariant torus is attracting. The number of iterations needed to obtain a good approximation depends heavily on the modulus of the Lyapunov multiplier. In our computations, the number of iterations does not exceed 10^10.
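As an illustration of the simple iteration, a nonrigorous sketch on a uniform grid is given below; the paper's computations use Fourier series, so the grid size, the linear interpolation, and the iteration count here are illustrative choices only.

```python
import numpy as np

# Simple (graph-transform) iteration for an attracting torus:
# K_{n+1}(theta) = F(K_n(theta - omega), theta - omega),
# sampled on a uniform grid with periodic linear interpolation.
def attracting_torus(F, omega, N=512, iters=100_000):
    theta = np.arange(N) / N
    K = np.zeros(N)                            # initial guess
    for _ in range(iters):
        shifted = (theta - omega) % 1.0
        K_shift = np.interp(shifted, theta, K, period=1.0)
        K = F(K_shift, shifted)                # push the graph forward
    return theta, K
```

With the driven logistic map assumed earlier, F can be taken as `lambda z, th: a * z * (1 - z) * (1 + D * np.cos(2 * np.pi * th))`.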
More challenging is the computation of the initial data P_1, P_2, Λ, since even though the transfer matrix M is contracting "on average," it can be locally expanding. The condition of invertibility of the transfer matrix plays a key role in this computation. We have considered two methods to overcome these computational problems.
Reducibility and almost reducibility to constant coefficients. The goal of the reducibility method is to reduce the transfer matrix to a constant Λ, which satisfies (4.4) for a suitable transformation P_1(θ) ≠ 0. If M(θ) is invertible for all θ ∈ T, this equation is solved by taking logarithms and solving the small divisor equations obtained by matching the Fourier coefficients.
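For a scalar, everywhere-positive transfer matrix, a minimal nonrigorous sketch of this Fourier-matching procedure reads as follows (function and variable names are ours):

```python
import numpy as np

# Solve P1(theta+omega)^{-1} M(theta) P1(theta) = Lambda for scalar,
# positive M by taking logarithms and matching Fourier coefficients:
# Lambda = exp(mhat_0) and phat_k = mhat_k / (e^{2*pi*i*k*omega} - 1).
def reduce_to_constant(M_grid, omega):
    N = len(M_grid)
    m_hat = np.fft.fft(np.log(M_grid)) / N
    Lambda = np.exp(m_hat[0].real)             # the constant cocycle
    k = np.fft.fftfreq(N, d=1.0 / N)           # integer frequencies
    denom = np.exp(2j * np.pi * k * omega) - 1.0
    p_hat = np.zeros_like(m_hat)
    p_hat[1:] = m_hat[1:] / denom[1:]          # small-divisor equations, k != 0
    P1 = np.exp(np.real(np.fft.ifft(p_hat * N)))
    return Lambda, P1
```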
If M(θ) has zeroes, (4.4) has no continuous solutions. Hence, we cannot reduce M(θ) to constant coefficients. To overcome this difficulty, we consider the modified equation for a suitable function η : T → [0, 1] and a sufficiently small ε > 0.
With a suitable choice of the function η, we obtain that P_1, P_2 = P_1^{-1}, and Λ satisfy the modified equation. Remark 4.2. In numerical computations these equations are solved by matching Fourier coefficients up to a finite order, even though the analytical solution of the small divisor equations involves the smoothness of the transfer matrix and Diophantine properties of the rotation ω. These are intermediate computations that produce initial data to be validated by our computer programs.
Numerical comparison of both methods. The Lyapunov metric method and the almost reducibility method have been tested, among others, for the period 2 attracting curve of the quasi-periodically driven logistic map with D = 0.1 and a = 3.250. In this case, the transfer matrix is noninvertible, hence nonreducible to constant. See Figure 6 for the differences between the two methods. In Figure 6(a) we can see that the dynamics of the linear cocycle is locally expanding in some regions (but it is globally contracting), while in Figures 6(e) and 6(c) the linear cocycles are locally and globally contracting. Notice that the Fourier coefficients of the reduced matrix Λ(θ) (Figure 6(d)) decay slowly when using the Lyapunov metric method, while they decay exponentially fast when using the almost reducibility method (Figure 6(f)).
Computer validations.
Motivated by the previous numerical study (see subsection 4.1), we have validated the invariant curves appearing in the bifurcation diagram in Figure 3(a) up to values of a close to the smooth bifurcations (A)-(B) (transcritical) and (D)-(E) (period doubling) and the nonsmooth bifurcation (E)-(F). We report here in detail the existence of the repellor in regions (E) and (F) and of the period 2 attracting curve near the nonsmooth bifurcation (E)-(F).
Invariant curves in regions (A), (B), (C), and (D) have been validated using no more than 20 Fourier modes. The validations near the smooth bifurcations have been performed, obtaining results similar to those reported below for the repellor.
The validations that we present in detail constitute the proofs given in the next subsections.
Proof of Proposition 1.1: Validation of the repellor.
Here we explain the validation of the repelling curve. First, we validate analytically the existence of this curve for a ∈ (4.6, ∞) and then, via computer-assisted proofs, we validate it for a ∈ (3.157065, 5) and check that the two families match.
Analytic validation. For the analytic validation, it is convenient to consider the following right inverse of (F, t_ω). We apply the validation algorithm with the following initial datum. In the following, we consider the bound from which we obtain h = (1 − λ)^{-2} b ρ. Fixing D, for a > 0 sufficiently big we obtain h < 1/2, and then there is a unique invariant torus close to the initial data K. In particular, for D = 0.1, we obtain the crude lower bound a > 4.6 (for which h < 0.45).
Computer validation. After showing the existence of the repelling curve for values a > 4.6, we proved (computer-assisted) the existence of the family of the repelling curve for 3.157065 ≤ a ≤ 5. At a = 5 the Lyapunov multiplier is 2.962531, while at the end of the validation, a = 3.157065, the Lyapunov multiplier is 1.016861. This validation has been done, using expression (4.1), by computing the initial data with the algorithms presented in subsection 4.2 with 30 Fourier modes. (This number of modes is chosen in order to ensure that the discarded modes are of magnitude less than 10^{-8}.) We emphasize that the width of the intervals of validation shrinks as they approach the period doubling bifurcation value a ≈ 3.143. The algorithm stops when the width of the intervals is less than 10^{-6}, reaching a = 3.157065. See Figure 7(a).
Remark 4.3. In this computation we apply the validation algorithm 2800 times and the time of computation is around 307 minutes. This means that each validation step, which consists in computing the initial data, validating the existence and uniqueness of a FHIT near it, and then checking the matching, takes around 6.5 seconds.
In order to show how the upper bounds of the validation algorithm behave near the bifurcation value, we apply the validation algorithm for values a = 3.16 + 0.01·j with j = 0, ..., 184, using 30 Fourier modes. The results are displayed in Figures 7(b), 7(c), and 7(d).
Remark 4.4. While the numerically computed initial data are produced with a nonrigorous error estimate of order 10^{-14}, and the validations are done using the FILIB++ library, which operates with intervals in double precision, the rigorous error bounds achieve order 10^{-10}.
Proof of Proposition 1.1: Validation of the nonreducible period 2 attracting curve.
The goal in this subsection is to validate nonreducible period 2 attracting curves near the predicted nonsmooth bifurcation value a* ≈ 3.271. To do so, we consider the second iterate of the driven logistic map (4.1). First, we perform a numerical study of the regularity of the initial data: the torus K, the transformations P_1 and P_2, and the normalized cocycle Λ. Since the associated transfer matrix M is noninvertible, we use the almost-reducibility method to compute P_1, P_2, and Λ. Figure 8 shows, with respect to parameter a, a numerical estimate of the maximum slope of the computed initial data. Note that P_1 is the initial datum with the biggest slope. For example, at a = 3.265 the slope of P_1 is 4.3·10^4, while the slopes of the torus and the normalized cocycle are 2.4·10^1 and 3.07·10^3, respectively. Notably, at a = 3.269 the slope of P_1 is 4.25·10^6. Hence, P_1 is used to determine the number of Fourier modes in the validation process, because it is the initial datum with the biggest Fourier coefficients. We choose the number of modes so that the discarded ones are of magnitude less than 10^{-8}. Figure 9 shows the initial data K (and M), P_1, and Λ for a = 3.265 and a = 3.269. Notice that a small change in the value of a leads to a dramatic change in the initial data.
The validation results for different values of the parameter a are shown in Table 1. The initial data used as input are computed with high accuracy because, at these parameter values, the period 2 attracting curve is near the repellor curve. Note also that in all these validations the computation time depends heavily on the regularity of the initial data, because less regularity implies the use of more Fourier modes to represent the initial data, which implies more computational time. Remark 4.5. Note that in Table 1, for the parameter value a = 3.269, we do not have an estimate of r_Λ. This is because the validation algorithm cannot compute it, due to the fact that the upper bound μ (see point (5.2) of Theorem 2.8) is bigger than 1/4. This means that although we could validate the existence (and local uniqueness) of the period 2 invariant torus, we could not validate the distance between the initial data Λ_0 and the transfer operator of the true invariant torus of the system.
Example 2:
Computer validations on the verge of the hyperbolicity breakdown of a saddle torus. In this section we report computer validations of existence of saddle tori on the verge of their hyperbolicity breakdown for the quasi-periodically forced standard map (1.2). This phenomenon was described in [29,26] for similar models.
An interesting problem is to approach as closely as possible the limiting value ε_c, the critical parameter value, and study the obstructions to fiberwise hyperbolicity. We perform a numerical exploration and find that the bifurcation mechanism around ε_c depends on κ. Particular examples are as follows: (i) For κ = 0.3, ε_c ≈ 1.3364054, and there is a smooth bifurcation: hyperbolicity is lost because the maximal Lyapunov multiplier goes to 1 as ε goes to ε_c, but the invariant subbundles collide smoothly. Also, the invariant tori are smooth at the bifurcation. Similar behavior happens for close values of κ.
(ii) For κ = 1.3, ε_c ≈ 1.2352755, and there is a nonsmooth bifurcation: hyperbolicity breaks down because the invariant bundles collide nonuniformly as ε goes to ε_c, and the maximal Lyapunov multiplier stays far from 1. The invariant tori lose their smoothness at the bifurcation. Similar behavior happens for close values of κ.
In this paper we report results for κ = 1.3. Figure 10 shows the observables (Lyapunov multiplier and minimum distance between the invariant bundles) near the breakdown. Figure 2 shows the invariant tori and their invariant bundles for several values of the parameter ε. As an illustration of the numerical computations on the verge of the breakdown, Table 2 shows the estimated values of the bounds of the validation algorithm for several values of the parameter ε close to ε_c.
Computer validations.
In this section we report computer validations of the invariant tori for the nonsmooth bifurcation scenario for κ = 1.3 with ε_c = 1.2352755. This is a challenging example because the invariant subbundles near the bifurcation are quite wild (SNA behavior in the projectivized cocycle; see [30]). Thousands of Fourier modes are needed to obtain good initial data for the validation algorithm.
Proof of Proposition 1.2.
For proving point (i) in Proposition 1.2, we validate tori K_ε for all the values of ε in the proposition. Note that the difference between the predicted breakdown value ε_c and the last validated value, ε = 1.2352, is less than 8·10^{-5}. These results are reported in Figure 11. We observe that as ε increases, the upper bounds of the validation algorithm h and r_0, which measure the quality of the approximate invariant torus, increase, while the lower bound r_1, which measures the size of the uniqueness strip, decreases. We also observe that the upper bounds μ and r_Λ, which measure the quality of the approximate invariant bundles, increase. The number of Fourier modes required in the validations goes from 0 to 1280.
We use the validation algorithm for families of FHIT to prove point (ii) in Proposition 1.2. The validations in the parameter interval ε ∈ [0, 1.167434] have been performed with Fourier models of order less than 2000. The main problem for validating the family further is that the width of the parameter intervals required by the algorithm becomes too small, of order 10^{-6}. This happens because, as the family approaches the bifurcation, the invariant tori and their associated initial data (P_1, P_2, and Λ) change dramatically. (See, for example, the behavior of the Lyapunov multiplier in Figure 10.) Finally, we prove point (iii) in Proposition 1.2. The validation of the initial data for the values ε = 1.235270, 1.235273, 1.235275, with Lyapunov multipliers Λ = 1.442582, 1.441463, 1.440193, illustrates the applicability of the validation algorithm in cases extremely close to the nonsmooth bifurcation. The obtained results are shown in Table 3. Note that the difference between 1.235275 and the predicted bifurcation value, 1.2352755, is less than 5.3·10^{-7}.
Final comments.
In the validation examples we show that it is important to understand the dynamics around the invariant tori in order to obtain successful validations. A key role is played by the matrix-valued maps P_1 and P_2, which give an adapted frame where the hyperbolicity conditions are checked, and by the hyperbolicity constant λ. In fact, the condition number ‖P_1‖ ‖P_2‖ / (1 − λ) gives a measure of the quality of the hyperbolicity: the bigger it is, the harder the validations. Note that this condition number is big when the invariant torus is near the boundaries of uniform hyperbolicity. Moreover, the computation of the adapted frame is difficult in cases in which the dynamics is nonreducible or when the torus is about to break.
Table 3. Validation results of invariant tori of the quasi-periodically forced standard map for three ε values near the predicted breakdown. Note that the order of the Fourier models and the time of validation increase as ε increases. Compare these rigorous results with the nonrigorous estimates given in Table 2.
In the quasi-periodically driven logistic map, we saw that in order to validate the existence of the period 2 attracting curve it is important to establish whether the linearized dynamics around the torus is reducible. The linear behavior determines the possible adapted frames P_1, P_2 and also the possible parameterization of the dynamics on the bundles, given by Λ. The nonreducible case is the most difficult one to work with. We used different methods to study this case, compared them, and then applied the most effective one in the validation, obtaining good accuracy. In the quasi-periodically forced standard map, we saw that the linearized dynamics, as long as the FHIT exists, is reducible, but when the invariant tori approach a nonsmooth breakdown, this reducibility condition blows up. We therefore studied the effectiveness of the validation algorithm near this blow-up and found that it can be applied near the breakdown, thus dealing successfully with this singular behavior.
We emphasize that our methodology can be applied to many other examples to validate invariant tori and produce rigorous bounds on the thresholds, from smooth bifurcations (saddle-node, period doubling, Hamiltonian saddle-node) to nonsmooth bifurcations or breakdowns (nonsmooth saddle-node and nonsmooth pitchfork bifurcations to SNA [24,35,54], SNAs in Harper maps), and also to compute enclosures of the spectra of Schrödinger operators [30,29]. The examples reported in this paper present certain characteristics that make them more difficult to deal with. The bifurcating objects of the SNA mechanisms mentioned above are attracting and reducible tori, while in the Heagy-Hammel mechanism reported in this paper the attracting tori are nonreducible. We emphasize that nonreducibility is an essential feature of some fractalization mechanisms producing false SNAs [10,31,37], but computer-assisted proofs in those cases are even more challenging since the tori wrinkle substantially.
The nonsmooth Hamiltonian saddle-node mechanism of a saddle torus is difficult to deal with. For instance, the smooth Hamiltonian bifurcation in which a saddle torus smoothly bifurcates to an elliptic torus has been observed in the quasi-periodically forced standard map for κ = 0.3. In this case the validation can easily be carried out up to values close to the estimated critical value ε_c using less than a hundred Fourier modes, in contrast to the thousands of Fourier modes needed to validate invariant tori close to nonsmooth breakdowns.
The computational time of the validation algorithms depends heavily on the regularity of the initial data and hence on their number of Fourier modes. The most time-consuming computations with Fourier models are the product and the evaluation. Although the times reported in this paper correspond to computations with a single processor, we have also used the OpenMP library (see [14]) to run parallel computations (by distributing the product and evaluation routines among the processors).
All the validations were performed using double precision with the aid of the interval package library FILIB++. This, of course, has its limitations, and if we want to validate invariant tori more precisely, then we will need a multiprecision library; the procedure for validating the invariant tori, however, remains the same. An example where a multiprecision library is needed is [12].
The models worked out in this paper have simple analytic expressions, but our validation algorithms can be applied to more general models, as long as we can evaluate the map (and its first and second derivatives). For instance, for a skew product flow, we can consider its Poincaré map with the variationals [5,63].
Appendix A. Operations with Fourier models. Here we detail some implementations of compositions of Fourier models with elementary functions that are combinations of finitely many arithmetic operations and compositions with simple functions (or intrinsic functions [56]) such as the power function, the exponential function, or the trigonometric functions. Since we have to truncate the results, let us start with the following definition. The arithmetic operations with Fourier models are defined as follows. Addition and subtraction of two Fourier models Ĝ and Ĥ are defined componentwise: Ĝ + Ĥ = (G(θ) + H(θ), R(Ĝ) + R(Ĥ)), Ĝ − Ĥ = (G(θ) − H(θ), R(Ĝ) − R(Ĥ)).
If J is an interval, we define the multiplication of Ĝ with J as J • Ĝ = (JG(θ), JR).
In order to bound the order of the Fourier models through the operations in a computation, we in fact compute enclosures of the products. For instance, if Ĝ and Ĥ are two Fourier models of order m, their m-product is the m-enclosure of the product, i.e., (Ĝ · Ĥ)_{≤m}.
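A toy model of these operations is sketched below; plain floats stand in for the rigorous intervals of FILIB++, so the remainder bookkeeping is only indicative of the real enclosures.

```python
import numpy as np

# Toy Fourier model: a pair (C, R) with complex coefficients C[k] for
# |k| <= m (stored at index k + m) and a remainder bound R on the
# sup norm of what has been discarded.
class FourierModel:
    def __init__(self, c, R=0.0):
        self.c = np.asarray(c, dtype=complex)   # length 2m + 1
        self.R = float(R)
        self.m = (len(self.c) - 1) // 2

    def __add__(self, other):                   # componentwise, as in the text
        return FourierModel(self.c + other.c, self.R + other.R)

    def scale(self, J):                         # multiplication by a scalar J
        return FourierModel(J * self.c, abs(J) * self.R)

    def sup(self):                              # crude sup-norm bound
        return float(np.sum(np.abs(self.c))) + self.R

    def mul(self, other):                       # m-product: enclosure of G*H
        full = np.convolve(self.c, other.c)     # exact product, order 2m
        m = self.m
        kept = full[m:3 * m + 1]                # coefficients with |k| <= m
        tail = float(np.sum(np.abs(full)) - np.sum(np.abs(kept)))
        R = tail + self.R * other.sup() + other.R * self.sup()
        return FourierModel(kept, R)
```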
Once we have defined the arithmetic operations with Fourier models, compositions with polynomials are straightforward. Enclosures of the compositions of Fourier models with simple functions, such as the exponential, power function, logarithm, etc., can be performed with the aid of the corresponding Taylor polynomial approximations (and Lagrange error bounds). We explain here the composition with the sine and cosine functions, which are the ones that appear in our examples.
Given n > 0, let S_n(x), C_n(x) be the Taylor polynomials of degree n of the sine and cosine functions, respectively.
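Assuming the toy FourierModel above, enclosing the composition with the sine function could look as follows, using the degree-n Taylor polynomial at 0 (n odd) and the Lagrange bound |sin x − S_n(x)| ≤ |x|^(n+1)/(n+1)!:

```python
import math

# Enclosure sketch of sin(G) for the toy FourierModel above, using the
# degree-n Taylor polynomial of sin at 0 (n odd) plus a Lagrange bound.
def sin_model(G, n=11):
    result = G.scale(0.0)                 # zero model of the same order
    term, sign = G, 1.0                   # current odd power G^(2k+1)
    for k in range(n // 2 + 1):
        result = result + term.scale(sign / math.factorial(2 * k + 1))
        term = term.mul(G).mul(G)         # next odd power of G
        sign = -sign
    lagrange = G.sup() ** (n + 1) / math.factorial(n + 1)
    return FourierModel(result.c, result.R + lagrange)
```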
Proposition 1.1. Consider the skew product (1.1) with D = 0.1. (i) For the range of parameters a ∈ (3.157065, ∞) there exists a continuous family of invariant repellor curves.
Figure 1. On the left, period 2 attracting tori (red) and repelling tori (blue) in the Heagy-Hammel route. On the right, the linearized dynamics of the period 2 attracting tori.
Figure 3. Bifurcation diagram of the invariant curves and their Lyapunov multipliers, with respect to parameter a. Red represents a repelling curve and blue an attracting object. See text for details.
Figure 4. Lyapunov multipliers of the invariant and periodic curves with respect to parameter a. One panel shows the Lyapunov multiplier of the period 2 attracting curve in region (E), whose peaks correspond to variations of the number of zeroes of the transfer cocycle [37]; the other shows the Lyapunov multiplier of the repelling curve in regions (E) and (F), with no trace of the nonsmooth (E)-(F) bifurcation of the period 2 attracting companion around a = 3.271.
Figure 5. Graphical representation of the Heagy-Hammel route. See text for details.
Figure 6. Graphical comparison of the computed reduced Λ(θ) of the period 2 attracting curve for a = 3.25 and D = 0.1. One panel shows the modes of the transfer matrix M(θ).
Figure 7. Data obtained from the validations of the repelling curve for D = 0.1, including the r_0 and r_1 values of the validations.
Figure 8. Maximum slopes of the period 2 attracting curve K (in red), its P_1 transformation (in green), and the normalized cocycle Λ (in blue), with respect to parameter a.
Figure 9. Graphs of the initial data close to the breakdown of the period 2 curve.
Figure 11. Data output obtained from the validations of the invariant tori and their invariant bundles for κ = 1.3, with respect to ε. Panels include the minimum distance between subbundles (which approaches 0 as ε gets close to ε_c), the quality of the invariant tori (r_0 and r_1), and the order of the Fourier model. See text for details.
In these coordinates, and in this strip, the norm of the second differential of F is bounded by b. The cocycle (DF(K(θ), θ), t_ω) is approximately reducible (via P_1 and P_2) to the block diagonal cocycle (Λ(θ), t_ω). (2) Point (2.4) is the verification of the hyperbolicity property (and of the invertibility of P_1). This function achieves its maximum value 1 when the transfer matrix vanishes and decays rapidly outside its zeroes.
Table 1. Validation results of the period 2 invariant torus of the driven logistic map for different values of a close to breakdown.
Table 2. Numerical estimates (nonrigorous) of the bounds of the validation algorithm for the initial data producing the validated results. | 14,427 | 2012-04-12T00:00:00.000 | [
"Mathematics",
"Physics"
] |
Operational Implementation of Satellite-Rain Gauge Data Merging for Hydrological Modeling
Systems exposed to hydroclimatic variability, such as the integrated electric system in Uruguay, increasingly require real-time multiscale information to optimize management. Monitoring of the precipitation field is key to inform the future hydroelectric energy availability. We present an operational implementation of an algorithm that merges satellite precipitation estimates with rain gauge data, based on a 3-step technique: (i) Regression of station data on the satellite estimate using a Generalized Linear Model; (ii) Interpolation of the regression residuals at station locations to the entire grid using Ordinary Kriging and (iii) Application of a rain/no rain mask. The operational implementation follows five steps: (i) Data download and daily accumulation; (ii) Data quality control; (iii) Merging technique; (iv) Hydrological modeling and (v) Electricity-system simulation. The hydrological modeling is carried out with the GR4J rainfall-runoff model applied to 17 sub-catchments of the G. Terra basin with routing up to the reservoir. The implementation became operational at the Electricity Market Administration (ADME) in June 2020. The performance of the merged precipitation estimate was evaluated through comparison with an independent, dense and uniformly distributed rain gauge network using several relevant statistics. Further validation is presented comparing the simulated inflow to the estimate derived from a reservoir mass budget. Results confirm that the estimation that incorporates the satellite information in addition to the surface observations has a higher performance than the one that only uses rain gauge data, both in the rainfall statistical evaluation and in the hydrological simulation.
Introduction
The renewable contribution of the electric energy matrix in Uruguay has been increasing steadily during the last decades, with hydroelectric, wind and solar components that have different inherent variability and predictability. This poses both a challenge and an opportunity to optimize planning at different embedded timescales and, ultimately, dispatch. The interconnected Electric System Simulator (SimSEE [1]) is used for these purposes [2], from the management of the spot market to long-term analysis of the evolution of the generation capacity, with intermediate seasonal and nested weekly planning. Particularly, in the case of the hydroelectric generation, we used a coupled hydrological and electric system modeling approach in order to generate and process a hydrological ensemble forecast for the largest reservoir of the system [3]. The ability to forecast the hydrological inflow contributes to the optimal use of each energy source, with the corresponding economic and environmental benefits.
In most applications that use operational hydrological models, the spatial and temporal variability of precipitation constitutes one of the dominant factors and the one with the greatest associated uncertainty. In this context, remote sensing products are ideal instruments to complement the rain gauge networks. In this work, the performance of the merged precipitation estimate was evaluated through comparison with an independent historic rain gauge dataset. Furthermore, a hydrological application was implemented using the GR4J model [24] at daily time step and compared to the estimated "theoretical" inflow to the hydroelectric reservoir. We expect that this work will contribute to the understanding of the reliability of the latest NRT satellite-based precipitation products and provide a reference for their applications in operational hydrological simulation and water resource management.
Study Area
The upper Rio Negro basin, in northeastern Uruguay, has a surface area of about 40,000 km2 (Table 1), taking the Gabriel Terra hydroelectric plant (G. Terra) as its closure point. Downstream of G. Terra on the Rio Negro, we find the Baygorria and Constitución hydroelectric plants. The binational (Argentina-Uruguay) hydroelectric plant Salto Grande, on the Uruguay River, completes the total hydroelectric capacity, which collectively represents a third of the current installed power in the country's electric system and contributes more than 50% of the mean total generated electricity [25], with large interannual variability. In the Uruguayan system, the main storable resource is the water in the hydropower reservoirs, particularly in G. Terra, since it has the highest storage capacity (Table 2). Considering the growing pressure for water demand from both agricultural and forestry expansion, together with the continuous increase of electric energy demand, this highlights the need for adequate tools for the management of water resources in the Rio Negro basin [26]. Figure 1 shows the location of the existing hydroelectric plants (black triangles) and the delimitation of the G. Terra basin. We also include the location of the rain gauges available in NRT (red squares), used for the operational implementation of the merging approach, and the historic rain gauge data (blue dots) used for validation purposes. Both datasets are presented in Section 3.
Rain Gauge Data Available in Near-Real-Time
Precipitation data available in NRT, used for the operational implementation of the merging technique, come from the public electric utility (UTE) network. After data quality control, we selected 19 automatic stations within the Rio Negro basin. The quality control included the identification of missing data and outlier values, the implementation of plausibility checks based on Scherrer et al. (2011) [27], as well as the evaluation of the accumulated and mean annual rainfall, the average number of wet days (having nonzero rainfall), and the length of the longest dry spell. The period analyzed is 31 January 2010 to 31 May 2020. Daily rainfall totals are taken at 1000 UTC. Figure 1 shows the location of the selected pluviometric stations (red squares). Note that they are not uniformly distributed, which surely influences the performance of the merging technique to be implemented.
Historic Rain Gauge Data for Validation Purposes
The historic reference data used to evaluate the proposed methodology comes from a relatively dense and uniformly distributed network of 95 stations provided by UTE, the National Institute of Meteorology (INUMET), the National Institute of Agricultural Research (INIA) and the National Water Agency (ANA) of Brazil. Figure 1 presents the spatial distribution of these stations (blue dots) in comparison with the location of the automatic stations selected for the operational implementation of the merging approach (red squares).
The validation period is 1 February 2017 to 31 May 2020, during which the satellite rainfall estimates selected (presented in Section 3.3) are also available. Daily rainfall totals are taken at 1000 UTC.
Satellite Rainfall Estimates
In view of the new generation of global precipitation satellite products, which integrate multiple platforms and previously existing algorithms, with high spatial and temporal resolution and better performance than the predecessor products [17,28], the following products were selected: GSMaP: Global Satellite Mapping of Precipitation [10,11] of the Japan Aerospace Exploration Agency (JAXA), version GSMaP-Gauge-NRT v7 (https://sharaku.eorc.jaxa.jp/GSMaP/). IMERG: Integrated Multi-satellitE Retrievals for GPM, Global Precipitation Measurement [12] of the National Aeronautics and Space Administration (NASA), version Level 3 V06, NRT Late Run (https://pmm.nasa.gov/data-access/downloads/gpm).
We also took into account in the selection the data latency and accuracy of the different available products. Table 3 presents the main characteristics of the selected datasets (spatial and temporal resolution, data latency, and period of availability). Although both products are available at hourly frequency, the present study is limited to daily precipitation totals, since this is the information needed for the hydrological modeling (presented in Section 4.2). As a first exploration to evaluate the satellite rainfall estimates at daily time step, the root mean squared error (RMSE) and the probability of detection (POD) for a precipitation threshold of 5 mm were calculated for both GSMaP and IMERG against the observed records. To this end, we only considered those grid boxes of the satellite grid that contained at least one gauge observation for the specific day (collocated gauge-satellite data pairs). Table 4 presents the results obtained for the period 1 February 2017 to 31 May 2020. Additionally, the coverage of each satellite product for the entire period is included, expressed as the percentage of pixels × days of available data. It shows that, in both cases, the satellite estimates present a satisfactory performance. Both products have very good coverage in the analyzed period (close to 100%).
However, as mentioned earlier, previous experience with this type of product in the study area [22,23] confirms the need to implement a bias removal scheme based on available surface observations prior to any application.
As an example, Figure 2 shows the comparison of the rain gauge observations and satellite-rainfall estimates for a particular day (15 December 2019).
Other Data
The following datasets are used for the hydrological modeling (presented in Section 4.2): Precipitation forecast: a 14-day ensemble precipitation forecast is obtained from the Global Ensemble Forecast System (GEFS v11.0) produced by the National Centers for Environmental Prediction (NCEP-NOAA). The ensemble is composed of the control run and 20 perturbed members and has a spatial resolution of 1° × 1° [29].
Potential evapotranspiration (PET): the mean annual cycle of PET was calculated from the records of 9 meteorological stations belonging to INUMET and INIA for the period 1991-2015, using the Penman-Monteith method.
Amount of water storage capacity (SC) in the soils present in the G. Terra basin: the SC for each soil type was obtained from the CONEAT soil map at scale 1:40,000 [30] of the Office of Natural Resources of the Ministry of Livestock, Agriculture and Fisheries of Uruguay (DGRN-MGAP). It is then weighted by area to obtain a representative value for each sub-basin. A digital elevation model (DEM) from the Shuttle Radar Topography Mission (SRTM-NASA) with a resolution of 90 m was used to perform watershed delineation and characterization.
Additionally, for the evaluation of the hydrological model, we used the daily series of estimated inflow to the G. Terra reservoir provided by UTE (grey dots in Figure 3). This series is called "theoretical" since it consists of an estimation based on a water balance in the reservoir and is not a direct observation. Specifically, the estimated "theoretical" inflow is obtained (indirectly) from the water surface elevation at the dam and the turbinated and discharged flows. Therefore, this estimation is sensitive to the representation of the reservoir and the effect of the wind on its surface. Indeed, Figure 3 shows negative inflow values, which may be due to the compensation of excessively high values inherent to the methodology (possibly associated with the action of the wind on the reservoir). Figure 3 also includes the time series of the 7-day filtered estimated "theoretical" inflow (blue line), considering that the model is used as a tool to support the decision-making of the weekly dispatch.
Merging Approach
The satellite-rain gauge data merging technique considered is based on the universal model of spatial variation [31,32]. As one of the hybrid geostatistical models, Regression Kriging (RK) is a spatial interpolation technique that combines a deterministic model (regression) with a statistical model (Ordinary Kriging of the regression residuals). It uses the deterministic model to estimate the variable of interest (precipitation), calibrating a model for the satellite estimates against actual ground measurements, and then refines the estimate by analyzing the residuals for spatial correlation; finally, it combines the statistical fitting and the deterministic modeling [33].
The proposed 3-step model is summarized as follows (Figure 4). (i) Regression of the station data on the satellite data using a Generalized Linear Model (GLM). A GLM is implemented in order to fit the satellite estimates to the rainfall observations at station locations. For the GLM, we use a spatially correlated residual structure that is fitted to the available data. For each day, we calculate both exponential and spherical spatial correlations and choose the one with the best (lowest) Akaike information criterion (AIC). Several regressor alternatives were tested, including both satellite products together and each one separately. Based on the performance statistics obtained (not shown), we decided to use, for each day, the individual satellite product, either IMERG or GSMaP, with the highest Pearson correlation between the rain gauge observations and the collocated satellite values.
(ii) Interpolation of the regression residuals at station locations to the entire grid using Ordinary Kriging. Once the regressed satellite estimation is obtained, we calculate the error (residual) between it and the observations at the station locations. Then, the regression residuals are interpolated to the entire grid through Ordinary Kriging [34], which exploits the spatial correlation in the residuals; the interpolated residual field is added to the regressed satellite estimation in order to obtain the "unmasked" merged product (a compact sketch of the three steps follows this list).
(iii) Application of a rain/no rain mask (RNR mask). We apply an RNR mask to the merged product to prevent overestimation of the occurrence of rainfall in the interpolated field. The mask is obtained using the same merged precipitation estimate (RK) technique but switching the target observations to binary rain/no rain observations. Satellite estimates are used as regressors to forecast this binary field with the same RK technique described in steps (i) and (ii). We use a threshold of 0.3 on the output of RK, a continuous field, to delimit the rainy region for the mask. Finally, the unmasked product is multiplied by the RNR mask to obtain the final masked merged product. Figure 5 shows an example of the application of the RNR mask for a given day (6 July 2017). The middle column corresponds to the unmasked OK and RK products, while the rightmost column shows the masked versions. As can be seen, there is a large purple region of slightly positive values (zeros are transparent) in the unmasked product, while in the masked products this region is forced to zero. The merging algorithm in this study was written in R and is available on GitHub [35,36].
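A compact Python stand-in for the three steps (the operational implementation is the R code on GitHub [35,36]; here the GLM with spatially correlated residuals is simplified to ordinary least squares, the daily variogram fit is replaced by a fixed exponential covariance with an assumed range L, and all function and variable names are ours):

```python
import numpy as np

def merge(sat_at_gauges, obs, xy_gauges, sat_grid, xy_grid, L=50.0):
    # step (i), simplified: regress station data on the collocated satellite
    A = np.c_[np.ones(len(sat_at_gauges)), sat_at_gauges]
    beta, *_ = np.linalg.lstsq(A, obs, rcond=None)
    resid = obs - A @ beta
    # step (ii): Ordinary Kriging of the residuals onto the whole grid
    def cov(P, Q):
        d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=-1)
        return np.exp(-d / L)                   # assumed exponential covariance
    n = len(xy_gauges)
    K = np.ones((n + 1, n + 1))
    K[:n, :n] = cov(xy_gauges, xy_gauges)
    K[n, n] = 0.0                               # Lagrange-multiplier row/column
    rhs = np.ones((n + 1, len(xy_grid)))
    rhs[:n] = cov(xy_gauges, xy_grid)
    w = np.linalg.solve(K, rhs)[:n]             # kriging weights per grid node
    return beta[0] + beta[1] * sat_grid + w.T @ resid   # unmasked product

def apply_rnr_mask(unmasked, sat_at_gauges, obs, xy_gauges, sat_grid,
                   xy_grid, thr=0.3):
    # step (iii): same machinery on the binary rain/no-rain field
    wet = (obs > 0).astype(float)
    rnr = merge(sat_at_gauges, wet, xy_gauges, sat_grid, xy_grid)
    return np.maximum(unmasked, 0.0) * (rnr >= thr)     # masked merged product
```

In the operational chain, `merge` would run once per day and `apply_rnr_mask` would be applied to its output.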
Rainfall-Runoff Model
The G. Terra basin (39,500 km2) was discretized into 17 sub-basins with areas smaller than 7000 km2. Figure 6 presents the delimited catchments and their characteristics, including basin area (km2), slope (%), concentration time (Tc) (h), and SC (mm). To simulate the hydrological inflows to the G. Terra reservoir we use a daily hydrological model (GR4J) coupled with a hydrological routing model (Muskingum). The GR4J model is a daily lumped four-parameter rainfall-runoff model developed by Perrin et al. (2003) [24]. The Muskingum model [37] is a two-parameter hydrologic flood routing method based on the storage continuity equation.
In a previous study, Narbondo et al. (2020) [38] presented a successful application of the GR4J daily rainfall-runoff model to 13 watersheds of Uruguay. They proposed an improved regionalization approach to predict runoff time series in ungauged catchments at country scale. In particular, they found the optimal set of parameters of the GR4J model and, in addition, the relationships between these parameters and watershed physiographic factors. Table 5 shows the description of the "GR4J-Muskingum" model parameters and the values adopted in each case following these recommendations.
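Since the Muskingum recursion is compact, a sketch of the routing step is given here (standard storage-continuity formulation; the parameter values K and x would come from Table 5, and dt is the daily time step):

```python
import numpy as np

# Two-parameter Muskingum routing used to propagate the GR4J sub-basin
# runoff downstream. K is the storage time constant, x the weighting
# factor, dt the time step (same time units as K).
def muskingum(inflow, K, x, dt):
    inflow = np.asarray(inflow, dtype=float)
    denom = 2.0 * K * (1.0 - x) + dt
    C0 = (dt - 2.0 * K * x) / denom
    C1 = (dt + 2.0 * K * x) / denom
    C2 = (2.0 * K * (1.0 - x) - dt) / denom   # note: C0 + C1 + C2 == 1
    out = np.empty_like(inflow)
    out[0] = inflow[0]                        # common initialization choice
    for j in range(len(inflow) - 1):
        out[j + 1] = C0 * inflow[j + 1] + C1 * inflow[j] + C2 * out[j]
    return out
```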
Goodness-of-Fit Indicators
The performance of the merged precipitation estimate (RK) was statistically evaluated through comparison with an independent rain gauge network, relatively dense and uniformly distributed (referred to as "historic data" in Figure 1). We also included in the evaluation the estimation based on the Ordinary Kriging interpolation of NRT rain gauge observations (OK) on the same grid as the satellite data (0.1° × 0.1°), which serves as the baseline for comparison with the merged product. Both estimates (RK and OK) were compared with the rain gauge observations belonging to the historic reference dataset. The performance statistics used for the comparison are the mean error (ME), the RMSE, the frequency bias (FBS), the POD, and the false alarm ratio (FAR) for a precipitation threshold of 5 mm [39,40].
Furthermore, several verification indices were used to quantitatively assess the hydrological utility of the precipitation estimates based on the estimated "theoretical" inflow to the G. Terra reservoir (Figure 3), including the difference of total accumulated inflow (∆V), the Nash-Sutcliffe efficiency (NSE), the Kling-Gupta efficiency (KGE), the coefficient of determination (R2), and the RMSE [41]. Additionally, we conducted a first-level catchment water balance using the runoff ratio (RR), defined as the fraction of precipitation that contributes to runoff [20]. The RR values calculated using the outputs from both estimates (RK and OK) were compared to known values from the literature [42].
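As a reference, a sketch of these verification statistics for paired daily series, using their standard formulas (the 5 mm threshold follows the text; variable names are ours):

```python
import numpy as np

# Standard verification statistics for paired series sim/obs; the
# categorical scores assume at least one event above the threshold.
def scores(sim, obs, thr=5.0):
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    me = np.mean(sim - obs)
    rmse = np.sqrt(np.mean((sim - obs) ** 2))
    hits = np.sum((sim >= thr) & (obs >= thr))
    false = np.sum((sim >= thr) & (obs < thr))
    miss = np.sum((sim < thr) & (obs >= thr))
    fbs = (hits + false) / (hits + miss)          # frequency bias
    pod = hits / (hits + miss)                    # probability of detection
    far = false / (hits + false)                  # false alarm ratio
    nse = 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - np.mean(obs)) ** 2)
    r = np.corrcoef(sim, obs)[0, 1]
    alpha = np.std(sim) / np.std(obs)             # variability ratio
    beta = np.mean(sim) / np.mean(obs)            # bias ratio
    kge = 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)
    return dict(ME=me, RMSE=rmse, FBS=fbs, POD=pod, FAR=far,
                NSE=nse, KGE=kge, R2=r ** 2)
```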
In all cases, the period analyzed is from 1 February 2017 to 31 May 2020.
Operational Implementation
The 5-step operational implementation of the coupled hydrological and electric system modeling approach is presented next (Figure 7). (i) Data download and daily accumulation. The required precipitation input data are collected: records of the NRT stations, GSMaP-NRT, IMERG-NRT Late Run, and the GEFS ensemble forecast. Daily rainfall totals are accumulated at 1000 UTC.
(ii) Data quality control. Prior to the merging algorithm, a quality control of the data from both NRT rain gauges and satellite estimates is performed based on the Climate Data Tools (CDT-IRI) [43]. The quality control focuses on outlier detection for the purpose of eliminating data contamination, including the implementation of spatial plausibility checks based on Scherrer et al. (2011) [27]. The threshold values used in the controls were adjusted manually, seeking to eliminate the most obvious suspicious values in the available historical dataset (a simplified sketch of such a check is given after this list).
(iii) Merging technique. The satellite-rain gauge data merging technique is applied in order to obtain the RK precipitation estimate over the Rio Negro basin.
(iv) Hydrological modeling. Based on the RK estimate and the GEFS precipitation ensemble forecast, the GR4J rainfall-runoff model is run on the 17 sub-catchments of the G. Terra basin. The runoff output is then routed along the river network using the Muskingum model to simulate the daily inflow ensemble to the G. Terra reservoir.
(v) Electricity-system simulation. The simulated inflow ensemble is integrated into the existing model of the interconnected electric system (SimSEE), particularly into the synthesizer model (CEGH), through biases and noise attenuators per time step adjusted through maximum likelihood [44].
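As an illustration of step (ii), a heavily simplified stand-in for a spatial plausibility check (the operational controls follow CDT-IRI and Scherrer et al. (2011) [27]; the neighbourhood radius and deviation threshold below are illustrative assumptions, with station coordinates in km):

```python
import numpy as np

# Flag a daily value as suspicious when it deviates too much from the
# median of its spatial neighbours; radius and threshold are illustrative.
def flag_suspects(values, xy, radius_km=100.0, max_dev_mm=80.0):
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    flags = np.zeros(len(values), dtype=bool)
    for i in range(len(values)):
        neigh = (d[i] < radius_km) & (d[i] > 0) & ~np.isnan(values)
        if neigh.sum() >= 3:                    # need enough neighbours
            dev = abs(values[i] - np.nanmedian(values[neigh]))
            flags[i] = dev > max_dev_mm
    return flags
```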
The implemented model was integrated into SimSEE in June 2020 and has since run under the responsibility of the Electricity Market Administration (ADME) of Uruguay. The application (called VATES) continuously updates and executes a SimSEE Room with the representation of the Uruguayan generation system, in order to obtain the dispatch of the following seven days with hourly detail. The results and information relevant to the operation are published automatically on ADME's website [45]. They also provide the required statistical information for the design of exchange offers with neighboring countries and the energy spot market. Table 6 presents the comparison of the performance metrics for the OK (stations only) and RK (merged product) precipitation estimates. The results obtained are global values, integrated both spatially (among the 95 stations in the Rio Negro basin) and temporally (averaged over the analyzed period). Overall, both estimates have a good performance, but RK performs slightly better. This indicates an improvement in the accuracy of the precipitation estimation by the incorporation of satellite data. Figure 8 shows the spatial distribution of the RMSE at the reference data locations obtained with both estimates, OK and RK, averaged over the analyzed period. As can be seen, both maps exhibit similar distribution patterns, but there are some differences on the border with Brazil, where there are practically no NRT stations (see Figure 1). The RK estimate in that region has a better performance than OK, with differences in RMSE between 10% and 20%.
Rainfall Model Performance
As an example, Figure 9 compares the interpolated precipitation fields obtained with OK (stations only) and RK (stations and satellite) for a given day (15 December 2019). Black dots represent the NRT rainfall observations used for the interpolation. As expected, both maps present a similar spatial distribution of daily precipitation but, particularly in the region towards Rio Grande do Sul (Brazil), the two estimates show significantly different values, with OK showing amounts of the order of 40 mm and RK of 25-30 mm. In this region, the OK estimate does not appear natural, with uniformly high values in a smooth zone that gradually fades, a result of the weighted-sum estimation, rather than irregular zones with intense peaks as observed in the RK estimate. This highlights the advantage of satellite data in the representation of spatial rainfall variability, particularly in data-sparse regions, as is the case here.
Rainfall-Runoff Model Performance
In this section, the precipitation estimates are incorporated into the hydrological model that runs operationally at ADME (see Section 4.2), and the performance is assessed against the estimated "theoretical" inflow to the hydroelectric reservoir (Figure 3). The "GR4J-Muskingum" model was forced with the two precipitation estimates (OK and RK), using the model parameters presented in Table 5, to simulate the daily inflows to the G. Terra reservoir for the period 1 February 2017 to 31 May 2020.
The simulated and "theoretical" hydrographs are shown in Figure 10 and the statistical comparisons are summarized in Tables 7 and 8. Considering that the model is used as a tool to support the decision-making of the weekly dispatch, we also included the comparison of the 7-day moving-average inflows (Figure 11).
Figure 11. Comparison between theoretical and simulated (Ordinary Kriging and Regression Kriging) 7-day filtered inflows to the G. Terra reservoir.
As shown in Figures 10 and 11, both simulations show generally good agreement with the "theoretical" streamflow at both daily and weekly time steps, although overestimation and underestimation of the peaks are evident in some cases.
Both estimates present a slight underestimation of the total accumulated inflow, with differences of −5.3% and −2.5% for the OK (stations only) and RK (merged product) outputs, respectively (Table 7), which corresponds to a very good performance according to Moriasi et al. (2015) [41].
DINAGUA [42] reports an annual runoff ratio (RR) between 0.37 and 0.43. This reference range is very close to the values obtained with the OK and RK estimates from both the estimated "theoretical" inflow (RR Est) and the simulated inflow (RR Sim) (Table 7). On the one hand, when using the "theoretical" inflows to calculate the RR, we verified that the performance of the precipitation estimates is satisfactory. On the other hand, when considering the simulated inflow series, we confirmed that the hydrological model achieves a good representation of the rainfall-runoff transformation process, regardless of the precipitation estimate considered.
According to the general performance ratings for the adopted statistics recommended by Moriasi et al. (2015) [41], the simulated inflow using both estimates (RK and OK) has a satisfactory performance at daily time step (0.60 < R2 ≤ 0.75 and 0.50 < NSE ≤ 0.70) and a good performance at weekly time step (0.75 < R2 ≤ 0.85 and 0.70 < NSE ≤ 0.80). Furthermore, for all statistics considered, the RK simulated inflow performs better than the OK one, at both daily and weekly time steps.
Discussion
These results confirm that, as expected, the estimation that incorporates the satellite information in addition to the surface observations (RK) has a higher performance than the one that only incorporates the rain gauge data (OK), both in the rainfall statistical evaluation and in the hydrological simulation of the basin.
However, the magnitude of the improvement in the rainfall estimation is relatively small as expressed by the global indicators shown in Table 6, averaged both in time and space. Figures 8 and 9 already suggest that the magnitude of the original error with OK, and of the improvement with RK, might be larger in the upper part of the basin, where the density of rain gauges is notably lower and, given the lack of stations on the other side of the water divide, extrapolations are required to cover the basin. This is verified in Table 9, where we limit the RMSE indicator to the upper sub-basins (see Figure 6). We limited the analysis to RMSE because it is the most robust statistic, as well as the most relevant for the application, and it does not require the definition of a precipitation threshold like FBS, POD, and FAR. Table 9 also includes the percentage of improvement achieved with RK, which increases from 3% for the global indicator to approximately 20% in the more poorly monitored border sub-basins. While synoptic frontal systems are prevalent throughout the year in the region and are responsible for most of the rainfall, convective-scale storms become a relevant contributor to precipitation totals during the warm season. Of course, the latter are often embedded in the former, generating the multiscale structure of precipitation fields. However, it is well known that daily precipitation totals decorrelate with distance faster in the warm season than in the cold one [22]. This motivated an analysis of the seasonality of the improvement in skill when the satellite estimates are incorporated (RK). Table 9 shows the basin-averaged RMSE limited to the warmest semester (October through March) and to the peak of the warm season, December-January-February (DJF). Even with a relatively high density of surface observations, as is the case on average in the region of study, the impact of incorporating satellite information increases, from 3% up to 11%, as the precipitation field acquires larger amplitude at smaller scales during the warm season.
These analyses give an insight into the potential improvement in skill that can be obtained with the proposed merging methodology as a function of rain gauge density and the characteristics of the precipitation systems.
Summary and Conclusions
In this study, we developed and implemented a methodology that combines rain gauge observations and satellite rainfall estimates at daily time step to improve rainfall monitoring in NRT. The proposed methodology involves 3 steps: (1) regression of station data on the satellite estimate using a Generalized Linear Model, (2) interpolation of the regression residuals at station locations to the entire grid using Ordinary Kriging, and (3) application of a rain/no rain mask. The merged precipitation field thus obtained is then used in a hydrological modeling of the Rio Negro basin whose output is, in turn, coupled with an electric system modeling that guides planning and dispatch decisions for the following seven days.
The performance of the proposed merged precipitation estimate was statistically evaluated through comparison with an independent historic rain gauge dataset. The incorporation of satellite information enhances the representation of spatial variability, particularly in data-sparse regions with reductions in RMSE of up to 20%, although the overall improvement is statistically marginal.
As far as the operation of the energy system is concerned, it is the inflow to the reservoirs that most directly affects the electric system simulations and, in turn, management optimization. The GR4J hydrological model, with a daily time step, was implemented at 17 sub-catchments of the G. Terra basin with routing up to the reservoir. Model performance was assessed by comparing the model output to the estimated "theoretical" inflow to G. Terra computed from a mass budget of the reservoir, and rendered satisfactory statistics: 0.60 < R2 ≤ 0.75 and 0.50 < NSE ≤ 0.70. The estimation that incorporates the satellite information in addition to the surface observations has a higher performance, for all statistics considered, compared to the one that only incorporates the rain gauge data.
In an operational setting, simplicity and robustness of the implementation are as important as accuracy. All steps are currently implemented and run on a daily basis at the Electricity Market Administration (ADME): data download and quality control, merging algorithm, hydrological modeling and electric system simulation. The presented implementation improves the estimation of the precipitation field and carries that information all the way to the decision-making stage, with its corresponding socio-economic and environmental benefits. | 6,286 | 2021-02-18T00:00:00.000 | [
"Environmental Science",
"Engineering"
] |
Elaboration and Characterization of a New Heavy Metal Sensor Functionalized by Extracellular Polymeric Substances Isolated from a Tunisian Thermophilic Microalga Strain Graesiella sp.
The present study aimed to develop and characterize new heavy metal sensors functionalized by extracellular polymeric substances (EPSs) isolated from the Tunisian thermophilic microalga strain Graesiella sp. The elaborated sensor showed a highly homogeneous character and revealed a microstructural lamellar arrangement, a highly crystalline nature, and several functional groups. Electrochemical impedance spectroscopy (EIS) and acoustic wave sensing were used as sensing techniques to explore the ability of the microalgae-EPS-functionalized sensors to detect cadmium and mercury as heavy metals. For the impedimetric measurements, a two-dipole circuit was adopted and showed well-fitted results with a low total error. The acoustic sensor platforms showed good compatibility with EPS in adjacent water. For both EPS-functionalized sensors, metal ions (Cd2+, Hg2+) were successfully detected in the concentration range from 10^-10 M to 10^-4 M. The impedimetric sensor was more sensitive to Cd2+ at low concentrations before saturation at 10^-7 M, while the acoustic sensor exhibited more sensitivity to Hg2+ over the full range. The results highlight a new potential alternative: using microalgae EPSs as a sensitive coating material for the detection of heavy metals. However, their use in a real liquid medium requires further investigation of their selectivity in the presence of other compounds.
Introduction
Contamination of land and water remains a serious environmental issue since a large mass of toxic substances, such as heavy metals, is being released into the environment by both natural and anthropogenic sources. Heavy metals are inorganic compounds that persist for centuries in ecosystems since they are nonbiodegradable and have been proven to accumulate in living beings, thus affecting the reproductive, neurological, and immunological systems of both humans and animals [1].
Many techniques are used for heavy metal quantification, but they are often complex and expensive, with other intrinsic issues such as long preconcentration and analysis steps [2,3]. In recent years, the development of biosensors has gained increasing interest due to their high sensitivity, selectivity, and accuracy [4]. Whole-cell and cell-free biosensors for the detection of heavy metals have attracted particular attention. Microalgae such as Arthrospira platensis and Chlorella vulgaris were used in various studies to develop whole-cell biosensors for the control of toxic pollutants in aquatic environments [5][6][7]. Immobilized microalgae cells were coated on sensor electrodes by alternating deposition of polyelectrolyte multilayers using layer-by-layer (LBL) deposition methods [5,8].
Several microalga strains have shown the ability to bind various heavy metals [9]. This ability has been attributed to extracellular polymeric substances (EPSs), also called exopolysaccharides, which are released by several microalgae into the surrounding environment [10][11][12]. The richness of microalgae EPSs in uronic acid and sulfate groups gives them negative surface charges [13], favoring the complexation of positively charged metal ions. Various functional groups (carbonyl, carboxyl, and hydroxyl) and protein substituents are also involved in this complexation process [14,15]. Due to their structural complexity, containing hydrophilic and hydrophobic groups, EPSs can absorb and retain water, which gives them gelling properties and increases their ability to adsorb various pollutants by simple inclusion [13].
In addition, EPSs extracted from certain microalgae showed adhesive properties [15] and pseudoplastic flow [16], which are advantageous characteristics for biosensor applications, especially when mixed with other materials. In our previous work, cyanobacteria EPS was used as a monolayer coating material for gold sensors, and successful detection of microplastics in water was performed [10].
In previous work, results showed that the Chlorophyta Graesiella sp., cultured under controlled laboratory conditions or under natural temperature and light conditions, released high amounts of EPSs into the culture medium. These EPSs, characterized as heterosulfated polysaccharides composed mainly of polysaccharides (80%) and proteins (14%), presented a highly crystalline and anionic nature with high emulsifying, flocculating, and film-forming properties [17,18].
In this work, we combined the advantages of the physical and chemical characteristics of Graesiella sp. EPS, the simplicity of their extraction, and their deposition method to elaborate new heavy metal sensors. The proposed functionalized sensor membranes were characterized using FTIR, AFM, XRD, and HPSEC techniques. Electrochemical impedance spectroscopy and acoustic wave techniques were used to study the biosensors' analytical performance in detecting heavy metal ions, particularly Hg2+ and Cd2+.
Thus, the objective of this paper, prior to further investigation of a selective sensor in a real liquid medium, was to explore the use of microalgae EPSs as sensitive bioreceptors for heavy metals.
Extraction and Elaboration of the EPS-Membrane-Forming Solution
Extracellular polymeric substances (EPSs) were obtained from the cultivation of Graesiella sp., as mentioned in previous work [9,16].
The membrane-forming solution was obtained, as described by Gongi et al. [9], by dissolving 1 mg of lyophilized EPS in 1 mL of ethanol solution (99% purity, purchased from Sigma-Aldrich, St. Louis, MO, USA). Before deposition, the obtained solution was characterized by zeta potential measurements (electrical potential) and high-pressure size exclusion chromatography (molecular weight). The zeta potential measurements were performed in triplicate at 25 ± 1 °C by measuring the dynamic electrophoretic mobility of the water-dispersed particles. All measured electrophoretic mobilities were converted into zeta potential using Smoluchowski's formula with a Zetasizer Nano ZSP (Malvern) [19].
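Under the Smoluchowski approximation, the conversion from electrophoretic mobility to zeta potential is ζ = ημ/ε. A sketch of the calculation for water at 25 °C follows; the mobility value used in the example is illustrative, not a measured value from this study.

```python
# Zeta potential from electrophoretic mobility (Smoluchowski's formula):
# zeta = eta * mu / epsilon, valid for thin electric double layers.
EPS0 = 8.854e-12          # vacuum permittivity, F m^-1
EPS_R_WATER = 78.5        # relative permittivity of water at 25 degC
ETA_WATER = 0.89e-3       # dynamic viscosity of water at 25 degC, Pa s

def zeta_smoluchowski(mobility_m2_per_Vs, eps_r=EPS_R_WATER, eta=ETA_WATER):
    """Return the zeta potential in volts."""
    return eta * mobility_m2_per_Vs / (eps_r * EPS0)

# Example: a mobility of -3.1e-8 m^2 V^-1 s^-1 gives roughly -40 mV.
print(zeta_smoluchowski(-3.1e-8) * 1e3, "mV")
```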
The molecular weight of the deposited EPS solution (coating solution) was analyzed by high-performance size exclusion chromatography (HPSEC) with a refractive index detector. The solution was filtered through a 0.22 µm filter (Sartorius, Bohemia, New York, USA) before injection. A Shodex OH-Pak SB-805 column preceded by an OH-Pak SB-G guard column (8 mm × 300 mm, Japan) was used at 25 °C, and the column was eluted with phosphate buffer (50 mmol L−1) at a flow rate of 0.8 mL min−1. The injection volume was 20 µL. Standard dextrans (0.3 g L−1; Sigma 31,390) of molecular weights 5, 13.5, 500, 1500, 30,000, and 60,000 kg mol−1 were used to build a calibration curve. The molecular mass of the EPS sample was extrapolated from the calibration curve of the standard dextrans.
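The dextran calibration described above amounts to a linear fit of log10(Mw) against retention time; a minimal sketch is given below. The retention times are made-up placeholders, while the molecular weights are those of the standards listed above.

```python
import numpy as np

# Molecular weights of the dextran standards (kg/mol) and placeholder
# retention times (min); in practice the times come from the chromatograms.
mw_std = np.array([5, 13.5, 500, 1500, 30_000, 60_000])   # kg mol^-1
rt_std = np.array([11.2, 10.6, 9.1, 8.4, 7.0, 6.5])       # min (illustrative)

slope, intercept = np.polyfit(rt_std, np.log10(mw_std), 1)

def mw_from_rt(rt_min):
    """Extrapolate molecular weight (kg/mol) from a peak retention time."""
    return 10 ** (slope * rt_min + intercept)
```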
Sensor's Elaboration
The ability of microalgal EPS to act as a sensitive membrane was investigated using two types of sensors (Figure 1): a gold electrode for electrochemical detection and a Love wave sensor for acoustic detection. The gold electrode used for electrochemical detection consisted of (100)-oriented, p-type (3-5 Ω cm) silicon wafers of thickness 1 mm, thermally oxidized (800 nm thick silicon oxide layer), coated with titanium (adhesion layer, 30 nm thick) and gold (300 nm thick) deposited by evaporation under vacuum, then cut into 1.2 × 1.2 cm² squares.
The Love wave sensor (500 µm thick) used for acoustic detection consists of a dual delay line on a piezoelectric substrate (quartz) covered by a rigid overlay (SiO 2 ) acting as a guiding layer with interdigitated transducers (IDTs) to generate and receive the acoustic wave [5].
Both types of devices were provided by the "Laboratoire d'Analyse et d'Architecture des Systèmes" (LAAS, CNRS, Toulouse, France). The functionalization of the sensors was performed by deposition of the EPS solution (1 mg EPS mL−1 of ethanol) using a self-assembled monolayer technique. The immobilization of the sensitive membrane was performed by spin-coating on the cleaned gold surface electrode for electrochemical detection and by drop-casting on the Love wave sensor surface for acoustic detection. Both sensors were dried for 24 h in an oven (30 °C). The thickness values, determined using a surface profiler (Alpha-Step IQ), were 10 nm for the EPS-functionalized gold sensor and 22 nm for the EPS-functionalized Love wave sensor. These values are within the range typical of biological sensors (5 to 50 nm [20]).
Surface Characterization of the Sensitive Membrane
The structural analysis of the sensitive membrane was performed by Fourier transform infrared (FTIR) and X-ray diffraction (XRD) spectroscopies and surface microstructure characterization by atomic force microscopy (AFM) and scanning electron microscopy (SEM).
FTIR analyses were carried out using an attenuated total reflectance Fourier transform infrared spectrometer (ATR-FTIR, Perkin-Elmer). The spectra were recorded at room temperature in the range of 500-4000 cm−1 with a resolution of 2 cm−1. The Spectrum Suite ES software was used for FTIR data treatment.
Diffraction data on both the gold electrode and Love wave sensors were collected with a CICECO Empyrean X-ray diffractometer (JDX 3532; PANalytical). Data analysis was performed using the CrysAlisPro software system. Diffraction angles were scanned from 0 to 50°.
The surface topography of the EPS-functionalized Love wave sensor was assessed with AFM (Figure 2). The AFM measurements were carried out using a Bruker Innova atomic force microscope at a frequency of ~300 kHz. Images were analyzed using the Gwyddion-64 software. The top view and cross-section morphology of the sensor membrane were inspected by scanning electron microscopy (SEM) with an HR-FESEM SU-70 Hitachi microscope (Prior Scientific, Rockland, MA 02370, USA). First, the sensor was cryofractured by immersion in liquid nitrogen and fixed on the SEM support using double-sided adhesive tape; the sensor was then coated with a 5 nm thick gold layer and observed under an accelerating voltage of 5.0 kV and an absolute pressure of 60 Pa.
Electrochemical Impedance Spectroscopy "EIS" Measurements
Electrochemical impedance spectroscopy measurements were carried out at room temperature (20 ± 3 °C) in an electrochemical cell connected to a computer-controlled impedance analyzer (FRA32M, Autolab, Metrohm, Herisau, Switzerland). The measurements were performed in ammonium acetate (0.04 M, pH 6.8), used as a background electrolyte, in a conventional electrochemical cell containing a three-electrode system, ensuring the electrodes' stable positioning and the solution's stirring. The EPS-coated gold electrode (0.125 cm²) was the working electrode, a platinum plate (0.282 cm²) was the counter electrode, and a saturated Ag/AgCl/KCl electrode served as the reference electrode (Figure 3). The amplitude of the sinusoidal excitation signal was 10 mV, and the frequency was scanned over the range 10−3 Hz to 106 Hz, as described by Gongi et al. [9].
Nyquist diagrams were recorded with increasing metal concentrations ranging from 10−10 M up to 10−4 M. All measurements were performed in triplicate (n = 3) at a negative bias of −0.3 V. This value allows an improved definition of the Nyquist plot while being sufficiently low to reduce any corrosion phenomenon [4].
In this study, we used Nova 1.5 software (dedicated to impedance measurement), which is programmed to average the three replicates' measurements (n = 3) and calculate their standard deviation. The equivalent circuit parameters for the electrolyte interface of the EPS gold sensor were interpreted after testing several models of the equivalent circuit.
The impedimetric responses of the EPS-functionalized electrodes to the metal ions Cd 2+ and Hg 2+ were investigated.
Acoustic Love Wave Test Cell
The sensor platform used in this work was previously described by Tamarin et al. [21]. Briefly (Figure 4), the Love wave sensor unit (quartz + SiO2 guiding layer) consisted of two acoustic delay lines, each with input and output interdigitated transducers; one line was used as a reference and the other for the measurement. The sensor unit was mounted in an experimental setup with a test cell ensuring electrical connections to the electronic readout circuit and a PDMS microfluidic chip localizing the aqueous sample on the surface of the sensor to prevent any contact with the electrical pads. A volume of 250 µL of EPS solution was injected into the PDMS chip and then dried for 24 h to obtain a thin EPS layer on the surface of the acoustic sensor.
Acoustic Sensor Analytical Performances
Acoustic measurements for the electrical sensor were performed in air and water with a vector network analyzer (VNA; Anritsu MS4623B, Allen, TX 75013, U.S.A.). Results were expressed from real-time monitoring of the acoustic sensor transmission response (scattering parameter S21) in terms of the gain (insertion losses) and phase of the propagated acoustic wave between input and output IDTs. The frequency range of interest around the acoustic resonance was about 118 MHz.
Real-Time Monitoring of Cellular Response to Heavy Metal
Similar measurements were carried out with a computer-controlled analyzer (Copper Mountain Planar 304/1) to monitor the response to heavy metals in real time. The cadmium and mercury aqueous solutions at different concentrations were injected, and the relative insertion loss (dB) and phase (°) were determined and compared with the reference Love sensor response. All measurements were carried out in a controlled room to eliminate the effect of variations in temperature or humidity. The responses of the Love wave sensor in air and in deionized water, as the adjacent medium, were taken as references. Data represent the mean of three replicates (see Section 3.3.1 (Figure 13)).
Sensitivity for Heavy Metals by EPS Acoustic Sensor
To go deeper into the analysis of the Love wave sensor response and determine the sensitivity, the insertion loss variation (∆Il) and the relative frequency shift at fixed phase (∆f/f) were calculated according to Equations (1) and (2):

∆Il = Il_meas − Il_ref (1)

∆f/f = (f_meas − f_ref)/f_ref (2)

where Il_meas and Il_ref are, respectively, the insertion losses measured with increasing concentrations of heavy metals and in DI water at the resonance frequency, while f_meas and f_ref are, respectively, the frequencies measured with increasing concentrations of heavy metals and in DI water at a 0° equiphase point near the resonance frequency.
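In practice these two quantities are extracted from each measured S21 sweep. A numpy sketch follows, assuming arrays of gain (dB) and phase (degrees) versus frequency and that the unwrapped phase decreases monotonically across the band; these assumptions and the function names are illustrative.

```python
import numpy as np

def sensor_response(freq_hz, gain_db, phase_deg, gain_ref_db, freq_ref_hz):
    """Insertion-loss variation and relative frequency shift (Eqs. 1-2)."""
    i_res = np.argmax(gain_db)                 # resonance = gain maximum
    delta_il = gain_db[i_res] - gain_ref_db    # Eq. (1), in dB

    # Frequency of the 0-degree equiphase point near resonance, found by
    # linear interpolation of the unwrapped phase (reversed so it increases).
    phase = np.unwrap(np.deg2rad(phase_deg))
    f_meas = np.interp(0.0, phase[::-1], freq_hz[::-1])
    rel_shift = (f_meas - freq_ref_hz) / freq_ref_hz   # Eq. (2)
    return delta_il, rel_shift
```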
Metal Solutions
Two metals, Cd(II) and Hg(II), were tested in this study. The stock solutions were prepared by dissolving CdCl2(H2O)4 and HgCl2 (Sigma-Aldrich, St. Louis, MO, USA) in distilled water. Aqueous metal solutions with concentrations ranging from 10−10 M up to 10−3 M were obtained by successive dilutions. All glassware was acid-washed before use to avoid metal binding.
Statistical Analyses
Statistical analyses were performed with SPSS ver. 20.0 professional edition. The impact of heavy metals on the variation in gain (dB) and phase (°) at all concentrations was evaluated with Student's t-test, and p-values < 0.05 were considered statistically significant.
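The same comparison can be reproduced outside SPSS, for instance with scipy; the replicate values in this sketch are illustrative, not measured data.

```python
from scipy import stats

# Triplicate insertion-loss variations (dB) at one concentration,
# e.g. mercury vs. cadmium (illustrative values).
hg = [-3.4, -3.2, -3.3]
cd = [-0.8, -0.6, -0.7]

t_stat, p_value = stats.ttest_ind(hg, cd)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, significant: {p_value < 0.05}")
```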
Results and Discussion
The HPSEC elution profile (Figure 5) showed a single, symmetrical, narrow peak, verifying the homogeneity of the EPS solution. Based on the calibration curve of the elution retention times of the standard dextrans, the average molecular weight (Mw) of the EPS-based membrane solution was estimated to be 7.82 × 106 g mol−1. This value is in the range of EPS molecular weights reported in numerous studies [22]. In general, variation in the molecular weight of EPS can be explained by differences in strains, fermentation factors, and EPS structures [23]. The zeta potential of the EPS membrane solution, an index of the intensity of electrostatic interaction between particles, showed that the Graesiella EPSs are anionic in nature. The zeta potential was evaluated at −40 ± 2 mV. The negative charge may be due to the presence of anionic groups and of uronic acids [24]. Consequently, the EPS solution could be quite reactive with chemical species such as cadmium and mercury [25].
Atomic Force Microscopy (AFM) Analysis
The surface topography of the EPS-functionalized sensor is shown by tapping-mode 3D and 2D atomic force microscopy (AFM). The AFM profile displayed a crinkled and wrinkled structure with irregular blocks and a maximum height of 100 nm (Figure 6A). The 3D AFM images (Figure 6B) reveal that the EPS-functionalized sensor presented a pointed, compact structural feature without protrusions. This structure can enlarge the active area of the sensor and help to immobilize target species, including cadmium and mercury [26,27].
FTIR Analysis
The presence of functional groups at the surface of the sensor was verified by FTIR measurements, as shown in Figure 8. FTIR spectroscopy showed little change in functional groups with respect to the native EPS of Graesiella sp. [24]. A broad band between 2900 and 3000 cm−1 is attributed to the stretching vibration (ν) of O-H or C-H groups, characteristic of the hydroxyl and alkyl functionalities of carbohydrates. The absorption observed at 1040 cm−1 could be related to the bending vibration (δ) of N-H and to νC-N, indicating the existence of amino acids from peptides/proteins. The small peak at 1229.82 cm−1 suggested the presence of sulfated groups, confirming the heterosulfated polysaccharide nature [28,29].
X-ray Diffraction (XRD) Analysis
The X-ray diffraction (XRD) patterns of the EPS sensor membrane (Figure 9) exhibited numerous intense and sharp diffraction peaks ranging from 9° to 50°. Such a result indicates a highly crystalline nature, unlike that found in previous work [9], in which an amorphous nature characterized the sensor surface functionalized by cyanobacterial EPS. The crystalline nature is thought to give the EPS membrane stronger interactions between its different structural components [16].
Electrical Circuit Model
The electrochemical measurement is sensitive to changes at the interface between the electrode and the medium [4]. The EPS gold sensor was modeled by an equivalent circuit (Figure 10) composed of two dipoles in series with the electrolyte resistance (Rs). The first dipole (CPE EPS//Rm) models the electrochemical phenomena occurring at the membrane/electrolyte interface, with CPE EPS being the EPS membrane capacitance and Rm its resistance. The second dipole (CPE dL//Rct) describes the electron transfer impedance between the bulk and the electrode's surface, with CPE dL the constant phase element of the charge transfer and Rct the electron transfer resistance. The same equivalent circuit has been used to describe several biosensors, for example, those based on immobilized bacteria [3,5] or on a monolayer EPS membrane [9].
The Nyquist diagrams ( Figure 10) obtained for the gold electrode after the EPS deposition showed good agreement between the measured data and the fitting curves with chi-square values (χ 2 ) of 0.02, indicating that this equivalent circuit is suitable and meaningful for this electrochemical system.
In the case of the two-dipole circuit, the total impedance of the constant phase elements ZCPE modeling the behavior of the interface is expressed by Equation (3), combining the CPE of the EPS membrane and that of the electrode surface [3,5,9]. Such an equation was used to decorrelate the impedance spectroscopy data parameters relative to each part of the equivalent circuit, as reported in Table 1.
Z_CPE = 1/(Q(jω)^α), (3)

where Q is a constant parameter, j is the imaginary unit, ω = 2πf is the angular frequency, and α is a correction exponent (0 < α < 1).
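A sketch of the two-dipole model used for the fits, written as a frequency-domain impedance function, is given below; the parameter values in the example are placeholders, not fitted values from Table 1.

```python
import numpy as np

def z_cpe(q, alpha, omega):
    """Constant phase element impedance, Z = 1 / (Q (j*omega)^alpha)."""
    return 1.0 / (q * (1j * omega) ** alpha)

def z_total(f_hz, rs, rm, q_eps, a_eps, rct, q_dl, a_dl):
    """Rs in series with (CPE_EPS // Rm) and (CPE_dL // Rct)."""
    w = 2 * np.pi * np.asarray(f_hz, float)
    z1 = 1.0 / (1.0 / rm + 1.0 / z_cpe(q_eps, a_eps, w))   # membrane dipole
    z2 = 1.0 / (1.0 / rct + 1.0 / z_cpe(q_dl, a_dl, w))    # charge transfer
    return rs + z1 + z2

# Impedance over a log-spaced frequency sweep (placeholder parameters);
# plotting z.real against -z.imag reproduces the Nyquist representation.
f = np.logspace(-3, 6, 200)
z = z_total(f, rs=100, rm=5e4, q_eps=1e-6, a_eps=0.9,
            rct=2e5, q_dl=5e-7, a_dl=0.95)
```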
Effect of the Heavy Metal's Concentration on the Impedance of the EPS-Functionalized Electrode
Electrochemical characterization of the EPS-functionalized sensors was investigated by plotting Nyquist diagrams resulting from electrochemical impedance spectroscopy (EIS) measurements for both mercury and cadmium at concentrations ranging from 10−10 M to 10−4 M (Figure 11). The shape of the Nyquist plots reflects the difference in the electrical signal caused by the binding of heavy metal ions (Cd2+ and Hg2+) at the surface of the EPS-functionalized sensor. The EPS spectra in ammonium acetate recorded before heavy metal injection showed the same pattern, with high stability. An increase in the amplitude of the Nyquist plots was observed with increasing metal concentration (Figure 11).
For the two heavy metals tested, the impedance parameters fitting the experimental data are shown in Table 1. Both the capacitance of the EPS membrane (CPE EPS) and that of the interface charge transfer (CPE dL) showed small variations with increasing metal concentration. Moreover, the quasi-stability of the correction exponents (α dEPS) and (α dL) near unity probably indicates no structural modification at the EPS membrane surface or at the electrode/membrane interface. The EPS-functionalized sensor showed high stability, a lower limit of detection, and affinity toward mercury and cadmium when compared with several gold electrodes [30,31].
The EPS membrane resistance Rm significantly increased with increasing metal concentration (Figure 12). At low concentrations (10−10 to 10−7 M), the slope of the variation in Rm was steeper for cadmium Cd2+ than for mercury Hg2+, indicating greater sensitivity toward cadmium ions. However, unlike Hg2+, for which no charge saturation was observed up to 10−4 M, saturation of the sensor membrane occurred at a Cd2+ concentration as low as 10−7 M. Thus, the obtained results confirm a wider detection range for mercury using the EPS sensor.
The value of Rm is correlated with the rate of exchange between the metal cations in solution and the negative charges at the surface of the EPS membrane. The exchange rate varies as a function of the metal species and its concentration in the parent solution, as well as its affinity for one or another of the functional groups of the membrane, including sulfate groups, and their availability [32]. In view of the complexity of the EPS and its diversity of functional groups [14,15], further investigations are needed to understand these interactions in depth. In the case of other membranes, such as the demineralized lignite membrane, the exchange rate varied more rapidly with the concentration of Cd2+ than with that of Hg2+, and the adsorption kinetics of cadmium were almost twice as fast as those of mercury [33]. However, in other instances, Bhattacharjee et al. [34] demonstrated that Hg2+ ions are thiophilic and bind more easily to sulfate groups than Cd2+.
EPS Membrane Influence
The impact of the EPS membrane on the performance of the acoustic sensor platforms was investigated in terms of the gain, also referred to as insertion loss (dB), and the phase (°) of the device transmission response, using the VNA.
Measurements were performed in air with the EPS-functionalized sensor (with EPS membrane) and compared with a noncoated sensor (without EPS membrane). The shift in insertion losses measured between the coated (−70.05 dB) and noncoated (−49.3 dB) Love wave sensors was −21 dB, revealing significant attenuation due to the presence of EPS ( Figure 13). Commonly, a polymer membrane on the acoustic sensor surface impacts the wave propagation in a way that strongly depends on the material characteristics, especially its viscoelasticity.
Moreover, underwater acoustic measurement commonly reveals an additional attenuation of the acoustic wave on the order of −10 dB of losses [21]. In our case, surprisingly, a good recovery of the gain response in adjacent water can be observed (Figure 13), suggesting that EPS acts as a guiding layer for the acoustic wave, allowing enhanced propagation with water as a detection medium, in contrast to air. Indeed, the acoustic attenuation shifted from −70.1 dB (air) to −44.7 dB (water). This phenomenon could be attributed to the high potential of EPS to absorb and trap water molecules, mainly through hydrogen bonding, modifying the mechanical properties of the material and thus reducing the acoustic wave attenuation [34], in agreement with its crystalline behavior. Further investigations are required to deepen our understanding of the mechanisms involved. Nevertheless, Graesiella sp. EPS appears to be a potential candidate material for the coating of an acoustic sensor for heavy metal detection.
Metal Detection by EPS-Functionalized Acoustic Sensor
The insertion losses at the operating frequency of the device were tracked to detect the impact of increasing heavy metal concentration, varying from 10−11 M to 10−3 M. The results presented in Figure 14 indicate that at the lowest mercury concentration tested ([Hg2+] = 10−11 M), no significant variation in insertion loss was observed (about −0.03 dB). However, as the concentration of mercury increased from 10−10 M up to 10−3 M, the gain peak values decreased distinguishably compared with the peak corresponding to deionized water (DI, blank). The loss was −3.3 dB at 10−10 M and −10.8 dB at 10−3 M (Figure 14A). On the other hand, the equiphase frequencies were slightly shifted toward lower frequencies with increasing mercury concentration. The frequency value at 0° in phase changed from 117.28 MHz at 10−10 M to 117.25 MHz at 10−3 M, corresponding to a frequency shift of 30 kHz (Figure 14B). The gain (dB) and phase (°) spectra obtained with the EPS-functionalized acoustic sensor for the detection of cadmium are illustrated in Figure 15. Similarly, the results clearly show that an increase in the cadmium concentration induces a decrease in the acoustic wave amplitude (Figure 15A). Again, the frequency also shifted toward lower frequencies, as can be observed from the phase curves (Figure 15B), corresponding to a decrease in the acoustic wave velocity. No significant variation in the insertion loss was observed (about −0.1 dB) at a cadmium concentration of 10−11 M. The additional insertion losses evaluated with reference to water were −0.7 dB at 10−10 M and −3.6 dB at 10−3 M. The frequency value at 0° in phase shifted from 117.558 MHz at 10−10 M to 117.553 MHz at 10−3 M, a shift six times smaller than that observed for mercury ions.
Sensitivity toward Heavy Metals by EPS-Functionalized Acoustic Sensor
In this study, statistical analyses were performed using Student's t-test to investigate the impact of the heavy metal concentration on the variation in gain (dB) and the frequency shift. As can be observed in Figure 16, the Graesiella-EPS-functionalized acoustic sensor exhibited a significantly larger dB loss with mercury than with cadmium at all tested concentrations (p < 0.05). The Love wave electrode without EPS showed no sensitivity toward heavy metals, even at the highest concentration. In this regard, such EPS-functionalized sensors are able to detect cadmium and mercury, with higher sensitivity toward mercury. This could be related mainly to the higher atomic mass of mercury compared with cadmium. It has indeed been shown that the adsorbed mass affects the response of acoustic Love wave sensing and, hence, improves sensitivity [21,35].
Conclusions
This paper highlights the affinity of extracellular polymeric substances (EPSs) of a thermophilic microalga for binding heavy metals and inducing surface property changes that can be detected by electrochemical impedance spectroscopy and/or acoustic wave sensing. The gold electrode and Love wave sensors used in this study for electrochemical and acoustic sensing showed good analytical performance and a low detection limit of 10−10 M. In summary, EPS biosensors could be a potential alternative tool for the detection of low concentrations of heavy metals, in particular aqueous Cd(II) and Hg(II). The advantages of using EPSs from microalgae as sensor bioreceptors lie in the ease with which they are obtained and in their naturally biodegradable character. Given the variety of functional groups, which differs among EPSs from different algae, such coatings can be envisioned as the basis for a multisensor device associated with appropriate signal processing, in particular drawing on recent advances in artificial intelligence and machine learning. However, further investigations are needed on the possible interactions in the presence of a mixture of different metals, as well as in the presence of other organic contaminants, if such sensors are to be used for water biomonitoring.
"Environmental Science",
"Chemistry",
"Biology"
] |
Role of X-Ray Crystallography in Structural Studies of Pyridyl-Ruthenium Complexes
This volume on X-ray crystallography is a compilation of current trends in the use of X-ray crystallography and related structural determination methods in various fields. The methods covered here include single-crystal small-molecule X-ray crystallography, macromolecular (protein) single-crystal X-ray crystallography, and complementary scattering and spectroscopic methods. The fields range from simple organic compounds and metal complexes to proteins, and also cover meta-analyses of databases for weak interactions.
Introduction
It is extremely important for chemists to establish the structure of new compounds. A more accurate understanding of the structure leads to the construction of appropriate reaction systems. For example, although the well-known platinum complex, [PtCl2(NH3)2], has two geometrical isomers (cis and trans in Fig. 1), only the cis-isomer (so-called cisplatin) exhibits prominent antitumor activity (Reedijk, 1996; Rosenberg et al., 1969). Characterization of most structures of simple organic compounds is usually carried out by various spectroscopic measurements including NMR, IR and MS. On the other hand, detailed structural characterization of coordination compounds is achieved by X-ray diffraction methods, which can provide not only information on the geometrical structures, but also full bond parameters. In particular, single-crystal X-ray diffraction is the most powerful tool for the detailed structural analysis of crystalline coordination compounds. The aim of this chapter is to demonstrate particular advantages of X-ray structural analysis when compared with other techniques for coordination compounds. Herein, X-ray analysis of a variety of mononuclear ruthenium complexes containing pyridyl substituents is mainly described.
Coordination geometries and coordination modes
Some isomeric pairs of [Ru(tpy)(L)Cl]n+ type complexes (tpy = 2,2':6',2''-terpyridine, L = asymmetrical pyridyl-based bidentate ligands in Fig. 2) have been prepared and structurally characterized as precatalysts to investigate the effect of isomeric structural features on the catalytic epoxidation process (Chowdhury et al., 2011; Dakkach et al., 2010). In the complex [Ru(tpy)(1)(OH2)]2+, the distal isomer exhibits better activity because it contains a pyridine C-H bond nearly parallel to the Ru-O bond, whereas in the proximal isomer this position is occupied by a C-CH3 group, which exerts a much stronger steric effect (Dakkach et al., 2010). In the complex [Ru(tpy)(2)(OH2)]+, on the other hand, the proximal isomer has been established to be an excellent catalyst for chemoselective epoxidation, although limited differences in electronic structural features exist between the isomeric pair (Chowdhury et al., 2011). These examples indicate that it is important to clearly distinguish the molecular structures of the compounds. In this section, representative examples of ruthenium complexes that possess asymmetrical multidentate ligands or multiple donor atoms are surveyed.
Coordination geometries of azopyridyl complexes
2-Azopyridyl derivatives (Fig. 3) behave as mono-, bi- and tridentate ligands, and thus a wide variety of mononuclear complexes can be prepared. For example, 2-phenylazopyridine (3), which represents the most fundamental azopyridyl compound, is a bidentate ligand that can coordinate to a metal ion through the lone pairs on the pyridine and the azo nitrogen atoms, thereby forming a stable chelating 5-membered ring. Since the bidentate ligand (3) lacks a two-fold symmetry axis, there are five possible isomers of [Ru(3)2Cl2] (Fig. 4) (Bao, K. Krause & R. A. Krause, 1988; Goswami, Chakravarty & Chakravorty, 1981; R. A. Krause & K. Krause, 1982; Velders et al., 2004). Among them, α, β and ε adopt the cis-geometry with respect to the two chlorido ligands, whereas γ and δ adopt the trans-geometry. Except for the ε isomer, the structures have been determined by X-ray crystallography: the molecular structures of two [Ru(3)2Cl2] isomers (the α- and β-isomers) were published in 1984 (Seal & Ray, 1984), and the third (γ) and the fourth (δ) isomers were reported in 2000 and 2004, respectively (Velders et al., 2004). Ruthenium(II) complexes containing both Ru-C bonds and 3 as supporting ligands were synthesized by substituting the chlorido ligand(s) in [Ru(3)2Cl2] with CO or CN− (Oyama, Takatsuki & Fujita, 2010). The molecular structure of [Ru(3)2(CO)Cl]+ determined by X-ray crystallography is shown in Fig. 5(a). The geometry of the complex corresponds to the α-isomer shown in Fig. 4. The carbonyl C-O and Ru-Cl bond distances, and the Ru-C-O bond angle, are consistent with those observed for closely related complexes ([RuL2(CO)Cl]+: L = bidentate pyridine-based ligands), whereas the Ru-C bond distance is longer than those in similar complexes (Clear et al., 1980; Kepert et al., 2004). This is because of the presence of two 3 ligands with azo moieties. In general, the Ru-N(azo) bond distances are shorter than those of the Ru-N(pyridine) bonds of 3 in the ruthenium complexes (Velders et al., 2004). However, one of the Ru-N(azo) bonds is very long; even longer than the Ru-N(pyridine) bonds. This is caused by the trans influence of the CO ligand. The crystal structure of [Ru(3)2(CN)2] is shown in Fig. 5(b). Unexpectedly, the two CN groups are trans to each other, and the δ-form (shown in Fig. 4) is observed. A δ arrangement is very rare, and was only recently reported for δ-[Ru(3)2Cl2] (Velders et al., 2004). In the δ-isomer, the two azo bonds (in mutual trans positions) should compete with each other for the Ru(II) 4d electron density, resulting in relatively long Ru-N(azo) bonds in [Ru(3)2(CN)2] with respect to the equivalent bonds in γ-[Ru(3)2Cl2] (Velders et al., 2004). This confirms that the Ru-N(azo) bond order decreases in [Ru(3)2(CN)2].
Coordination geometries of other pyridyl-based complexes
The naphthyridines are a group of diazanaphthalenes with one nitrogen in each ring. In particular, the bidentate naphthyridine 2-(2-pyridyl)-1,8-naphthyridine (pynp; 10) is a useful ligand for mononuclear systems (Fig. 11). For complexes with ligand 10, however, there is a stereochemical question with regard to its binding because it has an asymmetrical structure. Consequently, it is very interesting to study complexes involving 10 in an effort to understand the relationship between the coordination geometries of these complexes and their reactivities. The [Ru(tpy)(10)Cl]+ complex acts as a catalyst for water oxidation: it shows excellent catalytic activity with a turnover number (TN) of 1,170 (Tseng et al., 2008). This value is the highest among the analogous [Ru(tpy)(L)Cl]+ type complexes (L = bidentate polypyridyl ligands), whose TNs are in the range of 0-570. The ligand 10 differs from all the other bidentate ligands (L) because it is asymmetric. Although there are two possible isomers of [Ru(tpy)(10)Cl]+ (Fig. 12), only one isomer (Fig. 12(b)) is isolated, and its structure has been determined by X-ray crystallography (Tseng et al., 2008). A mixture of the two isomers of [Ru(tpy)(10)Cl]+ exists in solution (approximately 1:1), and they can be separated by column chromatography. Alternatively, irradiation of isomeric mixtures of [Ru(tpy)(10)L]2+ (L = OH2 or CH3CN) with visible light leads to the formation of only one isomer in particular organic solvents (Oyama, Yuzuriya & Takase, 2011). The resulting complex ([Ru(tpy)(10)(OH2)]2+) has the proximal configuration (Fig. 12(a)), which differs from the corresponding chlorido complex. In contrast, particular stereoisomers could be selectively prepared in the cis-[Ru(bpy)(10)(CO)Cl]+ system (Oyama, Hamada & Takase, 2011). As shown in Fig. 13, although there are four possible geometries of the complex, only two types could be selectively synthesized. Although the prepared complexes are all single species as determined from the spectroscopic measurements, their structures cannot be assigned because of their spectral resemblance. Consequently, X-ray analysis was required to determine their detailed structures. The molecular structures of the two [Ru(bpy)(10)(CO)Cl]+ isomers are shown in Fig. 14. One isomer corresponds to (a) in Fig. 13 (the pyridyl ring of 10 is situated trans to the CO ligand); the other isomer corresponds to (b) in Fig. 13 (the pyridyl ring of 10 is situated cis to the CO ligand). Picolinato (pic; 11) can also coordinate to a metal ion as an asymmetrical bidentate ligand (Fig. 15). The mono-pic complex [Ru(11)(CO)2Cl2]− has been prepared and fully characterized, including by X-ray analysis. Although three geometrical isomers are expected for [Ru(11)(CO)2Cl2]− (Fig. 16(a)), only form (i) in Fig. 16(a) was confirmed, and neither form (ii) nor (iii) was detected in the solid state or in solution. The asymmetrical chelate of 11 has substantially different effects on the two Ru-CO bonds of [Ru(11)(CO)2Cl2]−. The Ru-C bond distance trans to the oxygen atom is shorter than that of the other Ru-C bond, trans to the nitrogen atom (Fig. 17(a)). The complex containing mono-11, cis-[Ru(bpy)(11)(CO)2]+, was also prepared by the reaction of Hpic with [Ru(bpy)(CO)2Cl2], and only type (iv) of the two possible isomers (iv) and (v) was obtained (Fig. 16(b) and Fig. 17(b)).
Two geometrical isomers (Fig. 18) of [Ru(tpy)(bpyO)(CO)]+ (bpyO = 2,2'-bipyridin-6-onato; 12) were selectively synthesized by different synthetic routes. These isomers bearing 12 were characterized by X-ray crystal structure analysis. Based on their pKa's, redox potentials and IR spectra, the electron density at the ruthenium center of the distal isomer in Fig. 18 is higher than that of the proximal isomer. Such a difference is associated with the stronger trans influence of CO compared with the central pyridine of tpy, because the electron-withdrawing ability of CO favors the pyridonato structure in the resonance between the ruthenium-pyridonate (Ru-bpyO−) and -pyridonato (Ru-bpyO) forms. The two-electron reduction of the distal isomer is followed by partial Ru-CO bond cleavage, whereas the 12-based reduction of the proximal isomer causes cyclometallation by an attack of the pyridonato oxygen at the carbonyl carbon. Thus, cyclometallation caused by 12-based reduction effectively suppresses the reductive cleavage of the Ru-CO bond.
Fig. 17(b). [Ru(bpy)(11)(CO)2]+ (isomer (iv) in Fig. 16(b)).
Fig. 18. Chemical structure of 2,2'-bipyridin-6-onato (bpyO) and the two geometrical isomers of [Ru(tpy)(12)(CO)]+.
Coordination modes
It is known that not only acido anions such as NO3−, RCO2− and CO32−, but also organic ligands having multiple donor atoms, can coordinate to metals as monodentate, chelating bidentate, or bridging bidentate ligands. In this section, such examples are presented. In coordination compounds, the NO3− ion can be noncoordinating (a counter ion), or it can coordinate in a monodentate (NO3-O) or bidentate (NO3-O,O') fashion. The N-O stretching vibrations, for example, are observed at 1475, 1272 and 991 cm−1 in the IR spectrum of cis-[Ru(bpy)2(CO)(NO3)]PF6. It is rather difficult to differentiate between these structures by vibrational spectroscopy because the symmetry of the NO3− ion differs very little between them (Cs vs. C2v symmetry) (Nakamoto, 1986). However, X-ray data clearly indicate the monodentate coordination of the NO3− ion. The sum of the three O-N-O bond angles in the nitrato ligand is 360.0°, indicating that the nitrato moiety has a planar structure. Extensive studies have been made of metal complexes of carboxylic acids. The carboxylate ion may also coordinate to a metal in a monodentate (CO2R-O) or bidentate (CO2R-O,O') mode. In general, the coordination mode of the carboxylate ion is distinguishable using vibrational spectroscopy (Deacon & Phillips, 1980). Monodentate complexes exhibit ∆ν values (νas(CO2−) − νs(CO2−)) that are much greater than those of ionic compounds, whereas chelating (bidentate) complexes exhibit ∆ν values that are significantly less than the ionic values. For example, the cis-[Ru(bpy)2(CO)(CO2H)]+ complex shows a large formato ∆ν value of 339 cm−1 (Gibson et al., 1999), clearly greater than the ionic value (∆ν = 201 cm−1) (Ito & Bernstein, 1956). X-ray structural data for this complex clearly identified a monodentate formato ligand (CO2H-O) bound to an octahedral ruthenium center (Gibson et al., 1999). As stated above, the azopyridine ligand (3) generally coordinates to a metal ion through both the pyridyl and azo nitrogens (bidentate chelate fashion). However, the crystal structure of a ruthenium complex containing 3 as a monodentate ligand ([Ru(bpy)2(3-N)(CO)]2+) has been reported (Fig. 19(a)) (Oyama, Fujita & Yui, 2008). The ligand 3 coordinates to the ruthenium center through only the pyridyl nitrogen atom. The remarkable feature of the structure of this complex concerns the azo moiety of 3, which is directed toward the adjacent terminal carbonyl ligand. Taking into account the fact that the nitrogen atom of the azo group is located just to the side of the carbonyl carbon, the existence of the CO ligand could influence the orientation of the azo moiety of 3. The redox reactions of [Ru(bpy)2(3-N)(CO)]2+, followed by solution IR spectra under electrolysis conditions, revealed a nucleophilic attack of the azo nitrogen atom of 3 on the carbonyl carbon (Fig. 20(a)), because of the short interatomic distance between the azo nitrogen and the carbonyl carbon atoms (2.865 Å). In contrast, the dissociation of CO from the two-electron reduced species easily occurs in the analogue with bidentate 3 ([Ru(tpy)(3-N,N')(CO)]2+; Fig. 19(b)), because the ligand is unable to interact with CO (Fig. 20(b)). The diphenyl-2-phosphinopyridine ligand (dppy; 13) has been used to construct interesting metal complexes because it possesses both soft (P) and hard (N) donor atoms (Newkome, 1993). Although both the P-mono and P,N-bidentate coordination modes are known for mononuclear systems (Fig. 21)
(Moldes et al., 1998), 13 coordinates to the ruthenium center through only the P atom in [Ru(bpy)(13)(CO)2Cl]+ (Ooyama & Sato, 2003). A prominent feature of the structure is that the pyridyl nitrogen atom of 13 is directed toward the plane that includes the two carbonyl ligands (Fig. 22(a)). The existence of the carbonyl ligands may influence the orientation of the aromatic rings of 13, because relatively short distances (3.068 Å) are observed between the noncoordinating nitrogen atom of 13 and the carbonyl carbons: the analogous complex with two 13 moieties in mutually trans positions exhibits a similar trend, as shown in Fig. 22(b) (Ooyama & Sato, 2004).
Comparison of the structures and structural parameters
When chemists consider the molecular structure of a compound, they usually pay particular attention to the points described above. However, it is also very helpful for chemists to understand how intermolecular forces (packing forces) act between molecules in crystals when they investigate the construction of materials and medicines. In addition, various chemical and physical properties can be predicted by comparing bond parameters (e.g., bond distances, angles or torsion angles) in a compound.
Comparison of structures
Almost all structurally characterized coordination compounds of azopyridines have the ligand bonded to one metal center via the azoimine (N=N-C=N) chelate arrangement. In the case of 3 and 4, formation of a chelate complex creates a free aromatic ring (phenyl for 3 or 2-pyridyl for 4). The orientation of the pendant aromatic ring depends on its coordination environment: the dihedral angle between the two aromatic rings of [Ru(3)(CO)2Cl2] is 35.29° (Fig. 23(a)), whereas the dihedral angle between the two pyridyl rings in [Ru(4)(CO)2Cl2] is 7.99° (Fig. 23(b)), which shows the planarity of the coordinated 4. The distinction observed in the solid state can be explained on the basis of a possible weak interaction between the noncoordinating nitrogen atom and the adjacent carbonyl carbon in [Ru(4)(CO)2Cl2], rather than a difference in bulkiness between the CH of the phenyl ring of 3 and the N of the pyridyl ring of 4, because the interatomic distance between the N atom and the adjacent C atom in [Ru(4)(CO)2Cl2] is fairly short (2.682(3) Å). This weak interaction has often been observed in ruthenium carbonyl complexes with noncoordinating pyridyl moieties (Mizukawa et al., 1999), which contain both a pyridyl nitrogen (δ−) and a carbonyl carbon (δ+). The X-ray structures of terpyridyl complexes with a dimethoxyphenyl pendant have been determined. In the metal-free ligand 14 (Fig. 24), the three pyridyl rings are approximately coplanar, whereas the dimethoxyphenyl substituent is not coplanar with the terpyridyl moiety, making an angle of 50.2° with the central pyridyl ring (Storrier, Colbran & Craig, 1998). The dihedral angles between the central pyridyl ring and the dimethoxyphenyl ring in the ruthenium complexes with 14 are in the range of 43 to 56°, close to that of the metal-free ligand 14. Interestingly, the orientations of the two methoxy (OCH3) moieties differ between the two crystal habits (yellow needles and red blocks) of [Ru(14)(bpy)(CO)](PF6)2. As shown in Fig. 25, the two OCH3 moieties point in the same direction in the yellow crystal (Fig. 25(a)), whereas in the red crystal they point in opposite directions (Fig. 25(b)). The former structure is identical to those of the corresponding chlorido and acetonitrile complexes. On the other hand, the latter form is consistent with that observed in metal-free 14. These differences are most likely due to different packing or intermolecular effects on the orientation of the methoxy groups of 14.
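Dihedral angles such as those quoted above are obtained from the crystallographic coordinates as the angle between the least-squares planes of the two rings. A small sketch using SVD-derived plane normals (function names are illustrative; each ring is an N×3 array of atomic coordinates):

```python
import numpy as np

def plane_normal(coords):
    """Unit normal of the least-squares plane through a set of atoms."""
    centered = coords - coords.mean(axis=0)
    # The right singular vector of the smallest singular value is the normal.
    return np.linalg.svd(centered)[2][-1]

def dihedral_between_rings(ring_a, ring_b):
    """Angle (degrees) between the mean planes of two rings, in [0, 90]."""
    cos_t = abs(np.dot(plane_normal(ring_a), plane_normal(ring_b)))
    return np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))
```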
Comparison of structural parameters
Physical data of terminal carbonyl groups are generally useful indicators of the electronic states around metal centers in metal complexes. In particular, bond distances and vibrational spectra are considered important (Cotton et al., 1999). In order to distinguish the electronic states in ruthenium monocarbonyl complexes with polypyridyl ligands, the stretching force constants of the CO group were calculated from IR spectral data, in addition to the bond distances and stretching frequencies of the {Ru-CO}2+ moiety. Although no obvious trends in the bond distances (Ru-C and C-O) are seen among the complexes, the force constants (k) and the stretching frequencies of the CO groups can clearly be divided into two groups (Table 1): the k values of the azo-containing complexes are higher by ca. 0.4-1.1 N cm−1 than the values for the complexes without azopyridines. It is presumed that the electron density of the ruthenium center decreases considerably when an azo group (which has greater π-acidity) coordinates to the ruthenium atom, because the decrease in the k value caused by a ligand-based one-electron reduction corresponds to 0.56 N cm−1 in [Ru(bpy)2(CO)(quinoline)]2+ (Wada et al., 2004). Thus, it can be concluded that complex formation with azopyridyl ligands induces relatively high k values. This observation can be ascribed to the decrease of π back-donation from the ruthenium center to the CO group. As a result, this suggests that the azopyridyl ligands serve as prominent electron reservoirs compared with other polypyridyl ligands such as bpy. * 4,4'-Me2bpy = 4,4'-dimethyl-2,2'-bipyridine; 5,5'-Me2bpy = 5,5'-dimethyl-2,2'-bipyridine; 5,6'-Me2phen = 5,6'-dimethyl-1,10-phenanthroline; 4,7'-Me2phen = 4,7'-dimethyl-1,10-phenanthroline; dpk = di(2-pyridyl)ketone; dpa = di(2-pyridyl)amine; phen = 1,10-phenanthroline; biq = 2,2'-biquinoline. The N-N bond distance in an azo moiety is an excellent indicator of the charge on the azo group (Kaim, 2001). The values for unreduced azo N-N bonds are 1.22-1.31 Å in metal complexes (1.23-1.26 Å for metal-free ligands). The one-electron reduced (anion radical) ligands have bond distances of 1.31-1.41 Å, whereas the two-electron reduced (hydrazido) forms have single bonds with 1.41-1.50 Å distances (Fig. 26) (Sarkar et al., 2008). As shown in Table 2, the azo N-N bond distance of [Ru(bpy)2(CO)(3-N)]2+ is dramatically shorter than those of the other complexes: the distance (1.188(4) Å) is even shorter than that of a typical N=N double bond (1.23 Å) (Oyama, Fujita & Yui, 2008). In the case of bis-azopyridyl complexes, the bivalent character of the azopyridines is revealed by the different azo N-N bond distances. For example, the structure of the monoradical [Ru(4)(4•−)(CO)(PPh3)]+ has been determined (Shivakumar et al., 2000). The most remarkable feature of the structure is the difference between the two N-N bond distances: one is 1.284(6) Å, the other is 1.336(6) Å. Accordingly, both the unreduced and the one-electron reduced (anion radical) forms are present within the same molecule, which represents a model case of ground-state radical localization on one of the two ligands. The stabilization of the radical on one ligand might be supported by the trans ligand. A similar example is also reported for [Ru(7²⁻)2] (Samanta et al., 2008).
The [Ru(7)2] molecule has longer N-N bond distances (1.324(7) and 1.327(6) Å), clearly suggesting the radical dianion form of the ligand (one charge from the deprotonation and another from the one-electron reduction centered on the azo group). This diamagnetic molecule may thus be viewed as a singlet diradical species. In this situation, the electrons of a d4 configuration at the metal couple with the singly occupied ligand molecular orbitals to create a spin-paired entity. This picture has been confirmed by DFT (density functional theory) calculations.
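Returning to the CO stretching force constants discussed above (Table 1): treating the CO group as an isolated harmonic diatomic oscillator, the force constant follows from the stretching wavenumber as k = μ(2πcν̃)². A sketch of the conversion (this simple diatomic approximation is an assumption; the chapter's values may come from a more elaborate normal-coordinate treatment):

```python
import numpy as np

C = 2.998e10          # speed of light, cm s^-1
AMU = 1.6605e-27      # atomic mass unit, kg

def co_force_constant(wavenumber_cm):
    """Force constant (N/cm) of a C-O oscillator from its IR wavenumber."""
    mu = (12.0 * 15.995) / (12.0 + 15.995) * AMU      # reduced mass, kg
    k_si = mu * (2 * np.pi * C * wavenumber_cm) ** 2  # N m^-1
    return k_si / 100.0                               # N cm^-1

# A typical terminal nu(CO) near 1980 cm^-1 gives k of roughly 15.8 N cm^-1.
print(round(co_force_constant(1980.0), 1))
```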
Conclusion
Single crystal X-ray diffraction studies provide a valuable probe to visualize molecules.
Although some problems such as disorder and twinning exist in measurements and analyses, it still represents the most important analytical method for coordination chemists. The author believes that advances in this technology will lead to an increase in the use of single-crystal X-ray diffraction, including the X-ray snapshot technique, which enables the capture of frame-by-frame movies of chemical reactions as they proceed in situ, a kind of monitoring already ubiquitous in NMR (Inokuma, Kawano & Fujita, 2011).
Acknowledgment
The endeavors of past and present co-workers from my research group in obtaining the results presented in this chapter are gratefully acknowledged; their names appear in the reference list. The Institute for Molecular Science, Bruker AXS Japan and Rigaku Corporation are gratefully acknowledged for performing X-ray measurements of some complexes. Finally, I thank Mr. Takashi Yamanaka for his contribution in writing this chapter.
"Chemistry"
] |
Theoretical and experimental investigation of a 1:3 internal resonance in a beam with piezoelectric patches
Experimental and theoretical results on the nonlinear dynamics of a homogeneous thin beam equipped with piezoelectric patches, presenting internal resonances, are provided. Two configurations are considered: a unimorph configuration composed of a beam with a single piezoelectric patch and a bimorph configuration with two collocated piezoelectric patches symmetrically glued on the two faces of the beam. The natural frequencies and mode shapes are measured and compared with those obtained by theoretical developments. Ratios of frequencies highlight the realization of 1:2 and 1:3 internal resonances, for both configurations, depending on the position of the piezoelectric patches on the length of the beam. Focusing on the 1:3 internal resonance, the governing equations are solved via a numerical harmonic balance method to find the periodic solutions of the system under harmonic forcing. A homodyne detection method is used experimentally to extract the harmonics of the measured vibration signals, on both configurations, and exchanges of energy between the modes in the 1:3 internal resonance are observed. A qualitative agreement is obtained with the model.
Introduction
Piezoelectric (PZT) materials constitute an efficient means of coupling mechanical vibrations to an electrical circuit. Several applications are usually targeted, such as micro/nano electromechanical systems (Bhugra and Piazza, 2017; Brand et al., 2015), vibration control (Bricault et al., 2019; Collet et al., 2008; Preumont, 2011; Soltani and Kerschen, 2015), and energy harvesting (Erturk and Inman, 2011; Jacquelin et al., 2011; Mam et al., 2016). In most of these cases, even if linear systems are traditionally considered for their simplicity, taking advantage of nonlinearities is of the greatest interest (Cao et al., 2015). Among others, in vibration control and energy harvesting, the synchronized switch strategies (see, e.g. Richard et al. (1999), Ducarne et al. (2012) and Lallart (2016)) are intrinsically nonlinear because of the electronic switching between two electromechanical states synchronized with the structure's oscillations. When a primary structure is nonlinear, fully passive PZT nonlinear tuned vibration absorbers, which follow its change of frequency with increasing vibration amplitude, have recently been proposed (Lossouarn et al., 2018). Another concept, the nonlinear energy sink, for which the absorber is intrinsically nonlinear, has been realized with a PZT shunt (Silva et al., 2018; Zhou et al., 2014), thanks to nonlinear electrical circuits.
PZT materials are usually glued on elastic structures, and both can present nonlinear behaviors. PZT material nonlinearities have been studied to understand their dynamics (Abdelkefi et al., 2012; Guyomar et al., 1997, 2011; Parashar and Wagner, 2004; Von Wagner and Hagedorn, 2002; Wolf and Gottlieb, 2001), but although deriving a proper, thermodynamically consistent nonlinear PZT law has already been addressed, obtaining the values of the coefficients for a practical application still seems to be an open field of investigation. Moreover, the aforementioned publications use a classical electric enthalpy function that is a high-order (smooth) polynomial in the strain, whereas in Leadenham and Erturk (2015), a low-order function including the absolute value of the strain (thus nonsmooth) is proposed and seems to be closer to what is measured in practice. On the other hand, thin structures are well known to present geometric nonlinearities, especially cantilever beams, which have been thoroughly studied in the past (Anderson et al., 1994, 1996; Crespo Da Silva, 1988; Pai and Nayfeh, 1990). In this work, only the geometrical nonlinearities of the beam are taken into account, whereas the PZT material constitutive law is considered linear.
Contrary to linear systems, nonlinear oscillations can present mode coupling(s) and energy exchange(s) between different modes due to internal resonances. This phenomenon can be used for passive control and/or energy harvesting (Mook et al., 1985). The creation of internal resonances in nonlinear systems is influenced by their degrees of nonlinearity. Let us suppose ωq, ωr, and ωk are resonant frequencies of a system, where q, r, and k represent integers corresponding to specific modes. For a system with cubic nonlinearities, an internal resonance can occur if ωq ≈ ωr, ωq ≈ 3ωr, or ωk ≈ |±2ωq ± ωr|. In a system with additional quadratic nonlinearities, besides the aforementioned conditions, an internal resonance can emerge if ωq ≈ 2ωr or ωk ≈ ωq ± ωr (Nayfeh, 1979; Nayfeh and Balachandran, 1989). When an internal resonance is reached, a strong energy exchange between the modes can arise, depending on the nonlinear coefficients of the system. The 1:3 internal resonance has been studied for plates (Sun et al., 2018; Zhang and Guo, 2012), MEMS (Czaplewski et al., 2019; Houri et al., 2019; Ramini et al., 2016), buckled beams (Emam and Nayfeh, 2013), and clamped-clamped beams (Ghayesh et al., 2012; Özkaya et al., 2008), demonstrating bifurcations and chaotic behaviors. Garg and Dwivedy (2019) used a parametrically excited beam with PZT materials, with the 1:3 internal resonance being achieved by an added mass.
The aim of this work was to investigate experimentally the behavior of an elastic homogeneous beam equipped with PZT patches presenting a 1:3 internal resonance between modes. Two configurations are examined: a unimorph configuration with only one PZT patch on one side of the beam, and a bimorph configuration for which two PZT patches are placed symmetrically on each side of the beam, as shown in Figure 1. As will be observed, both configurations theoretically present quadratic and cubic nonlinearities; thus, 1:2 or 1:3 internal resonances can allow energy exchanges between modes. From the theoretical developments, it will be deduced that the position of the PZT patches on the beam can be used to tune the natural frequencies of some modes to create resonances such as ω3 ≈ 2ω4 and ω3 ≈ 3ω2, to achieve 1:2 and 1:3 internal resonances. No other internal resonances were identified between at least the first four modes. The focus of this study was to investigate the behavior of the systems when the 1:3 internal resonance occurs between the second and the third modes of the structures. The added mass and stiffness of the PZT patches are used to tune the natural frequencies of the structure into a 1:3 internal resonance relation to study the exchanges of energy between the two involved modes. The PZT patches are not used as sensors or actuators because this study is a preliminary work investigating the positive use of nonlinearities to favor energy exchanges, before using the structure as an active beam for energy harvesting and vibration damping applications.
The article is organized as follows: the experimental setup and methodology are first presented. Then, modal analyses of the systems are carried out and experimental results are compared with those obtained from an analytical model. Periodic solutions under harmonic excitation of the system, in the case of 1:3 internal resonance, are obtained both theoretically and experimentally. Finally, conclusions are provided.
Experimental methodology
Experiments were performed on three different systems: a homogeneous beam without PZT patches, a unimorph configuration and a bimorph configuration (Figure 1). For the two PZT configurations, the PZT materials are glued between x = x₁ and x = x₂, measured from the clamping. L_b denotes the length of the beam and Δ = L_b − x₂ is the distance between the end of the PZT patches and the free end of the beam. The mechanical properties of the homogeneous beam, the PZT material (a PIC151 PZT material from PI Ceramic) and the epoxy glue (from Mam et al. (2016)) are given in Table 1.
Experimental setup
The apparatus is shown in Figure 2. The vibrating beam was fixed on the head of a shaker (Brüel & Kjaer 4808) thanks to a homemade clamping system. An accelerometer was glued on this clamping system to measure the base acceleration prescribed to the structures. A scanning laser vibrometer (Polytec PSV-400) was used to measure the velocity at several points of the vibrating beam. The shaker was driven by a power amplifier, and a computer with input/output acquisition cards was used to generate the input driving signal and to record the velocity and acceleration signals.
2.1.1. Detection of the mode shapes and natural frequencies.
From the experimental setup, an experimental modal analysis was performed. First, we measured the frequency response functions (FRFs) between the velocity at several points on the beam and the base acceleration. We first investigated the influence of the position of the PZT patches on the natural frequencies and mode shapes, to verify whether it is possible to tune the natural frequencies to achieve the 1:2 or 1:3 internal resonances of equation (1), that is, ω₄ ≈ 2ω₃ and ω₃ ≈ 3ω₂. The first four natural frequencies were detected for different lengths L_b of the homogeneous beam. In practice, the PZT patches are glued at a fixed distance Δ from the free end of the beam, and we changed the position of the beam in the clamping system, thus modifying L_b (see Figure 1). Because the length L_p = x₂ − x₁ of the PZT patches is fixed, x₁ and x₂ change according to x₁ = L_b − Δ − L_p and x₂ = L_b − Δ. For the unimorph configuration, Δ = 60 mm and the length L_b of the beam was increased from 110 mm to 170 mm in steps of 5 mm. For the bimorph configuration, Δ = 70 mm and the length L_b of the beam was increased from 120 mm to 170 mm in steps of 5 mm. In both cases, the length of the PZT patches is L_p = 50 mm. Around the particular lengths for which the 1:2 and 1:3 internal resonance frequency ratios of equation (1) were obtained, the step was reduced to 1 mm. To obtain the mode shapes, the FRFs were measured at several points along the length of the beam, on a line of points in the middle of the upper surface. Then, the operational deflection shapes at resonances were plotted as an estimation of the mode shapes.
2.1.2. Forced response measurements. We were interested in the velocity response of the system when excited by a harmonic base acceleration of frequency Ω. In our studies, the response of the structure was periodic, and we used a homodyne detection to measure the amplitude and the phase of several harmonics of the signals at fixed excitation frequencies. The structure was excited at a given amplitude of acceleration at several successive frequencies Ω, stepped in a frequency band of interest. Because of the retroaction of the vibrating structure on the shaker, a control loop was used to keep the excitation amplitude constant during operations. For each frequency measurement, a time delay was prescribed to wait for the end of the transient and to reach a steady state. Then, the amplitude of each harmonic component was computed from the velocity signal of the laser vibrometer with the homodyne detection strategy, which consists in multiplying the velocity signal by sin(hΩt) and cos(hΩt) functions and taking the average of the result to extract the amplitude of the hth harmonic (see, e.g., Monteil et al. (2015) and Denis et al. (2018) for details).
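A minimal sketch of this homodyne detection (an assumed implementation, not the authors' acquisition code) is given below; averaging over an integer number of excitation periods minimizes leakage:

```python
import numpy as np

def harmonic_amplitude(signal, fs, f_exc, h):
    """Amplitude and phase of the h-th harmonic of a steady-state signal
    sampled at fs (Hz), for an excitation frequency f_exc (Hz)."""
    t = np.arange(len(signal)) / fs
    s = 2.0 * np.mean(signal * np.sin(h * 2 * np.pi * f_exc * t))
    c = 2.0 * np.mean(signal * np.cos(h * 2 * np.pi * f_exc * t))
    return np.hypot(s, c), np.arctan2(c, s)

# Example: synthetic velocity signal with first and third harmonics.
fs, f0 = 10_000.0, 50.0
t = np.arange(int(fs / f0) * 20) / fs                # 20 excitation periods
v = 1.0 * np.sin(2 * np.pi * f0 * t) + 0.3 * np.sin(3 * 2 * np.pi * f0 * t)
print(harmonic_amplitude(v, fs, f0, 1))              # ~ (1.0, 0.0)
print(harmonic_amplitude(v, fs, f0, 3))              # ~ (0.3, 0.0)
```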
Because we focus in this article on a 1:3 internal resonance between the second and third modes of the structure, with a direct driving of the lower mode, the frequency bandwidth is defined around the second natural frequency ω₂. Then, to measure precisely the response of each of the involved modes (we assume that the response of the structure is the modal superposition of only the two modes involved in the internal resonance), two experiments were performed one after the other for each frequency measurement. To obtain the response of the second mode, the laser was first pointed at a vibration node of the third mode shape; second, the response of the third mode was obtained by pointing the laser at a vibration node of the second mode shape.

Table 1. Mechanical properties of the homogeneous beam (experimentally identified), the piezoelectric material PIC151 (from Thomas et al. (2009)) and the epoxy glue (from Mam et al. (2016)):
Beam: 7810 / 179 / 0.5 / 13
PIC151: 8500 / 66.7 / 0.5 / 10
Epoxy glue: 8000 / 5 / 0.01 / 10
Theoretical modeling
We consider the cantilever beams with PZT patches sketched in Figure 1. We denote by b × h the width and thickness of the elastic layer cross-section and by b_p × h_p those of the PZT patches. Because of the clamped/free boundary conditions, we assume that, in large amplitude vibrations, the beam remains inextensible (Dowell and McHugh, 2016). This assumption may be broken by the PZT patches, in particular in the asymmetric configuration, because their mode of operation is extension/compression. However, we assume that this effect is small because the axial stiffness of the beam is much larger than its bending stiffness, and also because we are interested in bending vibrations, which are assumed to be uncoupled from axial vibrations due to the clamped/free boundary conditions. Parametric excitation is also not considered. Moreover, we assume a linear PZT constitutive law, mostly because no numerical values for the nonlinear parameters are available in the literature. As shown in the Appendix 1 section and in Guillot et al. (2019), the axial/bending coupling due to an asymmetry of the lamination of the beam (this is the case for the unimorph configuration (Ducarne et al., 2012)) is exactly canceled by the inextensibility property.
The governing electromechanical equations of the system, involving the transverse displacement v(x, t), where t is the time, and the voltage V(t) across the PZT patches, can be written as in Ducarne et al. (2012) and Guillot et al. (2019) (equations (2) and (3)).
Equation (2) represents the dynamical equilibrium of the beam and equation (3) the electrical state of the PZT material.
In the aforementioned equations, m(x) is the mass per unit length, D(x) is the bending stiffness, p(x, t) is an external force per unit length, Q(t) is the electric charge contained in one of the electrodes of the PZT patches and δ(x) is the Dirac δ function. For the unimorph configuration, the PZT coupling coefficient is Θ = e₃₁b_p(h + h_p)/2 and the electric capacitance is C = ϵ₃₃b_p(x₂ − x₁)/h_p, where e₃₁ is the PZT constant and ϵ₃₃ is the dielectric permittivity of the PZT material (Ducarne et al., 2012), see Table 2. For the bimorph configuration, because the two collocated patches are in series, Θ is the same and the capacitance C is half that of the unimorph configuration (Ducarne et al., 2012).
3.1. Tuning of the natural frequencies for 1:2 and 1:3 internal resonance

The linear part of the governing equation (2) is now considered in short circuit (V = 0),

m(x)v̈(x, t) + [D(x)v″]″ = 0   (4)

and the associated eigenmodes of the stepped beams (the mass and stiffness additions of the PZT patches are included) are computed, as explained in Ducarne et al. (2012) and Guillot et al. (2019). To be rigorous, in the case of the unimorph configuration, which has a nonsymmetric lamination, the linear axial/bending coupling, not present in equation (4) because of the inextensibility condition, is included in those computations, as explained in Ducarne et al. (2012). The natural frequencies and mode shapes are computed for several values of the beam length L_b and of the placement of the PZT patches, defined by Δ (see Figure 1). The length of the PZT patches is fixed at L_p = x₂ − x₁ = 50 mm. Because the bending inertia and stiffness depend on the location of the PZT patches on the beam, the natural frequencies depend on the patches' positions, and one can find values of (L_b, Δ) for which the relations (1) are fulfilled, to create a 1:2 internal resonance between mode 4 and mode 3 or a 1:3 internal resonance between mode 3 and mode 2. To illustrate these results, the maps of the ratios ω₄/ω₃ and ω₃/ω₂ are plotted as functions of (L_b, Δ) in Figure 3. They have been computed with the parameters of the beam gathered in Table 1, without considering the epoxy layer. Those plots have to be considered as a qualitative first insight into the tuning of the natural frequencies, before a precise experimental tuning, as presented in the next section. For the unimorph configuration and Δ = 60 mm, the special lengths L_b = 142 mm and L_b = 129 mm allow reaching the 1:2 and 1:3 internal resonances, respectively. As for the bimorph configuration, we have Δ = 70 mm, and the corresponding special lengths can be read in Figure 3.

Table 2. Piezoelectric and dielectric constant parameters of the piezoelectric material PIC151 (from Thomas et al. (2009)), with ε₀ = 8.854 × 10⁻¹² F/m.
Periodic solutions for the 1:3 internal resonance
We now discuss the full nonlinear equations of motion. Because we target a 1:3 internal resonance between the second and the third modes, we assume that the response of the system in the steady state, under harmonic forcing of frequency Ω, is mainly governed by those two modes. Only the second and third modes are thus supposed to respond, neglecting the effects of other possible modes, such as the first mode, which is not resonant. We thus expand v(x, t) on the two corresponding mode shapes f₂(x) and f₃(x):

v(x, t) = f₂(x)r₂(t) + f₃(x)r₃(t)   (5)

where r₂(t) and r₃(t) are the unknown modal coordinates of the second and third modes, respectively. Assuming an open-circuit condition (the current is equal to zero, which leads to Q = 0, see Trindade and Benjeddou (2009) and Ducarne et al. (2012)), introducing equation (5) into (2) and (3), multiplying the result successively by f₂(x) and f₃(x), and finally using the orthogonality properties of the modes, one obtains

r̈₂ + 2μ₁ṙ₂ + ω₂²r₂ + N₂r₂³ + N₃r₂²r₃ + N₄r₂r₃² + N₅r₃³ + N₆r₂(r₂²)¨ + N₇r₂(r₂r₃)¨ + N₈r₂(r₃²)¨ + N₉r₃(r₂²)¨ + N₁₀r₃(r₂r₃)¨ + N₁₁r₃(r₃²)¨ + N₁₂V = F₂ cos(Ωt)

r̈₃ + 2μ₂ṙ₃ + ω₃²r₃ + M₂r₂³ + M₃r₂²r₃ + M₄r₂r₃² + M₅r₃³ + M₆r₂(r₂²)¨ + M₇r₂(r₂r₃)¨ + M₈r₂(r₃²)¨ + M₉r₃(r₂²)¨ + M₁₀r₃(r₂r₃)¨ + M₁₁r₃(r₃²)¨ + M₁₂V = F₃ cos(Ωt)   (6)

where (·)¨ denotes the second time derivative. Modal linear damping terms of coefficients μ₁ and μ₂ have been added in the aforementioned equations. Moreover, the analytical expressions of all the Nᵢ, Mᵢ, i = 2, …, 12 coefficients can be found in Guillot et al. (2019). They are here computed according to the geometrical and material parameters of the systems under study. Finally, F₂ and F₃ are the modal forcing amplitudes, which are both nonzero because the base acceleration driving is equivalent to a uniform force per unit length p(x, t) = −mγₑ cos(Ωt) (where γₑ is the amplitude of the base acceleration prescribed by the shaker) and, thus, not orthogonal to the mode shapes. The modal damping coefficients μᵢ = ξᵢωᵢ are experimentally estimated with the modal damping factors ξᵢ, computed with the −3 dB bandwidth Δωᵢ at resonance (ξᵢ = Δωᵢ/(2ωᵢ)), in the case of the lowest amplitude excitation for which no nonlinear coupling occurs. The coefficients Nᵢ and Mᵢ of equation (6) were calculated from their analytical expressions deduced from equations (2) and (3) and the known mechanical and electrical values of Tables 1 and 2.

Figure 3. Theoretical map of the realization of the 1:2 and 1:3 internal resonances. Plots of the frequency ratios ω₄/ω₃ and ω₃/ω₂ as functions of the beam length L_b and the position of the piezoelectric patches Δ, for the unimorph and bimorph configurations. The solid black lines give the 1:2 and 1:3 contour lines. In those computations, the piezoelectric patches are supposed to be perfectly glued on the elastic layer, without the additional epoxy layer.
The periodic solutions of this system, in the steady state, are numerically computed by a higher-order harmonic balance method coupled to a continuation method (the asymptotic numerical method), implemented in the software Manlab (Arquier et al., 2005). Because of the targeted 1:3 internal resonance, we are mainly interested in the first and third harmonics of the second and third mode modal coordinates r₂(t) and r₃(t), respectively, for different forcing amplitudes. Theoretically, because the nonlinear forces in equation (6) are odd, all even harmonics are zero in the periodic responses (they can be obtained after a symmetry-breaking bifurcation, which has not been observed here). Thus, in the theoretical results, the second harmonic of the modal coordinates is equal to zero and is not displayed (see Figures 4 and 5). The displacement v(x, t) can be reconstructed from equation (5).
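As a complementary check, the steady state can also be approached by direct time integration. The sketch below (not the paper's code) integrates a simplified version of equations (6), keeping only the stiffness-type cubic terms; the inertia-type terms N₆–N₁₁, M₆–M₁₁ and the piezoelectric coupling are dropped for brevity, and all numerical values are placeholders, not the identified coefficients:

```python
import numpy as np
from scipy.integrate import solve_ivp

w2, w3 = 2 * np.pi * 88.0, 2 * np.pi * 264.0     # tuned so that w3 ~ 3*w2
mu1, mu2 = 0.005 * w2, 0.005 * w3                # modal damping mu_i = xi_i*w_i
N2, N3, N4, N5 = 1e9, 3e9, 3e9, 1e9              # placeholder cubic coefficients
M2, M3, M4, M5 = 1e9, 3e9, 3e9, 1e9
F2, F3, Om = 5.0, 2.0, w2                        # drive near the second mode

def rhs(t, y):
    r2, v2, r3, v3 = y
    f = np.cos(Om * t)
    a2 = F2*f - 2*mu1*v2 - w2**2*r2 - N2*r2**3 - N3*r2**2*r3 - N4*r2*r3**2 - N5*r3**3
    a3 = F3*f - 2*mu2*v3 - w3**2*r3 - M2*r2**3 - M3*r2**2*r3 - M4*r2*r3**2 - M5*r3**3
    return [v2, a2, v3, a3]

T = 2 * np.pi / Om
sol = solve_ivp(rhs, (0.0, 300 * T), [0.0, 0.0, 0.0, 0.0], max_step=T / 50)
r2_ss = sol.y[0][sol.t > 250 * T]                # steady-state portion
print("max |r2| in steady state:", np.abs(r2_ss).max())
```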
To look at each mode independently, as explained in the experimental methodology section, the motion of the beam is measured at two positions successively: first, at the node of the third mode shape (x = x_f3), to measure the contribution of mode 2, and then at the node of the second mode shape (x = x_f2), to investigate the contribution of mode 3. Thus, from equation (5), the measured displacements are

v(x_f3, t) = f₂(x_f3)r₂(t),   v(x_f2, t) = f₃(x_f2)r₃(t)   (9)

For the unimorph configuration, the nodes of the second and third modes closest to the tip end are located at x_f2 ≈ 99 mm and x_f3 ≈ 113 mm, respectively; for the bimorph configuration, we have x_f2 ≈ 114 mm and x_f3 ≈ 133 mm.
In addition, because the excitation bandwidth is defined around the second mode and a 1:3 internal resonance exists between ω₃ and ω₂, the third harmonic of the response lies close to ω₃. The evolutions of the amplitudes of the different harmonics of v(x = x_f2, t) and v(x = x_f3, t) versus Ω are presented in Figures 4 and 5. Because of the 1:3 internal resonance, one would expect the first harmonic of the low-frequency (LF) mode 2 and the third harmonic of the high-frequency (HF) mode 3 to be the dominant harmonics, in the case of a strong transfer of energy from mode 2 to mode 3. On the contrary, those simulations show that the effect of the 1:3 internal resonance, in terms of energy transfer between the modes, is not as strong as in other cases (see, e.g., Cao et al. (2015), Thomas et al. (2003, 2007) and Monteil et al. (2015)). This tends to prove that, for cantilever beams in direct bending driving, a 1:3 internal resonance between two modes (tuned here by the structural effect of the PZT patches) does not lead to a strong energy exchange. To the authors' knowledge, this result is original because no previous work addressed the 1:3 internal resonance between two modes of a cantilever beam. However, further investigations are needed to fully understand what has to be done on the system to boost the exchanges of energy. The present case of a cantilever beam is rendered complex by the presence of 20 resonant cubic terms in the oscillators (6), which a priori all have an influence on the energy exchange. Finally, these results can probably be explained by considering that the clamped/free boundary conditions lead to very small geometrically nonlinear effects (the first mode, for instance, is slightly hardening), as compared, for instance, with clamped-clamped beams or 2D structures, such as plates and shells, for which internal resonances have very strong effects (Amabili, 2008; Monteil et al., 2015; Thomas et al., 2003, 2007). No experiment was performed on the beam without PZT materials because, for a homogeneous cantilever beam, no 1:3 ratio of the natural frequencies is theoretically observed, which prevents any internal resonance from easily occurring.
Experimental investigation and comparison to theory
4.1. Tuning of the natural frequencies
With the protocol explained in the previous sections, we first measured the natural frequencies of the homogeneous beam as a reference. By plotting the results as a function of the theoretical frequencies of a cantilever beam, ω_k = β_k²(h/L²)√(Y/(12ρ)) (with β₁ = 1.875, β₂ = 4.694, β₃ = 7.855, …), it is possible to estimate the value of the Young's modulus Y of the stainless steel because all the other parameters are known (the geometry has been measured and the mass density ρ has been obtained by weighing the beam). We obtained Y = 179 GPa. Then, we measured ω₂, ω₃ and ω₄ for several values of the beam length, for the two PZT beam configurations (unimorph and bimorph). Those values are compared with the theoretical ones in Figure 6. A good agreement is obtained between experiments and theory, better for the unimorph configuration than for the bimorph one. The effect of the epoxy layer has also been investigated and is found to slightly change the natural frequency values. The discrepancies can be explained by usual experimental characteristics not taken into account in the model: a nonperfect clamping (the clamping device, made of steel, is not infinitely rigid), some material values not experimentally identified (the material characteristics of the PZT and the epoxy layer, see Table 1), the Euler-Bernoulli assumption of a rigid cross-section without transverse shear, etc. All those characteristics could be verified with numerical computations (with a commercial finite-element code) but are out of the scope of this study because our interest is in the experimental tuning of the natural frequencies.
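A minimal sketch of this identification step is given below; the measured frequencies are illustrative, not the paper's data, and the fit follows the stated formula ω_k = β_k²(h/L²)√(Y/(12ρ)):

```python
import numpy as np

# Linear fit of omega_k against beta_k^2 gives the slope
# s = (h/L^2)*sqrt(Y/(12*rho)), hence Y = 12*rho*(s*L^2/h)^2.
beta = np.array([1.875, 4.694, 7.855])       # clamped-free eigenvalue roots
L, h, rho = 0.170, 0.5e-3, 7810.0            # length (m), thickness (m), kg/m^3

f_meas = np.array([13.4, 83.9, 234.8])       # illustrative frequencies (Hz)
omega = 2 * np.pi * f_meas
s = np.linalg.lstsq(beta[:, None]**2, omega[:, None], rcond=None)[0].item()
Y = 12 * rho * (s * L**2 / h)**2
print(f"estimated Young's modulus: {Y/1e9:.0f} GPa")   # ~179 GPa here
```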
As predicted by Figure 3, there exist particular values of the beam length L_b for which ω₄/ω₃ ≈ 2 and ω₃/ω₂ ≈ 3, so that 1:2 and 1:3 internal resonances are at hand. Precisely, setting L_b = 129 mm for the unimorph configuration (Figure 6(c)) and L_b = 145 mm for the bimorph configuration (Figure 6(d)) leads to the required frequency ratio to favor a 1:3 internal resonance between the second and third modes of the beam. As for the 1:2 internal resonance, the special lengths L_b = 139 mm and L_b = 153 mm allow the desired ratio to occur for the unimorph and bimorph configurations, respectively. The corresponding natural frequency values can be found in Tables 3 and 4. No other internal resonances were found, at special lengths L_b, between the first four modes of the structure.
Mode shapes of the system
The homogeneous beam and both PZT configurations were set at a length L_b = 170 mm, and experiments were conducted to estimate the mode shapes of the three systems. The experimental results are shown in Figure 7 along with the theoretical mode shapes. An excellent agreement between experiments and theory is obtained. One can also observe that the mode shapes show an overall modification (a change of curvature) in the area where the PZT patches are glued, as compared with the mode shapes of the homogeneous beam (without PZT materials). Moreover, this kind of figure is useful to estimate the positions x_f2 and x_f3 of the nodes of modes 2 and 3 for the measurements of the 1:3 internal resonance presented in the following section. Note that the beam lengths L_b used in the following section are not the length shown in Figure 7, because of the required particular tuning of ω₃/ω₂ = 3.
Periodic solutions with the 1:3 internal resonance
Both PZT configurations are set at the special length L_b so that ω₃/ω₂ = 3, to naturally favor the 1:3 internal resonance (L_b = 129 mm for the unimorph configuration and L_b = 145 mm for the bimorph configuration). The system is excited at different levels of amplitude of the base acceleration. We assume that the modes in internal resonance are those which dominate the response of the system, as shown by equation (5). Then, as shown by equation (9), pointing the laser at a node of the third mode (x = x_f3) enables measuring the response of mode 2 only, and pointing the laser at a node of the second mode (x = x_f2) enables measuring the response of mode 3 only. A stepped frequency sweep is then performed in a narrow frequency band around ω₂: we choose to directly drive the LF mode (mode 2) around its resonance and to observe a transfer of energy toward the HF mode (mode 3). The results are shown in Figure 8 for the unimorph configuration and in Figure 9 for the bimorph configuration. From the theoretical analysis, it was shown in Figures 4 and 5 that no strong transfer of energy is expected between the first harmonic (H1) of the LF mode and the third harmonic (H3) of the HF mode. However, the present experimental results show the opposite. For the unimorph configuration (Figure 8), the amplitude of the third harmonic of the HF mode is about five times that of its first harmonic, whereas H3 of the LF mode is negligible with respect to H1, meaning that the dominant harmonic of the HF mode is H3. Moreover, the third harmonic of the HF mode is about 40 times larger than H1 of the LF mode, showing that this third harmonic is not directly excited by the third harmonic of the LF mode. Analogous results are shown in Figure 9 for the bimorph configuration. This confirms that a clear transfer of energy from the LF mode to the HF mode, associated with the 1:3 internal resonance, is at hand.
The fact that the energy transfer is observed experimentally, whereas it is not clearly predicted by the theoretical model, is now addressed. In the theoretical model, only geometrical nonlinearities are considered, with a linear PZT constitutive law. Several works in the past showed that nonlinearities in the PZT constitutive law are experimentally observed (among others, see Guyomar et al. (1997, 2011) for a PZT stack with a 33 effect and Parashar and Wagner (2004) for a sole PZT ceramic). In particular, in the case of cantilever beams with PZT ceramics in a 31 effect (see, e.g., Von Wagner and Hagedorn (2002) and Leadenham and Erturk (2015)), the nonlinear PZT material effect seems to be of the same order of magnitude as the geometrical one, because the first mode of the cantilever beam is observed with a softening nonlinear effect, whereas it is predicted hardening in the case of geometrical nonlinearities only. From a theoretical point of view, it is also shown in Guillot et al. (2019) that a nonlinear PZT constitutive law adds more cubic terms to the model, as well as additional quadratic nonlinearities. However, in practice, some additional work, out of the scope of the present article, has to be carried out to correctly identify experimentally all the parameters of the constitutive law, to obtain a usable theoretical model and compare its results to experiments. The asymmetry of the structure in the transverse direction in the case of the unimorph configuration, canceled by the inextensibility constraint (see Appendix 1), can also be a factor that could change the nonlinear behavior of the structure. In shells and laminated plates, it adds quadratic nonlinearities (Lazarus et al., 2012; Thomas et al., 2005). We believe that the inextensibility constraint is very realistic and, thus, that this effect is probably negligible in practice.

Figure 7. Comparison between the theoretically (lines) and experimentally (points) obtained second (top) and third (bottom) mode shapes for: the homogeneous beam (continuous line with cross points) without piezoelectric materials, the unimorph configuration (dashed line with x points) and the bimorph configuration (dotted line with star points).
Experimentally, some small amplitudes of the second harmonic are detected, which are naturally not present in our theoretical developments because no quadratic nonlinearities are included in the model. From Guillot et al. (2019), some of the assumed nonlinear terms of the PZT materials could be responsible for this quadratic behavior, leading to the detection of a second harmonic in the displacements. It can also be explained by unavoidable imperfections in the beam or by some aeroelastic damping effects (Mam et al., 2016). One can also observe that the ratio between H2 and H1 for the LF mode is about 1/20 for the unimorph configuration, whereas it is about 1/75 for the bimorph configuration. This can be explained by the nonsymmetric lamination of the unimorph configuration, which breaks the transverse structural symmetry of the beam. This effect is not taken into account in the model because of the inextensibility condition. The Appendix 1 section gives more details about this point.

Figure 8. Experimental results for the periodic forced response of the unimorph configuration in 1:3 internal resonance. First three harmonics of the signals measured at x = x_f3 (low frequency, mode 2, response, left column) and x = x_f2 (high frequency, mode 3, response, right column) on the beam, as a function of the excitation frequency Ω, for different amplitudes of base acceleration (each color corresponds to a given base acceleration).
It is observed that the amplitude of the third harmonic of the second mode in Figure 8(e) shows different behaviors for the different amplitudes of excitation. Because the amplitudes of this harmonic are really low (0.02 mm maximum), we assume that they are more related to noise and are not representative of the real amplitude of the third harmonic.
Conclusion
In this study, several results have been obtained about the design of a cantilever beam with PZT patches to favor internal resonances and energy exchanges between the modes. First, both theory and experiments allowed us to confirm that the position of the PZT patches on the beam can help to tune the natural frequencies of the beam to achieve 1:2 and 1:3 internal resonances, between the fourth and third modes and the second and third modes, respectively. Experimentally, both 1:2 and 1:3 internal resonances have been tested, with success only for the 1:3 internal resonance, on which this article has focused. Experimentally, a clear exchange of energy from the lower mode to the higher one has been exhibited. However, the theoretical results did not predict such a clear energy exchange, and explanations have been proposed to improve the model. In particular, PZT constitutive nonlinearities, not included in the model, could have an important effect on the nonlinear behavior and on the energy exchanges. In any case, the measured energy exchange due to the 1:3 internal resonance could be improved by connecting the PZT patches to a proper nonlinear circuit, to enhance the nonlinear effects and achieve an efficient control of the beam.

Figure 9. Experimental results for the periodic forced response of the bimorph configuration in 1:3 internal resonance. First three harmonics of the signals measured at x = x_f3 (low frequency, mode 2, response, left column) and x = x_f2 (high frequency, mode 3, response, right column) on the beam, as a function of the excitation frequency Ω, for different amplitudes of base acceleration (each color corresponds to a given base acceleration).
Appendix 1

Then, with equation (11), the bending moment M is obtained by integration of the stress over the cross-section S, where V_p is the voltage difference across the pth piezoelectric layer and y_k is the vertical position of the interface between the (k − 1)th and the kth layers. The constants involved are, respectively, the axial/bending coupling stiffness B, the bending stiffness D and the piezoelectric coupling in bending Θ_p.
Considering that the beam is inextensible, which is mandatory to obtain a simple nonlinear model as observed in the following, leads to e = 0. Consequently, the beam generalized constitutive law (14) reduces to equation (17). The nonlinear equations of motion of the beam, with no restriction on the cross-section rotation θ (geometrically exact beam model), with Euler-Bernoulli assumptions and rotatory inertia neglected, are (Thomas et al., 2016)

(N cos θ − T sin θ)′ = mü
(N sin θ + T cos θ)′ + p = mv̈   (18)
T(1 + e) + M′ = 0

where m is the mass per unit length of the beam. The first of the aforementioned equations then leads to equation (19). By eliminating N and T, using equation (19) and the last of equations (18) in the second one, and using the clamped/free boundary conditions, one obtains equation (20). Finally, using the constitutive law (17), expanding all functions in Taylor series up to the third order in v′ (θ = arcsin v′ ≈ v′ + v′³/6, 1/cos θ ≈ 1 + v′²/2 and tan θ ≈ v′(1 + v′²/2)), using the inextensibility condition (from (12), e = 0 ⟹ u′ = √(1 − v′²) − 1 ≈ −v′²/2) and considering that Θ_p is zero out of x ∈ [x₁, x₂], one obtains equation (2). The electric charge equation (3) is obtained in the same manner, as explained in Ducarne et al. (2012).
A comment about these equations should be made. If a part of the beam has a nonsymmetric lamination (this is the case in the present study for the unimorph beam in the x ∈ [x₁, x₂] area), B ≠ 0 and equation (14) shows that an axial/bending coupling occurs. In the linear model of Ducarne et al. (2012), this effect is eliminated from the equations for a cantilever beam because N = 0, which leads to a modified bending stiffness called D̃. In the present nonlinear model, N ≠ 0 because of the geometrical nonlinearities (see equation (19)), so that one cannot eliminate this axial/bending coupling from the equations without further assumption. It is the inextensibility condition (e = 0), formulated to eliminate u from the equations, which naturally cancels the linear axial/bending coupling in equation (14). However, the axial strain e is clearly nonzero in the nonsymmetric area of the beam. This effect, neglected in the present nonlinear model, would probably be responsible for quadratic nonlinearities.
To give a first step toward a more exact model, one has to compute N by integrating equation (11), which yields equation (21), where A = Σ_{k=1}^{K} b_k h_k Y_k and Ξ_p = b_p e₃₁. Eliminating the axial strain e between (21) and (14) leads to equation (22). Then, eliminating N between (22) and (19) leads to replacing (17) by equation (23), where D̃ = D − B²/A and Θ̃_p = Θ_p − ΞB/A are the modified constants that take into account the elimination of the axial strain, as introduced in Ducarne et al. (2012). A Taylor expansion of the functions of θ in v′ up to third order shows that the additional terms are quadratic in v′. Then, even if the substitution of M in (20) is not possible because (23) is now a differential equation in M, it shows that a nonsymmetric lamination of the beam adds quadratic terms to the transverse equation of motion.
"Engineering",
"Physics"
] |
A Distributed Bi-behaviors Crow Search Algorithm for Dynamic Multi-Objective Optimization and Many-Objective Optimization
Dynamic multi-objective optimization problems (DMOPs) and many-objective optimization problems (MaOPs) are two classes of optimization problems with potential applications in engineering. Modified multi-objective evolutionary algorithms and hybrid approaches seem suitable to deal effectively with such problems. However, the Crow Search Algorithm has not yet been considered for either DMOPs or MaOPs. This paper proposes a Distributed Bi-behaviors Crow Search Algorithm (DB-CSA) with two different mechanisms, one corresponding to the search behavior and another to the exploitative behavior, with a dynamic switch mechanism. The bi-behaviors CSA chasing profile is defined based on a large Gaussian-like Beta-1 function, which ensures diversity enhancement, while the narrow Gaussian Beta-2 function is used to improve solution tuning and convergence behavior. The DB-CSA approach is developed to solve several types of DMOPs and a set of MaOPs with 2, 3, 5, 7, 8, 10 and 15 objectives. The Inverted Generational Distance (IGD), the Mean Inverted Generational Distance (MIGD) and the Hypervolume Difference (HVD) are the main metrics used to compare the DB-CSA approach to the state-of-the-art MOEAs. All quantitative results are analyzed using the nonparametric Wilcoxon signed-rank test with a 0.05 significance level, proving the efficiency of the proposed method for solving all 44 DMOP and MaOP benchmarks utilized.
Introduction
During the last decade, a wide range of metaheuristics has been designed to solve complex problems, based on Evolutionary Algorithms (EAs), like the Genetic Algorithm (GA) [1], and on Swarm Intelligence (SI), such as the Particle Swarm Optimization (PSO) approach [2]-[5].
Different Multi-Objective Evolutionary Algorithms (MOEAs) have been employed to solve static single- and multi-objective optimization problems, where the main challenge is to find the best global solutions through a compromise between convergence and diversity in the search space. However, this process becomes more challenging when solving Dynamic Multi-Objective Optimization Problems (DMOPs), characterized by several types of time-varying Pareto Optimal Set (POS) and Pareto Optimal Front (POF) [6].
Generally speaking, MOEAs are designed to track and react effectively to the changes that may affect the POS and the POF, while preserving both convergence and diversity [7], [8]. On the other hand, Evolutionary Dynamic Optimization (EDO) approaches should include explicit and implicit mechanisms to detect and correctly react to those changes. A change detection mechanism can rely on detectors taken from the search population, such as the current best solutions, the memory of optimal solutions or some predefined subpopulation. It can also be applied separately from the search population, using a set of randomly selected solutions, a fixed point, a regular grid of solutions or a set of predetermined points. In addition, the algorithm's behavior has been considered as a robust detection strategy, based on the average of best-found solutions, the time-varying observation of different sub-swarms, the diversity of the solutions compared to the success rate, time-varying distributions and statistical methods.
Five groups of EDO methods are available in the literature to solve DMOPs: diversity-based techniques, memory-based approaches, prediction methods, parallel systems and transfer learning-based algorithms. Increasing the mutation rate (hyper-mutation), adding new random members and relocating some useful solutions are the main mechanisms to manage diversity in dynamic optimization, although such techniques may leave regions of interest undetected. Diversity-based approaches [1] have shown their ability to solve dynamic problems with continuous and small time-varying parameters, but show their limits on problems with severe environmental changes. Furthermore, many DMOPs present periodical or recurrent changes, making the storage of historical solutions useful to preserve diversity.
Memory-based approaches use a redundant representation of an evolutionary algorithm, with extra memory components to help detect future changes [9]. This category of approaches is very effective for solving DMOPs with periodically time-varying properties; such mechanisms slow down convergence but strengthen diversity in EDO approaches. The main disadvantage of memory-based algorithms is the ineffectiveness of redundant solutions stored in the archive. On the other hand, prediction-based methods tend to predict changes based on limited patterns. Such systems can detect the global best solution quickly, but they fail when the changes are stochastic, which increases their relative training error rates. Parallel approaches distribute the optimization process over multiple sub-swarms that may handle the problem on separate search spaces; they are recommended for multi-modal problems but are computationally expensive. A key challenge for these methods is finding the appropriate number of sub-swarms and their sizes. Last but not least, transfer learning-based methods [5], [10]-[12] have the advantage of re-using previous computational experience to improve the efficiency of the newly generated populations after each change detection, by adding transfer learning mechanisms, which is a time-consuming process.
The efficiency of MOEAs significantly decreases when dealing with MaOPs, in which the number of objectives to satisfy is generally equal to or higher than 3. Furthermore, three main issues arise when solving MaOPs: (i) the ineffectiveness of the dominance operator when dealing with a large number of objectives, (ii) the lack of convergence and diversity and (iii) the limited population size with respect to the dimension of the objective space, which increases exponentially. Many Pareto-based approaches have shown their limits in dealing with the increasing number of non-dominated solutions under the dominance operator, causing poor convergence through the Active Diversity Promotion (ADP) phenomenon [13].
As a solution, a variety of enhancements has been adopted in the original MOEAs for solving MaOPs, including decomposition-based and indicator-based approaches. Decomposition mechanisms combine multiple objectives into a single one or into sub-problems. Some popular techniques of this type are Pareto sampling [14], improved Pareto sampling (MSOPS-II) [15] and the multi-objective evolutionary algorithm based on decomposition (MOEA/D) [16].
Decomposition-based approaches become more effective with a set of sub-MOPs, as in the reference vector-guided evolutionary algorithm (RVEA) [17], MOEA/D-M2M [18], NSGA-III [19], MOEA/DD [20] and MOEA/D-ROD [21]. In addition, a set of performance metrics is used to guide the optimization process in different indicator-based approaches, like the fast hypervolume-based evolutionary algorithm (HypE) [22], the S-metric selection-based evolutionary multi-objective algorithm (SMS-EMOA) [23], the indicator-based evolutionary algorithm (IBEA) [24], the Evolutionary Many-Objective Optimization Algorithm based on the IGD Indicator with Region Decomposition [25] and MaOEA/IGD [26].
A set of new techniques has been proposed to deal with the ineffectiveness of the dominance operator in Pareto-based methods, like L-optimality [27], ε-dominance [28], fuzzy dominance [29], the Grid-based Evolutionary Algorithm (GrEA) [30], the θ-Dominance-based Evolutionary Algorithm (θ-DEA) [31] and preference order ranking [32]. Diversity management techniques have been proposed to achieve a good balance between convergence and diversity when solving MaOPs. In [30], three grid-based criteria were proposed to maintain diversity: the grid crowding distance, the grid coordinate point distance and the grid ranking. A diversity promotion mechanism, DM, is introduced in [33] to activate or deactivate the diversity of the population based on the spread and the crowding distance of solutions.
In the NSGA-III algorithm [19], a reference point-based strategy is used to solve MaOPs. The shift-based density estimation (SDE) strategy [34] has been utilized to replace the dominance operators of MOEAs. Also, the knee point-driven evolutionary algorithm (KnEA) [35] has been developed using both knee point-based selection and dominance-based selection. Three groups of preference-based approaches, including a priori algorithms, interactive algorithms and a posteriori algorithms, are employed to deal with the issue of population size limitation with regard to the large dimension of the objective space. The best known a posteriori approaches are the Preference-Inspired Coevolutionary Algorithm (PICEA-g) [36], the novel two-archive algorithm (TAA) [37] and its improved version (Two_Arch2) [38].
In addition, the Particle Swarm Optimization (PSO) algorithm has received great attention for MaOPs. The Control of the Dominance Area of Solutions (CDAS) [39] is used with SMPSO and SigmaMOPSO for MaOPs. Indicator-based PSO systems have been proposed to guide leader selection, using the R2 indicator as in H-MOPSO [40] or the hypervolume metric in S-MOPSO [41]. A two-stage strategy and a parallel cell coordinate system are adopted in MaOPSO/2s-pccs [42]. A preference-based PSO method focusing on solutions around the knee point, called knee-driven particle swarm optimization (KnPSO), is proposed in [43]. In [44], the MaPSO method selects leaders from a certain number of historical solutions by using scalar projection. In addition, the HGLSS-MOPSO algorithm [45] adopts the Hybrid Global Leader Selection (HGLSS) scheme, with two global leader selection mechanisms, the first for exploration and the second for exploitation. A recently published paper [46] presented an adaptive localized decision variable analysis approach under the decomposition-based framework, to solve large-scale multi-objective optimization problems and multi-tasking optimization problems in MaOPs. In conclusion, all the mentioned Many-Objective Evolutionary Algorithms (MaOEAs) are highly complex and time-consuming systems, especially when using decomposition-based mechanisms and/or quality indicators to deal separately with convergence and diversity.
The Crow Search Algorithm (CSA) [47] is a metaheuristic simulating the social organization of crow flocks, essentially their food-search procedure. Crows are characterized by their ability to memorize the food sources they have found, but also the sources that other members of the flock may hold or hide. The CSA algorithm was first proposed as a mono-objective optimization technique. On the other hand, dividing the population into several sub-populations and solving many sub-problems separately and simultaneously makes the MOEA/D system slower and more time-consuming.
Transfer learning-based techniques are reliable alternatives for DMOPs, with MOEA/D as a baseline system. In 2020, the memory-driven manifold transfer learning-based evolutionary algorithm (MMTL-MOEA/D) was proposed [51]. This approach combines a memory mechanism, to preserve the previous best solutions, with manifold transfer learning, to estimate the best solutions, so that the best solutions are conserved and set as the initial population of the next generation.
In addition, a random reinitialization mechanism (RI-MOEA/D) [51] is applied to 10% of the selected population after each change to maintain diversity. A combination of the PPS [50] and MOEA/D is considered in the PPS-MOEA/D algorithm to solve DMOPs.
Also, the support vector regression-based evolutionary algorithm (SVR-MOEA/D), proposed in [52], is designed to model the nonlinear correlation between two historical optimization processes. The SVR is used to predict a new population after each change in the search space. A transfer learning-based dynamic multi-objective evolutionary algorithm (Tr-MOEA/D) is proposed in [53], aiming to solve the issue of non-independent and identically distributed data in a dynamic environment. The Tr-MOEA/D system implements a transfer learning mechanism to reuse the past historical population after each change, which speeds up the optimization process. In the KF-MOEA/D [54] system, a Kalman filter (KF) is used to predict a new population after each change and favor convergence.
Many-Objective Optimization Methods
Generally speaking, many-objective algorithms are designed to optimally manage the couple of exploitation and exploration concepts. A vector angle-based evolutionary algorithm (VaEA) [55] is proposed for unconstrained MaOPs. This algorithm uses the maximum vector angle as a selection mechanism to guarantee a good distribution and approximation of the POF, while the worst solutions are replaced with newly generated ones. The θ-DEA [31] system is based on NSGA-III, but with a new θ-dominance concept, which differs from the original dominance operator used in Pareto-based methods. It employs a set of reference points to cluster the solution set in order to enhance the exploration phase. NSGA-II/SDR, a modified version of NSGA-II with a Strengthened Dominance Relation (SDR), is presented in [56] for solving MaOPs. NSGA-II/SDR adopts the angle and the niching mechanism to select the best converged solutions. MOEA/DD, an MOEA based on dominance and decomposition [20], is a hybridization between MOEA/D [16] and NSGA-III [19], where the many objectives are decomposed into sub-problems and a dominance criterion is then used to aggregate the global solution. Different grid-based criteria, like the grid crowding distance (GCD), the grid ranking (GR) and the grid coordinate point distance (GCPD), are integrated in MOEAs to evaluate the fitness function of the MaOP. In addition, the GrEA system [30] is designed to maintain a good balance between convergence and diversity through both the grid dominance and the grid difference, used to evaluate the fitness function and push the system toward the best optimal solutions. Two variants of the Pareto-based evolutionary algorithm using the penalty mechanism (PMEA) are presented in [57]: PMEA-MA and PMEA*-MA. PMEA-MA is developed using the Manhattan distance and the cosine distance as convergence and distribution metrics, and it includes a population preprocessing step to enhance diversity. The second variant, PMEA*-MA, is a simplified one, which does not adopt the preprocessing step.
The AnD algorithm [58] is a non-Pareto-based method that maintains the diversity of the population using an angle-based selection technique, then picks optimized members which share the same search direction as an already sorted solution. A hybridization between the Strength Pareto Evolutionary Algorithm (SPEA) and the shift-based density estimation (SDE) strategy, denoted SPEA/SDE [34], estimates the density of the population; individuals that are not converging are then eliminated to enhance diversity among the divergent solutions only. In [59], SPEAR leverages a reference direction-based density estimator within the standard SPEA algorithm for multi/many-objective optimization problems. The knee point-driven evolutionary algorithm (KnEA), proposed in [35], evolves a population and then selects nondominated solutions based on a knee point criterion, which can be assimilated to a Pareto strategy.
Furthermore, the two-stage evolutionary algorithm (TSEA) is developed in [60]: in the first stage, several sub-populations are optimized to converge to different regions of the Pareto front; then, the nondominated solutions of each sub-population are considered as individuals to optimize in the second stage. In indicator-based methods, several quality metrics are used to drive the optimization process; for example, Monte Carlo simulation is used in the HypE algorithm [22] to minimize the computation cost and approximate the results. Preference-based approaches use different adaptation mechanisms to steer the decision toward the true Pareto front. In [36], the PICEA-g algorithm integrates coevolution as an a posteriori adaptation mechanism, with a set of candidate solutions to help decision making and approximate the entire POF. Two archives are used in the Two_Arch2 [38] system, where the first is dedicated to convergence (CA) and the second to diversity (DA). A crossover operator is used between CA and DA as a selection mechanism, and a mutation operator is applied to the CA memory.
Existing Crow Search-based Methods
The Crow Search Algorithm (CSA) [47] was first proposed in 2016 to solve constrained engineering optimization problems. Furthermore, two binary versions of the CSA algorithm were proposed in [64] and [65]. The first one, BCSA [64], uses a V-shaped transfer function to obtain a binary representation of continuous data, with application to feature selection. The second one [65] applies a sigmoid transformation and was used to solve the 2D bin packing problem. Several modified versions of CSA manage diversity based on the Gaussian distribution and on diversity information of the population, such as in [66] for electromagnetic optimization and in the usability factors hierarchical model for feature extraction and prediction [67]; a priority-based technique is used to determine a sufficient flight length for each crow to update its position based on other crows for the economic load dispatch problem [68]; and a modification of the CSA parameters, namely the awareness probability and the random perturbation of each crow, is proposed in [69].
A set of mechanisms has been used to improve the CSA algorithm, including a search bounds management strategy [70], the addition of an archive component [71] and a restructured awareness probability [72] to enhance the random perturbation and the dynamic probability of the CSA system. Several operators have been added to achieve a good balance between convergence and diversity, such as the roulette wheel selection tool and the inertia weight, the Lévy flight and adaptive adjustment factors. In addition, crossover and mutation operators were proposed to hybridize CSA intrinsically in [73], with application to a hybrid renewable energy PV/wind/battery system. Many hybridization methods have been developed to combine the CSA algorithm with the Grey Wolf Optimizer (GWO) [74], the Cat Swarm Optimization (CSO), the Crow PSO [75] and the Crow Search Mating-based Lion Algorithm [76].
The proposed Distributed Bi-behaviors Crow Search Algorithm
MOEAs designed to solve DMOPs should be able to detect changes in the problem patterns and to react accordingly.
The Standard Crow Search Algorithm
The Crow Search Algorithm (CSA) was proposed by Askarzadeh in 2016 [47] as a metaheuristic for solving constrained engineering optimization problems. Crows are known to be social birds, with the ability to memorize and use food source positions when needed; those sources may be the result of a personal search or of the crow group's social activity. The CSA algorithm mimics the crow flock's search mechanisms and uses them for optimization purposes.
The search process is detailed in Figure 1: a crow hiding food should stay aware of being observed by other crows. Assuming that the jth crow decides to visit a previously memorized position m_j(t) at iteration t, and that a congener i is following crow j, two contrasting behaviors may occur, each one represented by a state:
- The first state is when crow j ignores being followed, so it simply continues searching, considering what it previously found, m_j(t).
- The second state is when the crow is aware of being followed; in this case, the crow will simply hide its food source and undergo a fully random search.
These two position updates are detailed in equation (1):

x_i(t+1) = x_i(t) + r_i · fl_i · (m_j(t) − x_i(t)) if r_j ≥ AP_j(t); otherwise, x_i(t+1) is a random position   (1)

where r_i and r_j are uniform random numbers in [0, 1], fl_i is the flight length of crow i and AP_j(t) is the awareness probability of crow j.
In the CSA algorithm, the balance between exploration and exploitation during the optimization process is controlled by the flight length fl_i of the ith crow during the update of each position. The memory m_i(t+1) of each crow i is then updated using equation (2): m_i(t+1) = x_i(t+1) if f(x_i(t+1)) is better than f(m_i(t)), and m_i(t) otherwise. The whole optimization process is executed until a predefined maximum number of iterations is reached.
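For concreteness, here is a minimal sketch (not the authors' code) of one iteration of the standard CSA update of equations (1) and (2), for a mono-objective minimization problem; the flight length and awareness probability values are illustrative:

```python
import numpy as np

def csa_step(X, M, f, lb, ub, fl=2.0, AP=0.1, rng=np.random.default_rng()):
    """One CSA iteration: X are crow positions, M the memorized best positions."""
    N, D = X.shape
    for i in range(N):
        j = rng.integers(N)                   # crow i follows a random crow j
        if rng.random() >= AP:                # crow j unaware: move toward m_j
            X[i] = X[i] + rng.random() * fl * (M[j] - X[i])
        else:                                 # crow j aware: random relocation
            X[i] = lb + rng.random(D) * (ub - lb)
        X[i] = np.clip(X[i], lb, ub)
        if f(X[i]) < f(M[i]):                 # memory update, equation (2)
            M[i] = X[i].copy()
    return X, M

# Example on the sphere function:
rng = np.random.default_rng(0)
lb, ub = -5.0, 5.0
X = rng.uniform(lb, ub, (20, 2)); M = X.copy()
sphere = lambda x: float(np.sum(x**2))
for _ in range(200):
    X, M = csa_step(X, M, sphere, lb, ub, rng=rng)
print(min(sphere(m) for m in M))              # close to 0
```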
A General Presentation of the new DB-CSA Approach
The Distributed Bi-behaviors Crow Search Algorithm (DB-CSA) is based on a couple of Beta distribution profiles for exploration and exploitation enhancement, as presented in the flowchart of Figure 2 and detailed in the pseudo-code of Figure 3. The new DB-CSA system has the same optimization process as the standard CSA algorithm [47]; the main difference lies in the treatment of convergence and diversity during the optimization process, when updating the position of each crow i. In DB-CSA, each crow is a potential solution in the search space. The key processing steps of the proposed approach (see Figure 2) are detailed as follows. In the standard CSA algorithm, the update of a crow's position is done according to equation (1), and the convergence and diversity stages are treated separately, causing the issue of premature convergence. This issue is treated in the new DB-CSA system by using bi-behavior Beta distribution profiles to ensure a dynamic and good balance between both stages. The two Beta distribution profiles used in equation (6) are denoted Beta-1 and Beta-2, respectively for exploration and exploitation. The couple of Beta profiles is used to modify the original equation (1), presenting the update process executed at each iteration for each crow i. The two profiles are based on the Beta function proposed by Alimi [77] and presented in equations (3), (4) and (5). The main advantage of using Beta functions here is their capacity to produce several forms and configurations of distributions, including the normal Gaussian one. The one-dimensional Beta function is defined in equation (3).
where p, q, x₀ and x₁ are real parameters, with x₀ < x₁, and the Beta center is detailed in equation (4). The multi-dimensional version, provided in definition (5), is the product of one-dimensional Beta functions (3).
The dynamic switch between the bi-behavior Beta-1 and Beta-2 profiles is driven by a comparison between the fitness function f(x_i) of each crow and the average fitness of the flock. If the fitness f(x_i) is greater than the mean fitness f̄ = (1/N)Σ_{i=1}^{N} f(x_i), the crow is assumed to be in an exploration stage, and the Beta-1 behavior in equation (6) is used to update its position. Otherwise, the Beta-2 behavior in equation (6) is used, pushing the solution toward the exploitation stage.
As illustrated in Figure 2, the two Beta distribution profiles are detailed as follows:
✓ The first, large Gaussian-like Beta-1 exploration profile is characterized by a large standard deviation, pushing the population toward a good diversity in the search space; its p and q parameters in equation (3) are equal to 50.
✓ The second, narrow Gaussian-like Beta-2 exploitation profile adopts a limited standard deviation through the p and q parameters of equation (3). Beta-2 is assimilated to a fine search step around the optimal solution, while Beta-1 acts as a random exploration mechanism performed away from the previous optimal solution m_j(t). Both Beta-1 and Beta-2 values are determined using equation (3), with different configurations of the two parameters p and q.
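The sketch below illustrates the Beta profiles and the switch rule, assuming Alimi's form of the Beta function with center x_c = (p·x₁ + q·x₀)/(p + q); the Beta-2 parameters are placeholders, since their exact values are not given above:

```python
import numpy as np

def beta_profile(x, p, q, x0=0.0, x1=1.0):
    """One-dimensional Beta function (assumed Alimi form):
    Beta(x) = ((x-x0)/(xc-x0))**p * ((x1-x)/(x1-xc))**q on ]x0, x1[, else 0."""
    xc = (p * x1 + q * x0) / (p + q)
    y = np.zeros_like(x, dtype=float)
    m = (x > x0) & (x < x1)
    y[m] = ((x[m] - x0) / (xc - x0))**p * ((x1 - x[m]) / (x1 - xc))**q
    return y

x = np.linspace(0.0, 1.0, 1001)
beta1 = beta_profile(x, p=50, q=50)   # Beta-1 profile (p = q = 50, as stated)
beta2 = beta_profile(x, p=5, q=5)     # Beta-2 profile (placeholder parameters)

# Dynamic switch: crows whose fitness exceeds the flock mean use Beta-1
# (exploration), the others use Beta-2 (exploitation).
fitness = np.array([3.0, 1.0, 2.5, 0.5])
use_beta1 = fitness > fitness.mean()  # -> [ True, False,  True, False]
```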
The mutation operators of [78] are added to maintain more diversity in the flock of N crows.
The nonuniform and boundary mutation operators of equations (7) and (8) are applied to modify the variables X = (x₁, x₂, …, x_D) of each crow with a mutation probability equal to 1/D, where D is the dimension of the search space and X ∈ [Lb, Ub], with Lb and Ub the lower and upper bounds, respectively. The nonuniform mutation of equation (7) is applied when the crow index i modulo three is equal to zero.
However, if the remainder is equal to one, the boundary mutation of equation (8) is used. Otherwise, all variables are kept without mutation.
where r₁ and r₂ are random values between 0 and 1.
6.5. Update the crow position using equation (6) with the Beta-1 exploration profile
6.6. Else:
6.7. Update the crow position using equation (6) with the Beta-2 exploitation profile
6.8. End If
6.9. Update the memory using equation (2)
7. End For
8. Apply the mutation operators using equations (7) and (8)
9. Update the archive of non-dominated solutions
10. End While
11. Return the archive of non-dominated solutions
The advantage of the proposed DB-CSA algorithm is its simplicity in terms of complexity, of order O(N × log(N)). The dynamic Beta distribution profiles are the main feature of the DB-CSA algorithm, providing a high flexibility to produce several forms and configurations of distributions. Using both the large Beta-1 and the narrow Beta-2 functions gives the standard CSA a new mechanism ensuring a good distribution of the population toward the best approximated results.
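Because equations (7) and (8) are not reproduced above, the following sketch assumes the standard Michalewicz nonuniform and boundary mutation operators; it mainly illustrates the i mod 3 dispatch rule described in the text, and the operator formulas themselves are an assumption:

```python
import numpy as np

def mutate(X, lb, ub, t, T, b=5.0, rng=np.random.default_rng()):
    """Mutation stage: nonuniform mutation for i % 3 == 0, boundary mutation
    for i % 3 == 1, no mutation otherwise (per-variable probability 1/D)."""
    N, D = X.shape
    pm = 1.0 / D
    def delta(y):                             # nonuniform step, shrinks with t
        return y * (1.0 - rng.random() ** ((1.0 - t / T) ** b))
    for i in range(N):
        for d in range(D):
            if rng.random() >= pm:
                continue
            if i % 3 == 0:                    # assumed equation (7)
                if rng.random() < 0.5:
                    X[i, d] += delta(ub - X[i, d])
                else:
                    X[i, d] -= delta(X[i, d] - lb)
            elif i % 3 == 1:                  # assumed equation (8)
                X[i, d] = lb if rng.random() < 0.5 else ub
    return X
```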
Experimental Study
The experimental study presented in this section was conducted on a personal computer with 8 GB of RAM and an Intel i7 processor. A Java implementation of the proposed method was developed on the jMetal framework [79]. Results are presented through two comparative studies, as detailed in Table 5: - The first compares the proposed DB-CSA to a set of MOEAs designed for Dynamic Multi-Objective Optimization Problems (DMOPs).
- The second addresses Many-Objective Optimization Problems (MaOPs).
- Algorithm configurations and parameters are listed in Table 4.
Quality Indicators
The performance of all tested systems is measured using the minimum values of three quality indicators (QI): the Inverted Generational Distance (IGD), the Mean Inverted Generational Distance (MIGD) and the Hypervolume Difference (HVD), presented in equations (9), (10) and (11) respectively. All these metrics measure both the convergence and the diversity of the tested MOEAs, as sketched below for IGD.
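As an illustration of how such an indicator is computed, a plain NumPy version of the IGD of equation (9) follows: the mean distance from each point of the true Pareto front to its nearest obtained solution (lower is better). The exact normalization used in the paper may differ.
```python
import numpy as np

def igd(reference_front, obtained_front):
    ref = np.asarray(reference_front, dtype=float)
    obt = np.asarray(obtained_front, dtype=float)
    # pairwise Euclidean distances, shape (|ref|, |obt|)
    d = np.linalg.norm(ref[:, None, :] - obt[None, :, :], axis=-1)
    return d.min(axis=1).mean()
```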
Tested Benchmarks
Forty-four benchmarks are used to evaluate the relative performance of the proposed method across the two scenarios. The twenty-one DMOP test beds are: five FDA [6], three dMOP [49], seven UDF [80] and six F(ZJZ) [81] functions. The twenty-three MaOP problems comprise seven MaF test-suite functions (MaF1-7), seven DTLZ1-7 functions and nine WFG1-9 problems. Test configurations are detailed in Table 4 according to the number of variables (D) and objectives (M).
For dynamic multi-objective optimization, Farina et al. [6] classified DMOPs into three categories according to the time-varying POF and POS. In type I, the POS changes while the POF remains the same; in type II, both POS and POF change.
In type III, the POF is time-varying while the POS is unchanged. The main properties of all tested problems, including the variation of both POS and POF, are reported in Table 3.
A-Comparative study (1) for DMOPs:
The first comparative test addresses DMOPs using the FDA, dMOP, UDF and F(ZJZ) benchmarks with 2 and 3 objectives. Five standard MOEAs [9] and six transfer-learning-based methods [51] are compared with the proposed DB-CSA system. All compared algorithms use the same parameter settings as in the original publications [9] and [51].
All DMOPs are characterized by a dynamic POS and/or POF according to the time-varying parameter that changes at each instance as in equation (12),
where nt, τ and τt are the severity of change, the iteration counter and the frequency of change, respectively. Three categories of environmental change are considered in this study, differentiated according to the value of nt, fixed to 10, and the variation of the frequency τt. The property τt is equal to 5, 10 and 20 for severe, moderate and slight environmental changes respectively.
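For reference, a sketch of the time parameter referenced as equation (12) is given below, assuming the usual FDA-style form t = (1/nt) · floor(τ/τt); this exact expression is an assumption based on the symbols described above.
```python
import math

def time_parameter(tau, n_t=10, tau_t=10):
    """Severity n_t = 10 is fixed; tau_t in {5, 10, 20} gives severe,
    moderate and slight environmental changes respectively."""
    return (1.0 / n_t) * math.floor(tau / tau_t)
```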
As summarized in Table 4, the swarm and archive sizes are both equal to 100, as fixed in [9] and Table 3.
B-Comparative study (2) for MaOPs:
The second experimental test addresses many-objective optimization, referring to the contributions [57] and [58] in order to compare the proposed DB-CSA approach with seven and thirteen Many-Objective Evolutionary Algorithms (MaOEAs) respectively, under the configurations mentioned in Table 4.
Results Analysis and Discussion
In this sub-section, a comparative result analysis is conducted for the experimental studies on DMOPs and MaOPs using the nonparametric Wilcoxon signed-rank test [82], while some qualitative results are presented through box plots of the one-way ANOVA test [83]. These statistical methods estimate the p-value in order to determine whether there is a statistically significant difference between the compared methods; if the p-value is less than or equal to 0.05, the difference is considered statistically significant. All quantitative results are presented in the appendices, in Tables 9 to 17.
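A minimal sketch of the significance test described above, using SciPy's paired Wilcoxon signed-rank test on hypothetical per-run IGD values (the numbers are made up):
```python
from scipy.stats import wilcoxon

db_csa_igd = [0.012, 0.011, 0.013, 0.010, 0.012, 0.011, 0.010, 0.013]
rival_igd  = [0.015, 0.016, 0.014, 0.017, 0.015, 0.018, 0.016, 0.015]

stat, p_value = wilcoxon(db_csa_igd, rival_igd)
print(p_value, "significant" if p_value <= 0.05 else "not significant")
```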
A-Analysis of the comparative study (1) for FDA and dMOP problems
Based on the MIGD results reported in Table 9, the efficiency of the new DB-CSA system is remarkable: it obtains the best mean and standard deviation values for all test suites under different environmental changes, compared with the six transfer-learning-based approaches.
The statistical results of the Wilcoxon signed-rank test are reported in Table 6, while the results on the FDA and dMOP test suites over the IGD and HVD metrics can be seen in Tables 10 and 11 respectively. Based on the IGD metric in Table 10, we can argue for the superiority of the DB-CSA method over the five standard MOEAs designed for dynamic multi-objective optimization. The Wilcoxon signed-rank results in Table 7 indicate that DB-CSA is the best method over IGD at the 0.05 significance level compared with the other MOEAs.
The same conclusion is confirmed by the box plots of the one-way ANOVA test in Figure 5. Based on Table 7 and comparing the negative and positive ranks, DB-CSA is the best method over the HVD quality indicator, although this advantage is not statistically significant, the p-value being greater than 0.05. The one-way ANOVA results in Figure 6 show the competitive performance of DNSGA-II, dCOEA, PPS, MOEA/D and SGEA in solving the FDA and dMOP test functions with 2 and 3 objectives under different environmental changes when the HVD metric is used.
B-Analysis of the comparative study (1) for UDF and F problems
Considering the quantitative results for the Unconstrained Dynamic Functions (UDF1-UDF7) in Table 12, the DB-CSA obtains the best values for all UDF functions. From Table 7, we can conclude that DB-CSA is the best method, although this advantage is not highly statistically significant, the p-values being greater than 0.05 compared with the five MOEAs over the IGD metric.
Based on the HVD results reported in Table 13, the DB-CSA performs well on the majority of UDF benchmarks and fails only on the disconnected UDF6 compared with the DNSGA-II system. We can also note the good performance of the PPS system in solving F5, F7 and F10 and of SGEA on F6 and F9. The Wilcoxon signed-rank test detailed in Table 7 likewise shows the performance of DNSGA-II, dCOEA, PPS, MOEA/D and SGEA, with p-values exceeding the 0.05 significance level. Figure 7 reports the one-way ANOVA results as box plots of the six MOEAs over the IGD and HVD metrics.
C-Analysis of the comparative study (2) for MaF and WFG problems with 2, 3 and 7 objectives
In the second comparative study, thirteen many-objective evolutionary approaches are compared under the configurations in Table 3. The results reported in Table 14 show the IGD values of the 14 compared Many-Objective Evolutionary Algorithms for solving nine MaOPs (WFG1-WFG9), characterized by a POF whose shape changes from convex to concave. The DB-CSA ranks first for seven of the nine WFG test suites (WFG1, WFG3, WFG4, WFG5, WFG6, WFG8 and WFG9), fails only on WFG2 compared with HypE and θ-DEA, and obtains almost the same mean IGD values for WFG7 when the number of objectives is equal to 2. When the number of objectives increases to 3 and 7, the WFG problems become more complex and the lack of convergence and diversity becomes the challenging issue. Based on the reported IGD values of the tri-objective WFG functions in Table 14, we can conclude that the proposed DB-CSA approach deals efficiently with the increasing number of objectives. Table 14 also shows the best values for MaOPs with 7 objectives.
In addition, Table 15 shows the mean and standard deviation values over the IGD metric for solving the MaF test suite (MaF1-MaF7) with 2, 3 and 7 objectives. Figure 12 presents the approximated POF for the MaF test suite. The new DB-CSA proves a good method for solving the MaF test suite compared with the thirteen state-of-the-art MaOEAs. Tables 16 and 17 present the efficiency of the new DB-CSA approach over the IGD metric in solving the complex sets of nine WFG1-9 problems and seven DTLZ1-7 functions respectively. This difference is statistically very significant under the Wilcoxon signed-rank test at the 0.05 significance level, as detailed in Table 8, with all computed p-values less than 0.05. Figure 8 presents the box plots of the one-way ANOVA test for the WFG test suite with 3, 5 and 15 objectives, where the DB-CSA is the best method.
Conclusions and perspectives
In this paper, a new Distributed Bi-behaviours Crow Search Algorithm (DB-CSA) is proposed for the dynamic treatment of both convergence and diversity. It is based on two new mechanisms: distributed bi-behaviour profiles, characterized by a large Gaussian Beta-1 function and a narrow Gaussian Beta-2 function for exploitation and exploration enhancement respectively, and mutation operators that maintain diversity in the flock.
All quantitative results are analyzed using the nonparametric Wilcoxon signed-rank test at the 0.05 significance level. The experiments showed that the proposed DB-CSA is significantly better than the key similar techniques used for comparison in this paper. DB-CSA is found to be more effective in solving dynamic multi-objective problems characterized by different time-varying behaviours of both POS and POF with 2 and 3 objectives. It is also a powerful solver for many-objective optimization problems.
Acknowledgment
The research leading to these results has received funding from the Ministry of Higher Education and Scientific Research of Tunisia under the grant agreement number LR11ES48.
Table 9. MIGD results (Mean and Standard Deviation) for FDA and dMOP functions. The symbols "+", "≈" and "−" denote that the performance of the compared algorithm is statistically better than, equivalent to, and worse than DB-CSA.
Upregulated GSDMB in Clear Cell Renal Cell Carcinoma Is Associated with Immune Infiltrates and Poor Prognosis
Gasdermin B (GSDMB) belongs to the gasdermin (GSDM) family, whose members use varying modes of intramolecular domain interaction to adjust their pore-forming and lipid-binding activities. The GSDM family has roles in the regulation of cell differentiation and proliferation, particularly in the process of pyroptosis. Nonetheless, the correlation of GSDMB with immune infiltrates and its prognostic value in clear cell renal cell carcinoma (ccRCC) remain undefined. Therefore, we assessed the correlation of GSDMB with immune infiltrates and its prognostic role in ccRCC. The transcriptional expression profiles of GSDMB in ccRCC and normal tissues were retrieved from The Cancer Genome Atlas (TCGA) and additionally verified in an independent cohort obtained from the Gene Expression Omnibus (GEO) database. The Human Protein Atlas and the Clinical Proteomic Tumor Analysis Consortium (CPTAC) were used to assess the protein expression of GSDMB. To assess the effectiveness of GSDMB in distinguishing ccRCC from normal samples, receiver operating characteristic (ROC) curve analysis was performed. Relationships between GSDMB expression, clinicopathological variables and overall survival (OS) were evaluated with multivariate methods as well as Kaplan-Meier survival curves. Protein-protein interaction (PPI) networks were created with STRING. Functional enrichment analyses were conducted using the "ClusterProfiler" package. The Tumor Immune Estimation Resource (TIMER) and the tumor-immune system interaction database (TISIDB) were used to determine the association between GSDMB mRNA expression and immune infiltrates. GSDMB expression was significantly upregulated in ccRCC tissues compared with surrounding normal tissues. Increased GSDMB mRNA expression was related to high pathologic stage and advanced TNM stage. ROC curve analysis indicated that GSDMB had an AUC of 0.820 for distinguishing ccRCC tissues from adjacent normal controls. Kaplan-Meier survival analysis indicated that ccRCC patients with high GSDMB had a poorer prognosis than those with low GSDMB (P < 0.001). Correlation analysis showed that GSDMB mRNA expression was associated with immune infiltrates and tumor purity. Upregulation of GSDMB is significantly related to immune infiltrates and poor survival in ccRCC. These results indicate that GSDMB could be regarded as a biomarker of poor prognosis and a potential target of immune treatment in ccRCC.
Introduction
The incidence of renal cell carcinoma (RCC) has been growing globally over the last few decades, and RCC has the highest annual mortality rate among urological carcinomas [1]. RCC is a heterogeneous type of carcinoma, of which the most common form is clear cell RCC (ccRCC), making up 75-80% of RCCs [2]. Owing to resistance to chemotherapy and radiotherapy, the current treatment of ccRCC patients remains unsatisfactory. Therefore, resection of the tumor is the optimal treatment choice for ccRCC patients and is regarded as the sole type of treatment that can lead to a complete cure [3]. Generally, the majority of ccRCC patients are diagnosed at an advanced stage, as a result of an occult onset and rapid progression [4]. Although targeted therapy has shown a positive effect on extending patients' survival time, the drug resistance associated with long-term use remains an unsettled problem [5]. Immune therapy, in particular immune checkpoint inhibitors, is a very promising type of treatment for ccRCC patients [6].
However, not every patient can benefit from it, since research has shown that the objective response rate to anti-PD-L1 therapy is only approximately 20%, and the patients who did respond to immune checkpoint inhibitors did not exhibit long-term remission [7]. The proliferation mechanism of ccRCC is complex and multifactorial, consisting of an elaborate network of different genetic backgrounds and multiple carcinogens that result in changes in oncogenes or tumor suppressors [8]. Thus, determining the molecular mechanisms related to the progression of ccRCC is a necessity, and is valuable for diagnosis and treatment.
A new kind of programmed cell death known as pyroptosis has vital functions in both immune defense and septic shock [9]. It is also known as gasdermin-mediated programmed cell death. The gasdermin (GSDM) family, comprising GSDMA, GSDMB, GSDMC, GSDMD, GSDME and DFNB59, has different functions in the regulation of both cell proliferation and differentiation [10]. The GSDMB and GSDMA genes are located on chromosome 17q21, while GSDMC and GSDMD are found on chromosome 8q24 [10]. Except for DFNB59, the family members share approximately 45% sequence homology; in addition, each GSDM has two domains that can bind one another and are attached via a long, flexible linker [11]. With the exception of DFNB59, the known members of the GSDM family have comparable 3D structures, as indicated by their sequence homology [12]. The gasdermin-N domain allows the majority of GSDM members to serve as a novel kind of pore-forming protein. While executing their function as pore-forming proteins, GSDM family members may use varying processes of interaction between intramolecular domains that modify their pore-forming and lipid-binding actions, possibly inducing pyroptosis-like qualities in cells. Pyroptosis-like features have also been observed for GSDMB, and several studies have suggested that GSDMB is overexpressed in multiple types of carcinoma, in which it could be correlated with cancer progression and metastasis. However, the prognostic value of GSDMB and its relation with immune infiltrates in ccRCC are yet to be completely elucidated.
In this article, we downloaded data and evaluated the association between GSDMB expression, clinical data and overall survival (OS) in patients with ccRCC by using the TCGA, GEO and Human Protein Atlas databases. Then, the TIMER and GEPIA databases were used to identify the correlation between GSDMB expression and infiltrating immune cells and their corresponding sets of gene markers. Besides, we used the STRING website to explore the GSDMB-interacting protein network. The results demonstrated that a high GSDMB level was correlated with poor prognosis and related to inadequate infiltration of immune cells in ccRCC. Hence, there is a strong possibility that GSDMB overexpression may undermine the antitumor effects of the immune system in ccRCC.
The Human Protein Atlas (HPA) database contains information on cell-specific locations for over 40 different healthy tissues as well as the 20 most common categories of carcinoma. Furthermore, data on protein immunohistochemistry in human tumor and normal tissues is also available on the HPA website. UALCAN (http://ualcan.path.uab.edu/) is a convenient and simple-to-use online resource for analyzing publicly available cancer data. Using proteomics technologies, CPTAC (http://ualcan.path.uab.edu/analysis-prot.html) evaluates tumor biospecimens by mass spectrometry, identifying and quantifying the constituent proteins and characterizing the proteome of every tumor sample. In the present report, we used UALCAN to perform a throughput analysis of GSDMB protein expression obtained from CPTAC.
Univariate and Multivariate Logistic Regression Analyses
In order to identify the impact of GSDMB expression in ccRCC patients, univariate Cox regression analysis was conducted to calculate the relation between the GSDMB expression level and the OS of patients across two different cohorts. Then, multivariate analysis was conducted to evaluate whether GSDMB is an independent prognostic factor of survival in ccRCC patients. GSDMB was considered statistically significant in the Cox regression analysis when P < 0.05.
Protein-Protein Interaction (PPI) Networks and Functional Enrichment Analysis
The Search Tool for the Retrieval of Interacting Genes/Proteins (STRING) website (https://string-db.org/) is another online tool, hosting a large collection of integrated and consolidated PPI data. The PPI network information could be obtained after importing GSDMB into STRING. A confidence score of >0.7 was regarded as significant. The "ClusterProfiler" package was used to perform Gene Ontology (GO) enrichment as well as Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway analyses of coexpressed genes, which were visualized with the "ggplot2" package [13].
Tumor Immune Estimation Resource (TIMER) Database
TIMER (https://cistrome.shinyapps.io/timer/) is an extensive web-based resource for the systematic analysis of immune infiltrates in various kinds of cancer. In the present study, we applied TIMER to establish the association between GSDMB expression in ccRCC and six different types of immune infiltrate (B cells, CD4-positive T cells, CD8-positive T cells, macrophages, neutrophils, and dendritic cells).
2.6. The Gene Expression Profiling Interactive Analysis (GEPIA) Analysis. GEPIA (http://gepia.cancer-pku.cn/index.html) is an online database comprising 8587 normal and 9736 tumor samples from the GTEx and TCGA data, dedicated to different types of analyses of RNA-sequencing expression. We used it to analyze the association between GSDMB expression and various immune cell markers. The x-axis in each graph represents the level of GSDMB expression, while the y-axis represents the other genes of interest. Furthermore, TIMER data were used to verify which genes had a significant association with GSDMB expression as indicated by the GEPIA website.
2.7. Tumor-Immune System Interaction Database (TISIDB). TISIDB (http://cis.hku.hk/TISIDB/) is an integrated repository web portal, accessible online, for information on the correlation between tumors and the immune system. In this article, we used TISIDB to establish the relation between GSDMB expression and tumor-infiltrating lymphocytes (TILs) in human cancers. The relative abundance of TILs was inferred from the gene expression profile through gene set variation analysis. Spearman's test was conducted to quantify the associations between GSDMB and TILs.
Statistical Analyses.
All statistical analyses were conducted with R (v3.6.3), and the R package ggplot2 was used to visualize the differences in expression. The Mann-Whitney U test and the paired t-test were conducted to establish the differences between ccRCC tissues and surrounding normal tissues. The pROC package was used to visualize the ROC curve, from which the cutoff value of GSDMB could be determined. To evaluate the effect of GSDMB on survival, log-rank and Kaplan-Meier tests were performed using the survminer package. Correlation analyses were performed using the Pearson correlation and Spearman's test.
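As a sketch of two of the core analyses above (ROC cutoff detection and the log-rank comparison of high- vs. low-expression groups), here is a Python stand-in for the R packages named in the text (pROC, survminer); all arrays are synthetic, and the Youden-index cutoff rule is an assumption.
```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve
from lifelines.statistics import logrank_test

rng = np.random.default_rng(1)
expr = rng.normal(0.0, 1.0, 200)                       # GSDMB expression
is_tumor = (expr + rng.normal(0.0, 1.0, 200)) > 0      # synthetic labels

auc = roc_auc_score(is_tumor, expr)
fpr, tpr, thr = roc_curve(is_tumor, expr)
cutoff = thr[np.argmax(tpr - fpr)]                     # Youden-index cutoff

time = rng.exponential(60.0, 200)                      # follow-up (months)
event = rng.random(200) < 0.4                          # death observed
high = expr > np.median(expr)
res = logrank_test(time[high], time[~high],
                   event_observed_A=event[high],
                   event_observed_B=event[~high])
print(round(auc, 3), round(cutoff, 3), res.p_value)
```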
Expression Pattern of GSDMB in Pan-Cancer Perspective
The complete working set contained 33 types of cancer, for which the mRNA expression pattern of GSDMB was evaluated. As shown in Figure 1, in comparison with normal tissues, GSDMB was significantly upregulated in 12 of the 33 cancer types and downregulated in 15 of them. These data demonstrate that GSDMB mRNA is abnormally expressed across different types of cancer.
Upregulated mRNA and Protein Expression of GSDMB in ccRCC Patients.
In order to establish the mRNA and protein expression of GSDMB in ccRCC, GSDMB expression data from TCGA, GEO and HPA were analyzed. Immunohistochemistry staining from HPA demonstrated that the GSDMB protein was also upregulated in ccRCC tissue. These findings suggest that both the mRNA and protein expression of GSDMB are upregulated in ccRCC.
Relationships between GSDMB mRNA Levels and Clinical Pathological Features of ccRCC Patients
Dunn's test and the Kruskal-Wallis test were conducted to assess the relation between GSDMB mRNA expression and the clinical pathological features of ccRCC samples. Table 1 shows the baseline features of ccRCC patients retrieved from the TCGA database. As shown in Figures 3(a)-3(l), higher levels of GSDMB expression were identified in patients with a high T stage (Figure 3(a)) and patients with a high pathologic stage (Figure 3(b)). Besides, the GEO database also demonstrated that GSDMB was upregulated in patients with a high T stage (Figure 3(c)). Nonetheless, statistically significant differences were not observed between the levels of GSDMB expression and other clinical pathological features, including gender (Figure 3). Overall, these outcomes suggest that GSDMB is associated with a high T stage, which additionally suggests that GSDMB may serve as a biomarker of poor prognosis in ccRCC.
Construction and Verification of a Nomogram on the Basis of GSDMB Expression.
In order to provide a useful quantitative model that can assist clinicians in establishing the prognosis of ccRCC patients, we constructed a nomogram combining the clinical features that were independently correlated with survival in the multivariate analysis (M stage, age, histologic grade, and GSDMB; Figure 5(b)). A point scale was used to assign locations to these variables in the nomogram according to the multivariate Cox analysis, as follows: a straight line was used to determine the number of points for each variable, and the points assigned to every variable were rescaled to a range between 0 and 100. The points of all variables were summed and listed as the total number of points. Vertical lines were drawn from the total-points axis down to the outcome axis to identify the expected survival of ccRCC patients at 1, 3 and 5 years. The C-index of the nomogram was 0.774 with 1000 bootstrap replicates. The bias-corrected line in the calibration plot was close to the ideal curve (the 45-degree line), which represents fair agreement between observed and predicted values (Figure 5(c)). Taken together, these results show that the nomogram is a better model for establishing long-term survival (1, 3 and 5 years) in ccRCC patients than individual prognostic factors.
Identifying DEGs in High and Low GSDMB Expression Groups
The DESeq2 package in R (|logFC| > 2, adjusted P value < 0.05) was used to analyze the data from TCGA, and 1331 DEGs were detected between the high and low GSDMB expression groups; among these, 1197 genes were upregulated and 134 downregulated in the high expression group (Figure 6(a)). Figure 6(b) shows the heatmap of the ten most significant DEGs in the high and low GSDMB expression groups.
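The DEG filter described above amounts to thresholding a DESeq2-style results table; a sketch with hypothetical values:
```python
import pandas as pd

res = pd.DataFrame({
    "gene": ["GENE1", "GENE2", "GENE3"],
    "log2FoldChange": [2.5, -3.1, 0.4],
    "padj": [0.001, 0.03, 0.20],
})
degs = res[(res["log2FoldChange"].abs() > 2) & (res["padj"] < 0.05)]
up = degs[degs["log2FoldChange"] > 0]      # upregulated in high-GSDMB group
down = degs[degs["log2FoldChange"] < 0]    # downregulated
```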
3.9. PPI Networks and Functional Annotations. In order to build PPI networks and functional annotations, the STRING database, GO and KEGG analyses were used. A network of GSDMB and its 10 associated coexpressed genes is presented in Figure 7(a). Moreover, Figure 7(b) shows that the alterations in the biological processes of GSDMB were related to cytokine-cytokine receptor interaction.
Discussion
In this article, we first revealed that GSDMB mRNA expression is abnormal in different types of cancer. Then, we demonstrated that both the mRNA and protein expression of GSDMB are upregulated in ccRCC. Upregulated GSDMB mRNA expression was positively related to a high T stage as well as a high pathologic stage. ROC curve analysis suggested that GSDMB may be a potentially valuable diagnostic biomarker for differentiating ccRCC from normal tissues. The results of the Kaplan-Meier curves and univariate analysis demonstrated that high GSDMB mRNA expression is correlated with short OS and DSS. Taken together, GSDMB could be valuable as a potential biomarker of a poor and unfavorable prognosis in ccRCC. The nomogram was generated by integrating the clinical characteristics identified via multivariate analysis as being independently correlated with survival, presenting clinicians with a quantitative model that can be helpful in predicting the prognosis of ccRCC patients. Besides, we constructed PPI networks and functional annotations. Moreover, GSDMB may have a distinct function in immune infiltration in ccRCC. GSDMB was formerly referred to as GSDML (gasdermin-like protein). It is located on chromosome 17q21, which might also harbor other genes that affect illnesses related to atypical immune responses. What is more, 17q21 also contains ORMDL3, which has the ability to regulate GSDMB's expression [14]. Recent studies have shown that GSDMB is capable of inducing pyroptosis-like features; however, it is still unknown whether GSDMB can generate pyroptosis or in what way GSDMB mechanistically takes part in this inflammatory regulation [15]. The N-terminal domain of GSDMB can bind sulfatide distinctively, and since the overexpression of sulfatide is often associated with cancer progression, this suggests that GSDMB may have a significant function in cancer cell metastasis and migration [16][17][18][19]. A few articles regarding the role of GSDMB in the oncogeny of different cancers have been published recently, involving breast cancer, gastric cancer and cervical squamous cell carcinomas [20][21][22]. Nevertheless, a comprehensive exploration of GSDMB's expression and its value as a prognostic indicator in ccRCC has not been performed. In our research, according to the pan-cancer analysis, we demonstrated that GSDMB mRNA is atypically expressed in different types of cancer. Furthermore, we certified that GSDMB was significantly upregulated in ccRCC. So far, the specific role of GSDMB in tumors has not been reported comprehensively. A previous article suggests that the inhibition of Hsp90 may be a new mechanism that could block GSDMB-2 and prevent it from exerting its tumorigenic potential [23]. Other studies demonstrated that the expression levels of GSDMB and the utilization of Alu- versus long-terminal-repeat- (LTR-) derived promoters could be valuable markers in assessing the growth and development of gastric cancer [24,25]. Lutkowska et al. identified polymorphisms associated with cervical cancer [26]. One of these identified polymorphisms is the single-nucleotide polymorphism NC_000017.10:g.38051348A>G (rs8067378), located 9.5 kb downstream of GSDMB. This region is equivalent to the LTR and the cellular promoter, which could prompt GSDMB expression.
In this article, the results of the coexpression analyses show that GSDMB expression is significantly associated with that of the palmitoyltransferase complex, although this should be tested in further experiments. All of the above results indicate that GSDMB could be a potentially valuable biomarker or possible target in cancer treatment. To verify the clinical value of GSDMB in diagnosing ccRCC, ROC curve analysis was conducted. Our findings demonstrated that GSDMB had a significantly high AUC value for the identification of ccRCC. In addition, the results of the Kaplan-Meier curves and log-rank test show that ccRCC patients with high GSDMB mRNA expression have reduced OS and DSS compared with patients with low levels of GSDMB. Based on these findings, we conclude that GSDMB may function as a prospective diagnostic biomarker that can be of value in differentiating ccRCC from normal tissues.
The GSDM family has roles in the regulation of cell differentiation and proliferation, particularly in the process of pyroptosis. Pyroptosis is a new kind of programmed cell death that has vital functions in immune defense [27]. In 1992, it was observed for the first time in macrophages infected by the Gram-negative bacterium Shigella flexneri; however, the term only became established after 2001, when it was used by Lawrence H. Boise [28]. Pyroptosis arises via the activity of different stimuli and inflammatory caspases, which cleave GSDM family members and release their N-terminal effector domain and C-terminal inhibitory domain [29]. The N-terminal domain oligomerizes inside the cell membrane and creates pores, resulting in the rapid rupture of the plasma membrane and the discharge of the cell contents and proinflammatory mediators such as interleukin- (IL-) 1β and IL-18 [30]. The discharge of damage-associated molecular patterns from lysed pyroptotic cells can lead to the recruitment of immune cells and further stimulates inflammation. Studies have demonstrated that GSDMB is involved in pyroptosis: cleavage of the GSDMB protein by caspase-1 causes pyroptosis [31], GSDMB stimulates noncanonical pyroptosis by increasing the activity of caspase-4 [32], and caspase-3/-6/-7 can cleave GSDMB [15]. Nonetheless, the correlation between GSDMB expression and immune cell infiltration in ccRCC had not been studied. Using TIMER, our study showed that multiple tumor-infiltrating immune cells (CD4-positive T cells and neutrophils) were associated with GSDMB expression in ccRCC. In addition, we demonstrated a positive relation between GSDMB expression and the abundance of activated B cells, eosinophils, activated CD8 T cells, activated CD4 T cells, immature B cells, MDSC, monocytes, Tgd cells, NK cells, Th17 cells and Treg cells. These outcomes indicate that a potential association exists between GSDMB and immune infiltration in ccRCC. Besides, the relationship between GSDMB and PD1/PD-L1 in ccRCC was explored. We found that GSDMB expression was significantly positively correlated with PD1 in ccRCC, which suggests that tumor immune escape might be involved in GSDMB-mediated carcinogenesis of ccRCC. Nonetheless, continuing research should be conducted to further verify this association. A few limitations exist in the present article. Firstly, GSDMB's expression and its prognostic significance were investigated with publicly available online databases; more research analyzing clinical samples is needed to verify the above findings. Besides, to provide additional support on the precise process by which GSDMB impacts immune infiltration in ccRCC, in vivo/in vitro experiments need to be performed.
Conclusions
In conclusion, in the present study we have shown for the first time that the mRNA and protein expression of GSDMB is upregulated in ccRCC and positively associated with a high TNM stage. This study indicates that GSDMB may be recognized as a potential biomarker associated with poor prognosis, which can be used to detect ccRCC patients who may benefit from immune treatment.
Data Availability
The data presented in this study are available within the article materials.
Disclosure
A preprint has previously been published [33].
Conflicts of Interest
All the authors declare no conflict of interest.
Authors' Contributions
Yong Zhang and Yuanshan Cui were responsible for the design, analysis, and writing. Yong Zhang was responsible for the conceptualization, supervision, resources, visualization, proofreading, and validation. Yong Zhang, Zhongbao Zhou, and Yuanshan Cui were responsible for the funding acquisition and visualization. All authors have read and agreed to the published version of the manuscript. Yuanshan Cui and Zhongbao Zhou contributed equally to this study as co-first authors.
Automated Code Testing System for Bug Prevention in Web-based User Interfaces
Automation in testing user interfaces is a prerequisite for overcoming the major weaknesses of manual testing, such as time consumption, the inability to reproduce the sequence that generates a bug, and the tendency to repeat only the successful steps. Continuous testing represents an important step in the agile software development cycle because any features and changes added to the code need to be checked before their propagation to the production environment. Manual testing is a resource- and time-consuming process, so the solution is to make the entire workflow, from committing a change to publishing a new release, completely automated. The solution proposed in this paper is a framework for automated code testing and bug prevention that relies on Selenium, a framework that also supports headless testing, integrated with a Continuous Integration (CI) server such as Jenkins.
Introduction
At present, easy access to information and communication technologies represents one of the premises of a well-functioning modern society [7]. Software producers frequently work on improving their applications in an attempt to keep up with the pace imposed by modern society's needs. In recent years, software development has shifted from the traditional style towards Agile development, mainly driven by the need to accelerate the launch of software applications on the market. The traditional development style implies accurate but time-costly planning, development and major releases of software products. With Agile development, the software is produced in short cycles, and frequent releases are preferred. Whichever scenario is chosen, tests are required to ensure a reliable release of software that meets all the envisioned business and technical requirements. Validation is defined as "confirmation by examination and through provision of objective evidence that the requirements for a specific intended use or application have been fulfilled" [11]. Testing for validation should confirm that the software contains the intended feature set and operates according to the requirements established before development began. In practice, the actual cost of software testing is determined by how much it costs to reduce the uncertainty about the software's quality to the amount appropriate for that application [4]. Manual software testing obviously requires human resources, interface analysis and evaluation. Thorough manual testing is usually performed over long periods, but due to increased pressure from management, testers are often forced to release applications more quickly, a fact that frequently affects the quality of the application. Therefore, producers have turned to solutions for performing automated testing, which can be viewed as the automated version of manual testing. In practice there is still a lack of knowledge about automated testing efforts and their pay-off. In a survey of over 700 test professionals, 70 percent of respondents stated that they believe automation for software testing is a high-payoff endeavor; however, they were not sure why that was or how automation fit their project [2]. This shows an initial optimism in approaching automated testing but a lack of deeper understanding of how to proceed in a given business case.
2 Continuous Software Delivery
Continuous software delivery is a software engineering approach in which developer teams produce software in short cycles, ensuring that an application can be released safely at any time. This approach covers different aspects of iterative software development such as continuous integration, continuous delivery, continuous testing and continuous deployment [3]. It must be noted that the concept of continuous delivery is not the same as continuous deployment, which implies that updates are automatically deployed to the production environment. In continuous delivery the team takes the necessary measures to ensure that updates can be deployed to production but may choose not to do so, usually for business reasons. To work in continuous deployment, one must already be doing continuous delivery. Continuous integration refers to the process of permanently adding new commits to the source code. Each team member submits work as soon as it is finished; in this way each developer knows immediately whether their code meets the minimum standards, and bugs can be fixed immediately. Continuous delivery is based on continuous integration, and each commit is automatically tested at the time it is pushed. In addition to the automation component and integration testing, a continuous delivery system will include functional tests, regression tests, and possibly other tests, such as pre-generated acceptance tests. After passing the automated tests, the code changes are sent to a standby environment.
Fig. 1. Test Driven Development
Continuous deployment adds more automation to the software development process. After passing all the automated delivery tests, each commit is deployed into production as soon as it is available [8].
Agile development
Agile emerged in the 1990s from different lightweight software approaches as a response to some project managers' dislike of the rigid, linear Waterfall methodology. It focuses on flexibility, continuous improvement and speed [15]. Through this approach, software is developed in short cycles, thereby ensuring reliable and timely releases. This results in building, testing and releasing the software faster and more frequently. The approach has been proven to reduce the cost, time and risk of delivering critical changes to production, thereby allowing incremental updates to the production system [5]. Agile is an umbrella concept that includes methodologies such as Scrum, Extreme Programming, Kanban, Crystal, etc. The main phases in the Agile development cycle are planning, requirements analysis, product design, development and testing. The phases are not consecutive; they are flexible and can be done in parallel, as the design and requirements often change during product development and testing. In Agile there is continuous feedback and frequent face-to-face interaction, so the project team and stakeholders understand and prioritize the right requirements. Agile teams use user-story backlogs to manage the requirements. Before starting an iteration, the team agrees on the requirements they should meet for the next delivery. This collaborative approach ensures that the most important features are prioritized. Requirements are continually updated throughout the project as new information is presented.
Kanban
Kanban is a visual framework used to implement Agile that shows what to produce, when to produce it and how much to produce. It encourages small incremental changes to an existing system and does not require a specific configuration or procedure. A Kanban board, the tool for implementing the Kanban project method, is used during development. Traditionally, this tool was a physical board with magnets, plastic chips or sticky notes on a whiteboard to represent work items, but more and more project management software tools now provide online Kanban boards.
SCRUM
Scrum is an agile methodology for managing and planning software projects: a framework within which people can address and solve complex adaptive problems [1]. The Scrum team consists of a Product Owner, the Development Team and a Scrum Master. Scrum teams are self-organizing and cross-functional. Self-organizing teams choose how best to accomplish their work, rather than being directed by others outside the team [6]. The Development Team usually consists of a few members, but no fewer than three people. Functionalities, bug fixes and improvements are defined and tracked in the Product Backlog. The development process occurs iteratively; each iteration is called a Sprint and lasts 2 to 4 weeks. At the beginning of each Sprint, the team holds a meeting where the items in the Backlog are organized and tasks are allocated to developers. Usually team members request tasks themselves, based on their project experience and programming knowledge. During the Sprint the team meets for briefing sessions, and tasks can be re-allocated to ensure that the Sprint can end successfully. At the end of the Sprint the team holds another meeting, each assignment is reviewed, and any unfinished tasks are moved to the next sprint.
eXtreme Programming -XP
eXtreme Programming is a type of software development designed to improve quality and the ability to respond to changing customer requirements. There are systems whose functionality is expected to change every few months, but in many software environments dynamically changing requirements are the only constant. In an XP team, the developers, the managers and the customers work together, asking questions, negotiating scope and schedules, and creating functional tests.
The XP principles include feedback, assuming simplicity and embracing change [http://www.extremeprogramming.org/]. XP iterations last one or two weeks, compared to Scrum teams, which work in iterations lasting 2 to 4 weeks. XP teams are open to changing the content of their iteration if work has not yet started on a particular feature, so a new feature prioritized by the customer can be added to the existing sprint and the team will start working on it. XP recommends engineering practices, specifically techniques like test-driven development, a focus on automated testing, pair programming, simple design, refactoring, continuous integration and so on.
Automation of Software Testing
In the continuous software development cycle, testing is a prerequisite before propagating changes to the production environment. Automation is required for overcoming the major weaknesses of manual testing, such as time consumption, the inability to reproduce the sequence that generates an error, low coverage caused by the tendency to repeat actions, etc. The automation process relies on strategies, tools and artefacts that augment or reduce the need for manual or human involvement in unskilled, repetitive or redundant tasks [12]. The process of automating software testing is similar to a software development process. A big difference is the test assertion document, which must be created before development starts. For a given piece of software, there are several types of tests that can be automated [12]:
- Functional tests - checking the behavior of operations
- Regression tests - checking the system behavior
- Stress tests - simulating maximum loads to determine the system's capability
- Performance tests - checking whether the system is adequate and meets expectations
- Loading tests - determining the points at which the capacity and performance of the system become degraded to the point that hardware or software upgrades would be required
In the automation process, one of the goals is to run tests without user assistance. Continuous testing does not eliminate manual testing from the continuous delivery model.
Using continuous testing, the team constantly tests the most up-to-date version of the available code. Continuous testing still involves manual exploratory tests and user acceptance tests of new modules before implementing the corresponding automated tests. This testing approach differs from traditional testing in that the software is expected to change over time, regardless of a defined launch schedule.
Use case of automated testing for web platform
The use case presented in this article is the test automation of a complex web application used by the operators at ICI Bucharest - the Romanian Top Level Domain Registry. Operators authenticate in the app by username and password, with user roles and access levels already defined. The development team is composed of six members working with the Scrum methodology, including the Scrum master. Every Sprint lasts 2 weeks. For development, organization and discussion, the team uses Atlassian Stash, a Git repository management solution for enterprise teams that allows everyone to easily collaborate on Git repositories.
Application and environment
The system functions over a middleware architecture, meaning that it provides the means to connect the various software blocks into an application in which they can exchange information using relatively easy-to-use mechanisms. Middleware deals with component communication modes and can be used in a wide range of domains. The middleware provides a set of commands through an API for running specific tasks. The web applications interact with the middleware through API calls and are widely used by operators and clients. These applications are under continuous development and integration, have a stable user interface and were initially tested manually before propagation to the production environment. The manual testing process was extremely time-consuming for developers and operators, especially before releases; therefore we started planning and developing an automated testing solution. The servers and machines are monitored with dedicated solutions, and the middleware includes unit testing, so it was equally important to design automated functional tests on the client side to check that there are no errors in the code and that all elements are visible and operating correctly, as this application is heavily used, with thousands of operations performed each day. According to the OASIS Test Assertions Guidelines Version 1.0 [13], a document containing test assertions must be developed before implementing the actual tests. The document should be updated whenever a change in the web-side platform is required. Therefore, the starting point was the elaboration of the test assertion document containing all the operations that the user can perform in the app. This was a time-consuming process that covered all the inputs and outcomes of the user-side operations. To decrease the pressure on developers, the operators participated in the description of the tests.
Technologies for developing testing automation
One of the most widely used tools for automated code testing and bug prevention is Selenium, a framework that also supports headless testing and can be integrated with a Continuous Integration (CI) server such as Jenkins or Travis. Selenium consists of a suite of tools for automating web browsers and provides a complex set of testing functions for all types of web applications across multiple platforms, as it runs in most browsers and operating systems. It is highly flexible because it allows multiple options for locating and testing UI elements, with the goal of validating expected test results against real-time application behavior. Selenium provides interoperability with most programming languages, such as Python, C#, Java and Ruby, so it can easily be integrated into testing frameworks. Selenium basically consists of two main components: the Selenium WebDriver and the Selenium IDE. Selenium WebDriver is the core engine, driving the browser natively as a user would, either locally or on a remote machine using the Selenium Server. Selenium WebDriver accepts commands and sends them to a browser through a browser-specific driver, which forwards the commands and retrieves the results. Selenium WebDriver does not need a special server to execute tests; instead, WebDriver directly starts a browser instance and controls it [9]. Selenium IDE is a complete integrated development environment (IDE) for Selenium browser-based regression automation suites and tests that enables the fast development of bug-reproduction scripts. It facilitates recording, playing, editing and debugging tests. Selenium IDE was initially implemented as a Firefox add-on and has recently become available on Chrome as well. PhantomJS is a headless WebKit scriptable with a JavaScript API for web page interaction automation that enables navigation, taking screenshots and test assertions. These key features make it a common tool for running browser-based unit tests in a headless environment. Driven by the need to test web applications headlessly on a CentOS distribution, we started analyzing the various options for designing the architecture of a testing system that could ensure flexible and accurate application testing. As a first step, several configurations were analyzed, but only two were chosen for actual implementation and capability testing: Selenium WebDriver with the Firefox browser used with the Xvfb display server, and Selenium WebDriver with PhantomJS. Twenty test cases were generated using Selenium IDE, then exported and run. One of the tests performed on a form, consisting of asserting the presence of a text field after clicking a "Submit" button, failed on PhantomJS although on Firefox the test returned "ok". The functionality was then tested manually and was working. The conclusion was that even though PhantomJS is a functional headless browser, it is not a real browser that users actually use, while Firefox run with Xvfb provides much more accurate tests in the current environment.
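For illustration, here is a minimal test of the kind discussed above, written with the current Selenium 4 Python bindings; the URL and element IDs are hypothetical, and headless Firefox stands in for the Xvfb and PhantomJS setups of the original environment.
```python
import unittest
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.firefox.options import Options

class SubmitFormTest(unittest.TestCase):
    def setUp(self):
        opts = Options()
        opts.add_argument("-headless")          # no display server needed
        self.driver = webdriver.Firefox(options=opts)

    def test_text_field_present_after_submit(self):
        self.driver.get("https://example.test/form")    # hypothetical app
        self.driver.find_element(By.ID, "submit").click()
        found = self.driver.find_elements(By.ID, "confirmation-text")
        self.assertTrue(found, "text field not present after submit")

    def tearDown(self):
        self.driver.quit()

if __name__ == "__main__":
    unittest.main()
```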
Selenium is a powerful automated testing tool that is extremely flexible, allowing new functionality to be added to both Selenium test scripts and Selenium's framework to customize test automation. Jenkins, a Java-based open-source server used to deliver continuous builds, is the tool that completes this task. It has the capability to monitor any job, whether defined as a cron task or triggered from SVN or Git. A continuous integration server is designed to automatically or manually trigger complex workflows to build, test and deploy software components [10]. Although it is a platform focused on building software systems, Jenkins-CI can easily be expanded with over 800 extensions for complex computational tasks. Jenkins' powerful distributed model for CI can be used to run Selenium tests in parallel on a Jenkins cluster. For an Agile team, Jenkins provides everything needed for a robust continuous build system, and its extensibility allows the system to adapt to many different pre-existing environments. To ensure code stability, good collaboration between developers and fast release cycles, Jenkins is set up to build the Selenium tests automatically on every pull request made on the Stash server. The initial plan was to have a high degree of granularity and create tests for each element. During the development phase it was noted that this is time-costly, as each test implied authentication, form-completion procedures, middleware commands, the test itself and logout. As a result, the team changed its approach and created larger tests, for example a single test for an entire form instead of a test for each field. This decreased the level of granularity, but the time savings were a considerable advantage.
Automated testing system
The environment presented above in Section 4.1 runs on Linux-based servers. In this context, the main concern when designing the architecture of a system for functional testing of web applications is that there is no display output for the browser to launch in. To overcome this issue, the team configured the tests to launch the browser virtually using the Xvfb virtual framebuffer server and Firefox. Detailed test cases were specified in the test assertion documents, and 119 tests were created to cover them. The working procedure was to develop each test in Selenium IDE, running in Firefox on a machine with display output. The tests included assertions checking the presence of elements on the web page and continued with checking the messages returned if one or more fields were not filled in or were filled in incorrectly. After all these checks, the fields are filled with valid data (e.g., a valid email), the data is sent and the confirmation/success message is recorded. Programmers decided, depending on the case, which is the best option for checking the presence of elements - wait for, assert presence, or verify. Each of these procedures has several options. From the Selenium IDE shortcut menu, one can manually select the required assertion command from a list of commands provided by the Recording Addition. Each Selenium test was recorded and exported as a Python 2 unit test, and all tests were included in a single test suite (see the sketch below). The middleware API was often used to perform certain tasks in the background and decrease the time required by a test run: instead of using web forms to create data (registrant details, domain information, etc.), API commands were used for these tasks. The figure below shows the workflow of the testing system using the Jenkins Continuous Integration server. Each failure was reported to the developer who submitted the broken code and to the specified reviewers, and the code submission was automatically denied from the production environment. The overall impact of the deployed system was a decrease in the time spent on testing and in the number of bugs in the production environment. Initially, 5 developers and 3 operators tested the system manually before each major release for approximately one week; after the automated testing system was implemented, this was reduced to 2 developers and 1 operator performing manual tests.
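The bundling of exported tests into one suite whose exit code drives the CI result can be sketched as follows; the module names are hypothetical, and the original project used Python 2 rather than the Python 3 shown here.
```python
import sys
import unittest

loader = unittest.TestLoader()
suite = unittest.TestSuite()
for module in ("test_login", "test_domain_form", "test_contact_form"):
    suite.addTests(loader.loadTestsFromName(module))

result = unittest.TextTestRunner(verbosity=2).run(suite)
sys.exit(0 if result.wasSuccessful() else 1)   # non-zero exit fails the CI build
```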
Disadvantages
The development and maintenance of an automated testing solution requires considerable effort from the development team and costs for the client when it comes to complex web applications, especially when the user interface changes frequently. In these cases, creating and maintaining automated tests for dynamic content can prove a hard task.
The assertion document must be elaborated with all aspects in mind, including different account types in case the displayed content or the client interface differs between them. These situations require additional development effort.
Conclusions
Automated software testing primarily reduces human error, whether in development or in manual testing. Test results can be stored in a database, and advanced statistics can be developed. The decision on whether to perform automated tests varies from one organization to another, but in times when Agile development is spreading for faster software delivery, the automation of tests becomes a requirement for a successful implementation. Test automation on user interfaces is the solution when the interface is stable and provides key elements that are rarely or never changed. Altering the interface implies rebuilding the test cases, and a cost-benefit analysis must be done by the client before deciding to develop automated tests. By implementing automated testing, software producers gain significant cycle-time and quality improvements: the time cycles for software releases are shortened and the reliability of the UI is increased.
"Computer Science"
] |
Constraints on thermalizing surfaces from infrared observations of supermassive black holes
Infrared observations of Sgr A* and M87* are incompatible with the assumption that these sources have physical surfaces in thermal equilibrium with their accreting environments. In this paper we discuss a general parametrization of the energy balance in a horizonless object, which permits us to quantify how closely a horizonless object mimics the behavior of a black hole, and analyze the timescale in which its surface can thermalize. We show that the thermalization timescale is unbounded, growing large for objects that mimic closely the behavior of a black hole (and being infinite for the latter). In particular, the thermalization timescale is proportional to the time that energy spends inside the horizonless object due to propagation and interactions with the bulk. Hence, these observations can be used to quantitatively restrict the dynamical behavior of horizonless objects, without being able to discard the existence of a physical surface.
I. INTRODUCTION
While black holes have long been a central topic in gravitation theory, the fast-paced advances in gravitational-wave detection and very-long-baseline interferometry (VLBI) observations have revived interest in the possibility of probing the inner structure of these purely gravitational objects. Among the most striking consequences of these developments is the possibility of testing deviations from the standard solutions of general relativity describing black holes, which are singular and are therefore expected to be regularized by quantum-gravitational effects.
Quite remarkably, the viable resulting geometries endowed with an outer horizon were found to belong to essentially two families of solutions [1,2]. Both families admit horizonless ultracompact configurations as limiting cases (see [3] for details). Similar static solutions for ultra-compact quasi-black hole configurations can be found in the literature independently of the aforementioned limiting procedure (see e.g. [4-9] and references therein); as such, they appear to be a rather generic class of black hole mimickers and an interesting case study for observational constraints.
While there is by now a rich literature concerning the theory, phenomenology and viable constraints for different classes of black hole mimickers (see e.g. [10,11] for comprehensive reviews on this subject), as far as constraints on ultra-compact horizonless objects with a physical surface are concerned, a special role has recently been played by VLBI observations of supermassive black holes (Sgr A* and M87*) [12,13]. Here, we will focus on complementary arguments that constrain the possible existence of a surface using infrared observations of Sgr A* and M87* [13-18].
The arguments in the aforementioned papers were groundbreaking in demonstrating that constraining the existence of a surface was within reach with available data from infrared observations. In particular, these papers indicate that observations are incompatible with a physical surface in thermal equilibrium with its environment. Nonetheless, our understanding of black hole mimickers has advanced considerably in recent times, and it is not clear whether thermal equilibrium is reached within a sufficiently short timescale. In what follows we shall show that a more accurate characterization of the physics involved in these exotic objects has a profound impact on the implications of these early analyses, resulting in more complete physical models and thus refined constraints.
The present authors have pursued this line of research in previous works, in particular [10] and [19] (see also [20,21] by other authors). These works have shown that updating the assumptions in [14-18] can result in sizeable changes in the associated constraints, thus reaffirming the necessity for a critical revision of the underlying assumptions on which the latter are based.
We want to stress here that the most critical aspect for the evaluation of these constraints is an adequate parametrization of the energy exchange between the horizonless object and its environment. More specifically, equilibrium requires that the energy incident on the horizonless object is re-emitted, which will generally occur only after a certain re-emission timescale.
It is essential to account for this re-emission timescale in analyses that determine whether or not reaching equilibrium is possible. In this work, we study this problem for the first time, building a general parametrization of this energy exchange that includes a temporary absorption coefficient and timescale, and analyzing how the equilibrium timescale depends on these parameters.
II. ENERGY BALANCE IN A HORIZONLESS OBJECT
When a black hole is surrounded by matter, all energy that moves across the horizon is absorbed by the black hole, which adjusts dynamically by changing its mass and angular momentum (and possibly electric charge, though this is not particularly relevant in astrophysical situations). Of course, semiclassical black holes can in principle re-emit part of this energy in the form of Hawking radiation over long times; however, for most astrophysical black holes the cosmic microwave background is hot enough to counterbalance this tendency and induces further black hole growth (measured in terms of the horizon area) even in the absence of matter fluxes [22].
For horizonless objects the physics is more complex. In the most general situation, the net absorption associated with a black hole can be replaced by the following channels:

1. Absorption: A fraction κ of the incident energy can be permanently absorbed by the internal degrees of freedom of the object, changing the intrinsic state of the latter.

2. Temporary absorption/delayed re-emission: A fraction κ̃ of the incident energy can be re-emitted (inelastically) after a certain amount of time τκ̃, with the delay caused by a combination of propagation and interaction effects in the bulk.

3. Instantaneous re-emission: A fraction Γ of the incident energy can be re-emitted (inelastically) almost instantaneously, after interaction with surface degrees of freedom.

4. Reflection: A fraction Γ̃ of the incident energy can be reflected (elastically) without being absorbed by the object.

5. Transmission: A fraction T of the incident energy can travel freely across the object without any interactions taking place.
Note that the coefficient κ̃ can describe either absorption or instantaneous re-emission in the limits τκ̃ → ∞ and τκ̃ → 0, respectively. Hence, it can be understood as a more physical realization of these two (idealized) channels. In previous work [10], when applying this parametrization to a discrete model of energy exchange, we only considered these idealized channels (also, we implicitly set T = 0), but here we want to go a step further.
The specific behavior of a given horizonless object is model-dependent. In fact, our knowledge of the dynamics of these objects is not detailed enough to determine which of the channels above is dominant for a given model. Hence, from a phenomenological perspective it is reasonable to consider all of them as equally possible, and cast constraints on the different parameters involved.
Given the above five parameters κ, κ̃, Γ, Γ̃ and T, respectively introduced for the five items listed above, we can easily see that they are sufficient for characterizing a broad class of horizonless black hole mimickers. An object with only κ ≠ 0 will be the closest in behavior to a black hole. On the other hand, a horizonless object with only κ̃ ≠ 0 will behave like a black hole for a certain timescale τκ̃ that can be very long depending on the model. The remaining limiting cases, only Γ ≠ 0 and only Γ̃ ≠ 0 respectively, display starker deviations with respect to black holes, and could potentially be constrained by VLBI observations [13,23]. A similar comment applies to objects with only T ≠ 0 [24,25]. Now that we have introduced our parametrization, let us turn in the next section to a discussion of previous works that have explored the role of these parameters in infrared and VLBI observations of supermassive black holes.
III. RELATION TO PREVIOUS WORK
The parametrization introduced in the previous section aims at being complete regarding the possible types of interactions between the incoming energy and the horizonless object. Previous works on the subject consider a subset of these behaviors, which we briefly review in the following, together with the reasons behind such choices.
Most of the works below assumed spherical symmetry (except when otherwise noted). Hence, we can introduce an effective radius of the object R, together with a dimensionless measure of compactness, µ = (R − 2M)/2M.
• The original works [14-18] assumed instantaneous re-emission of the incident radiation by the ultracompact object, i.e. κ = κ̃ = Γ̃ = T = 0 and only Γ ≠ 0. In the argument provided by the authors, this follows from the consideration that, in thermal equilibrium, Kirchhoff's law implies that all energy received by the horizonless object is instantly re-emitted. On general grounds this would imply that, if energy is initially distributed among the other channels for a given model of horizonless object, it is the dynamical evolution towards equilibrium that progressively re-distributes it until the energy balance can be adequately described by κ = κ̃ = Γ̃ = T = 0. The authors then showed that thermal equilibrium is incompatible with infrared observations.
• A further step was taken in [20,26] with the analysis of the timescale required for equilibrium to be reached, still under the assumption κ = κ̃ = Γ̃ = T = 0, so that only Γ ≠ 0. This analysis showed that gravitational lensing plays an important role in attaining thermal equilibrium, a role previously unaccounted for. Indeed, with increasing compactness there is a closing escape angle ∆Ω for rays leaving the object's surface: for µ ≪ 1 one can show that ∆Ω/2π ≈ 27µ/8 + O(µ²) [10] (in the following, we will define ∆ = ∆Ω/2π). In turn this implies that the timescale in which equilibrium is reached must scale at least as 1/µ. Thus, the equilibrium assumption fails to hold for µ small enough. This means that the incompatibility between thermal equilibrium and infrared observations can be translated into a constraint on µ (or, equivalently, R).
• The timescale to reach equilibrium was re-analyzed in [10], together with the introduction of non-zero coefficients κ ≠ 0, Γ ≠ 0 and Γ̃ ≠ 0 (the coefficient κ̃ was implicitly considered in the general parametrization introduced there, but set to zero for the analysis of equilibrium, together with T = 0). For this more general situation, the constraints are now formulated as (generically nonlinear) combinations of the available parameters. Of particular importance is the absorption coefficient, as the constraints are very sensitive to non-zero values of the latter.
• Once rotation is included [27], re-emission is not uniform throughout the surface.
This effect increases with spin, and makes previous calculations of the timescale in which equilibrium is reached inapplicable. In particular, the re-emission pattern of equilibrium in the presence of rotation, and the timescale in which this pattern can arise, are unknown.
• In [13], an updated account of the original works [14-18] is provided, also taking into account the aforementioned effect of gravitational lensing [20,26]. This updated discussion still has κ = κ̃ = Γ̃ = T = 0 and only Γ ≠ 0, as it focuses on the equilibrium state. That neglecting absorption is questionable was stressed in the follow-up paper [19], which emphasized again the profound impact that taking it into account can have on the obtainable constraints.
As is apparent from the brief review above, the arguments of [14-18] have generated widespread interest, and further refinements have been published by different groups. A possible point of contention is whether or not a non-zero value of κ is physically reasonable.
Let us discuss this in some detail in the next section.
For completeness, before focusing on the role of κ and κ̃, we include a list of works that have used (part of) the parametrization above to model VLBI observations of alternatives to black holes. VLBI observations provide constraints complementary to the infrared constraints that are the subject of this paper. The parametrization introduced in [10], which does not include the transmission coefficient, was used in [23] to determine the image features associated with reflection and re-emission, providing an exhaustive exploration of the parameter space spanned by R, Γ and Γ̃. Complementary models including only a non-zero transmission coefficient T were the focus of [24,25]. On the other hand, [13] also discussed the features associated with reflection for specific values of the parameters R and Γ̃.
IV. ABSORPTION AND EQUILIBRIUM
It is clear that κ ≠ 0 prevents equilibrium, in the sense of a perfect balance between received and instantaneously re-emitted energy, from being reached. The same comment holds true for any of the other channels (delayed re-emission, reflection and transmission) discussed in Sec. II.
Indeed, any energy deposited in any of these channels cannot go into the instantaneous re-emission channel, thus always resulting in a deficit in the re-emission channel with respect to the incident energy.
Hence, a possible objection is that assuming κ ≠ 0 is incompatible with equilibrium.
Note, however, that this is actually what happens if the central object is a classical black hole. A classical black hole can never be in equilibrium with its accreting environment, due to its purely absorptive nature (κ = 1) and the fact that all incident energy is stored in internal degrees of freedom. The same comment applies to semiclassical black holes (|κ − 1| ≪ 1), as the features of re-emission of energy in the form of Hawking radiation are constrained as a function of the black hole mass, and cannot be arbitrarily adjusted to achieve equilibrium with the accreting environment.
Horizonless objects are expected to mimic closely the behavior of black holes. Even though the mimicked behaviors are model-dependent, it is reasonable to expect that at least part of the incident energy will be transferred to internal degrees of freedom, and that not all of this energy can be re-emitted in arbitrary amounts to achieve equilibrium. A simple argument in this sense consists in the fact that an ultra-compact object must be able to convert at least some of the incident energy into expansion, so as to avoid forming a trapping horizon as a consequence of the accreted energy [28].
These aspects certainly depend on the dynamics of specific models, which is not well understood. It is therefore completely unknown whether it is reasonable to assume that a horizonless object must reach equilibrium with its accreting environment. It may well be that not being able to reach equilibrium (or not being able to do so on astrophysically relevant timescales) is a feature of horizonless objects.
Even if we assume that κ = 0, there is still the issue that, while all incident energy in this case will eventually be radiated away, the amount of time it takes for the re-emission to take place, τκ̃, is unknown and again model-dependent. For a classical black hole this time is infinite, so one can expect that a good black hole mimicker will have a relatively long timescale for delayed re-emission. This leads to the natural question of how this delay in re-emission impacts the achievement of thermal equilibrium, in particular the associated timescale.
As the role of absorption κ has been studied in previous papers, we will focus on the role of temporary absorption for the rest of the paper. For the sake of comprehensiveness, we will discuss the energy exchange between a horizonless object and its accreting environment in full generality, and then focus on the situation in which κ = Γ = Γ̃ = T = 0 but κ̃ ≠ 0, analyzing the amount of time that it takes for a horizonless object to achieve equilibrium with its environment as a function of the timescale of energy release. We will discuss how this model reproduces the behavior analyzed previously in suitable limits (very short and very long re-emission times, respectively), and the new insights that it provides into the problem of equilibrium.
V. A DISCRETE MODEL OF ENERGY EXCHANGE
In this section, we introduce a discrete model to describe the energy exchange between a general horizonless object and its environment.
Let us consider a discretization of time such that we use the set of integers {1, ..., n} to denote different moments in time. All time intervals have the same size ∆t, which we take to be roughly proportional to the light-crossing time τ_S = r_S/c. We assume that there is a uniform energy injection x in each interval. Also, x_1, …, x_n will denote the incident energy (that is, the energy that reaches the object from its environment) at different moments, and ε_1, …, ε_n the energy released by the object at the same times.
In App. A we discuss the energy balance for different time intervals in order to derive recursion relations, while Fig. 1 provides a schematic summary. From these relations one can write the total incident energy X_n and the total escaping energy E_n for any interval n ≤ N; due to temporary absorption, Eqs. (1)-(4) must be completed with further modifications for n ≥ N + 1. Let us now take a closer look at the physics in these recursion relations.
VI. THERMALIZATION TIMESCALE

The recursion relations discussed in the previous section allow us to study general situations for arbitrary values of the five parameters κ, κ̃, Γ, Γ̃ and T. However, as we want to understand the role played by temporary absorption in the achievement of thermal equilibrium, we will focus here on the simplified case in which only κ and κ̃ are non-zero. The resulting recursion relations cannot be summed analytically. Numerical evaluation is always possible, though we have also been able to find an analytical approximation, as discussed in the following.
For the purposes of finding a suitable analytical approximation, let us consider for a moment κ̃ = 0 (no temporary absorption), for which the recursion relation can be summed analytically [10,19]. From the resulting expression, it follows that for sufficiently long timescales the outgoing flux reaches a steady state. Let us now come back to the case in which κ̃ ≠ 0. Delayed re-emission introduces a delay between two successive bounces on the surface for a fraction of the energy. It is then reasonable to conjecture that taking the expression without delayed re-emission, and replacing the timescale τ_S with the average timescale between two consecutive bounces for the fraction of energy that eventually escapes the gravitational field, could provide a good analytical approximation. A fraction κ̃ of the energy takes a time τ_S + τκ̃ = (N + 1)τ_S between two consecutive bounces, whereas the remaining energy takes a time τ_S. Therefore, the average time is given by τ̄ = (1 − κ̃)τ_S + κ̃(N + 1)τ_S = (1 + κ̃N)τ_S, and we can make the corresponding analytical guess. It is straightforward to check numerically whether this provides a good approximation for the flux of energy; Fig. 2 shows that this is indeed the case. We can therefore use Eq. (13) as a very good approximation of the outgoing flux of energy. From this result, we can infer that the presence of delayed re-emission does not alter the asymptotic value of the energy flux once the steady state is achieved; rather, it prolongs the time it takes to reach the steady state. In fact, the steady state is now reached on timescales longer by the factor τ̄/τ_S = 1 + κ̃N.

We can see that N plays an important role in the thermalization timescale. While κ̃ is by construction bounded from above by 1, N can be unbounded. In fact, taking the limit N → ∞ recovers the behavior of a black hole, which means that larger values of N yield better black hole mimickers. Hence, even in the absence of absorption, the presence of temporary absorption can significantly weaken the constraints. For instance, in the case of Sgr A*, if we assume κ = κ̃ = 0, the observational constraint [13] Ė/Ṁ < 10⁻³ (note that [13] provides a tighter constraint of 10⁻³ instead of the 10⁻² in the original paper [14]) implies a bound in which we have used the Eddington timescale T ≈ 3.8 × 10⁸ yr to estimate the typical timescale for the variation of its accretion rate. Note that the argument above requires stationarity of the source to be strictly applicable. Source variability can disrupt equilibrium or delay its onset in a way that is difficult to estimate using the formalism above.
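As a cross-check of this picture, the discrete model can be iterated numerically. The following minimal sketch makes one assumption not fixed by the text above, namely that the fraction 1 − κ − κ̃ of the incident energy is re-emitted promptly, with a fraction ∆ escaping and the rest lensed back one interval later; all parameter values are illustrative.

import numpy as np

def simulate(kappa, kappa_t, N, delta, x=1.0, steps=20000):
    # Escaping flux per interval for a constant injection x per interval.
    escaped = np.zeros(steps)
    incoming = np.zeros(steps + N + 2)  # energy returning to the surface later
    for n in range(steps):
        incident = x + incoming[n]
        incoming[n + N + 1] += kappa_t * incident      # delayed re-emission after N steps
        emitted = (1.0 - kappa - kappa_t) * incident   # prompt re-emission
        escaped[n] = delta * emitted                   # fraction escaping the lensing
        incoming[n + 1] += (1.0 - delta) * emitted     # lensed back onto the object
    return escaped

for N in (0, 50, 500):
    flux = simulate(kappa=0.0, kappa_t=0.3, N=N, delta=0.01)
    # With kappa = 0 the late-time flux approaches the injection rate x = 1,
    # but the approach is slower for larger N (longer thermalization time).
    print(N, flux[len(flux) // 10], flux[-1])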
In some cases (e.g. [26]), the Hubble time is used instead of the Eddington timescale, which changes the bounds below but can also lead to an overestimation of these constraints due to the non-equilibrium nature of the source. Another aspect to take into account is that the accretion rate used in the equations above is also changing in time, and was likely higher in the past. Hence, there is some ambiguity in the precise numerical values of these constraints; a definitive resolution of these ambiguities would require a more thorough understanding of the evolution of the coupled system composed of the horizonless central object and its accreting environment.
Equation (16) then implies a corresponding bound on the compactness. On the other hand, when κ̃ ≠ 0, the resulting constraint can be much weaker than the one given in Eq. (16) for N large enough.
A fundamental question to answer is therefore the value that N typically takes for specific models such as gravastars [4-6] or semiclassical relativistic stars [7-9]. Unfortunately, the dynamics of these models is not yet understood well enough to extract the value of N. Nevertheless, it is possible to illustrate that N can become very large for black hole mimickers, due to the gravitational time delay associated with propagation effects.
Let us consider a very simple toy model, constructed in spherical symmetry by demanding that the Misner-Sharp-Hernandez mass [29,30] for each sphere is an amount ε away from the critical value that would yield the formation of a horizon [31]. The interior of such a stellar structure [32-34] is approximately described by the metric in Eq. (19), where dΩ² is the usual line element on the unit 2-sphere. The re-emission timescale for incident energy can be split as the sum of the time of propagation inside the structure plus the time associated with interactions with the latter. From Eq. (19), we can see that propagation effects alone already imply a long re-emission timescale for this model. For ε ≪ 1, the thermalization timescale can become comparable to or even larger than the Hubble time τ_H, which means that thermalization is not possible in practice for these objects if ε ≲ 10⁻²² (M/M_⊙) κ̃, with M_⊙ the mass of the Sun. Let us stress that using the Hubble time is a conservative estimate; a more realistic estimate would be provided by the variability timescale of a particular astrophysical system (e.g. Sgr A*), which should be several orders of magnitude lower than τ_H.
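A rough numerical check of these orders of magnitude (a sketch assuming, as a hypothetical scaling, that the delayed re-emission time grows as τ_S/ε, so that the thermalization time is of order κ̃ τ_S/ε):

G = 6.674e-11          # m^3 kg^-1 s^-2
c = 2.998e8            # m/s
M_SUN = 1.989e30       # kg
TAU_H = 4.4e17         # Hubble time in seconds (approximate)

def thermalization_time(M, eps, kappa_t=1.0):
    tau_S = 2.0 * G * M / c**3   # light-crossing time of the horizon scale
    return kappa_t * tau_S / eps # assumed scaling of the delayed re-emission

M = 4.3e6 * M_SUN                # roughly the mass of Sgr A*
for eps in (1e-10, 1e-16, 1e-22):
    t = thermalization_time(M, eps)
    print("eps=%.0e: t/tau_H = %.2e" % (eps, t / TAU_H))
# Under this scaling, thermalization exceeds the Hubble time once eps drops
# below roughly 1e-22 * (M / M_SUN), in line with the estimate in the text.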
VII. CONCLUSIONS
Modeling the interactions between horizonless objects and their accreting environments is essential to cast constraints on these alternatives to black holes. In this paper, we have presented a general parametrization of these interactions, and focused on understanding the role that temporary absorption plays in reaching a steady state.
Temporary absorption is necessary for the horizonless object to adapt dynamically to its environment and eventually be able to reach a steady state. In particular, this is necessary to avoid the formation of horizons. Hence, a non-zero value of κ̃ seems to be unavoidable based on known physics.
The second parameter necessary to describe temporary absorption is the re-emission timescale, which we have parametrized in terms of N. We have shown that this parameter has an important impact on the thermalization timescale and that it can become arbitrarily large, preventing thermalization from happening altogether for sufficiently compact horizonless objects.
In summary, the fact that equilibrium is not observed in systems such as Sgr A* and M87* can be used to place constraints on horizonless objects, ruling out models in which the thermalization timescale is short enough that expecting equilibrium is reasonable. However, we have shown with simple arguments that ultracompact objects can have thermalization timescales that are too long for equilibrium to be feasible in our universe. Hence, it is possible that supermassive horizonless objects, not in equilibrium with their accreting environments, exist in nature.
Appendix A: Discretized energy exchange

Let us discuss in detail the different components at play in the energy exchange between a horizonless object and its environment, within the discretized model used in the paper. This provides a derivation of the recursion relations in Sec. V.
For the first interval, the energy balance is as follows:
• There is an injection of energy x onto the horizonless object.
• The total amount of incident energy is x_1 = x.
• A fraction of energy κx_1 is permanently absorbed by the object.
• A fraction κ̃x_1 is temporarily absorbed by the object, and will be re-emitted after a time τκ̃ = Nτ_S.
• A fraction Γx_1 is re-emitted instantaneously. From this fraction, an amount ε_1 = ∆Γx_1 escapes the gravitational well of the object, while the remaining amount (1 − ∆)Γx_1 is gravitationally lensed back to the object.
• A fraction Γ̃x_1 is reflected, escaping the gravitational well of the object.
• A fraction Tx_1 travels across the object without interaction.
For the second interval:
• There is an injection of energy x onto the horizonless object.
• A fraction of energy (1 − ∆)Γx_1 returns to the surface after being re-emitted in the first interval.
• A fraction of energy κ(x_1 + x_2) is permanently absorbed by the object.
• A fraction κ̃(x_1 + x_2) is temporarily absorbed by the object, and will be re-emitted after a time Nτ_S.
• A fraction Γ(x_1 + x_2) is re-emitted instantaneously. From this fraction, an amount ∆Γ(x_1 + x_2) escapes the gravitational well of the object, while the remaining amount (1 − ∆)Γ(x_1 + x_2) is gravitationally lensed back to the object. Let us define ε_2 = ∆Γx_2, so that the total energy that escapes is ε_1 + ε_2.
• A fraction Γ̃(x_1 + x_2) is reflected. From this fraction, an amount Γ̃x_1 escapes the gravitational well of the object, while the remaining amount Γ̃x_2 is gravitationally lensed back to the object.
• A fraction T(x_1 + x_2) travels across the object without interaction.
For the third interval:
• There is an injection of energy x onto the horizonless object.
• A fraction of energy (1 − ∆)Γ(x_1 + x_2) returns to the surface after being re-emitted in the previous interval.
• A fraction of energy Γ̃x_2 returns to the surface after being reflected in the previous interval.
• The total amount of incident energy is X_3 = x_1 + x_2 + x_3.
• A fraction of energy κ(x_1 + x_2 + x_3) is permanently absorbed by the object.
• A fraction κ̃(x_1 + x_2 + x_3) is temporarily absorbed by the object, and will be re-emitted after a time Nτ_S.
• A fraction Γ(x_1 + x_2 + x_3) is re-emitted instantaneously. From this fraction, an amount ∆Γ(x_1 + x_2 + x_3) escapes the gravitational well of the object, while the remaining amount (1 − ∆)Γ(x_1 + x_2 + x_3) is gravitationally lensed back to the object. Let us define ε_3 = ∆Γx_3, so that the total energy that escapes is ε_1 + ε_2 + ε_3.
• A fraction Γ̃(x_1 + x_2 + x_3) is reflected. From this fraction, an amount Γ̃x_1 escapes the gravitational well of the object, while the remaining amount Γ̃(x_2 + x_3) is gravitationally lensed back to the object.
• A fraction T(x_1 + x_2 + x_3) travels across the object without interaction.
For the (N + 1)-th interval:
• There is an injection of energy x onto the horizonless object.
• A fraction of energy (1 − ∆)Γ(x_1 + … + x_N) returns to the surface after being re-emitted in the previous interval.
• A fraction of energy Γ̃(x_2 + … + x_N) returns to the surface after being reflected in the previous interval.
• The fraction of energy κ̃x_1 temporarily absorbed in the first interval is released after the delay τκ̃ = Nτ_S.
• The total amount of incident energy is X_{N+1} = x_1 + … + x_{N+1}.
• A fraction of energy κX_{N+1} is permanently absorbed by the object.
• A fraction κ̃X_{N+1} is temporarily absorbed by the object, and will be re-emitted after a time Nτ_S.
• A fraction Γ̃X_{N+1} is reflected. From this fraction, an amount Γ̃x_1 escapes the gravitational well of the object, while the remaining amount Γ̃(x_2 + … + x_{N+1}) is gravitationally lensed back to the object.
• A fraction TX_{N+1} travels across the object without interaction.
For the (N + 2)-th interval:
• There is an injection of energy x onto the horizonless object.
• A fraction of energy (1 − ∆)Γ(x_1 + … + x_{N+1}) returns to the surface after being re-emitted in the previous interval.
• A fraction of energy Γ̃(x_2 + … + x_{N+1}) returns to the surface after being reflected in the previous interval.
• The fraction of energy κ̃x_2 temporarily absorbed in the second interval is released after the delay τκ̃ = Nτ_S.
• The total amount of incident energy is X_{N+2} = x_1 + … + x_{N+2}.
• A fraction of energy κX_{N+2} is permanently absorbed by the object.
• A fraction κ̃X_{N+2} is temporarily absorbed by the object, and will be re-emitted after a time Nτ_S.
Figure 1: Schematic proof of Eqs. (7) and (8), with time in the vertical direction. The quantities ε_k and x_k are the released energy and the incident energy in the interval k, respectively. These quantities can be related to each other and also to the corresponding quantities in the previous time interval k − 1, as shown in the figure (see also App. A for a complementary discussion). | 6,837.6 | 2023-06-30T00:00:00.000 | [
"Physics"
] |
Enhancing Software Process Management through Control Charts
In the software development life cycle, Software Process Management (SPM) plays a significant role throughout the execution of a project. In this study, the application of control charts for analyzing the stability of the software process and defects in the software product is discussed. This paper discusses the analysis of the impact of rework effort, defect density, inspection performance and productivity by using control charts. It also explains the benefits and challenges of using control charts in a software organization.
Introduction to Software Process Management
With the increasing interest in the predictability and effectiveness of software development practices, SPM (Software Process Management) has become a crucial aspect of the Software Engineering field. The effectiveness of the software development process depends upon how well the software process model is aligned. In order to carry out effective data-based decision making, it is highly essential that the software process model is managed and analyzed accurately. Growing interest in developing efficient approaches to SPM has led to a focus on software process modeling, such as formal analysis and fine-grained modeling. Recently, several formal software process modeling approaches have been introduced, largely based on Petri nets [1]. According to Khan [2], software process management is used to make rational and reasonable decisions, and methods for the systematic analysis of process execution and the relevant environmental factors are essential. However, many formal approaches concentrate primarily on the techniques of process modeling rather than the techniques of software process management. These formal analysis techniques are targeted mainly at configuring a proper model mathematically; they offer major ways to develop software process management and its maturity with a suitable software process model. But simply setting up a better software process model does not necessarily ensure efficiency in the actual process activities.
In this paper, the authors have tried to highlight the role of control charts in SPM and the analysis of process parameters like rework, productivity and defect density using control charts. The paper also discusses the challenges faced by software organizations in using control charts, followed by the conclusion and future work.
Role of Control Charts in Software Process Management
The focus on SPC techniques in the software industry has been growing for the last decade. Several organizations have reached advanced maturity levels of software process improvement models, including the Capability Maturity Model (CMM) [3], Capability Maturity Model Integration (CMMI) [4] and SPICE [5,6]. These models direct software companies to implement SPC techniques as an important step towards achieving higher process maturity levels. The software process improvement models suggest that control charts be applied to project-level process control and to organizational-level processes for improvement purposes.
There has also been an increased focus on sub-process monitoring and probabilistic prediction models in CMMI-certified organizations. As per CMMI, sub-process monitoring and defect prediction model (DPM) implementation are mandatory under the Quantitative Management process areas.
A sub-process is a subset of a process, but it represents a significant and independent set of activities that can be controlled. All classical control charts include a centerline and control limits on both sides of the centerline. The centerline is usually the average of the set of values. The two control limits, the Upper Control Limit (UCL) and the Lower Control Limit (LCL), are set at ±3 sigma, where sigma denotes the standard deviation (the distance from the center). 3-sigma limits result in very few false alarms, and points outside the limits are highly likely to be due to special causes.
Card [7] has mentioned that control charts are refined statistical data analysis tools that use lower and upper limits to detect variations. These charts are most commonly used in statistical process control analysis. A control chart is used to control and assess the variability of product or process characteristics. Generally, preparing a control chart involves setting up upper and lower control limits for the deviation of the data from the average value of a data set. If an observed data value lies outside the control limits, it triggers analysis. The use of SPC and control charts may help to reduce variation in the implementation of a defined software process. Figure 1 shows a sample control chart. The reasonable lower and upper bounds for a characteristic may be set up in different ways: sometimes they may reflect the expectations of customers; alternatively, the bounds may be based on the experience of past software process management. The standard deviation from the average value may be used to set these limits. For example, if one SD (Standard Deviation) is used as the lower and upper control limit, then an observation that falls outside these limits can be raised for possible alarm and attention. In software project management, the idea of a control chart, along with the use of the SD as the lower and upper limit, may be used to examine and track a particular characteristic of a methodology or a product. In the case of a product, a usability characteristic may be examined through usability testing [8].
Florac and Carleton [9] have described that, to make use of control charts for software processes, the process to be statistically controlled must be identified first. The features that can be studied as outputs of this process include defect density, productivity, review performance and rework effort, among others. The major focus of control charts is that advantageous statistical process control guidelines and tools are offered for process improvement, process management and measurement within the software field.
The most frequently used chart for individuals data is the XmR chart. XmR charts are especially used if little is known about the underlying distribution, or if the justification for assuming a binomial or Poisson process is questionable. An XmR chart can be used to monitor turnaround time across production problems or coding effort across units. Charts for attribute data include the u-chart for Poisson data and the p-chart for binomial data.
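As a minimal sketch of how XmR limits can be computed (the factor 2.66 is the standard XmR constant 3/d2 with d2 = 1.128 for moving ranges of size 2; the data values below are invented for illustration):

import statistics

def xmr_limits(values):
    # Individuals (X) chart: centerline plus limits at +/- 2.66 * mean moving range.
    center = statistics.mean(values)
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    mr_bar = statistics.mean(moving_ranges)
    return center, center - 2.66 * mr_bar, center + 2.66 * mr_bar

# Example: coding effort (person-hours) observed across ten units.
effort = [12.0, 9.5, 11.2, 14.1, 10.3, 9.8, 13.0, 30.0, 10.9, 11.7]
center, lcl, ucl = xmr_limits(effort)
for i, v in enumerate(effort, start=1):
    flag = "  <-- outside limits (possible special cause)" if not lcl <= v <= ucl else ""
    print("unit %d: %.1f%s" % (i, v, flag))
print("CL=%.2f, LCL=%.2f, UCL=%.2f" % (center, lcl, ucl))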
Analyzing Rework Effort Using Control Chart
Lantzy [10] has mentioned that rework refers to the total hours invested owing to unplanned mistakes or changes. Some of the modifications may be additional customer demands, which are considered enhancements. Whether such rework is classified as enhancement or not depends on the purpose of estimating the rework effort. Rework effort is a good indicator of the quality of a software process, as it reveals how much effort is invested due to former mistakes rather than doing things "first time right". Rework increases the costs of a software project and does not add any value to the project. Any project that is completed successfully the first time requires no rework. Rework effort therefore stands as one of the major factors behind the cost incurred by an organization owing to the poor quality of the developed software. Cost of Poor Quality (COPQ) is a software quality metric used to determine the cost incurred on poorly delivered software. Rework effort is one of the internal failure factors that increase COPQ [11]. Houston [12] classifies the costs of software quality into two groups: the costs of achieving quality and the costs due to lack of quality. Rework plays a leading role in the second group. Rework shows the influence of defects directly, beyond their number or cost, through its focus on the amount of effort. As a result, defect counts and rework effort are considered complementary evaluations for analyzing software products and process quality. An operational definition of the percentage of rework can be given as:
Percentage of rework = (Effort of rework) / (Total effort)

Conradi et al. [13] have mentioned that the percentage of rework gives an understanding of the associated cost or amount of rework with regard to the total effort. The cost of rework in a software environment could be the cost incurred in fixing defects under warranty, or in fixing user acceptance testing defects. A sample individuals chart is shown in Figure 2.
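A direct transcription of this operational definition, with made-up effort figures:

def rework_percentage(rework_effort_hours, total_effort_hours):
    # Percentage of rework = effort of rework / total effort.
    return 100.0 * rework_effort_hours / total_effort_hours

print(rework_percentage(120.0, 800.0))  # 15.0 -> 15% of total effort was rework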
The points that lie above the UCL reveal instances in which several defects were identified per unit of effort, overrunning the process performance limits. This might be owing to low product quality or to a highly effective inspection process. At the same time, points that lie below the LCL represent a shallow inspection process in which numerous product defects remain undetected, or greater product quality where the product has a genuinely negligible number of defects. In both cases, defect density measures are used to gain a proper understanding when interpreting the results [14].
Analyzing Productivity Using Control Chart
Florac and Carleton [9] have described productivity as the amount of output generated per unit of input invested. For example, the output may be the number of painted coke bottles, the volume of refined petroleum, or the length of a pressed metal sheet. Input measures come in different types, such as the weight of the paint used, the amount of catalyst added for refinement, or the electrical energy needed to operate the pressing machines. In software development, the major outputs are work products, namely documents and code, and size estimation is used to express the amount of work product generated. On the other side, Jalote and Saxena [15] have mentioned that the main input for work product production is the human resource (HR). Effort is used to quantitatively measure the utilization of the workforce. The productivity measure then becomes the work product size generated per unit of effort. Productivity is a critical and important measure, as it gives direct insight into how effectively the software processes are executed. Productivity data is used to make achievable plans, to visualize the effects of improvement activities, and also to predict deficiencies in software processes. The productivity factor is most important for senior management, since greater productivity lowers prices, increases opportunities for profit and strengthens competitiveness in the market.
Figure 3 shows the individual control chart for productivity.
In Figure 3, data points that surpass the upper control limit represent an efficient study or a quick analysis process. Similarly, a low productivity measure may represent an ineffective analysis of needs, or a very brief, complex, or meticulous study. On the other hand, if the software documents are plotted in time order on graphs, an increasing or decreasing trend can be identified, and the researcher is able to view the influence of any improvement initiatives. After detecting outliers, further analysis is essential to find the causes of deviation and to take appropriate corrective measures [16].
Analyzing Defect Density Using Control Chart
According to Radice [17], defect density refers to the number of defects per unit size of the product. The defect density formula can be defined as:
Defect density = (# of defects) / (size of the product)
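As a minimal sketch, assuming roughly equal-sized inspected artifacts (so that a c-chart, the count-based cousin of the u-chart mentioned earlier, applies), 3-sigma limits for defect counts can be computed as follows; the counts are invented for illustration:

import math

def c_chart_limits(defect_counts):
    # c-chart: centerline c_bar with limits at c_bar +/- 3*sqrt(c_bar).
    c_bar = sum(defect_counts) / len(defect_counts)
    half_width = 3.0 * math.sqrt(c_bar)
    return c_bar, max(0.0, c_bar - half_width), c_bar + half_width

counts = [4, 7, 3, 5, 6, 14, 4, 5]  # defects found per review
c_bar, lcl, ucl = c_chart_limits(counts)
print("CL=%.2f, LCL=%.2f, UCL=%.2f" % (c_bar, lcl, ucl))
for i, c in enumerate(counts, start=1):
    if not lcl <= c <= ucl:
        print("review %d (count %d) is outside the limits" % (i, c))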
The interpretation and analysis of the metric data rest on the consideration that, on average, researchers have a particular expected defect count for each unit of software artifact being inspected. Identifying most of the anticipated defects during the software inspection process is an indicator of the effectiveness of the inspection process. A further consideration behind these impacts is that statistical process control needs rational data sampling.

Figure 3. Sample individuals control chart for productivity (source: Jalote and Saxena, 2002).

If a sample of data has mixed distributions, the variation will increase and the sensitivity of the control chart will be greatly reduced. A study conducted by Kumuro [18] gives an example of the impact of review speed on the quality of the review process. The graph below shows the XmR chart plotted with the review speed values of a specific document review (source: Kumuro, 2006). One review is detected as an outlier and its values are encircled in Figure 4.
Figure 5 illustrates the Z chart of defect density values plotted for the review data depicted above. The flagged value is not actually a mistake, but it is relatively small compared to the values adjacent to it. Thus it can be inferred that this review was conducted too rapidly, and that many defects may remain in the documents reviewed. Examining the review record, it turned out that the reviewers had attempted to inspect a batch of documents about three times larger than the mean. Therefore, the suitable action was to re-inspect the documents after splitting them into three or more parts. Repeated, this type of analysis can help stabilize the peer review process [19].
The above example makes it clear how control charts are useful in identifying defect density.
Benefits to Software Organizations in Applying Control Charts for Managing Software Processes
According to Carleton [20], statistical process control is a strong component for optimizing the quantity of data required for use in management decision making. Statistical techniques offer an understanding of the business baseline, insights into process improvements, visible and active involvement, and communication of value and process results. Likewise, Florac et al. [21] have pointed out that SPC offers real-time identification to establish baselines of controllable processes, to study and develop the capabilities of dynamic processes, and to concentrate on business areas requiring software process development. As per the argument from Kim [22], SPC is far removed from everyday decision making; as a result, software organizations cannot instantly realize the advantages of these statistical process control techniques. Weller [23] has mentioned that statistical process control needs well-formulated procedures. It requires a high level of commitment from management and an organizational climate where people are not offended when problems arise. Above all, SPC requires the discipline of strictly following the formulated procedures. Many software organizations that have implemented control charts for software process control in the software development life cycle have greatly benefited from them. The client base of statistical process control ranges from small technology startups, whose core business is software development, to big IT firms that leverage software development to improve operational performance and business systems. Control charts pave the way for continuous process improvement, reduce cost, minimize defects, improve productivity and, finally, improve the total quality of the end deliverable. The use of control charts can lead to a reduction in the control limits, reflecting process improvements. It has been observed that rigorous monitoring of control charts plotted for process parameters like defect density, together with timely corrective and preventive actions, leads to process improvements. For example, if there is a data point outside the control limits due to a higher defect count in a module, shown as a spike in the control chart, timely action taken to remove the root cause will eliminate the similar pattern in further data points. Such data points are known as special causes of variation. Figure 6 depicts a sample comparison where the current control limits have come down from the historical limits, showing process improvements. There are other cases which are inherent in the process, known as common causes of variation. Common causes are depicted as patterns of data points within the LCL and UCL and are addressed through normality rules. It can thus be clearly understood that statistical process control has found a prime position in the IT sector too, offering multiple benefits in improving the overall quality of the software process.
Challenges Encountered by Software Organizations in Applying Control Charts for Managing Software Processes

Jones [24] has pointed out that monitoring the stability of the software process in small organizations is a challenging problem for software engineers. Software companies must trust the quality of the product just as much as the production quantity. Cangussu et al. [25] have pointed out that, to make sure a higher quality level is maintained, software firms must be determined to formulate a quality policy that is dedicated to the complete satisfaction of customers. The policy may include regular reliability and quality improvements, with every employee playing an essential role.
Caivano [26] has mentioned that, to meet the challenge of data analysis, software companies should develop statistical process tools to monitor process capability using control charts and make them available to all employees, with shared responsibility for data analysis. A strategic team organizes regular meetings to share successful statistical process control measures and to take decisions based on the statistical analysis tools. According to Sargut and Demirors [27], some of the additional challenges of statistical process control for software organizations are: 1) statistical process control is considered to be a management tool; 2) control charts are considered additional work for the operator; 3) statistical process control is not supported with software tools; 4) statistical process control is not built into the manufacturing process; 5) experienced operators feel threatened by new processes that may replace them; 6) manufacturing and quality are not on the same page regarding SPC; and 7) the success of SPC is not reported with transparency.
Conclusions
The authors have implemented control charts for monitoring multiple process parameters, such as defect density during unit testing, code review and system testing. The results of implementation in more than 40 projects were studied. C-charts were used to monitor defect density during unit testing, code review and system testing, while XmR charts were used to monitor actual effort during the same phases. Just-in-time data analysis was performed by the team, where the defect density and effort were analyzed using control limits set from the historical limits arrived at from the organizational baseline of similar projects. The results were quite encouraging and many benefits were achieved through the analysis. Control charts helped in performing timely analysis of data points showing special causes of variation and of data points that follow a specific pattern. The team could take timely corrective and preventive actions to ensure that similar defects and issues were prevented from occurring in later phases of the SDLC.
Application of statistical process control (SPC) in the software industry was, a decade ago, a challenging task for researchers and software engineers. Every software metric had specific complexities and characteristics regarding its collection, definition and interpretation. Despite these challenges, our interpretation suggests that researchers in the past and at present have proven that the application of SPC techniques via control charts has several positive effects, including reduction in cost, minimization of defects and error rates, and improvement of the quality of the software end deliverable, thereby improving the overall profitability of the software organization. Alignment with business goals forms the key to successful software process improvement. Statistical process control can help in indicating the direction in which a software process must be improved for better results.
However, it may be recommended that not every major software process should use control charts for process measurements. Other useful statistical techniques, such as confidence intervals and prediction modeling, could be better measurement tools in certain situations. Projects of short duration, with small teams and low business criticality, may not be the best candidates for SPC monitoring using control charts; in such situations, this could result in process overhead for the project team, and the overall morale of the team could come down. SPC monitoring using control charts is best used for monitoring the most critical quality processes. These processes could vary from project to project depending upon scope and business goals. The decision to use control charts for SPC should take into consideration the duration of the project, the size of the team, the availability of data and the criticality of the process parameter to be measured. Training the team in using control charts and their analysis is the key to the success of this initiative.
With the advent of several automated statistical process control software tools, applying SPC through control charts has become a much easier process for all software organizations today. Therefore, it can be concluded that control charts are really helpful to software organizations, adding quality value to the end deliverables that they deliver to their clients.
Future Work
The analysis can be performed in a startup or a small organization to understand whether SPC can still produce beneficial outcomes. Also, domains such as ERP implementation can be studied for possible usage and benefits of control charts.
Figure 2. Sample individuals control chart for a percentage of rework (source: Houston, 1999).

Figure 6. Sample comparison chart of control limits. | 4,662 | 2014-02-11T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
The Black Hole Weak Gravity Conjecture with Multiple Charges
We study the effect of higher-derivative corrections on asymptotically flat, four-dimensional, non-rotating dyonic black holes in low-energy models of gravity coupled to $N$ $U(1)$ gauge fields. For large extremal black holes, the leading $\mathcal{O}\left(1/Q^2\right)$ correction to the extremality bound is calculated from the most general low-energy effective action containing operators with up to four derivatives. Motivated by the multi-charge generalization of the Weak Gravity Conjecture, we analyze the necessary kinematic conditions for an asymptotically large extremal black hole to decay into a multi-particle state of finite charge extremal black holes. In the large black hole regime, we show that the convex hull condition degenerates to the requirement that a certain quartic form, constructed from the Wilson coefficients of the four-derivative effective operators, is everywhere positive. Using on-shell unitarity methods, we show that higher-derivative operators are renormalized at one-loop only if they generate local, on-shell matrix elements that are invariant tensors of the electromagnetic duality group $U(N)$. The one-loop logarithmic running of the four-derivative Wilson coefficients is calculated and shown to imply the positivity of the extremality form at some finite value of $Q^2$. This result generalizes a recently given argument by Charles, and shows that under the given assumptions the multi-charge Weak Gravity Conjecture is not a Swampland criterion.
String theory is also believed to admit an astronomical number of vacua, which manifest at low energies as effective field theories (EFTs). This set of consistent string vacua is known as the Landscape. Due to the large number of low-energy descriptions, it may be difficult or impossible to find one that describes our world. Recently a different approach has proven useful: rather than searching through vacua, we should study the general conditions under which an EFT admits a UV completion that includes quantum gravity. Theories that admit no such completion are said to be in the Swampland [2]. A number of Swampland criteria have been put forward (for a review of the program, see [3,4]). In practice, Swampland criteria are proposed and supported using very different approaches. One is to study features present in known compactifications of string theory. Another approach relies on arguing that various properties of the infrared are required for consistency, and then studying how this constrains physics in the ultraviolet. Both sources are indirect, which makes rigorous proofs of the Swampland conjectures elusive.
One compelling candidate for a general principle constraining consistent string vacua is the weak gravity conjecture (WGC) [5]. Various forms of the conjecture have been proposed, but roughly it states that EFTs that arise as low-energy descriptions of theories of quantum gravity must have a state with greater charge than mass, i.e. one for which "gravity is the weakest" force. Were this not the case, extremal or near-extremal black holes would be unable to decay, because emitting a sub-extremal state would cause the left-over black hole to be superextremal, violating cosmic censorship. This, in turn, is problematic because it leads to the existence of an arbitrarily large number of stable states, which is believed to be pathological [2]. We now review these arguments in more detail.
A. Review of the Weak Gravity Conjecture
The original Weak Gravity Conjecture (WGC) was formulated as a conjectured Swampland criterion [5]: in a UV complete model of quantum gravity, there should not exist an infinite tower of exactly stable states in a fixed direction in charge space. Arguments against such an infinite tower include that it might lead to a species problem or remnant issues [6,7]. No proof of this statement has been given, but it is consistent with all known explicit examples of string compactifications and is conceptually consistent with a number of other conjectures about quantum gravity, such as the finiteness principle and the absence of global symmetries [2].
The conjecture can be equivalently interpreted as a statement about the (in-)stability of asymptotically large extremal black holes. In quantum gravity, elementary states with super-Planckian masses can be expected to appear to distant observers as black hole solutions of some low-energy effective field theory (EFT) [8,9]. The decay of such a state must have an equivalent semi-classical description as the discharge of the black hole, for example by Schwinger pair production of charged states near the horizon [10]. Since the relevant energy scale µ for the EFT calculation is here given by the scale of the black hole horizon, µ ∼ M_Pl²/M, asymptotically large black holes are well approximated by standard two-derivative Einstein gravity together with any additional massless degrees of freedom. All other details of the UV physics are integrated out and appear in the low-energy EFT as contributions to Wilson coefficients of higher-derivative effective operators that give subleading corrections to the black hole solutions, and/or the renormalization of M_Pl and the cosmological constant.
Models of quantum gravity can then be organized into universality classes according to their massless spectra and lowest dimension interactions; each class of model has an associated set of large black hole solutions that must then correspond to the asymptotic spectrum of super-Planckian elementary states.
In this paper we consider the universality class of models in four dimensions with zero cosmological constant and a massless matter spectrum consisting of $N$ U(1) gauge fields. To begin with we review the statement of the WGC for $N = 1$; in this class the spectrum of large black holes corresponds to the familiar Kerr-Newman solutions. Within a given charge sector, the lightest black hole corresponds to the extremal, non-rotating solution with $Q^2 = M^2/M_{\rm Pl}^2$. If the WGC is valid, then for all $Q^2$ greater than some critical value, the corresponding extremal black hole must be able to discharge. Whether this is kinematically possible depends on the spectrum of charged states with masses lighter than the black hole.
For a general transition of the form $|Q, M\rangle \to |q_1, m_1\rangle \otimes |q_2, m_2\rangle \otimes \cdots \otimes |q_n, m_n\rangle$, where each of the final states is assumed to be localized and at rest asymptotically far away (with zero kinetic and gravitational potential energy), conservation of total energy and total charge requires $Q = q_1 + q_2 + \cdots + q_n$ and $M = m_1 + m_2 + \cdots + m_n$.
If the initial state is a large extremal black hole with $Q^2 = M^2/M_{\rm Pl}^2$, then at least one of the daughter states $|q_i, m_i\rangle$ must be self-repulsive, meaning its charge-to-mass ratio is at least unity, $q_i^2 M_{\rm Pl}^2 \geq m_i^2$. In the context of a specific model, to show that the WGC is violated requires complete knowledge of the spectrum of charged states. To show that it is satisfied, however, requires only that we can demonstrate the existence of a single self-repulsive state. It is useful to separate charged states into three regimes according to their masses: the particle regime, the stringy regime, and the black hole regime. In this paper we analyze the spectrum of charged states in the black hole regime.
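The reason at least one daughter must be self-repulsive is a weighted-average (mediant) inequality; a minimal sketch in units with $M_{\rm Pl} = 1$:

```latex
% For an extremal parent, Q = M; since Q = \sum_i q_i and M = \sum_i m_i,
\max_i \frac{q_i}{m_i} \;\geq\; \frac{\sum_i q_i}{\sum_i m_i} \;=\; \frac{Q}{M} \;=\; 1,
% so some daughter state has q_i \geq m_i, i.e. it is self-repulsive.
```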
The corresponding analysis for a single U(1) gauge field was carried out in [11]; we begin by reviewing their discussion. Naively, it would seem impossible for a charged black hole to be self-repulsive, since this would violate the extremality bound. The usual bound $Q^2 \leq M^2/M_{\rm Pl}^2$ is derived by requiring the existence of a horizon (i.e. by requiring Weak Cosmic Censorship).
When the higher-derivative corrections to the effective action are included, the black hole solutions, and the associated extremality bounds, are modified. For large black holes, with $Q^2 \gg 1$, these corrections can be calculated perturbatively in $1/Q^2$, with the leading corrections coming from four-derivative effective operators. The authors of [11] analyzed solutions to an effective action containing the four-derivative operators built from the field strength and the Weyl tensor $W_{\mu\nu\rho\sigma}$. To leading order, the corrected extremality bound acquires a shift of order $1/Q^2$; the $O(1/Q^4)$ contributions correspond to next-to-leading order in the four-derivative operators and leading order in six-derivative operators. If the corrected shift to the extremality bound is positive, then extremal black holes with finite charge are self-repulsive and the WGC is satisfied in the black hole regime. Conversely, if the corrected shift is negative, then the decay of asymptotically large extremal black holes into extremal black holes with large but finite charge is kinematically impossible. This does not mean that the WGC is violated, but rather that if it is valid then there must exist a self-repulsive state in either the stringy or particle regimes.
Various arguments have been given that (5) should always be true, even from a low-energy perspective. These include arguments from unitarity, causality [12], positivity of the S-matrix [13], shifts to entropy bounds [14], and renormalization group running [1].
The purpose of this paper is to generalize the above discussion to the universality class of models for which the low-energy matter spectrum consists of N U (1) gauge fields. We consider black hole solutions with general electric and magnetic charges.
The two-derivative approximation to the EFT has many accidental symmetries, including an O(N) global flavor symmetry, parity, and a U(N) electromagnetic duality symmetry. We do not assume that any of these symmetries are preserved in the UV, and instead analyze the most general possible EFT with the assumed low-energy spectrum. In [15] it was shown that the kinematic condition for a large extremal black hole with multiple charges to decay is a non-trivial generalization of the single-charge version of the WGC. In general, if a set of light states $|q_i, m_i\rangle$ is available, with masses $m_i$ and charge vectors $\vec q_i$, then the possible charge-to-mass ratio vectors of the associated multi-particle states are the weighted averages $\vec z = \sum_i N_i \vec q_i / \sum_i |N_i|\, m_i$, where $N_i < 0$ corresponds to contributions from CP conjugate states. This set describes the convex hull of the charge-to-mass vectors $\vec z_i = \vec q_i/m_i$. The condition that the decay of asymptotically large extremal black holes be allowed is given by the convex hull condition [15]: Weak Gravity Conjecture (Multiple Charges): In a UV complete model of quantum gravity, the convex completion of the set of charge-to-mass vectors for every charged state in the spectrum, with mass $m$, electric charges $\vec q = (q_1, q_2, \ldots)$ and magnetic charges $\vec p = (p_1, p_2, \ldots)$, must enclose the unit ball $|\vec z\,|^2 \leq 1$.
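The convex hull condition is straightforward to test numerically. Below is a minimal sketch, assuming toy charge-to-mass vectors rather than data from any specific model; it uses the fact that Qhull returns facet equations with unit normals, so the unit ball is enclosed exactly when every facet lies at distance at least one from the origin.

```python
# Hedged sketch: check whether the convex hull of charge-to-mass vectors
# z_i encloses the unit ball |z| <= 1. The vectors below are toy data.
import numpy as np
from scipy.spatial import ConvexHull

z = np.array([[1.2, 0.0], [-1.1, 0.1], [0.0, 1.3], [0.1, -1.2]])
hull = ConvexHull(z)

# Each row of hull.equations is [n, b] with unit normal n, and the hull
# interior satisfies n.x + b <= 0; the facet's distance to the origin is -b.
facet_distances = -hull.equations[:, -1]
print(np.all(facet_distances >= 1.0))  # True iff the unit ball is enclosed
```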
As in the single-charge case, to show that a given model does not satisfy this condition requires complete knowledge of the spectrum of charged states. It is, however, possible to show that this condition is satisfied with only partial knowledge of the spectrum, since the convex hull of a subset of vectors always forms a subregion of the full convex hull. This condition has been previously analyzed from several perspectives [16], considering contributions from the particle regime. The purpose of this paper is to describe the general conditions on the Wilson coefficients $\{a_{ijk}, b_{ijk}, \alpha_{ijkl}, \beta_{ijkl}, \gamma_{ij}, \chi_{ijkl}, \omega_{ij}\}$ under which the convex hull condition is satisfied by contributions from the black hole regime.
B. Overview of Results
This paper is organized as follows. In section II, we calculate the leading-order corrections to dyonic, non-rotating, extremal black hole solutions corresponding to the effective action (7); various technical details are given in appendices B and C. The corrected extremality bound is inferred by demanding the existence of a horizon (27) and is found to depend on all five of the four-derivative operators, including parity violating operators when magnetic charges are present. It is shown that the three-derivative operators do not give corrections to spherically symmetric solutions at any order in the perturbative expansion.
In section III, we describe the necessary kinematic conditions for asymptotically large black holes to decay into finite charge black holes. First we describe the natural generalization of the convex hull condition to the black hole regime; then we argue (with a formal proof relegated to appendix D) that in the large black hole regime, when the perturbative expansion in $1/Q^2$ is justified, the extremality surface is always convex. The black hole WGC is then shown to reduce to the condition that a quartic form (30) is everywhere positive. We comment on the implications of known unitarity and causality constraints on the Wilson coefficients. The condition is analyzed in detail in two illustrative examples: first we consider a black hole charged under two electric charges $q_1$ and $q_2$, and second we consider a black hole with both an electric charge $q$ and a magnetic charge $p$ under a single U(1) gauge field.
In section IV we analyze the one-loop logarithmic running of the Wilson coefficients of the four-derivative effective operators. Using on-shell unitarity methods we prove that higher-derivative operators are renormalized only if they generate local, on-shell matrix elements that are invariant tensors of the maximal compact electromagnetic duality group U(N).
Using this non-renormalization theorem, together with the explicit one-loop UV divergence of Einstein-Maxwell, the logarithmic running of the Wilson coefficients is calculated and shown to imply the positivity of the extremality form (30) at some finite charge.
In appendix A we review the correspondence between non-redundant EFT operator bases and local on-shell matrix elements. Using elementary spinor-helicity methods, a complete and independent basis of matrix elements is determined and the corresponding three- and four-derivative local operators constructed.
II. EXTREMALITY SHIFT
The goal of this section is to determine the effect of higher-derivative operators on the extremality bound for charge. As we are considering the case of multiple charges, this amounts to delineating the space of allowed charge combinations $Q^2 = q_1^2 + p_1^2 + \cdots$ for a given mass $m$. We use the presence of a naked singularity, or absence of an event horizon, to rule out charge configurations at a given mass; such combinations of charge and mass will be called superextremal.
In pure Einstein-Maxwell theory, the superextremal black holes have $Q/m > 1$. We refer to such an inequality as the extremality bound. This requirement derives from the positivity of the discriminant of the function $1/g_{rr}$, which itself comes from the requirement that that function should have a zero (i.e. an event horizon). We will see that the higher-derivative corrections have the effect of shifting the right-hand side of this bound by factors proportional to the Wilson coefficients and suppressed by powers of $1/Q$; generically, $n$-derivative operators contribute a term to the extremality bound proportional to $1/Q^{\,n-2}$. First, we consider the case of three-derivative operators. In this case we do not need to compute the extremality shift to see that these operators do not contribute to it; this is clear from the index structure and the spherical symmetry of the background solution.
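The discriminant argument in the uncorrected theory can be checked symbolically. A minimal sketch in Planck units ($M_{\rm Pl} = 1$), using the standard Reissner-Nordström form of $1/g_{rr}$:

```python
# Hedged sketch: the extremality bound from the horizon condition.
# In Planck units 1/g_rr = 1 - 2M/r + Q^2/r^2, so horizons exist iff
# r^2 - 2 M r + Q^2 has a real root, i.e. its discriminant is >= 0.
import sympy as sp

r, M, Q = sp.symbols('r M Q', positive=True)
poly = sp.Poly(r**2 - 2*M*r + Q**2, r)   # numerator of 1/g_rr
print(poly.discriminant())               # 4*M**2 - 4*Q**2: non-negative iff Q <= M
```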
Next we compute the effect of four-derivative operators. The first step is to determine a complete basis for four-derivative operators. This can be done in the standard way by writing all tensor structures and then using identities and two-derivative equations of motion to eliminate redundant ones. A somewhat more modern method for determining this basis makes use of the one-to-one correspondence between field-redefinition independent operators, and independent local matrix elements. The details of this on-shell approach may be found in appendix A.
Once the basis is chosen and the Lagrangian fixed, we use the method developed in [11] to determine the shift to the extremality bound. First we compute how the higher-derivative coefficients affect the solutions of the equations of motion; in particular, we are interested in the change in $1/g_{rr}$. We then compute the shift to the discriminant of this function.
Requiring the positivity of the shifted discriminant allows us to directly determine the shifted extremality bound.
This approach is necessarily first-order in the EFT coefficients; if we were to compute the shift to second-order in the four-derivative coefficients, we would need also to consider the first-order effect of six-derivative operators, as these contribute at the same order in 1/Q. This means that at each step we eliminate all terms that are beyond leading-order in the four-derivative coefficients.
A. No Correction from Three-Derivative Operators
When $N \geq 3$ the leading effective interactions are given by three-derivative operators of the schematic form $a_{ijk}\, F_i{}^{\mu}{}_{\nu} F_j{}^{\nu}{}_{\rho} F_k{}^{\rho}{}_{\mu}$ and $b_{ijk}\, F_i{}^{\mu}{}_{\nu} F_j{}^{\nu}{}_{\rho} \tilde F_k{}^{\rho}{}_{\mu}$, where the dual field strength tensor is defined as $\tilde F^{\mu\nu} = \tfrac{1}{2}\epsilon^{\mu\nu\rho\sigma} F_{\rho\sigma}$. From the index structure of the three-derivative operators (alternatively, from the structure of the corresponding local matrix elements given in appendix A), one can show that both $a_{ijk}$ and $b_{ijk}$ are totally antisymmetric.
We analyze solutions to the corresponding equations of motion. By an elementary spurion analysis it is clear that there can be no modification of the extremality bound at $O(a, b)$. Promoting $a_{ijk}$ and $b_{ijk}$ to background fields transforming as totally antisymmetric tensors of the (explicitly broken) flavor symmetry group O(N), at leading order the extremality shift can depend only on invariants of the form $a_{ijk} q_i q_j q_k$ or $a_{ijk} q_i q_j p_k$, which vanish. At next-to-leading order there could be contributions of the form $a_{ijk} a_{klm} q_i p_j q_l p_m$, which do not obviously vanish for similarly trivial reasons. If present, such contributions would appear at the same order, $O(1/Q^2)$, as the leading-order contributions from the four-derivative operators.
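The vanishing of the leading invariants is easy to confirm numerically; a minimal sketch with randomly generated tensors, not tied to any particular model:

```python
# Hedged check of the spurion argument: a totally antisymmetric a_ijk
# contracted with the symmetric products q_i q_j q_k or q_i q_j p_k vanishes.
import numpy as np
from itertools import permutations

rng = np.random.default_rng(1)
N = 4
a = rng.normal(size=(N, N, N))
# Totally antisymmetrize over all six index permutations (signed average).
a = sum(np.sign(np.prod([p[i] - p[j] for i in range(3) for j in range(i)]))
        * a.transpose(p) for p in permutations(range(3))) / 6

q, p_vec = rng.normal(size=N), rng.normal(size=N)
print(np.einsum('ijk,i,j,k->', a, q, q, q))      # ~ 0 up to rounding
print(np.einsum('ijk,i,j,k->', a, q, q, p_vec))  # ~ 0 (antisymmetric in i,j)
```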
Interestingly, these $O(a^2, ab, b^2)$ corrections also vanish. To show this, we evaluate the right-hand side of (12) on a spherically symmetric ansatz, with the remaining components of the field strength tensors set to zero. The higher-derivative terms are seen to vanish due to the structure of the index contractions. The equations of motion for the non-zero components $g_{tt}$, $g_{rr}$, $F_{i\,01}$, $F_{i\,23}$ are identical to the equations of motion of two-derivative Einstein-Maxwell. The Reissner-Nordström black hole remains the unique spherically symmetric solution to the higher-derivative equations of motion with a given charge and mass.
It is interesting to note that the above argument fails if the solution is only axisymmetric, as in the general Kerr-Newman solution. For spinning, dyonic black holes, the three-derivative operators might give $O(1/Q^2)$ corrections to the extremality bounds. We leave the analysis of this case to future work.
B. Four-Derivative Operators
We have argued that the three-derivative operators give no contribution on spherically symmetric backgrounds. Thus, the leading shift to the extremality bound comes from four-derivative operators. We consider the action (7), containing the Einstein-Maxwell terms together with the general four-derivative operators parametrized by the Wilson coefficients $\alpha_{ijkl}$, $\beta_{ijkl}$, $\gamma_{ij}$, $\chi_{ijkl}$, and $\omega_{ij}$. Here the Latin indices run from 1 to the number of gauge fields $N$. This is the most general possible set of four-derivative operators for Einstein-Maxwell theory in 4 dimensions. For a thorough discussion of how these operators comprise a complete basis, see appendix A.
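The display for the four-derivative part of the action did not survive extraction; the following is a hedged reconstruction inferred from the coefficient list and the parity assignments discussed in the text, with the precise pairing of $\chi_{ijkl}$ and $\omega_{ij}$ with the parity-odd structures being our assumption:

```latex
% Assumed four-derivative basis (reconstruction, not the verbatim eq. (7)):
\Delta\mathcal{L}_{4\partial}
= \alpha_{ijkl}\,(F_{i\,\mu\nu}F_j^{\mu\nu})(F_{k\,\rho\sigma}F_l^{\rho\sigma})
+ \beta_{ijkl}\,(F_{i\,\mu\nu}\tilde F_j^{\mu\nu})(F_{k\,\rho\sigma}\tilde F_l^{\rho\sigma})
+ \gamma_{ij}\,W^{\mu\nu\rho\sigma}F_{i\,\mu\nu}F_{j\,\rho\sigma}
+ \chi_{ijkl}\,(F_{i\,\mu\nu}F_j^{\mu\nu})(F_{k\,\rho\sigma}\tilde F_l^{\rho\sigma})
+ \omega_{ij}\,W^{\mu\nu\rho\sigma}F_{i\,\mu\nu}\tilde F_{j\,\rho\sigma}
```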
We will see that the parity-odd operators can contribute if we allow for magnetic charges.
Our calculation is identical to the one performed in [11] if we set N → 1 and turn on only electric charges. We have chosen units with M Pl = 1 for convenience, though they may be restored via dimensional analysis.
Background
First consider the uncorrected theory, which is gravity with N U (1) gauge fields. This theory admits solutions that are black holes with up to N electric and magnetic charges.
These solutions take the standard Reissner-Nordström form, with $g_{tt} = -g(r)$, $g_{rr} = 1/g(r)$, $g(r) = 1 - 2M/r + Q^2/r^2$, electric fields $F_{i\,tr} = q_i/r^2$, and magnetic fields $F_{i\,\theta\phi} = p_i \sin\theta$. Here $Q^2 = q_i q_i + p_i p_i$. These backgrounds are spherically symmetric, so we will impose this as a requirement on the shifted background 1 . In the case of spherical symmetry, one may rearrange the Einstein equation and integrate to obtain $g_{rr}$ in terms of the stress tensor [11]. For the uncorrected theory, the stress tensor is that of the free Maxwell fields, and it is easy to see that its effect is to add the $(q^2+p^2)/r^2$ term to $g^{rr}$.
Corrections to the Background
Now consider the effect of the four-derivative terms. To compute their effect on the geometry, we must compute their contributions to the stress tensor. We expand the stress tensor as a power series in the Wilson coefficients, $T_{\mu\nu} = T^{(0)}_{\mu\nu} + T^{(1)\,\rm Max}_{\mu\nu} + T^{(1)\,\rm Lag}_{\mu\nu} + \cdots$. Here we have written two terms that are proportional to the first power of the Wilson coefficients ($\alpha_{ijkl}, \beta_{ijkl}, \ldots$), because there are two different sources of first-order corrections.
The first change, $T^{(1)\,\rm Max}_{\mu\nu}$, comes from the effect of these operators on solutions to the Maxwell equations, which changes the values of the field strengths; $T^{(1)\,\rm Max}_{\mu\nu}$ essentially comes from evaluating the zeroth-order stress tensor on the first-order solution of the $F_i$ equations of motion.
The second change, $T^{(1)\,\rm Lag}_{\mu\nu}$, derives from varying the higher-derivative operators with respect to the metric. Thus, this term is essentially the first-order stress tensor, and we will evaluate it on the zeroth-order solutions to the Einstein and Maxwell equations. The remainder of this section will be devoted to computing each of these contributions.
Maxwell Corrections
The first source of corrections to the stress tensor derives from including the corrections to the value of $F$. The corrected gauge field equation of motion is given in (19); we denote its right-hand side by $\nabla_\mu G^{\mu\nu}$. The first-order solution to the Maxwell equation leads to the corrections derived in appendix B. By plugging the zeroth-order values of the fields into this expression, we compute the corrections to the stress tensor through the Maxwell equation. The details of this derivation may be found in appendix B, but we should comment on a few interesting points. First, note that only $G_{i\,tr}$ arises in the result. This is due to the Bianchi identity, which does not allow $G_{i\,\theta\phi}$ to contribute. The Bianchi identity requires that $\partial_r F_{\theta\phi} = 0$, so in fact $F_{i\,\theta\phi}$ can get no corrections at any order. A subtlety arises from the fact that the metric appears in the expression for the stress tensor. Therefore, it might appear that the first-order corrections to $T^t{}_t$ involve contributions from the first-order value of $F$ and the first-order value of $g$. This would be problematic, because the first-order value of $g$ is what we use the stress tensor to compute in the first place. In fact, this is not an issue; only the zeroth-order metric shows up in (20). This decoupling relies on cancellations between various factors of metric components, as well as on spherical symmetry. Without this, the perturbative procedure we use to compute the shift to the metric would not work. We do not expect this decoupling between corrections to the stress tensor and corrections to the metric to happen for general backgrounds. It would be interesting to study the general circumstances under which it occurs.
Lagrangian Corrections
The second source of corrections is comparatively straightforward and comes from treating the higher-derivative terms in the Lagrangian as "matter" and varying them with respect to the metric. The variations of each term are given in appendix C. In both cases, we have simplified the expressions by using the symmetries of the tensors appearing in the higher-derivative terms (e.g. $\alpha_{ijkl} = \alpha_{jikl} = \alpha_{klij}$).
C. Leading Shift to Extremality Bound
By adding together both sources of corrections and computing the integral in (16), we obtain the shift to the radial function $g^{rr}$; the result is given in (24). To find the shift to extremality that results from this, we examine when the new radial function $g^{rr}(r, M, Q)$ has zeros 2 . This equation is sixth order in $r$, but we are only interested in the first-order shift to the solution. We Taylor-expand near the extremal solution, where $r = M$ and $Q = M$, and keep only terms that are first order in the Wilson coefficients, holding $M$ fixed; in doing so, we use the uncorrected extremality relation. This is a great simplification of the general problem of determining when a sixth-order polynomial has solutions. We merely need to evaluate the geometry shift in the extremal limit $r = M = Q$ and divide by the derivative of the uncorrected radial function with respect to $Q$, which is $2/M$. Evaluating this expression and dividing by the mass yields the charge-to-mass extremality shift (27). This is the main technical result of this paper. In the next section, we comment on the constraints that black hole decay might place on these coefficients, and we analyze this expression in more depth for the case of black holes with two electric charges, and the case of black holes with a single electric and single magnetic charge.
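The Taylor-expansion step can be illustrated symbolically. The sketch below uses a toy correction $\Delta g^{rr} = \epsilon/r^4$ standing in for the true result (24); the point being demonstrated is only the division by $\partial g^{rr}/\partial Q = 2/M$ at the unperturbed double root.

```python
# Hedged sketch of the first-order extremality shift. The correction
# eps/r**4 is a toy stand-in, not the paper's (24).
import sympy as sp

r, M, Q, eps = sp.symbols('r M Q epsilon', positive=True)
f0 = 1 - 2*M/r + Q**2/r**2           # uncorrected 1/g_rr (Planck units)
dg = eps/r**4                        # assumed toy correction

# At the unperturbed extremal point r = Q = M, df0/dr = 0 (double root),
# so the shift of the extremal charge is dQ = -dg / (df0/dQ).
dfdQ = sp.diff(f0, Q).subs({r: M, Q: M})   # -> 2/M
dQ = -dg.subs(r, M) / dfdQ
print(sp.simplify(dQ))                     # -> -epsilon/(2*M**3)
```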
III. BLACK HOLE DECAY AND THE WEAK GRAVITY CONJECTURE
As described by [15] and reviewed in section I A, a state with charge-to-mass vector $\vec z$ is kinematically allowed to decay to a general multiparticle state only if $\vec z$ lies in the convex hull of the light charged states. In the case of asymptotically large extremal black holes decaying to finite charge black holes, the spectrum of light states corresponds to the region compatible with the extremality bound. At a given total charge $Q^2$ and charge-to-mass vector $\vec z$, the black hole extremality bound describes a surface in $\vec z$-space of the form $|\vec z\,|^2 = 1 + T(\vec z, Q^2)$, where $T \to 0$ as $Q^2 \to \infty$. The convex hull condition [15] then has a natural generalization to the sector of extremal black hole states: Black Hole Convex Hull Condition: It is kinematically possible for asymptotically large extremal black holes to decay into smaller finite $Q^2$ black holes only if the convex hull of the extremality surface encloses the unit ball $|\vec z\,| \leq 1$.
This means that to determine if the decay of a large black hole is kinematically allowed, we must first determine the convex completion of a complicated surface, a task that may only be tractable numerically. As illustrated in figure 1, it is possible for the convex hull of the extremality surface to enclose the unit ball even if the surface itself does not. Furthermore, the extremality surface may be non-convex even if the magnitude of the corrections is arbitrarily small.
Fortunately, the condition simplifies somewhat in the $Q^2 \gg 1$ regime, where the corrections to the unit circle derive from the four-derivative terms and are small as a result. In appendix D we prove that if $T(\vec z, Q^2)$ is a quartic form, as it is in the explicit result (27), then the smallness of the deviation does imply convexity. In this regime, the convex hull condition is simplified in the sense that the extremality surface always bounds a convex region, and the condition for the multi-charge weak gravity conjecture to be satisfied in the perturbative regime degenerates to the more tractable condition: (Perturbative) Black Hole Weak Gravity Conjecture: It is kinematically possible for asymptotically large extremal black holes to decay into smaller finite $Q^2$ extremal black holes if the quartic extremality form is everywhere non-negative. Using the parametrization of the effective action (7), this bound takes an explicit form that follows directly from (27).
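In practice the non-negativity of a quartic form can be probed by sampling directions on the unit sphere; the sketch below uses a randomly generated tensor in place of the actual extremality form (30):

```python
# Hedged sketch: sample a quartic form T(z) = T_ijkl z_i z_j z_k z_l over
# unit vectors z. T below is random toy data; contracting with z four
# times automatically picks out its totally symmetric part.
import numpy as np

rng = np.random.default_rng(2)
N = 3
T = rng.normal(size=(N, N, N, N))

z = rng.normal(size=(20000, N))
z /= np.linalg.norm(z, axis=1, keepdims=True)
vals = np.einsum('ijkl,ni,nj,nk,nl->n', T, z, z, z, z)
print(vals.min() >= 0)   # False flags a direction with stable extremal black holes
```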
The result of the previous section allows one to determine whether a theory allows for an infinite number of stable black holes states by checking if the extremality form is anywhere negative. In this section we demonstrate this with a few basic examples.
Black Hole With Two Electric Charges
A black hole that is electrically charged under two U(1) groups provides one very simple example. In this case, the extremality bound is controlled by a quartic form in $(q_1, q_2)$. As the $q$ factors project onto the completely symmetric part of the tensor of Wilson coefficients, it is convenient to work with the totally symmetric tensor $T_{ijkl}$, where we have symmetrized the indices with weight one.
Expanding the constraint in components leads to a quartic polynomial that must be positive for all possible combinations of $q_1$ and $q_2$. We use the fact that the polynomial in (33) is homogeneous and divide by $(q_2)^4$. Redefining $x = q_1/q_2$ simplifies the left-hand side of the inequality to a polynomial of one variable. This polynomial is quartic, so one may proceed by studying the explicit expressions for the roots and demanding that they are not real. However, the positivity conditions for fourth-order polynomials are much simpler and lead to a set of relations among the components of $T_{ijkl}$ (see, for instance, [17]). This allows the problem to be solved entirely in the case of two charges; for $N > 2$ one must analyze multivariate polynomials.
For an example of a theory that is in the Swampland, consider four-derivative terms with $\alpha_{1111} = 2$, $\alpha_{1122} = -8$, and $\alpha_{2222} = 3$, all other coefficients vanishing. The resulting extremality shift is positive when $q_1 = 0$ or $q_2 = 0$, but at $q_1 = q_2$ it is negative. Therefore, a black hole with $q_1 = q_2$ in this theory would not be able to decay to smaller black holes. This model requires the existence of self-repulsive states in the spectrum, in either the particle or stringy regimes, to evade the Swampland.
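A one-variable check of this kind is immediate numerically. In the sketch below the reduced polynomial is taken to be $p(x) = 2x^4 - 8x^2 + 3$; the weight with which $\alpha_{1122}$ enters the cross term is our assumption, not the paper's (36):

```python
# Hedged sketch: positivity test for the reduced quartic p(x), x = q1/q2.
# The coefficients echo the example above; the cross-term weight is assumed.
import numpy as np

coeffs = [2.0, 0.0, -8.0, 0.0, 3.0]       # p(x) = 2 x^4 - 8 x^2 + 3
print(np.polyval(coeffs, 0.0))            # p(0) = 3 > 0  (q2-only decays allowed)
print(np.polyval(coeffs, 1.0))            # p(1) = -3 < 0 (q1 = q2 stable: Swampland)

roots = np.roots(coeffs)
print(np.sort(roots[np.abs(roots.imag) < 1e-9].real))  # real roots: sign changes of p
```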
Dyonic Black Hole
Another simple case occurs when there is only a single gauge field but the black hole has both electric and magnetic charge. Then the extremality bound is obtained by removing all indices from (27). We recover the results of [11] when the magnetic charge is set to zero: a single electric charge shifts the extremality with one sign of the $\gamma$ term, while a single magnetic charge enters with the opposite sign of $\gamma$. Requiring that both types of black holes be able to decay therefore places a stronger constraint on $\alpha$ and $\gamma$. As far as we know, this stronger constraint is not present in the literature.
If we assume that both $p$ and $q$ are non-zero, we can again divide by $p^4$ as we did in the previous section, and again find a polynomial of a single variable $y$. For the case of a single gauge field, a very physical example comes to mind: the Euler-Heisenberg Lagrangian [18], in which integrating out electron loops induces a four-point interaction among the gauge fields. 3 This model has four-derivative terms with $\alpha = 4$ and $\beta = 7$ (up to overall constants that do not affect the problem). The resulting inequality clearly holds for all values of $y$. Thus, we have found that the Euler-Heisenberg theory is not in the Swampland. This does not require that we know anything about the spectrum, or that the higher-derivative operators came from integrating out a particle at all. Only the four-derivative couplings are needed to learn that this theory allows nearly extremal black holes to decay.
The condition (37) exhibits an interesting simplification when $\alpha = \beta$ and the remaining coefficients are set to zero. In this case, the quartic form depends only on the combination $q^2 + p^2$, and the extremality surface becomes invariant under orthogonal rotations in charge space. In fact, it is simple to verify that this is the only choice of coefficients with this feature. The enhanced symmetry is a consequence of the electromagnetic duality invariance of the equations of motion for this choice of coefficients. In the effective action, the necessary condition for duality invariance is the Noether-Gaillard-Zumino condition [19], $F_{\mu\nu}\tilde F^{\mu\nu} + G_{\mu\nu}\tilde G^{\mu\nu} = 0$, where $\tilde G^{\mu\nu} \equiv 2\, \delta S/\delta F_{\mu\nu}$.
One can verify that this condition is satisfied if we set $\alpha = \beta$ and $\gamma = \chi = \omega = 0$ as above, at least to fourth order in derivatives. To make this equation hold to sixth order would require the addition of six-derivative operators to the Lagrangian, and so on. For a general analysis of electric-magnetic duality invariant theories, see [20]. In the following section we show that the generalization of the electromagnetic duality group from U(1) in the single-charge case to U(N) in the $N$-charge case plays an essential role in the renormalization group running of the four-derivative Wilson coefficients.
B. Unitarity and Causality
Infrared consistency conditions on the low energy effective theory have been used to bound the coefficients of higher-derivative operators. Such constraints were first considered in the context of the weak gravity conjecture in [21], and were extended to the case of multiple gauge fields in [16]. Further arguments based on unitarity and causality were given in [12]. Here we review these arguments and present a few generalizations.
Integrating Out Massive Particles
One source of higher-derivative corrections derives from integrating out states in the particle regime. By this we mean states that are well described by ordinary QFT on a fixed spacetime background. Such states necessarily have masses smaller than some cutoff scale $\Lambda_{\rm QFT}$, which is the string scale or whatever scale at which new physics invalidates the QFT description. We have already seen a simple example of this in the Euler-Heisenberg Lagrangian above.
At tree-level, only neutral particles contribute to the four-point interactions. Consider, for example, a dilaton $\phi$ of mass $m_\phi$ that couples to the field strengths through couplings $\mu_{ij}$. We integrate out the scalar to find the effective four-derivative coupling by matching to the low-energy EFT at a scale $\Lambda_{\rm UV} \ll m_\phi$. In this simple setup, the coefficient $\alpha_{ijkl}$ is proportional to $\mu_{ij}\mu_{kl}/m_\phi^2$; for a single gauge field $\alpha = 3\mu^2/m_\phi^2$. Unitarity requires that $\mu$ is real, which implies that $\alpha$ is positive [12]. It is easy to see that this is still the case when there are more gauge fields.
The extremality form for this theory is proportional to $(\mu_{ij}\, q_i q_j)^2$, a perfect square, which must be positive. 4 The same reasoning shows that integrating out an axion, which couples to $F_i \tilde F_j$, generates a value of $\beta_{ijkl}$, and that its contribution to the extremality form is also positive.
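The square structure is easy to exhibit; a minimal sketch, assuming $\alpha_{ijkl} \propto \mu_{ij}\mu_{kl}$ with an arbitrary symmetric coupling matrix $\mu_{ij}$ and the overall normalization dropped:

```python
# Hedged check: tree-level dilaton exchange gives alpha_ijkl ~ mu_ij mu_kl,
# so the quartic form is the square (mu_ij q_i q_j)^2, non-negative for any
# real couplings mu_ij, even sign-indefinite ones like footnote 4's example.
import numpy as np

rng = np.random.default_rng(3)
N = 3
mu = rng.normal(size=(N, N)); mu = (mu + mu.T)/2
alpha = np.einsum('ij,kl->ijkl', mu, mu)

q = rng.normal(size=N)
quartic = np.einsum('ijkl,i,j,k,l->', alpha, q, q, q, q)
print(np.isclose(quartic, np.einsum('ij,i,j->', mu, q, q)**2))  # True
```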
Light charged particles cannot contribute at tree-level, so their leading contributions are at loop-level. The diagrams that contribute in this case all arise at the same order, but they carry relative factors of $z_\phi$, the particle's charge-to-mass ratio, coming from counting couplings and propagators. The field-strength four-point interaction is generated by the first three diagrams. In the limit where $z_\phi \gg 1$, diagram (a) dominates all the others (as we noted above in the Euler-Heisenberg example), and the resulting contribution to the extremality form is again manifestly positive. For $z_\phi$ near or less than one, both $\alpha_{ijkl}$ and $\gamma_{ij}$ are generated by diagrams that are order $z_\phi^0$. In that case this scaling argument does not apply, and the order-one constants need to be included in the analysis. These arguments are schematic and largely review what was already considered in [16]. See that paper for details, including the exact results of integrating out different particles. 4 Note that unlike the case of a single gauge field, unitarity does not bound all the coefficients separately. For instance, in the two-charge case, $\mu_{11} = 1$, $\mu_{22} = -1$, and $\mu_{12} = 0$ would lead to a negative $\alpha_{1122}$, even though the full contribution to the extremality form remains a square.
One might wonder whether this analysis is relevant to the parity-odd operators. Interestingly, [22] has shown how to generalize the Euler-Heisenberg Lagrangian by integrating out a monopole or dyonic charge. The effective Lagrangian was derived in that paper (and earlier in [23]); there $\hat q$ and $\hat p$ refer to the electric and magnetic charges of the dyon that is integrated out, not the charges of the black hole. This procedure generates the parity-violating four-photon coupling as well as the two parity-even ones. This is not surprising, given that magnetic charges violate parity in their interactions with the gauge field. What is more interesting is that this term is not a square, unlike every other term appearing in the effective Lagrangian. The sign of the generated term depends on the sign of the product of the electric and magnetic charges of the particle. In terms of the polynomial derived in (41), the condition that must be met to satisfy the WGC turns out to be always positive, so the Lagrangian given in (51) does not allow for stable black holes and satisfies the WGC.
Causality Constraints
Another set of arguments for bounds on the EFT coefficients relies on causality. These were first considered in [21] and generalized to multiple gauge fields in [12]. Two methods were used, and they were shown to give the same result. The first is to consider the propagation of photons on a photon gas background; requiring that photons travel subluminally constrains the four-photon interaction. The second method uses analyticity and unitarity to relate the EFT coefficients to an integral over the imaginary part of the amplitude, which is manifestly positive. The bounds obtained this way for multiple gauge fields are given in (53); the inequality must hold for any vectors $\vec u$ and $\vec v$. This bound is independent of the bounds that we have derived in (31), so it is not enough to imply the WGC on its own.
So far these arguments have only bounded the four-photon interactions. An interesting causality-based argument was made in [12] that bounds the photon-photon-graviton interaction parameterized by $\gamma$. They argued that the addition of this four-derivative term introduces causality violation at a scale $E \sim M_{\rm Pl}/\gamma^{1/3}$ (a fact noticed in [24]), so $\Lambda_{\rm QFT} \lesssim M_{\rm Pl}/\gamma^{1/3}$. This implies that $\gamma \lesssim (M_{\rm Pl}/\Lambda_{\rm QFT})^3$. This argument suggests that perhaps the $WFF$ four-derivative terms are generically bounded by causality to be much smaller than a number of possible contributions to the $F^4$ terms. It would be interesting to extend the analysis of [24] to the more general set of operators we consider here, but it is beyond the scope of our paper.
IV. RENORMALIZATION OF FOUR-DERIVATIVE OPERATORS
The Wilson coefficients that appear in the extremality shift (27) run logarithmically with the renormalization scale, with an RG coefficient we call $c$. If $c > 0$, then at some finite value of the charge $Q^2$ extremal black holes must be self-repulsive.
This was shown to be the case in [1] for various explicit cases, including the single U (1) model (3). Since the renormalization group coefficient c depends only on the massless degrees of freedom, this analysis depends only on the universality class of the model. For those classes in which this conclusion holds, the WGC is always satisfied independently of the details of the UV completion, and in that sense is no longer a useful Swampland criterion.
In this section we show how this argument generalizes to an arbitrary number of U (1) gauge fields. Since there are many more four-derivative operators, we emphasize the importance of a non-renormalization theorem that arises as a consequence of the accidental U (N ) electromagnetic duality symmetry of the two-derivative approximation. In the following subsection we give an on-shell proof of this theorem, and then use it to extend the argument above.
A. Non-Renormalization and Electromagnetic Duality
Consider a low-energy effective action of the general form used above, with Einstein-Maxwell terms plus higher-derivative operators. For an ordinary symmetry of the action, invariance holds diagram-by-diagram in the standard covariant Feynman diagram expansion, so the counterterms are automatically invariant. The non-trivial non-renormalization theorem we prove below concerns electromagnetic duality symmetries, which are only symmetries of the equations of motion, not the action [19]. Consequently, they are not manifest off-shell, meaning diagram-by-diagram in the standard covariant Feynman diagram expansion, and the above reasoning is no longer valid.
Nonetheless we will prove that the above non-renormalization theorem is valid verbatim, at least at one-loop, where the flavor symmetry group O(N ) is enhanced to the maximal compact electromagnetic duality group U (N ).
It is convenient to discuss UV divergences in the context of dimensional regularization
where the loop integration is performed in $d = 4 - 2\epsilon$ dimensions and ultraviolet divergences at one-loop appear as $1/\epsilon$ poles. In this context we can classify the sources of UV divergences in on-shell scattering amplitudes: 1. Cut-Constructible Divergence: By standard integral reduction algorithms, one-loop amplitudes admit a universal decomposition into a sum over a set of master integrals (boxes, triangles, and bubbles) plus a rational remainder, where the master integrals are scalar integrals with the indicated topology. Here $a_i$, $b_j$, $c_k$ and $R$ are rational functions of the external kinematic data. The first three contributions are often referred to as the cut-constructible part of the amplitude; they contain all of the branch cut discontinuities required by perturbative unitarity at one-loop. These contributions can be completely determined from on-shell unitarity cuts into physical tree amplitudes [25,26]. This determines the one-loop amplitude up to a rational ambiguity indicated by $R$. Since the rational part is both UV and IR finite, the divergent structure (both UV and IR) of the one-loop amplitude is completely determined by the tree-level scattering amplitudes. From the definition it is clear that only the master bubble integral $I_{\rm bubble}$ is UV divergent, and therefore what we call the cut-constructible divergence is proportional to the sum of the bubble coefficients $c_k$. These coefficients are completely determined by the two-particle unitarity cuts of the one-loop amplitude. It has been shown that the two-particle unitarity cuts of the master bubble integrals are purely rational functions, while the two-particle cuts of the triangle and box integrals give logarithms [25,26].
By explicitly calculating the two-particle cuts of $A_n^{\rm 1\text{-}loop}$, one can read off the associated bubble coefficients from the rational part of the cuts. Using the relation between unitarity cuts of one-loop amplitudes and on-shell phase space integrals of tree amplitudes gives a well-known general formula for the cut-constructible UV divergence, where the sums on the right-hand side are taken over all cuts and all on-shell states exchanged in each cut. The details of the integration in this formula are not essential to the argument we make below.
2. UV/IR Mixed Divergence: In dimensional regularization, IR divergences are also regularized as $1/\epsilon$ poles. Even though their physical origin is very different, there can be non-trivial cancellations between UV and IR divergences in the on-shell scattering amplitude. Such mixed UV divergences are just as important as the cut-constructible ones, and must be included to calculate the correct beta functions [27,28]. Unfortunately, due to this cancellation they cannot be immediately extracted from the cut-constructible part of the one-loop amplitude (56). The strategy is to first independently determine the expected one-loop IR divergence, and then compare against the IR divergences in the cut-constructible part of the amplitude. Any discrepancy must be due to UV/IR cancellations, and so can be used to infer the mixed UV divergences. The true IR divergent structure is determined by the KLN theorem [29]. This states that in an inclusive cross-section, virtual IR divergences from loop integration must cancel against divergences in the initial/final phase space integrals that arise from soft/collinear real emission. Such real emission singularities are fixed by tree-level soft/collinear limits, so we find that again the mixed divergences are completely reconstructible from tree-level, physical data.
We begin with an on-shell description of U(N) duality invariance at tree-level. The three-particle amplitudes are completely fixed 5 , where $i, j = 1, \ldots, N$ are flavor indices. The fact that the on-shell three-particle amplitudes are diagonal in flavor space with unit coupling to the graviton is an on-shell expression of the Einstein equivalence principle. U(N) duality invariance is encoded in an on-shell Ward identity stating that amplitudes are unchanged when the photon flavor indices are rotated by $U \in U(N)$. In the explicit expressions above this is seen to hold as a consequence of the fact that $\delta^i{}_j$ is a U(N)-invariant tensor. The 4-point amplitudes are simple to calculate using on-shell recursion; again, each of these is a U(N)-invariant tensor. As we discussed above, duality invariance is not manifest in the standard covariant expansion; fortunately, the precise details of the recursion formula are not important to the argument. It is straightforward to prove U(N) invariance by induction. Assume that all tree amplitudes $A^{\rm tree}_m$ with $m < n$ are duality invariant; using the recursive representation (62) we show that $A^{\rm tree}_n$ is duality invariant channel-by-channel. If the exchanged on-shell state in a given channel is a graviton, then $A^{\rm tree}_L A^{\rm tree}_R$ is a product of invariant tensors, and hence invariant. If the exchanged state is a photon, then the sum over helicity and the flavor index is the contraction of two invariant tensors by the invariant $\delta^i{}_j$, so this sum is likewise an invariant. Together with the explicitly verified duality invariance of the three-point amplitudes, the all-multiplicity Ward identity follows by induction. Here the key property we used was the existence of a valid on-shell recursion for the tree-level S-matrix (62); a general discussion of the necessary conditions for this to exist can be found in [31]. 5 The spinor-helicity conventions used in these expressions are given in [30].
We are now ready to prove the following non-renormalization theorem: at one-loop, higher-derivative operators are renormalized only if they generate local, on-shell matrix elements that are invariant tensors of the duality group U(N). This result was first noted long ago, following a detailed calculation of the UV divergence [32,33], and recently generalized (including massless scalars) to the full non-compact duality group Sp(2N) in [34]. The new result in this section is a simple argument that demonstrates the duality invariance of the divergence without the need for a detailed calculation.
We will prove that the total UV divergence is given by a sum over U (N ) invariant tensors.
Beginning with the cut-constructible part, the logic here is very similar to the inductive proof of tree-level invariance. We will show that the divergence is a U(N) invariant tensor cut-by-cut. In the representation (58) we consider the contribution of a single two-particle cut; this can be either graviton-graviton, graviton-photon or photon-photon.
Since the tree-amplitudes are invariant, and as in the expression (63) the exchanged photon flavor indices are contracted with invariant tensors, each case separately generates an invariant tensor. Summing over all states and cuts we conclude that the cut-constructible divergence is duality invariant.
As for the possible mixed divergence, here we begin with the full IR divergence at one-loop. This is given by the universal formula [35], where the tree amplitude on the right-hand side and the loop amplitude on the left-hand side have the same external states and $r_\Gamma = \Gamma^2(1-\epsilon)\Gamma(1+\epsilon)/\Gamma(2-\epsilon)$. As discussed above, in general there may be non-trivial UV/IR cancellations in the cut-constructible part of the one-loop amplitude. These can be disentangled using knowledge of the full IR divergence.
In this case, things are somewhat simpler: expanding the final factor in (64), the first term in the sum is zero by momentum conservation, and the full IR divergence takes a form whose $1/\epsilon$ coefficient is a transcendental function. We know, however, that the coefficients of UV divergences are always rational functions, since they must be removable by adding local counterterms. It follows that there can never be any UV/IR mixing at one-loop in perturbative quantum gravity, and hence that the complete UV divergence is given by the cut-constructible part of the amplitude. This completes the proof of the non-renormalization theorem.
It is important to note that this theorem is valid independent of any anomalies in the duality symmetries. Indeed, in the absence of additional massless degrees of freedom, we expect a non-vanishing ABJ anomaly in the duality currents $j^\mu_D$ [36,37]; for the $N = 1$ case the divergence of the duality current is proportional to the parity-odd gravitational density $R_{\mu\nu\rho\sigma}\tilde R^{\mu\nu\rho\sigma}$. This is a mixed gravitational anomaly. The question of how this manifests in on-shell scattering amplitudes in the context of $\mathcal{N} = 4$ supergravity has been a subject of recent interest [38,39]. Such an anomalous violation of U(N) invariance at one-loop can appear only in the rational part of the amplitude, since the cut-constructible part is completely fixed by unitarity cuts into tree-level amplitudes. The anomaly is therefore irrelevant to the effects of duality invariance on non-renormalization at one-loop. At two loops, however, anomalous rational one-loop amplitudes will have a noticeable effect on ultraviolet divergences and may lead to the renormalization of duality violating six-derivative operators. This question deserves further study.
B. RG Flow and the Multi-Charge Weak Gravity Conjecture
With the non-renormalization theorem proven in the previous section, we now show how the argument given in [1] generalizes to the multi-charge case. By simple dimensional analysis we know that the counterterms to one-loop divergences in Einstein-Maxwell are four-derivative operators. In appendix A we give a complete classification of local matrix elements corresponding to four-derivative operators, so together with the non-renormalization theorem proven in the previous section we know that the most general local UV divergence is given by a duality-invariant combination of these matrix elements. At one-loop the divergence fixes the dependence of the scattering amplitude on the renormalization group scale $\mu^2$. After adding a counterterm with coefficient $\alpha(\mu)$ to remove the UV divergence, the physical scattering amplitude should be independent of $\mu^2$, which gives the logarithmic running of the Wilson coefficient, $\alpha(\mu) = \alpha(\Lambda_{\rm UV}) + c\,\log(\Lambda_{\rm UV}^2/\mu^2)$, where $\Lambda_{\rm UV}$ is some UV matching scale, assumed to be arbitrarily larger than the horizon scale. The ultraviolet divergence in Einstein-Maxwell coupled to N U(1) gauge fields was first calculated long ago [32,33], and then recalculated using unitarity methods [35,40]. This gives the RG coefficient $c$ in (68). From this matrix element we can reverse-engineer the corresponding four-derivative operator. Note that we have lost manifest duality invariance when passing from on-shell scattering amplitudes to the effective action, and so have made the replacement $\delta^i{}_j \to \delta_{ij}$. As an important cross-check, the effect of such an operator on the perturbed metric at leading order in $\alpha$ is given by (24), and manifests the expected electromagnetic duality symmetry, further enhanced to O(2N).
When evaluating the extremality form, $\mu$ should be taken to be the horizon scale, $\mu^2 \sim M_{\rm Pl}^4/M^2$. Since $c > 0$, as $Q^2 \to \infty$ the logarithmic term becomes large and positive. With the logarithmic running included, the extremality form at the horizon scale is given by the matching-scale contribution plus an isotropic term growing as $c\,\log Q^2$, where $Q^2 = \sum_i (q_i^2 + p_i^2)$. In this expression $\alpha_{\rm UV}$, $\beta_{\rm UV}$, $\gamma_{\rm UV}$, $\chi_{\rm UV}$, and $\omega_{\rm UV}$ refer to the values of the Wilson coefficients at the matching scale $\Lambda_{\rm UV}$. Importantly, the logarithmic term is O(2N) invariant and therefore gives an isotropic contribution to the extremality form. Furthermore, this contribution is large and positive, and so dominates over all other contributions. We conclude that for sufficiently large $Q^2$, the extremality form is positive, independent of the values of the Wilson coefficients at the matching scale $\Lambda_{\rm UV}$, and consequently the multi-charge WGC is always satisfied in the black hole regime.
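The dominance of the isotropic logarithm is elementary to see numerically; a minimal sketch with toy values for the RG coefficient and the worst-direction matching contribution:

```python
# Hedged sketch: an isotropic log term c*log(Q^2) with c > 0 eventually
# overwhelms any finite (possibly negative) matching-scale contribution.
# The values of c and T_uv below are toy numbers, not the paper's.
import numpy as np

c, T_uv = 0.5, -3.0
for Q2 in [1e1, 1e3, 1e6, 1e12]:
    T = T_uv + c*np.log(Q2)
    print(f"Q^2 = {Q2:.0e}: extremality form ~ {T:+.2f}")
# positive once Q^2 > exp(-T_uv/c), so large extremal black holes can decay
```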
Here the full U(N) duality invariance of the UV divergence (enhanced to O(2N) in the quartic form) was essential to the argument. It would not have been enough that some Wilson coefficients had a positive logarithmic running; to prove the multi-charge WGC we require positivity in all directions, which as we have shown follows from a generalized non-renormalization theorem as a consequence of the tree-level U(N) duality symmetry of Einstein-Maxwell.
It is interesting to note that we can almost reach this same conclusion without knowing the explicit form of the UV divergence (71). In [12] the causality bound (53) was applied to the Wilson coefficients at the UV matching scale $\Lambda_{\rm UV}$, and consequently used to constrain the properties of the states integrated out. But this bound must remain valid even deeper in the IR where, as we have seen, the logarithmic running dominates. If the RG coefficient $c$ were negative, then the bound (53) would eventually be violated, indicating the presence of superluminal propagation at very low energies. Since we expect that Einstein-Maxwell is not inconsistent in the deep IR, it must be the case that $c \geq 0$, even without doing a detailed one-loop calculation. This argument has nothing to say about the possibility that $c = 0$.
Only an explicit calculation can demonstrate the existence of a non-vanishing one-loop divergence.
V. DISCUSSION
In this paper we have studied the effect of higher-derivative corrections on black hole decay in a more general setting than has been considered before, by allowing for more than one gauge field and by considering in detail the effect of magnetic charges. The motivation for our study was to understand how the weak gravity conjecture may be satisfied in the case of more than one Abelian gauge field. This conjecture takes a variety of forms, as reviewed in the introduction, but it can be interpreted as the statement that in a UV complete model of quantum gravity there is not an infinite tower of stable states without a symmetry that protects them. Thus, the relevant question is: can nearly extremal charged black holes decay? In particular, we study whether the higher-derivative corrections make kinematically possible the process where one large black hole decays into multiple smaller ones.
The conclusions of this paper are most interesting where they differ from the case of a single U (1), so let us briefly reiterate those: (a) parity-odd operators contribute to the shifted extremality bound when magnetic charges are included (even in the case of a single gauge field), and (b) with multiple charges, allowing extremal black holes to decay imposes a condition on the convex hull of the shifted extremality bound.
The result of our calculation is the shift to the extremality bound for large black holes.
The maximum charge in the corrected theory is equal to the mass plus a small correction that is proportional to the coefficients of the higher-derivative operators, seen in (27). When there is only one charge, extremal black holes can decay as long as the smaller black holes have a higher charge-to-mass ratio than the large ones. When more charges are present, different generalizations of this condition are possible. In our analysis of black hole decay, we have found a convex hull condition reminiscent of [15]. That paper shows that black holes may decay when the convex hull of the particle spectrum in $\vec z$-space contains the unit ball. The condition we have found is that large black holes are able to decay when the convex hull of the allowed charge-to-mass ratios of small black holes contains the unit ball. In our setting, however, the subtlety of convex completion does not play a role; in the regime where we can apply the EFT approach we have outlined, the four-derivative corrections are much smaller than the two-derivative terms. In this case, the corrections are always small enough that the space of allowed charge-to-mass ratios is convex. Therefore, we are interested in the simpler requirement that the shift to extremality is always positive.
There are a number of arguments that attempt to establish the weak gravity conjecture, and we have outlined how some of them might apply to multi-charged black holes. In addition to reviewing the arguments from unitarity and causality, we have shown how to extend the argument of [1] to the multi-charged case. In doing so, we have presented what we believe is a novel proof of the statement that only duality-invariant terms in the Lagrangian are renormalized. It is interesting to note that this argument requires that electromagnetic duality is not broken at two-derivative order. It would be interesting to study generalizations where the duality is broken at leading order, such as when a dilaton couples to the field strength. Moreover, this argument depends in an essential way on a symmetry of Einstein-Maxwell which is only present in four dimensions. In $d \neq 4$ there is no reason to expect that such a non-renormalization theorem should be valid, and so it is not clear whether the weak gravity conjecture is similarly trivialized by non-trivial RG running.
Considering scalar fields might also offer the opportunity to check whether the conditions we discuss on Wilson coefficients are satisfied in specific models. One such example is the four-dimensional STU model [41], which contains four Abelian gauge fields and three dilatonic scalar fields. More generally, the photon and graviton are often accompanied by light scalar moduli in UV complete models from string compactifications. This means that a full understanding of the relationship between the weak gravity conjecture and higher-derivative corrections requires studying the role played by scalar fields. Another possibility is to allow for other geometries. Anti-de Sitter space, in particular, presents an interesting opportunity because of the possibility that the AdS/CFT correspondence provides more rigorous bounds on Wilson coefficients (see, for instance, [42]).
Appendix A: Operator Bases and On-Shell Matrix Elements
Operator redundancies in EFTs arise due to the field reparametrization invariance of physical observables [43]. For example, in Einstein-Maxwell we consider redefinitions of the metric of the form $g_{\mu\nu} \to g_{\mu\nu} + c_1 R_{\mu\nu} + c_2\, g_{\mu\nu} R + c_3\, F_{\mu}{}^{\rho}F_{\nu\rho} + \cdots$, where the $c_i$ are independent coefficients. In the complete effective action (including all possible terms of all mass dimensions consistent with the assumed symmetries), the effect of such a field redefinition is to shift the Wilson coefficients. By choosing the $c_i$ in a particular way, certain operators can be removed from the effective action entirely; these are the so-called redundant operators. One approach to constructing a non-redundant basis of operators is to first enumerate all local operators, then use the most general field reparametrization to remove redundant operators. In this appendix we describe an alternative approach that makes use of on-shell scattering amplitude methods.
The S-matrix corresponding to the effective action is likewise a physical observable, and independent of the choice of field parametrization. In the tree approximation, gauge invariant effective operators generate Lorentz invariant on-shell matrix elements without kinematic singularities. The on-shell method begins with the observation that there is a one-to-one correspondence between non-redundant gauge invariant local operators and Lorentz invariant local matrix elements [44]. By making use of the spinor-helicity formalism for massless on-shell states [30], it is sometimes more efficient to construct an independent set of the latter. Below we use this correspondence to construct a complete basis for operators coupling gravity to N U (1) gauge fields with up to four derivatives.
The on-shell matrix elements we construct are in the helicity basis. Lorentz invariance is encoded in the requirement that the expressions we construct are rational functions of the spinor brackets $\langle ij\rangle = \epsilon_{\alpha\beta}\lambda_i^{\alpha}\lambda_j^{\beta}$ and $[ij] = \epsilon_{\dot\alpha\dot\beta}\tilde\lambda_i^{\dot\alpha}\tilde\lambda_j^{\dot\beta}$. On-shell matrix elements corresponding to gauge invariant local operators are given by polynomials of spinor brackets; we first construct a basis of monomials satisfying certain physical conditions. The first condition we impose is consistency with the action of the massless little group: under $\lambda_i \to t_i \lambda_i$, $\tilde\lambda_i \to t_i^{-1}\tilde\lambda_i$, such monomials must scale homogeneously with the correct little group weight determined by the helicities $h_i$ of each of the external states. Here we are scaling the spinors of particle $i$ separately, leaving the remaining spinors unchanged. Since the expressions we are constructing are simply strings of $\tilde\lambda$s and $\lambda$s, this constraint is equivalent to a counting condition on the number of times each particle's spinors appear. A straightforward (though certainly not optimal) approach is to first generate a complete basis of monomials, and then numerically evaluate them on sets of randomly generated spinors to find a linearly independent subset.
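The numerical independence check can be sketched in a few lines. The monomials below are a toy holomorphic set at four points, and the check ignores momentum conservation (which would require constrained $\lambda$, $\tilde\lambda$ pairs); both simplifications are ours:

```python
# Hedged sketch: evaluate candidate spinor-bracket monomials on random
# 2-component spinors and use the matrix rank to detect linear relations
# (e.g. Schouten identities) among them.
import numpy as np

rng = np.random.default_rng(0)

def angle(lam, i, j):                 # <ij> = lam_i^1 lam_j^2 - lam_i^2 lam_j^1
    return lam[i, 0]*lam[j, 1] - lam[i, 1]*lam[j, 0]

monomials = [
    lambda l: angle(l, 0, 1)**2 * angle(l, 2, 3)**2,
    lambda l: angle(l, 0, 2)**2 * angle(l, 1, 3)**2,
    lambda l: angle(l, 0, 3)**2 * angle(l, 1, 2)**2,
]

samples = []
for _ in range(12):
    lam = rng.normal(size=(4, 2)) + 1j*rng.normal(size=(4, 2))
    samples.append([m(lam) for m in monomials])
print(np.linalg.matrix_rank(np.array(samples)))  # rank < 3 would signal a relation
```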
To construct local operators corresponding to the monomials we can make use of replacement rules that trade pairs of helicity spinors for the (anti-)self-dual field strength tensors $F^{\pm}$ for photons and the (anti-)self-dual Weyl tensors $W^{\pm}$ for gravitons 6 . For non-minimal operators there are additional helicity spinors; these must come in pairs with zero net little group weight, and so we can replace them with derivatives. 6 Here we are defining $\sigma^{\mu\nu} \equiv \tfrac{i}{4}(\sigma^{\mu}\bar\sigma^{\nu} - \sigma^{\nu}\bar\sigma^{\mu})$ and its conjugate with dotted and undotted indices exchanged. Using standard trace identities, we can rewrite the local operators we construct in the more familiar (though less compact) Lorentz vector notation.
where the derivative acts on the local operator creating state $i$. As an illustrative example, a matrix element of this type can be converted, using the replacement rules given above, into a corresponding local operator. Here we have used a superscript $F_i$ to indicate that the spin-1 states correspond to distinct U(1) gauge groups. If two or more states with the same helicity correspond to the same U(1) factor, then we must Bose-symmetrize over the particle labels in the matrix elements before applying the replacement rules. This generically reduces the number of independent local operators at a given order in the derivative expansion.
Finally we must discuss the constraints of parity conservation. In the spinor-helicity formalism, parity P acts by interchanging the chirality of the spinors, $\lambda_{i\alpha} \leftrightarrow \tilde\lambda_{i\dot\alpha}$, or equivalently by interchanging angle and square spinor brackets 7 . A local operator is called parity conserving if it generates local matrix elements that satisfy $P \cdot M_n(1^{h_1}, 2^{h_2}, \ldots, n^{h_n}) = M_n(1^{-h_1}, 2^{-h_2}, \ldots, n^{-h_n})$.
This means that when constructing a basis of local operators using the method described above, in a parity conserving model the matrix elements $M_n(1^{h_1}, 2^{h_2}, \ldots, n^{h_n})$ and $M_n(1^{-h_1}, 2^{-h_2}, \ldots, n^{-h_n})$ should not be counted separately, while in a parity non-conserving model they should be.
Three-Derivative Operators
In accord with the constraint (A9), the possible non-redundant three-derivative operators that generate on-shell matrix elements with $k$ photons and $m$ gravitons are highly restricted. The list of possible matrix elements, modulo Schouten identities and momentum conservation, consists of the three-photon helicity configurations $(+1, +1, +1)$ and $(-1, -1, -1)$, together with their corresponding local operators. There are two independent three-derivative local operators; imposing parity conservation, there is only a single independent local operator. Such operators vanish unless all field strength tensors are from distinct U(1) factors. To preserve Bose symmetry of the matrix element, the associated Wilson coefficients must be totally antisymmetric in flavor indices.
An equivalent form of the three-derivative effective Lagrangian is L_3 = a_{ijk} F^i_μ{}^ν F^j_ν{}^ρ F^k_ρ{}^μ + b_{ijk} F^i_μ{}^ν F^j_ν{}^ρ F̃^k_ρ{}^μ, where both a_{ijk} and b_{ijk} are totally antisymmetric. The first operator (a) is parity even while the second (b) is parity odd.
In this appendix we review the derivation of (21). Recall the corrected equation of motion (19) for the gauge field; for simplicity, we label the term in parentheses on its right-hand side by G^i_{μν}.
First note that the antisymmetry of F_{μν} allows us to rewrite the equation of motion in a more convenient form. We expand this equation in powers of the coefficients α, ..., ω; the zeroth- and first-order equations follow. The solution to the zeroth-order equation is the uncorrected Reissner-Nordström solution.
We are interested in obtaining the first-order part, which represents the corrections to the background. The derivative may be removed from (B3b) because an additive constant has the same fall-off in r as the solution to (B3a), so we may absorb it into the definition of the integration constant in the zeroth-order solution, which is q. Note that G_{μν} depends explicitly on (α, ..., ω), so (G_{μν})^{(1)}, which is first-order in the coefficients, depends only on the zeroth-order values of the fields F_{μν} and W_{μνρσ}.
In addition to the Maxwell equation, the gauge fields must satisfy the Bianchi identity. Together with the assumed spherical symmetry, which imposes that only F^i_{tr} and F^i_{θφ} are non-zero, this gives a constraint on the magnetic component of the gauge field. Since the leading-order magnetic field (15) is the unique spherically symmetric field with magnetic monopole moment p^i, and by (B6) there can be no subleading 1/r corrections, it remains the exact solution even with the addition of higher-derivative interactions.
Now we may use this to compute the first contribution to the stress tensor corrections. This relies on the non-trivial fact that the combination of √−g and F appearing there is the only combination that enters the corrections to the stress tensor. To see this, consider the stress tensor for a Maxwell field; we are interested only in its corrections, and we use the fact that only F_{tr} and F_{θφ} are non-zero, and only the former is corrected. In section II, we computed the shift to the geometry by first computing the shift to the stress tensor due to the presence of higher-derivative operators. One source of stress tensor corrections comes from varying the four-derivative operators with respect to the metric. The variations of each of these terms are recorded here for reference.
Each of the terms on the left-hand side is multiplied by √−g in the action. Note that we use the shorthand (F^i F^j) to denote F^i_{μν} F^{jμν}, and W AB to denote W_{μνρσ} A^{μν} B^{ρσ}.
Consider a general codimension-1 hypersurface X embedded in R^n, defined by an equation of the form Σ_{i=1}^n …, where T(x_i) is small in the sense that |T(x_i)| < ε for all points x_i ∈ X, for some arbitrarily small ε > 0. Since this condition is preserved under orthogonal rotations, every point on X can be mapped to x_i = 0 for i > 1, up to a redefinition of the function T(x_i). Without loss of generality, then, we will study the local neighbourhood of such a point. We use the fact that we are interested in functions of the form T(x_i) = …
"Physics"
] |
Survival of a single mutant in one dimension
We study a one-dimensional two-type contact process with equal rates of propagation (and death) of the two types. We show that the progeny of a finite number of mutants has a positive probability of survival if and only if at time 0 there is at most a finite number of residents on at least one side of the mutants' "colony".
Introduction
The aim of this paper is to study the probability that the progeny of a single mutant in an infinite population of residents will survive. We consider this problem in the framework of the one-dimensional two-type contact process.
We will prove that if the mutant has neither a selective advantage nor a disadvantage compared with the individuals of the resident population, then, provided we are in the supercritical case (which means that a single individual's progeny may survive for ever), a single mutant with an empty half-line in front of him, and all sites behind him occupied by resident individuals, has a progeny which survives forever with positive probability, while any finite number of mutants, with infinitely many residents on both sides, have a progeny which goes extinct a.s. Note that we define the progeny at time t of a given ancestor at time 0 as the set of individuals alive at time t who are the descendants of that ancestor at time 0.
Let us now explain what we mean by the contact process. Note that this process is often presented in the language of infection. We shall rather consider it here as a model of the spread of a population. Consider first the usual one-type contact process with birth parameter λ > 0. This process {ξ_t, t ≥ 0} is a {0, 1}^ℤ-valued Markov process, hence ξ_t is a random mapping which to each x ∈ ℤ associates ξ_t(x) ∈ {0, 1}. The statement ξ_t(x) = 1 means that the site x is occupied at time t, while ξ_t(x) = 0 means that site x is empty at time t. The process evolves as follows. Let x be such that ξ_0(x) = 1. We wait a random exponential time with parameter 1 + 2λ. At that time, with probability 1/(1 + 2λ), the individual at site x dies; with probability λ/(1 + 2λ), the individual, while continuing its own life at site x, gives birth to another individual; the newborn occupies site x + 1 if it is empty, and dies instantaneously otherwise; and with probability λ/(1 + 2λ), it gives birth to a newborn who occupies site x − 1 if it is empty, and dies instantaneously otherwise. Then the same operation repeats itself until site x becomes empty, independently of what happened so far. The same happens at any occupied site, and the exponential clocks at various sites are mutually independent. We will use the same notation ξ_t to denote the random element of {0, 1}^ℤ defined above, and the random subset of ℤ consisting of all sites x ∈ ℤ where ξ_t(x) = 1.
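For intuition, the dynamics just described can be simulated directly. The sketch below is our own illustration (all names are our choices, and the infinite lattice is truncated to a finite window, which matters only for very long runs):

```python
import random

def contact_process(initial, lam, t_max, L=200):
    """One-type contact process on {-L,...,L}: each occupied site dies at
    rate 1 and gives birth onto each empty neighbour at rate lam."""
    occupied = set(initial)
    t = 0.0
    while occupied and t < t_max:
        # total event rate is N * (1 + 2*lam); pick a uniformly random site
        t += random.expovariate(len(occupied) * (1 + 2 * lam))
        x = random.choice(list(occupied))
        u = random.random() * (1 + 2 * lam)
        if u < 1.0:
            occupied.discard(x)                  # death, probability 1/(1+2*lam)
        else:
            y = x + 1 if u < 1.0 + lam else x - 1
            if abs(y) <= L and y not in occupied:
                occupied.add(y)                  # birth onto an empty site
    return occupied

# crude survival-probability estimate from a single occupied site (supercritical lam)
runs = 200
survived = sum(bool(contact_process({0}, lam=2.0, t_max=50.0)) for _ in range(runs))
print("estimated survival probability:", survived / runs)
```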
The two-type contact process {η_t, t ≥ 0} is a {0, 1, 2}^ℤ-valued Markov process which starts from an initial condition (A, B), where A and B are two non-intersecting subsets of ℤ, A denoting the set of sites which are occupied by type 1 individuals and B the set of sites which are occupied by type 2 individuals at time t = 0. In other words, η_0(x) = 1 for x ∈ A, η_0(x) = 2 for x ∈ B, and η_0(x) = 0 otherwise. The two-type contact process with equal birth rates λ evolves exactly like the one-type process, with each individual giving birth to individuals of its own type. We shall consider in section 4 the case where the birth rate of the mutants (i.e. type 2 individuals) differs from that of the residents (i.e. type 1 individuals).
The (one-type) contact process has been extensively studied and plays a central role in the theory of interacting particle systems (see [6], [7] and references therein), but there are very few papers on the two-type contact process (see [3] and [8]). For another closely related probabilistic model of competition between species, see Durrett and Neuhauser.
Let us now present a useful construction of the contact process, called the graphical representation, which is valid in both the one-type and the two-type cases (at least in the case of equal birth rates). The important feature of this construction is that processes corresponding to different initial conditions are coupled through it. Indeed, {ξ_t, t ≥ 0} (resp. {η_t, t ≥ 0}) is a fixed function of both the initial condition and the set of Poisson point processes which code all the randomness, and which we now introduce.
Consider a collection {P^x_t, P^{x,+}_t, P^{x,−}_t, t ≥ 0; x ∈ ℤ} of mutually independent Poisson point processes, such that the P^x's have intensity 1 while both the P^{x,+}'s and the P^{x,−}'s have intensity λ, all defined on a probability space (Ω, F, P). On the set ℤ × [0, ∞) we place a δ on the point (x, t) whenever t belongs to the Poisson process P^x. On that set we also place an arrow from (x, t) to (x + 1, t) whenever t belongs to the Poisson process P^{x,+} and an arrow from (x, t) to (x − 1, t) whenever t belongs to the Poisson process P^{x,−}.
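A minimal sampler for this graphical representation might look as follows (our own illustrative code; the infinite lattice is replaced by a finite set of sites):

```python
import numpy as np

def sample_graphical_representation(sites, lam, T, seed=1):
    """Sample, for each site x, the delta marks (P^x, rate 1) and the
    right/left birth arrows (P^{x,+} and P^{x,-}, rate lam) on [0, T].
    Every process xi^A (or eta^{A,B}) is then a deterministic function of
    this single sample, which is what couples all initial conditions."""
    rng = np.random.default_rng(seed)
    marks = {}
    for x in sites:
        marks[x] = {
            "delta": np.sort(rng.uniform(0, T, rng.poisson(T))),
            "right": np.sort(rng.uniform(0, T, rng.poisson(lam * T))),
            "left":  np.sort(rng.uniform(0, T, rng.poisson(lam * T))),
        }
    return marks

marks = sample_graphical_representation(range(-10, 11), lam=2.0, T=5.0)
print(len(marks[0]["delta"]), len(marks[0]["right"]))
```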
The process {ξ^A_t, t ≥ 0} is defined as follows. An open path in ℤ × [0, +∞) is a connected oriented path which moves along the time lines in the increasing t direction without passing through a δ symbol, and along birth arrows, in the direction of the arrow. Now ξ^A_t(y) = 1 if and only if there is an open path from (x, 0) to (y, t) for some x ∈ A. To construct the two-type contact process, we call a line of descendance an open path starting from an occupied site at time 0, and such that any arrow belonging to this path points to an unoccupied site.
Note that unlike open paths, lines of descendance depend on the initial configuration of the process. For A, B two disjoint subsets of ℤ, we define {η^{A,B}_t, t ≥ 0} as the {0, 1, 2}^ℤ-valued process whose value at time t is given by η^{A,B}_t(y) = 1 if there exist x ∈ A and a line of descendance from (x, 0) to (y, t), η^{A,B}_t(y) = 2 if there exist x ∈ B and a line of descendance from (x, 0) to (y, t), and η^{A,B}_t(y) = 0 otherwise. In biological terms, we think of the type 1 population as the resident population, and of the type 2 population as a mutant population.
We shall also need in this paper to define both the one-type and the two-type contact processes on subsets of ℤ of the form (−∞, a] (resp. [a, +∞)). These are defined as above, with the restriction that the Poisson processes (P^{a,+}, P^y, P^{y,+}, P^{y,−}, y > a) (resp. (P^{a,−}, P^y, P^{y,+}, P^{y,−}, y < a)) are ignored.
Let {ξ^A_t, t ≥ 0} denote the one-type contact process starting from the configuration whose set of occupied sites is A. We will write ξ^x_t for ξ^{{x}}_t. We shall use the notation ρ = P(ξ^0_t ≠ ∅ for all t ≥ 0). (1.1) It follows from well-known results on the contact process, see e.g. Liggett [6], that there exists λ_c < ∞ such that ρ > 0 whenever λ > λ_c.
The aim of this paper is to prove Theorem 1.1 below. This result has an interesting consequence concerning the one-type contact process {ξ^A_t, t ≥ 0}: in the case where both sup(A) = +∞ and inf(A) = −∞, the progeny of any ancestor alive at time 0 dies out a.s., while in the case where |A| = ∞ but sup(A) < ∞, one and only one ancestor alive at time 0 has a progeny which survives for ever; see Corollary 3.12 below.
From the results needed to prove Theorem 1.1 we can also deduce Theorem 1.2. We conjecture that Theorem 1.2 holds for the two-type contact process on ℤ^d for all d ≥ 1. In [8] it is proved that for d ≤ 2 and all initial configurations lim_{t→∞} P(η_t(x) = 1, η_t(y) = 2) = 0 for all x, y, while for d ≥ 3 the process admits invariant measures μ such that for all x ≠ y, μ({η : η(x) = 1, η(y) = 2}) > 0. Although this last result may be seen as evidence favoring our conjecture (when d ≥ 3), it does not imply it, nor is it implied by it.
The paper is organized as follows. In section 2, we recall and prove several results on the one-type contact process which are needed in further sections. In section 3, we study the case of a single or a finite number of mutants confronted with an infinite number of residents, in the case of equal birth rates. Theorems 1.1 and 1.2 are proved in subsections 3.3 and 3.4 respectively. Finally, in section 4, we conclude with some remarks on the case of unequal birth rates (i. e. when one of the two species has a selective advantage). We formulate one result and two conjectures.
In all of this paper, we assume that λ > λ c .
Some results on the one-type contact process
Let ℤ− be the set of integers smaller than or equal to 0 and let ℤ+ be the set of integers greater than or equal to 0.
For a proof of these results the reader is referred to Theorems VI.2.19 and VI.2.24 in [6].
Let R t = sup s≤t r s .
PROOF: Let τ_a = inf{s : r_s ≥ a}. Then, by the strong Markov property, this is bounded below by a quantity which, by symmetry, is at least the expression displayed above. To conclude, fix ε > 0 and let c = 2/ρ. Then write the corresponding decomposition, where we have used Lemma 2.1 for the first inequality, and the L¹ convergence of r_t/t for the equality. Hence the lim sup is as claimed. Since ε is arbitrary, the lemma is proved.
Although the following lemma is well known, we did not find it in previous publications and we include it here for the sake of completeness. PROOF: Let 0 < ε < v. Then write the corresponding decomposition, where the last inequality is due to the fact that ξ^0_n(x) = ξ_n(x) for any x ∈ [ℓ_n, r_n]. We now show that the sum over n of each of the three terms of the right-hand side above converges. For the first of these terms, the convergence is a consequence of the fact that for any n the distribution of ξ_n is stochastically above the upper invariant measure of the contact process, and of Theorem 1 of [5]. For the third term the convergence follows from Corollary 3.22 in Chapter VI of [6]. For the second term it follows from that same corollary applied to ℓ_n and our choice of ε. We have thus proved the required summability. This, the Markov property and Theorem 3.29 in Chapter VI of [6] imply that P(lim_n r_n/n = v(λ) | τ_0 = ∞) = 1, and the lemma follows from the fact that sup_{0≤s≤t≤1} (r_{n+t} − r_{n+s}) is bounded above by a Poisson r.v. of parameter λ.
It now follows: PROOF: There exists a strictly increasing sequence {x_k, k ≥ 1} ⊂ A such that there is an infinite open path starting from each x_k. Now for each n ≥ 0 and some R ∈ ℕ define the events A_n as above. It follows from the last lemma that for R large enough, P(A_n) = P(A_0) > 0. From now on such an R is fixed. From the ergodic theorem, a.s. infinitely many A_n occur. So almost surely, one A_n with n ≥ R occurs. Now choose k large enough such that x_k ≥ n. Clearly there exists an infinite open path starting from (x_k, 0) which lies on the right of the line {(vt, t), t ≥ 0}.
Although our next result is well known, we could not find it explicitly stated in the literature. It follows immediately from our previous corollary and the fact that v(λ) > 0 whenever λ > λ_c. Let μ+ denote the upper invariant measure for the contact process on the half-line. This is defined as follows. Denote by {χ_t, t ≥ 0} the one-type contact process on the half-line. In accordance with the above conventions, for A a subset of the half-line, we write χ^A_t for the contact process on the half-line starting from the initial condition whose set of occupied sites is A. Then μ+ is the weak limit, as t → ∞, of the law of the process started from the fully occupied configuration. For the proof of this result, we will need the following lemma. PROOF: We first exploit the well-known self-duality of the contact process. Since there is a one-to-one correspondence between the open paths from some (y, 0), y ∈ ℤ, to some (x, t), x ∈ (0, n], and the open paths from some (x, 0), x ∈ (0, n], to some (y, t), y ∈ ℤ, obtained by reversing the directions of the arrows, the two probabilities in question coincide. Letting t → ∞ in this identity, the last right-hand side becomes the probability that there is an infinite open path starting from some site in (0, n]. Part b) follows from part a), and in view of Lemma 2.7, to prove part a) it suffices to show that there exist constants K, c > 0 such that the relevant probability decays exponentially in n. It follows from Corollary VI.3.22 in [6] that the corresponding exponential estimates hold for some K_1, c > 0. Therefore, using (2.1), we get an analogous bound with some constant K_3. It follows from (2.2) that the second term of the right-hand side decays exponentially in n. Hence, the lemma will be proved if we show that the first term also decays exponentially in n. To do so, let σ = inf{t : r_t ≤ −n} and let Y_n be a Poisson random variable of parameter 2λβn. It then follows from the strong Markov property applied at the stopping time σ that the required bound holds. Since, given our choice of β, lim_{n→∞} P(Y_n ≤ (1 + vβ)n) = 1 and, by (2.1), P(r_{2βn} ≤ vβn) decays exponentially in n, the same happens to P(σ ≤ βn) = P(inf_{0≤t≤βn} r_t ≤ −n).
Let T_{−1} be the operator on the set of probability measures on {0, 1}-valued configurations of the half-line which translates a measure by −1 (cf. the definition of T_i below). The natural partial order on these configurations induces a partial order on the set of probability measures, which we denote by ≤. Recalling that μ+ is the upper invariant measure for the contact process on the half-line, we have: PROOF: Consider the contact process {χ_t, t ≥ 0}, this time on the half-line enlarged by one extra site, starting again from χ_0 ≡ 1. Let now {χ̃_t, t ≥ 0} denote the same process, with the same initial condition and the same realization of the graphical representation, except that we delete all arrows between sites 0 and 1. The restriction to the half-line of the asymptotic (as t → ∞) law of χ_t coincides with μ+, while the same law associated with χ̃_t coincides with T_{−1}(μ+). The result follows from the fact that χ̃_t(x) ≤ χ_t(x) for all t > 0, x ≥ 1. Our next proposition is taken from [1] (see Theorem 2 in that reference). Although there the result is stated and proved for the contact process on ℤ, the proof also holds for the contact process on the half-line.
PROOF: Consider the contact process {χ_t, t ≥ 0} on the half-line, starting from χ_0 ≡ 1. We deduce the first inequality from Proposition 2.9. It then follows from Lemma 2.11 below that the inequality carries over to the product, and it remains to let t → ∞.
Lemma 2.11. Let {χ_t, t ≥ 0} denote the contact process on the half-line, starting from any deterministic initial condition. For any t > 0, the law of χ_t has positive correlations.
PROOF: For the contact process on [1, ..., n], the result follows from Theorem 2.14 on page 80 of Liggett [6]. Our result then follows by letting n → ∞.
Note that Lemma 2.11 applies as well to the contact process {ξ_t, t ≥ 0} on ℤ.
For x, y ∈ ℤ, let S^{x,y} denote the event that the contact processes starting from {x} and from {y} both survive for ever, where the two processes are constructed from the same collection of Poisson processes {P^x_t, P^{x,+}_t, P^{x,−}_t, x ∈ ℤ} as explained above. Note that on the event S^{x,y} the process starting from {x} survives, but this does not mean that if we start from {x, y} the progeny of (say) x lives forever. We now show that (recall the definition of ρ in (1.1)) Lemma 2.12. For all x, y ∈ ℤ, P(S^{x,y}) ≥ ρ².
PROOF: Denoting by μ the upper invariant measure of the contact process {ξ_t, t ≥ 0} on ℤ, i.e. μ is the limit as t → ∞ of the law of ξ_t, we have, by the same duality argument already used in the proof of Lemma 2.7, the corresponding identities. Letting t → ∞ in the result of Lemma 2.11 applied to the contact process on ℤ implies that μ has positive correlations. Hence the product bound holds, and the result follows from this inequality and the three above identities.
We now fix some λ > λ_c and let v = v(λ). We pick a suitable constant; from now on t_0 will be a large enough multiple of 2/v so that the inequality (2.3) holds, with the quantities appearing in it defined as follows. For any z ∈ ℤ, we write the corresponding definitions, where as usual the sup (resp. the inf) over an empty set is −∞ (resp. +∞). From now on, t_0 will be a large enough multiple of 2/v such that both the inequality (2.3) and the conclusion of Lemma 2.13 hold.
The two-type contact process with equal birth rates
Let η t denote the contact process with two types.
{η^{A,B}_t, t ≥ 0} now denotes the contact process where at time zero A is the set of sites occupied by individuals of type 1, and B is the set of sites occupied by individuals of type 2. The dynamics is the same as before, using the same construction with the same collection of Poisson processes, except that now an individual of type α ∈ {1, 2} located at site z gives birth at time t to an individual of the same type at site z + 1 (resp. at site z − 1), if t is a point of the Poisson process P^{z,+} (resp. P^{z,−}) and the site z + 1 (resp. z − 1) is not occupied at time t.
In other words, the 1's and 2's together evolve like a one-type contact process, and all descendants of a 1 (resp. of a 2) are 1's (resp. 2's).
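Accordingly, the one-type simulation sketched earlier extends to two types with a one-line change: a newborn inherits its parent's type, and births onto occupied sites are suppressed. Again this is our own illustration, not the authors' code:

```python
import random

def two_type_contact(A, B, lam, t_max, L=200):
    """Two-type contact process on a truncated lattice: state[x] in {1, 2};
    descendants keep the ancestor's type, births onto occupied sites die."""
    state = {x: 1 for x in A}
    state.update({x: 2 for x in B})
    t = 0.0
    while state and t < t_max:
        t += random.expovariate(len(state) * (1 + 2 * lam))
        x = random.choice(list(state))
        u = random.random() * (1 + 2 * lam)
        if u < 1.0:
            del state[x]                          # death
        else:
            y = x + 1 if u < 1.0 + lam else x - 1
            if abs(y) <= L and y not in state:
                state[y] = state[x]               # child inherits the type
    return state

# a single mutant at 0 with residents behind it and an empty half-line in front
final = two_type_contact(A=range(-50, 0), B=[0], lam=2.0, t_max=30.0)
print("mutants alive:", sum(1 for v in final.values() if v == 2))
```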
A single mutant in front of an infinite number of residents may survive
In this subsection, we consider the process {η^{A,B}_t, t ≥ 0} only in the case where A < B, meaning that all points of A are located to the left of each point of B. In other words, the initial configurations belong to the set of configurations in which every 1 lies to the left of every 2. Given the nearest-neighbor character of our process, whenever it starts in this set, it remains in it with probability 1.
PROOF:
By Lemma 2.13 and symmetry arguments we have the corresponding bound. Consequently, for the initial configuration ({0}, {vt_0}), the first of these paths is always occupied by a type 1 particle. Therefore, adding extra type 1 particles to the left of the origin in the initial configuration does not alter the process to the right of that open path. Hence the claimed inequality holds. We will also need the following lemma, which states that the probability considered in its statement converges to 0.
PROOF: It suffices to show the statement on the relevant interval; hence we only need to prove that the corresponding probability for the one-type contact process converges to 0 as t_0 goes to infinity. But on the event C(t_0, ε) the set of occupied points in the relevant interval is the same whether the initial condition of the process is the fully occupied configuration or {vt_0}. Since, starting from the fully occupied configuration, we have more occupied points than under the upper invariant measure, the result follows from the fact that under the upper invariant measure the probability of having an empty interval of length n tends to 0 as n tends to infinity.
From now on we shall use {^aζ_t, t ≥ 0} to denote the two-type contact process on (−∞, a]. Now we can prove: PROOF: In this proof we will consider the two-type contact process on both ℤ and (−∞, (3/2)vt_0]. These two processes are constructed on the same probability space with the same Poisson processes. For the second of these processes, the corresponding identity holds. The existence of the first of these paths has a probability which converges to 0 as t_0 goes to infinity, by Lemma 2.2.
By reversing the arrows and using symmetry and again Lemma 2.2, we see that the same happens to the second path.
Hence, if we define G as the event that neither of these paths exists, P(G^c) converges to 0 as t_0 goes to infinity. The result follows from Corollary 3.1 and the following claim: starting both η_t and ^{3vt_0/2}ζ_t from the same initial configuration, the two processes agree on the relevant region. To justify this claim, note first that on the event G there exists a rightmost open path from (vt_0, 0) which remains to the left of the line x = (3/2)vt_0. We now introduce the following partial order ⪰ on {0, 1, 2}^ℤ: (3.1) intuitively, η ⪰ η′ means that η has "more 1's" and "fewer 2's" than η′.
Remark 3.5. The reader might think that a more natural definition of the inequality in distribution would be to say that μ_1 ⪰ μ_2 whenever μ_1(f) ≥ μ_2(f) for all f : {0, 1, 2}^ℤ → ℝ which are increasing in the sense that η_1 ⪰ η_2 implies f(η_1) ≥ f(η_2). Theorem II.2.4 in [6] says that for the standard partial order on {0, 1}^ℤ the two definitions are equivalent. It is clear that this theorem can be extended to our partial order, but we shall not need this result here.
Note that η_1 ⪰ η_2 implies br(η_1) ≥ br(η_2), and that if γ ⪰ ζ, the coupling between the contact processes starting from different initial conditions deduced from the graphical representation preserves this order at all later times. In the sequel, for any probability measure μ on {0, 1, 2}^ℤ and any i ∈ ℤ, T_i(μ) will denote the measure μ translated by i; that is, the measure such that for all n ∈ ℕ, all x_1 < x_2 < ... < x_n and all possible values of a_1, ..., a_n, we have: (*) T_i(μ)(η(x_1 + i) = a_1, ..., η(x_n + i) = a_n) = μ(η(x_1) = a_1, ..., η(x_n) = a_n). Moreover, if μ is a measure on A^{[n,∞)} where A is any non-empty subset of {0, 1, 2}, then T_i(μ) will be the measure on A^{[n+i,∞)} satisfying (*).
As before, μ+ denotes the upper invariant measure for the contact process on the half-line, and μ+_2 will be the measure obtained from μ+ by means of the map F from {0, 1}-valued to {0, 2}-valued configurations given by F(η)(x) = 2η(x). With a slight abuse of notation, the measures μ+ and μ+_2 will also be seen as measures on {0, 1, 2}-valued configurations, and a similar abuse of notation will be applied to the translates of these measures.
We start the process {η_t, t ≥ 0} from the initial distribution μ̃ determined by two requirements: its projection on one half-line is the point mass on the configuration described above, and its projection on the complementary half-line is specified below. In the sequel η̃_0 will denote a random initial configuration distributed according to μ̃. In other words, we assume that η_0 = η̃_0.
We now proceed as follows. We partition the probability space into a countable number of events H, J_0, J_1, ..., and let the process run on a time interval of length t_0. Then we show that the distribution of η_{t_0} conditioned on any event of the partition is ⪯ a convex combination of translations of μ̃; hence the unconditioned distribution of η_{t_0} is also ⪯ such a convex combination. Then we replace η_{t_0} by a random configuration η_1 whose distribution is this convex combination, let the process run on another time interval of length t_0, and so on. Our partition of the probability space is given by the events defined above, where Q_{t_0} = max{R_{t_0}, vt_0} (recall that R_t = sup_{s≤t} r_s).
Since the initial distribution considered here is ⪯ the initial distribution of Corollary 3.3, we have the corresponding bound. We also claim that, conditioned on H, the distribution of ^{3vt_0/2}ξ_{t_0} is ≥ T_{3vt_0/2}(μ+_2) (this follows from Lemma 2.8 and the fact that the process ^{3vt_0/2}ξ_t is independent of H).
Therefore, the distribution of η_{t_0} conditioned on H is ⪯ ν, where ν is determined by: 1. the projection of ν on {0, 1, 2}^{(−∞, 3vt_0/2]} is the point mass on a fixed configuration; 2. the projection on the complementary half-line follows from Lemma 2.10 (applied to μ+_2). A similar argument shows that the conditional distribution of η_{t_0} given J_m is ⪯ an analogous measure, where Y is distributed as above.
It follows from the above arguments that η_{t_0} ⪯ μ_1 in distribution, where μ_1 is the convex combination described above. We can now state: Proposition 3.6. If t_0 is large enough, there exists a positive integer-valued random variable Z(t_0), with an exponentially decaying tail, such that the corresponding domination holds.
PROOF:
Part a) follows from (3.4), and part b) follows from part a) of Lemma 2.6 and the fact that R_{t_0} is bounded by a Poisson random variable of parameter λt_0. To prove part c), write the corresponding decomposition; it then follows from Lemma 2.2 that the lim sup vanishes, and the result follows from (3.3).
We can now prove the main result of this subsection. Let α_t denote the number of descendants at time t of the unique initial type 2 individual (hence α_t denotes also the number of type 2 individuals at time t). On the event that the lineage of the unique type 2 individual survives for ever, we have α_t → ∞ as t → ∞ a.s. Hence if that event has positive probability, E(α_t) → ∞ as t → ∞. Consequently, for any δ > 0, the corresponding lower bound holds for t large. Denote by r_t(x) the supremum of the set of sites occupied by the descendants of the individual (x, 0). Clearly, whatever the initial configuration is, the corresponding inequality holds, where the relevant quantities are as above. From the result recalled at the beginning of section 2, there exists T such that the corresponding bound holds. Recall that in our initial configuration all sites are occupied (who occupies each site is irrelevant here); it remains to contradict the fact that T_δ < ∞, which we now do.
For n odd, let Z_t(n) be the number of sites which at time t are in a line of descendance starting at time 0 in the interval [−(n − 1)/2, ..., (n − 1)/2]. Now by stationarity, whenever t ≥ T_δ, the corresponding lower bound holds. On the other hand, if t ≥ T, by symmetry, the corresponding upper bound holds. Choosing n > 2t(v + 1)/δ, the last two inequalities yield a contradiction.
In order to deduce Theorem 3.9 from Proposition 3.10, we shall need the following lemma, in which, as above, the η_t's for various initial conditions are defined with the same unique graphical representation.
From Corollary 3.8, symmetry and translation invariance, the corresponding bound holds. On the set {(x, t) : x ∈ ℤ, t ≥ 0}, the Poisson processes used in the construction are n-fold mixing with respect to translations on ℤ for any n ∈ ℕ. Since x_{n+1} ≥ x_n + 1, this implies that for all k ≥ 1 the corresponding estimate holds. Consequently P(∩_{n≥0} C^c_n) ≤ (1 − γ)^k for all k ≥ 1. This shows that P(∪_{n≥0} C_n) = 1.
The result for the D_m's is proved similarly. From the last lemma we know that P(∪_{n,m} E_{n,m}) = 1. Hence, it suffices to show the claim for all n, m ∈ ℕ. But on the event E_{n,m} the evolution of the 2's is not altered by adding 1's to the left of y_m or to the right of x_n. Therefore the result follows from Proposition 3.10.
Proof of Theorem 1.1
The only if part follows from Theorem 3.9. Let us prove the if part. We consider the case where |A ∩ ℤ+| < ∞. The other case is treated similarly.
We let the process evolve up to a suitable stopping time. Hence, from the strong Markov property, it remains to show that whenever A ∩ ℤ+ = ∅, P_{A,B}(the type 2 population survives for ever) > 0.
This last statement follows from translation invariance, (3.2) and Corollary 3.8.
Proof of Theorem 1.2
By the Markov property and symmetry it suffices to show that the theorem holds for some A and B.
To prove this, let (x_n)_{n≥0} and (y_m)_{m≥0} be as in the statement of Lemma 3.11, and let C_n and D_m be as in the proof of that lemma. It follows from that same lemma that there exist n and m such that P(C_n ∩ D_m) > 0. This implies that P_{{y_m},{x_n}}(∀t > 0, ∃x, y : η_t(x) = 1, η_t(y) = 2) > 0.
Hence, the theorem holds when A = {x n } and B = { y m }.
Corollary for the one-type contact process
The following is an immediate consequence of the above results. 1. If sup A = +∞ and inf A = −∞, then the progeny of any ancestor alive at time 0 dies out a.s. 2. If |A| = +∞ but sup A < ∞, then exactly one individual has a progeny which survives for ever.
PROOF: The first statement is a consequence of Theorem 3.9. For the second statement, first note that it follows from (3.2) and Corollary 3.8 that for any initial condition having a rightmost individual, the probability that this individual has a progeny which survives forever is bounded below by γ > 0. We then define an increasing sequence of stopping times: τ_1 is the smallest time at which the progeny of the rightmost initial individual dies out, τ_2 is the smallest time at which the progeny of the rightmost individual at time τ_1 dies out, and so on. It then follows from a repeated application of the strong Markov property that P(τ_n < ∞) ≤ (1 − γ)^n. Hence, with probability 1, τ_k = ∞ for some k, which implies that at least one individual has a progeny which survives forever. Suppose now that two individuals, say x < y, have a progeny which survives for ever with positive probability. Adding infinitely many individuals at time t = 0 on the right of y cannot possibly modify the fate of the progeny of x. This would mean that the progeny of x would survive for ever with positive probability, in the presence of infinitely many individuals at time t = 0 on both of its sides. This contradicts Theorem 3.9.
Remarks about the case of unequal birth rates
Assume that the type 1 individuals have the birth rate µ, and type 2 individuals have the birth rate λ.
It is not hard to deduce from our argument that for μ > λ_c there exists ε > 0 such that the conclusion of Theorem 1.1 remains true if μ − ε < λ < μ. However, we conjecture that this is not the case for all values of λ in the interval (λ_c, μ). Consider now the right contact process, where each individual gives birth to offspring on its right at rate λ, and does not give birth to any offspring on its left. Let now λ_cc denote the critical value of the parameter λ, such that whenever λ > λ_cc, the one-type right contact process starting from {0} has a positive probability of survival. Going back to our two-type contact process, whenever λ > λ_cc, whatever the value of μ may be, the progeny of a single type 2 individual with a finite number of type 1 individuals on its right at time 0 has a positive probability of survival.
In the other direction, we conjecture that if the rates favor type 2 individuals (i.e. λ > μ > λ_c), then a unique type 2 individual has a positive probability of having descendants at all times, even when all remaining sites are occupied at time 0 by type 1 individuals.
"Mathematics"
] |
Integrating autonomously navigating assistance systems into the clinic: guiding principles and the ANTS-OR approach
Purpose Autonomously self-navigating clinical assistance systems (ASCAS) seem highly promising for improving clinical workflows. There is great potential for easing staff workload and improving overall efficiency by reducing monotonous and physically demanding tasks. However, a seamless integration of such systems into complex human-supervised clinical workflows is challenging. As of yet, guiding principles and specific approaches for solving this problem are lacking. Methods We propose to treat ASCAS orchestration as a scheduling problem. However, underlying objectives and constraints for this scheduling problem differ considerably from those found in other domains (e.g., manufacturing, logistics). We analyze the clinical environment to deduce unique needs and conclude that existing scheduling approaches are not sufficient to overcome these challenges. Results We present four guiding principles, namely human precedence, command structure, emergency context and immediacy, that govern the integration of self-navigating assistance systems into clinical workflows. Based on these results, we propose our approach, namely Auto-Navigation Task Scheduling for Operating Rooms (ANTS-OR), for solving the ASCAS orchestration problem in a surgical application scenario, employing a score-based scheduling strategy. Conclusion The proposed approach is a first step toward addressing the ASCAS orchestration problem for the OR wing. We are currently advancing and validating our concept using a simulation environment and aim at realizing a dynamic end-to-end ASCAS orchestration platform in the future.
Purpose
Mobile self-navigating robotic technology has successfully been applied to various domains, including logistics [1], housekeeping [2], agriculture [3], exploration [4], customer service [5], maintenance in hazardous environments [6] and delivery of goods [7]. Due to their high level of autonomy-often combined with versatile interfaces to the environment-robotic systems seem highly promising for the health-care domain, especially when dealing with pressing social problems like overaging and shortage of qualified personnel. There is great potential for easing staff workload and improving overall efficiency by reducing monotonous and physically demanding tasks.
First concepts for domestic care-e.g., GARMI (Franka Emika, Munich, Germany), Twendy-One [8] and Care-O-bot [9]-as well as clinical care-e.g., Moxi (Diligent Robotics, Austin, USA) and RIBA [10]-have been presented. Complementarily, we anticipate that autonomously self-navigating clinical assistance systems (ASCAS) will play a crucial role in making clinical workflows more efficient, safe and ergonomic. Possible applications are manifold and include simple fetching of materials, assisting in repositioning of patients, device control, documentation tasks, inventory management and more. In particular, we envision ASCAS as a central component of the fully assisted OR environment of the future. Within such an environment, robotic team members collaborate closely and continuously with their human counterparts to guarantee patient safety and improve patient outcome, while making surgical processes more robust and efficient. Prospectively, this might lead to a partial or even complete merging of robotic and human spheres of influence within the clinic. This is in stark contrast to most industrial and domestic scenarios where robotic machines are usually operating in highly optimized but delimited environments, commonly called envelopes, to complete subtasks of the production workflow in an unhampered manner.
Consequently, fundamental challenges must be addressed before a broad integration of ASCAS to the clinic can become a reality. We need to find means to seamlessly integrate these systems into complex clinical workflows. Guiding principles for achieving this in a safe, ethical and economic way that leaves clinicians in control of the workflow while maximizing productivity are needed.
In this short communication, we aim at providing such principles, to govern human-machine collaboration in the clinic and leverage the full potential of ASCAS regarding workflow optimization. As a further key contribution, we apply our principles to a surgical application scenario and utilize them to derive an explicit scheduling strategy named Auto-Navigation Task Scheduling for Operating Rooms (ANTS-OR) for ASCAS orchestration across multiple operating rooms.
Methods
The ASCAS orchestration problem may be described as follows: There are n ASCAS deployed within a clinical unit (ward, OR wing, etc.). Each ASCAS offers a set of tasks that it can perform (e.g., fetching sterile material, moving patient beds, adjusting medical devices). Multiple ASCAS may offer the same type of task. The execution of a task takes a certain-in general unknown-amount of time and may require a change of location beforehand. Tasks may be assigned or canceled at any time by members of the clinical staff or by clinical information systems. The execution of a task may depend on the completion of another task, and thus underlie precedence relations, or depend on other preconditions. The overall goal is an optimal exploitation of the available ASCAS resources regarding one or more objectives (e.g., patient well-being, patient outcome, staff ergonomics, throughput, costs).
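As a minimal, hypothetical data model of this problem statement (our own naming, not part of the paper), one might write:

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    kind: str                 # e.g. "fetch_sterile_material"
    location: str             # where execution must take place
    release_time: float       # time of assignment
    command_level: int        # rank of the ordering entity (cf. Fig. 2)
    emergency_level: int      # criticality of the context (cf. Fig. 3)
    depends_on: list = field(default_factory=list)  # precedence relations

@dataclass
class Ascas:
    name: str
    capabilities: set         # task kinds this system can perform
    location: str
    busy_until: float = 0.0   # occupied while executing a task

    def can_run(self, task: Task, now: float) -> bool:
        return task.kind in self.capabilities and self.busy_until <= now
```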
Clearly, this is closely related to scheduling problems that are common in many different domains including logistics, operational research, manufacturing and computer multitasking. Many variants of these problems have been described to address unique constraints or objectives associated with different contexts [11]. The ASCAS orchestration problem is characterized by its dynamic nature, since new tasks may be assigned at any time. Special consideration must also be given to objectives and constraints of the scheduling. As in the manufacturing or logistics domain, time- and cost-related objectives must be considered, though other factors, such as patient welfare, patient outcome and staff ergonomics, are of higher relevance. Tasks in the ASCAS orchestration problem are also subject to certain constraints, introduced by clinical command structures, emergency situations and precedence relations between tasks. Due to these unique and partly indistinct objectives and constraints, developing solutions to ASCAS orchestration based on known optimal or suboptimal scheduling algorithms is not straightforward.
Results
Inspired by Isaac Asimov's famous Laws of Robotics, we propose a set of fundamental principles that must be considered by any approach dealing with ASCAS orchestration. These principles aim at reflecting the unique requirements associated with clinical environments and describe the governing rules of human-machine collaboration in a simple and solution-independent manner. Based on these results, we propose our ANTS-OR approach for solving the ASCAS orchestration problem in a surgical application scenario, employing a score-based scheduling strategy.
Principles of ASCAS orchestration
(P1) Human Precedence: The clinical staff must remain in control of the workflow (P1a) and be aware of all autonomously performed actions and their consequences (P1b). In modern clinics, processes are not only controlled by human beings but also by clinical information systems. In the future, these systems might be capable of (semi-)autonomously assigning tasks to ASCAS resources: For example, a bed transportation robot might be dispatched to transfer the next patient from ward to operating room, as soon as the end of the previous surgery has been registered in the clinical information system. With the advent of AI-driven workflow recognition technology, the influence of autonomous decision systems might increase even more: imagine a workflow recognition engine-e.g., tracking the progress of a surgical intervention-assigning ASCAS tasks on the fly to offer automated context-dependent support for the surgical team. While these new technologies seem promising for improving clinical efficiency, it is vital to ensure that human staff remains in control of the process at any time.
(P2) Command Structure: Decisions made by senior team members supersede decisions made by subordinates. Clinical workflows and decision-making processes are grounded on clear hierarchies that allow for efficient collaboration, especially in frequently occurring critical and time-sensitive situations. These command structures must be conserved when dealing with ASCAS orchestration, e.g., by attaching a higher priority to tasks assigned by senior staff members than to tasks assigned by subordinates.
(P3) Emergency Context: Measures dealing with emergency situations must be executed with maximum priority. Time-sensitive situations, e.g., when dealing with emergencies or adverse events, occur frequently in clinical units. These situations demand immediate measures to avoid or minimize severe consequences for the patient. Thus, tasks that are assigned in an emergency context or are inherently emergency related must be executed with minimal delay. Other upcoming tasks that are not emergency related must stand back until the critical situation has been resolved.
(P4) Immediacy: The timespan between task assignment and start of execution must be minimized. Even in a non-emergency context, immediate execution of newly assigned tasks is desired in clinical workflows. In contrast to, e.g., the manufacturing domain, tasks are normally not associated with a precise due date or deadline but are supposed to be processed as soon as possible after being released (assigned). One can define the current idle time for each released task as the difference between the current time and the task's release time. We propose that a task should gain priority in ASCAS scheduling with increasing idle time.
ANTS-OR approach for ASCAS orchestration in workflow-assisted surgical interventions
ASCAS systems are a promising technology for supporting surgical teams during interventions and thus improving the overall workflow in the OR wing. We envision the following scenario for the scheduling approach presented in this section: Multiple operating rooms are running in parallel within an OR wing, each one with its own schedule of surgeries that are being processed by surgical teams. Some of these operating rooms may be equipped with workflow assistance technology that is able to track the surgical workflow and derive context-dependent supportive actions. The OR tract features a set of ASCAS with different abilities (e.g., fetching of materials or control of medical devices). Tasks may be assigned or canceled by the surgical staff or by the workflow assistance engine at any time.
We make the following simplifying assumptions: Firstly, tasks are non-preemptive. Secondly, there are no precedence relations or other preconditions; thus, assigning a task implies that it is execution ready. Thirdly, traveling durations are known for all relevant ASCAS routes. Figure 1 summarizes all major steps of the proposed scheduling strategy. Incoming tasks-either assigned by the workflow assistance engine or by human staff members-are added to a global task list maintained by the scheduler. Prioritization is done by calculating the multiparameter score S_ANTS for each task in the global list: S_ANTS = w_C · LEVEL_C + w_E · LEVEL_E + w_I · f_I(t − t_release), where LEVEL_C is the command level (Fig. 2), LEVEL_E the emergency level (Fig. 3), f_I the idle time function, t_release the task release time, and w_C, w_E, w_I balancing weights.
To address ASCAS orchestration principles (P2) and (P3), we introduced the command and emergency levels depicted in Figs. 2 and 3. These levels are incorporated into the score as integer parameters LEVEL_C (command level) and LEVEL_E (emergency level). This ensures that task priority increases for higher command and/or emergency levels.
As shown in Fig. 2, AI-based systems (such as workflow assistance engines) have been included in the command hierarchy. Since (P1a) requires that humans must remain in control of the workflow at any time, we placed AI-based systems on a dedicated command level below human staff members (students and apprentices excepted). Thus, human-originated tasks are favored over AI-originated tasks and are expected to have shorter idle times. However, this does not mean that erroneous AI-originated tasks-e.g., caused by insufficient workflow recognition-can be replaced or corrected by dedicated human-originated task assignments this way. For that, we propose an additional safety routine during which an AI task must explicitly be confirmed by an authorized staff member. This simultaneously enforces principle (P1b), since staff members are made aware of all AI-based actions.
Since (P4) requires idle times to be as short as possible, we incorporated the time-dependent function f_I into the score. By definition, this function yields higher values for longer idle times (t − t_release) and thus increases the task's score over time. This ensures that even tasks with low emergency or command levels are eventually executed and not constantly blocked by higher-level tasks. The optimal choice for f_I is still the subject of ongoing research, though we plan to benchmark different polynomial and exponential behaviors.
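For illustration, a few candidate idle-time functions of the kind mentioned above could look as follows (our own guesses, including the time scale):

```python
import math

# Candidate idle-time functions f_I(dt), dt = t - t_release; all increase
# with dt, differing in how aggressively waiting tasks gain priority.
f_linear = lambda dt: dt
f_quadratic = lambda dt: dt ** 2
f_exponential = lambda dt: math.expm1(dt / 60.0)  # 60 s scale is an assumption
```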
The factors w_C, w_E and w_I are weights for adjusting the influence of the respective parameters and thus of the different ASCAS orchestration principles. As of yet, the optimal values for these weights are free design parameters that still need to be determined in the future, based on firsthand experience or simulation. After scoring, tasks in the global list are sorted by score. Starting from the top of the global list, the scheduler then tries to match tasks from the global list with ASCAS systems based on individual capabilities and current occupation. In case more than one ASCAS is available and capable of executing the task, the candidate with the shortest traveling duration to the task location is chosen in order to improve overall throughput.
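Putting the pieces together, a minimal sketch of this scoring-and-matching loop, reusing the hypothetical Task/Ascas model from the Methods section, might read as follows (the weights and the linear f_I are placeholders, not values fixed by the paper):

```python
def s_ants(task, now, w_c=1.0, w_e=3.0, w_i=0.01, f_i=lambda dt: dt):
    # S_ANTS = w_C * LEVEL_C + w_E * LEVEL_E + w_I * f_I(t - t_release)
    return (w_c * task.command_level
            + w_e * task.emergency_level
            + w_i * f_i(now - task.release_time))

def schedule(tasks, robots, travel_time, now):
    """Sort the global task list by score, then greedily match each task
    to a capable, idle ASCAS, breaking ties by shortest travel time."""
    assignments = []
    for task in sorted(tasks, key=lambda t: s_ants(t, now), reverse=True):
        candidates = [r for r in robots if r.can_run(task, now)]
        if candidates:
            best = min(candidates,
                       key=lambda r: travel_time(r.location, task.location))
            best.busy_until = float("inf")  # until completion is reported back
            assignments.append((task.kind, best.name))
    return assignments
```

Applied to the four tasks of the example below, this loop reproduces the ranking discussed in the text, provided the (unspecified) integer levels rank the attending surgeon and the emergency context highest.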
The following example illustrates the concept described above for a simple scenario: Let's suppose four operating rooms are in use within an OR wing. The human staff members are supported by a single multifunctional ASCAS that is able to fetch supplies as well as operate medical devices. In OR 1, a laparoscopic cholecystectomy (gallbladder removal) is conducted, while the surgical workflow is tracked and assisted by an AI algorithm. Just now, this algorithm has assigned a new task T_1 with the goal of modifying the position of the OR table to prepare for wound closure. In OR 2, the surgical team is currently facing a severe bleeding during a partial hepatectomy. An ASCAS task T_2 is assigned by the attending surgeon with the goal of fetching new blood bags. OR 3 is currently being prepared for an upcoming surgery by the nursing team. The ASCAS is ordered to reset the surgical devices (task T_3). In OR 4, a port implantation is being performed by a surgical resident under the supervision of an attending surgeon. The resident orders the ASCAS to fetch suture material (task T_4).
Provided that all tasks have been assigned at the same time, Table 1 summarizes the S_ANTS score for each task and the resulting processing order (rank). Idle times have been omitted, since-due to the simultaneous assignment-their values are identical for any given future point in time. The weights w_C and w_E have been chosen such that command level and emergency level contribute equally to the score (w_C = 1; w_E = 3).
Thus, the highest score value is obtained for task T_2, which is reasonable, since it originates from a high-ranking team member and deals with an emergency situation. The other tasks have to stand back until the execution of T_2 is finished.
Conclusion
The proposed scheduling approach ANTS-OR is a first step toward addressing the ASCAS orchestration problem for the OR wing. We are currently evaluating our concept using a simulation environment, where we benchmark different assisted and non-assisted scenarios to fine-tune and validate the algorithm. Besides improving our score-based approach, we aim at exploring and adapting known optimal and suboptimal scheduling algorithms to realize a dynamic end-to-end ASCAS orchestration platform in the future.
"Computer Science"
] |
Optimality and Stability of Symmetric Evolutionary Games with Applications in Genetic Selection
Symmetric evolutionary games, i.e., evolutionary games with symmetric fitness matrices, have important applications in population genetics, where they can be used to model for example the selection and evolution of the genotypes of a given population. In this paper, we review the theory for obtaining optimal and stable strategies for symmetric evolutionary games, and provide some new proofs and computational methods. In particular, we review the relationship between the symmetric evolutionary game and the generalized knapsack problem, and discuss the first and second order necessary and sufficient conditions that can be derived from this relationship for testing the optimality and stability of the strategies. Some of the conditions are given in different forms from those in previous work and can be verified more efficiently. We also derive more efficient computational methods for the evaluation of the conditions than conventional approaches. We demonstrate how these conditions can be applied to justifying the strategies and their stabilities for a special class of genetic selection games including some in the study of genetic disorders.
1. Introduction. We consider an n-strategy evolutionary game defined by a symmetric fitness matrix A ∈ R^{n×n}. Let S = {x ∈ R^n : x ≥ 0, Σ_i x_i = 1} be the set of all mixed strategies. The problem is to find an optimal strategy x* ∈ S such that x*ᵀAx* ≥ xᵀAx* for all x ∈ S. (1) We call this problem a symmetric evolutionary game or SEgame for short. The problem has important applications in population genetics, where it can be used to model and study the evolution of genotypes in a given population when their corresponding phenotypes are under selection pressures.
The modeling of genetic selection has a long history [6]. It may be traced back to the earliest mathematical work in population genetics early last century, including the Hardy-Weinberg Law by G. H. Hardy and W. Weinberg in 1908 [8,20] and the Fundamental Theorem of Natural Selection by R. A. Fisher in 1930 [7]. The work was especially revived in the 1970s, when J. Maynard Smith introduced game theory to biology and developed the evolutionary game theory for the study of the evolution of populations of competing species [10]. In this theory, a genetic selection problem can in particular be modeled as a SEgame [9].
The SEgame has a close relationship with the generalized knapsack problem, or GKproblem for short, which is to find an optimal solution x* ∈ R^n for the following maximization problem: max_{x∈R^n} xᵀAx/2 (2) subject to Σ_i x_i = 1, x ≥ 0. The GKproblem has been studied extensively, with applications in solving maximum clique problems [11], in convex quadratic programming [15], and especially in game theoretic modeling [2].
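For a concrete feel for the GKproblem, one simple way to climb its objective over the simplex is the discrete replicator iteration from evolutionary game theory: for symmetric matrices with positive entries, the mean fitness xᵀAx is non-decreasing along the iteration, so the iterates approach a local maximizer. The sketch below is our own illustration with made-up numbers:

```python
import numpy as np

def replicator_ascent(A, x0, steps=2000):
    """Discrete replicator iteration x_i <- x_i (Ax)_i / (x'Ax).
    For symmetric A with positive entries the mean fitness x'Ax is
    non-decreasing along the iteration, so the iterates climb toward
    a local maximizer of the GKproblem over the simplex."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        Ax = A @ x
        x = x * Ax / (x @ Ax)
    return x

A = np.array([[1.0, 2.0],
              [2.0, 1.0]])            # illustrative 2x2 symmetric fitness matrix
x = replicator_ascent(A, [0.9, 0.1])
print(x, x @ A @ x / 2)               # -> [0.5, 0.5], GK objective value 0.75
```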
1.1. Further mathematical background. A two-player game is said to be symmetric if the players share the same fitness matrix and the same set of strategies. Let A ∈ R^{n×n} be the fitness matrix and S = {x ∈ R^n : x ≥ 0, Σ_i x_i = 1} the set of all mixed strategies. Let x ∈ S be the strategy played by player I and y ∈ S that played by player II. Then, the fitness for player I can be defined by a function π(x, y) = xᵀAy and for player II by π(y, x) = yᵀAx. A pair of strategies (x*, y*) is said to be optimal if x*ᵀAy* ≥ xᵀAy* for all x ∈ S and y*ᵀAx* ≥ yᵀAx* for all y ∈ S, where x* and y* are said to be the best response to each other (see Fig. 1).
A special class of symmetric games is to find a strategy x* ∈ S which is the best response to itself, i.e., players I and II play the same strategy x* and x*ᵀAx* ≥ xᵀAx* for all x ∈ S. This class of games is often used to model the evolution of a population of competing species, with player I being a particular individual and player II being a typical individual in the population. A strategy x for player I means the species type the particular individual prefers to be. It could be a pure species type, i.e., x = e_i for some i, or a mixed one with x_i < 1 for all i, where e_i is the ith unit vector. Note that by a mixed species type x we mean that the frequency with which the individual plays species i is x_i. On the other hand, a strategy y for player II means the typical species type of an individual in the population, which depends on the species composition of the population. More specifically, if the portion of species i in the population is y_i, then the chance for a typical individual to be species i is also y_i. Therefore, y is also a population profile.
Figure 1. Two-Player Game: A two-player, two-strategy symmetric game is demonstrated. The strategies for player I are given in the vector x = (x_1, x_2)ᵀ, and for player II in y = (y_1, y_2)ᵀ, with x, y ∈ S = {x ∈ R² : Σ_i x_i = 1, x_i ≥ 0, i = 1, 2}. The fitness A_{i,j} of the strategy pair (x_i, y_j) is given in the (i, j)-entry of a 2×2 fitness matrix A. A strategy pair (x*, y*) is said to be optimal if x*ᵀAy* ≥ xᵀAy* for all x ∈ S and y*ᵀAx* ≥ yᵀAx* for all y ∈ S, when the game is said to reach the Nash equilibrium.
The quantity xᵀAy is basically the fitness for species x in population y. Such a game is called a population game, or an evolutionary game, or a game against the field [16,19]. The goal of the game is to find an optimal strategy x* ∈ S so that in population x*, an individual cannot find a better strategy than x*, i.e., x*ᵀAx* ≥ xᵀAx* for all x ∈ S, which is when the population has reached the so-called Nash equilibrium. Biologically, this is when the population has reached a state such that the optimal strategy for an individual is a species type consistent with the typical species type of the population. If the fitness matrix of a symmetric game is itself symmetric, the game is called a doubly symmetric game [19]. An evolutionary game with a symmetric fitness matrix is a doubly symmetric game, which is what we call a symmetric evolutionary game, i.e., a SEgame as given in (1).
1.2. Further biological background. SEgames can be used to model genetic selection and in particular, allele selection. An allele is one of several possible forms of a gene. Most multi-cellular organisms are diploid, i.e., their chromosomes form homologous pairs. Each pair of chromosomes has a pair of alleles at each genetic locus. Thus, n different alleles may form n² different allele pairs, as the two alleles in each pair need not be the same. Different allele pairs are considered to be different genotypes, which may result in different phenotypes or, in other words, different genetic traits (see Fig. 2).
The fitness of all different allele pairs, or in other words all different genotypes at a given genetic locus, can then be given in a matrix with the rows corresponding to the choices for the first allele and the columns to the choices for the second allele in the allele pair. Again, n different alleles will give n different choices for both the first and second alleles in the allele pair, and hence an n × n fitness matrix. With such a fitness matrix, a genetic selection game can then be defined with the choices of the first and second alleles in the allele pair at a given genetic locus as the strategies for players I and II. Here, player I can be considered as an individual with a specific choice of allele at the given locus. The choice could be one of the possible alleles or a combination of them, with each selected with some chance.
Figure 2. Genetic Selection: In diploid species, there are always two alleles at each genetic locus. Each pair of alleles determines a certain genotype, which in turn determines a certain phenotype. For example, in Mendel's classical experiment, the color of the flowers depends on the pairing of the alleles at a specific genetic locus, one for pink color and dominant, and another for white and recessive. Let the dominant allele be denoted by A and the recessive one by a. There can be four possible allele pairs, AA, Aa, aA, and aa. Since A is dominant, AA, Aa, and aA will produce pink flowers, while aa will produce white ones. These genotypic and phenotypic outcomes can be summarized in a 2×2 allele-pairing matrix as arranged in the figure.
The former corresponds to a pure strategy, while the latter to a mixed one. In any case, if there are n different alleles, the strategy for player I can be represented by a vector x ∈ R^n, x ≥ 0, Σ_i x_i = 1. On the other hand, player II can be considered as a typical individual in the given population. This individual could have only one of the possible alleles at the given locus or a combination of them, with each selected with some chance. Similar to player I, if there are n different alleles, the strategy for player II can be represented by a vector y ∈ R^n, y ≥ 0, Σ_i y_i = 1. This strategy y really is the same as the composition of alleles at the given locus in the whole population. Therefore, it is also the allele profile of the population for this particular genetic locus. Let the fitness matrix be given by A ∈ R^{n×n}. Let S = {x ∈ R^n : x ≥ 0, Σ_i x_i = 1}. The average fitness of an allele choice x ∈ S in an allele population y ∈ S will be xᵀAy. We then want to find an optimal choice x* ∈ S such that x*ᵀAx* ≥ xᵀAx* for all x ∈ S, i.e., in allele population x*, any individual with allele choice x other than x* will not have a better average fitness than allele choice x* [9]. Note that the fitness for allele pair (i, j) is usually the same as that for (j, i). Therefore, the fitness matrix for genetic selection is typically symmetric, and the corresponding game is then a SEgame.
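As a small worked instance of such a selection game (our own illustrative numbers, not from the paper), consider a two-allele locus with heterozygote advantage, where the two homozygotes AA and aa are penalized by s and t; the mixed equilibrium allele profile is then x* = (t, s)/(s + t):

```python
import numpy as np

# Heterozygote advantage: fitness 1-s for AA, 1 for Aa/aA, 1-t for aa.
s, t = 0.2, 0.6
A = np.array([[1 - s, 1.0],
              [1.0, 1 - t]])
x_star = np.array([t, s]) / (s + t)    # -> [0.75, 0.25]
print(x_star, A @ x_star)              # both entries of Ax* equal x*'Ax* = 0.85
```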
2. GKproblems vs. SEgames. For an evolutionary game, it is well known that a mixed strategy $x^* \in S$ is optimal for the game if and only if the fitness $x^{*T} A x^* = (Ax^*)_i$ for all $i$ such that $x^*_i > 0$, and $x^{*T} A x^* \ge (Ax^*)_i$ for all $i$ such that $x^*_i = 0$ [16,19]. These conditions also apply to any symmetric evolutionary game, i.e., any SEgame in (1), and can be stated formally as in the following theorem.
Theorem 2.1. Let $A \in \mathbb{R}^{n \times n}$ be a symmetric fitness matrix and $S = \{x \in \mathbb{R}^n : x \ge 0, \sum_i x_i = 1\}$ the set of all mixed strategies. Then a strategy $x^* \in S$ is an optimal strategy for the SEgame in (1) if and only if there is a scalar $\lambda^*$ such that

$x^*_i \ge 0, \quad \lambda^* - (Ax^*)_i \ge 0, \quad i = 1, \dots, n,$ (3)

$x^*_i \left( \lambda^* - (Ax^*)_i \right) = 0, \quad i = 1, \dots, n.$ (4)

The proof of the above theorem can be found in many textbooks, such as [16,19]. Since it is helpful for understanding the nature of the optimal strategies of the SEgame, we also provide one here to keep the paper self-contained.

Proof. If $x^* \in S$ satisfies the conditions in (3) and (4), then by adding all the equations in (4) we obtain $\lambda^* = x^{*T} A x^*$. Let $x \in S$ be an arbitrary strategy. Multiply the second inequality in (3) by $x_i$. Then, by adding all the second inequalities in (3), we obtain $\lambda^* - x^T A x^* \ge 0$, i.e., $x^{*T} A x^* \ge x^T A x^*$, since $\lambda^* = x^{*T} A x^*$. Therefore, $x^*$ is an optimal strategy for the SEgame in (1).
Conversely, if $x^* \in S$ is an optimal strategy for the SEgame in (1), then $x^{*T} A x^* \ge x^T A x^*$ for any $x \in S$. In particular, taking $x$ to be the $i$th unit vector shows that $(Ax^*)_i \le x^{*T} A x^*$ for all $i$, so the conditions in (3) hold with $\lambda^* = x^{*T} A x^*$. Suppose now that some equation in (4) fails, i.e., $x^*_i (\lambda^* - (Ax^*)_i) > 0$ for some $i$. By adding all the left-hand sides of the equations in (4), we then obtain $\lambda^* > x^{*T} A x^*$, which contradicts the fact that $\lambda^* = x^{*T} A x^*$. Therefore, the conditions in (4) hold as well.

As we have mentioned in Section 1, the symmetric evolutionary game, i.e., the SEgame in (1), is closely related to the generalized knapsack problem, i.e., the GKproblem in (2). A knapsack problem originally refers to the problem of selecting a set of objects of different sizes and values to put into a sack of fixed size so as to maximize the total value of the objects in the sack. The problem can be formulated as a linear program with a linear objective function $\sum_i a_i x_i$ for the total value of the sack, where $x_i$ and $a_i$ are the size and unit value of object $i$, respectively, and with a linear constraint $\sum_i x_i \le s$, $x_i \ge 0$, $i = 1, \dots, n$, on the total size of the objects that can be put into the sack, where $n$ is the number of objects and $s$ the size of the sack. The GKproblem in (2) can therefore be considered a knapsack problem of $n$ "objects" with the objective function generalized to a symmetric quadratic form $x^T A x / 2$ and with the "sack" restricted to the simplex $S = \{x \in \mathbb{R}^n : x \ge 0, \sum_i x_i = 1\}$. If we interpret the "objects" as the species fractions in a given population and the matrix $A$ as the fitness matrix of the species, the objective function of the GKproblem in (2) is exactly half of the average fitness of the population in the SEgame in (1). Therefore, the goal of the GKproblem in (2) is basically to maximize the average fitness of the population in the SEgame in (1).
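Both sides of this relationship are easy to exercise numerically. The sketch below, a non-authoritative illustration with an assumed 2 x 2 fitness matrix, uses SciPy's SLSQP (a local method) to search for a maximizer of the GKproblem and then checks the returned point against the conditions (3)-(4) of Theorem 2.1 with loose tolerances.

```python
import numpy as np
from scipy.optimize import minimize

A = np.array([[1.0, 1.2],
              [1.2, 0.3]])        # illustrative symmetric fitness matrix
n = A.shape[0]

# GKproblem: maximize x^T A x / 2 over the simplex (minimize the negative).
res = minimize(lambda x: -0.5 * (x @ A @ x),
               x0=np.full(n, 1.0 / n),
               method="SLSQP",
               bounds=[(0.0, 1.0)] * n,
               constraints=[{"type": "eq", "fun": lambda x: x.sum() - 1.0}])
x = res.x

# First-order check (Theorem 2.1): (A x)_i = lam on the support of x,
# (A x)_i <= lam off the support, with lam = x^T A x.
lam, g = x @ A @ x, A @ x
support = x > 1e-6
ok = (np.all(np.abs(g[support] - lam) < 1e-5)
      and np.all(g[~support] <= lam + 1e-5))
print(x, ok)   # the computed KKT point is also an optimal SEgame strategy
```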
Based on general optimization theory, an optimal solution to the GKproblem in (2) must satisfy certain conditions. We first consider a general constrained optimization problem

$\min f(x)$ subject to $c_i(x) = 0$, $i \in E$, and $c_i(x) \ge 0$, $i \in I$, (5)

where $f(x)$ is the objective function, the $c_i(x)$ are the constraint functions, $E$ is the set of indices for equality constraints, and $I$ is the set of indices for inequality constraints. Assume that $f(x)$ and the $c_i(x)$ are all continuously differentiable. Let $x$ be a feasible solution for the problem, i.e., $c_i(x) = 0$, $i \in E$, and $c_i(x) \ge 0$, $i \in I$. Let $E_0(x)$ be the set of indices for the constraints active at $x$, i.e., $E_0(x) = E \cup \{i \in I : c_i(x) = 0\}$, and $C_0(x)$ the Jacobian of the constraints active at $x$, i.e., $C_0(x) = \{\nabla c_i(x) : i \in E_0(x)\}^T$. We then have a set of first-order necessary conditions for an optimal solution to the general constrained optimization problem in (5), as stated in the following theorem. Here, we say that $x^* \in \mathbb{R}^n$ is an optimal solution for the general constrained optimization problem in (5) if $x^*$ is feasible, i.e., $x^*$ satisfies all the constraints, and if $f(x^*) \le f(x)$ for all feasible $x$ in a small neighborhood $U$ of $x^*$.

Theorem 2.2 ([14]). Let $x^* \in \mathbb{R}^n$ be an optimal solution to the general constrained optimization problem in (5). Assume that the gradients of the constraints active at $x^*$, i.e., the vectors in $C_0(x^*)$, are linearly independent. Then there must be a set of Lagrange multipliers $\lambda^* \in \mathbb{R}^{|E|}$ and $\mu^* \in \mathbb{R}^{|I|}$ such that

$\nabla_x L(x^*, \lambda^*, \mu^*) = 0, \quad \mu^*_i \ge 0 \text{ and } \mu^*_i c_i(x^*) = 0 \text{ for all } i \in I,$ (6)

where $L(x, \lambda, \mu) = f(x) - \sum_{i \in E} \lambda_i c_i(x) - \sum_{i \in I} \mu_i c_i(x)$ is called the Lagrangian function of the problem in (5).

The conditions in (6) are called the KKT conditions of the general constrained optimization problem in (5), named after W. Karush, H. Kuhn, and A. Tucker, who first discovered and proved them. As stated in Theorem 2.2, an optimal solution $x^*$ of the general constrained optimization problem in (5) must satisfy the KKT conditions, but a feasible solution $x^*$ that satisfies the KKT conditions, called a KKT point, may not always be an optimal solution.
We now apply Theorem 2.2 to the GKproblem in (2). By changing the maximization problem to a standard minimization problem, we have the objective function $f(x) = -x^T A x / 2$. If we name the nonnegativity constraints $c_i(x) = x_i \ge 0$, $i = 1, \dots, n$, as the first to the $n$th constraints and the equality constraint $c_{n+1}(x) = 1 - \sum_i x_i = 0$ as the $(n+1)$th constraint, we then have $I = \{1, \dots, n\}$ and $E = \{n+1\}$. Let $x$ be a feasible solution for the problem, and let $E_0(x)$ be the set of indices for the constraints active at $x$, i.e., $E_0(x) = \{i \in I : x_i = 0\} \cup \{n+1\}$; the rows of $C_0(x)$ are then the vectors $e_i^T$ for $i \in I$ with $x_i = 0$, together with $-e^T$, where $e_i$ is the $i$th unit vector and $e = \sum_i e_i$. For any $x \in S$, there is at least one $i \in I$ such that $x_i > 0$, since $x \ge 0$ and $\sum_i x_i = 1$. Therefore, $E_0$ includes the index $n+1$ and a proper subset of the indices $\{i \in I\}$, and $C_0(x)$ contains the vector $-e^T$ and a proper subset of the vectors $\{e_i^T : i \in I\}$, which are always linearly independent. We then have the following first-order necessary conditions for the GKproblem in (2).

Theorem 2.3. Let $A \in \mathbb{R}^{n \times n}$ be a symmetric fitness matrix and $S = \{x \in \mathbb{R}^n : x \ge 0, \sum_i x_i = 1\}$ the set of all feasible solutions for the GKproblem in (2). If $x^* \in S$ is an optimal solution for this problem, then there must be a scalar $\lambda^*$ such that

$x^*_i \ge 0, \quad \lambda^* - (Ax^*)_i \ge 0, \quad i = 1, \dots, n,$ (7)

$x^*_i \left( \lambda^* - (Ax^*)_i \right) = 0, \quad i = 1, \dots, n.$ (8)

Proof. The Lagrangian function for the GKproblem in (2) can be written in the form

$L(x, \lambda, \mu) = -x^T A x / 2 - \lambda \left( 1 - \sum_i x_i \right) - \sum_i \mu_i x_i,$

where $x \in \mathbb{R}^n$, $\lambda \in \mathbb{R}$, $\mu \in \mathbb{R}^n$. Since for this problem the gradients of the active constraints at any $x \in S$, i.e., the vectors in $C_0(x)$, are linearly independent, by Theorem 2.2, if $x^* \in S$ is an optimal solution to the GKproblem in (2), then there must be $\lambda^* \in \mathbb{R}$, $\mu^* \in \mathbb{R}^n$ such that

$\nabla_x L(x^*, \lambda^*, \mu^*) = -Ax^* + \lambda^* e - \mu^* = 0, \quad \mu^* \ge 0, \quad \mu^*_i x^*_i = 0, \quad i = 1, \dots, n.$

By substituting $\mu^* = \lambda^* e - Ax^*$ in all the formulas, we then obtain conditions equivalent to those in (7) and (8).
Note that the conditions in (3) and (4) of Theorem 2.1 and in (7) and (8) of Theorem 2.3 are the same. However, this does not imply that the SEgame in (1) is equivalent to the GKproblem in (2), because the conditions are necessary and sufficient for an optimal strategy for the SEgame in (1) but only necessary for an optimal solution for the GKproblem in (2). Therefore, an optimal solution for the GKproblem in (2) must be an optimal strategy for the SEgame in (1), while the converse may not necessarily be true. We state this conclusion as a corollary of Theorems 2.1 and 2.3 in the following.

Corollary 1. An optimal solution $x^* \in S$ for the GKproblem in (2) must be an optimal strategy for the SEgame in (1), while an optimal strategy $x^* \in S$ for the SEgame in (1) is only a KKT point for the GKproblem in (2), which is necessary but not sufficient for optimality in the GKproblem in (2).
In any case, the above two types of problems are closely related. The properties of the optimal strategies for a SEgame can be investigated by examining the nature of the optimal solutions to the corresponding GKproblem. For example, the existence of an optimal strategy for a general game usually requires a rather involved theoretical proof [13], but it becomes much easier to verify for a SEgame based on the relationship between the SEgame and the GKproblem: there is always an optimal solution for the GKproblem in (2), given that the objective function of the problem is continuous and the feasible set is a bounded and closed simplex. By Corollary 1, an optimal solution for the GKproblem in (2) is an optimal strategy for the SEgame in (1). The next corollary then follows.

Corollary 2. There is always an optimal strategy, or in other words a Nash equilibrium, for a given SEgame in (1).
The fact that an optimal strategy for the SEgame in (1) maximizes the objective function of the GKproblem in (2) has been recognized in [16,19] and discussed in great detail in [2]. However, those works focused on the equivalence between the two types of problems when the strategy is evolutionarily stable, weakly or strongly. Here, we have made a clear distinction between them and shown that the optimal strategies for the SEgame in (1) are not necessarily always optimal solutions of the GKproblem in (2); when they are not, they can be local minimizers or saddle points of the GKproblem in (2). Though unstable, such strategies can be interesting to analyze as well, as we will mention again in our concluding remarks in Section 8. Besides, we have provided detailed proofs for the necessary and sufficient conditions for both types of problems. Based on these proofs, we have been able to obtain Corollary 2 easily for the existence of the equilibrium state of the SEgame in (1).
3. Second-order optimality conditions. We now focus on the GKproblem in (2), derive additional second-order necessary and sufficient conditions for its optimal solutions, and extend them to the solutions for the SEgame in (1). These conditions have been mentioned in the literature [16,19] and analyzed in great detail in [2]. Here we review the conditions, with some given in different forms from those in [2]. They are in fact weaker conditions, but easier to verify, which is important for the later development of our computational methods for justifying the solutions and their stabilities for the GKproblems as well as the SEgames. We will comment more on these differences at the end of this section.
Consider again the general constrained optimization problem in (5). Let $x^*$ be an optimal solution to the problem. Let $E_0(x^*)$ be the set of indices for the constraints active at $x^*$, i.e., $E_0(x^*) = E \cup \{i \in I : c_i(x^*) = 0\}$, and $C_0(x^*)$ the Jacobian of the constraints active at $x^*$, i.e., $C_0(x^*) = \{\nabla c_i(x^*) : i \in E_0(x^*)\}^T$. We then have the following second-order necessary conditions for $x^*$ to be an optimal solution to the problem in (5).

Theorem 3.1 ([14]). Let $x^* \in \mathbb{R}^n$ be an optimal solution to the general constrained optimization problem in (5). Assume that $C_0(x^*)$ has full row rank $m$. Let $Z_0 \in \mathbb{R}^{n \times (n-m)}$ be the null space matrix of $C_0(x^*)$. Then $y^T Z_0^T \nabla^2 f(x^*) Z_0 y \ge 0$ for all $y \in \mathbb{R}^{n-m}$, i.e., the reduced Hessian of $f(x)$ at $x^*$, $Z_0^T \nabla^2 f(x^*) Z_0$, is positive semi-definite.

Now consider a KKT point $x^* \in \mathbb{R}^n$ for the general constrained optimization problem in (5). Let $\bar{E}_0(x^*)$ be the set of indices for the constraints strongly active at $x^*$, i.e., $\bar{E}_0(x^*) = E \cup \{i \in I : c_i(x^*) = 0 \text{ and } \mu^*_i > 0\}$, and $\bar{C}_0(x^*)$ the Jacobian of the constraints strongly active at $x^*$, i.e., $\bar{C}_0(x^*) = \{\nabla c_i(x^*) : i \in \bar{E}_0(x^*)\}^T$, where the $\mu^*_i$ are the Lagrange multipliers for the inequality constraints in the KKT conditions. We then have the following second-order sufficient conditions for $x^*$ to be a strict optimal solution to the problem in (5).

Theorem 3.2 ([14]). Let $x^* \in \mathbb{R}^n$ be a KKT point for the general constrained optimization problem in (5). Assume that $\bar{C}_0(x^*)$ has full row rank $m$. Let $\bar{Z}_0 \in \mathbb{R}^{n \times (n-m)}$ be the null space matrix of $\bar{C}_0(x^*)$. If $y^T \bar{Z}_0^T \nabla^2 f(x^*) \bar{Z}_0 y > 0$ for all $y \in \mathbb{R}^{n-m}$, $y \ne 0$, i.e., if the reduced Hessian of $f(x)$ at $x^*$, $\bar{Z}_0^T \nabla^2 f(x^*) \bar{Z}_0$, is positive definite, then $x^*$ must be a strict optimal solution to the problem in (5).
We now apply Theorems 3.1 and 3.2 to the GKproblem in (2). By changing the maximization problem to a standard minimization problem, we have the objective function $f(x) = -x^T A x / 2$. If we name the nonnegativity constraints $c_i(x) = x_i \ge 0$, $i = 1, \dots, n$, as the first to the $n$th constraints and the equality constraint $c_{n+1}(x) = 1 - \sum_i x_i = 0$ as the $(n+1)$th constraint, we then have $I = \{1, \dots, n\}$ and $E = \{n+1\}$. Let $x^* \in S$ be a KKT point for the GKproblem in (2). Let $E_0(x^*)$ be the set of indices for the constraints active at $x^*$; the rows of $C_0(x^*)$ are then the vectors $e_i^T$ for $i \in I$ with $x^*_i = 0$, together with $-e^T$, where $e_i$ is the $i$th unit vector and $e = \sum_i e_i$. For any $x^* \in S$, there is at least one $i \in I$ such that $x^*_i > 0$, since $x^* \ge 0$ and $\sum_i x^*_i = 1$. Therefore, $E_0$ includes the index $n+1$ and a proper subset of the indices $\{i \in I\}$, and $C_0(x^*)$ contains the vector $-e^T$ and a proper subset of the vectors $\{e_i^T : i \in I\}$ as rows, and is of full row rank. Note also that the Hessian of the objective function is $\nabla^2 f(x^*) = -A$. We then have the following second-order necessary conditions for $x^*$ to be an optimal solution to the GKproblem in (2).

Theorem 3.3. Let $x^* \in S$ be an optimal solution to the GKproblem in (2). Let the row rank of $C_0(x^*)$ be equal to $m$, and $Z_0 \in \mathbb{R}^{n \times (n-m)}$ the null space matrix of $C_0(x^*)$. Then

$y^T Z_0^T A Z_0 y \le 0$ for all $y \in \mathbb{R}^{n-m}$, $y \ne 0$, (11)

i.e., the reduced Hessian of the objective function of the GKproblem in (2) at $x^*$, $Z_0^T A Z_0$, must be negative semi-definite.

Now consider a KKT point $x^* \in S$. Let $\bar{E}_0(x^*)$ be the set of indices for the constraints strongly active at $x^*$, i.e., $\bar{E}_0(x^*) = \{i \in I : c_i(x^*) = 0 \text{ and } \mu^*_i > 0\} \cup E$, and $\bar{C}_0(x^*)$ the Jacobian of the constraints strongly active at $x^*$, where the $\mu^*_i$ are the Lagrange multipliers for the inequality constraints in the KKT conditions for the GKproblem in (2); the rows of $\bar{C}_0(x^*)$ are then the vectors $e_i^T$ for $i \in I$ with $x^*_i = 0$ and $\mu^*_i > 0$, together with $-e^T$, where $e_i$ is the $i$th unit vector and $e = \sum_i e_i$. Again, for any $x^* \in S$, there is at least one $i \in I$ such that $x^*_i > 0$, since $x^* \ge 0$ and $\sum_i x^*_i = 1$. Therefore, $\bar{E}_0$ includes the index $n+1$ and a proper subset of the indices $\{i \in I\}$, and $\bar{C}_0(x^*)$ contains the vector $-e^T$ and a proper subset of the vectors $\{e_i^T : i \in I\}$ as rows, and is of full row rank. Note also that the Hessian of the objective function is $\nabla^2 f(x^*) = -A$. We then have the following second-order sufficient conditions for $x^*$ to be a strict optimal solution to the GKproblem in (2).

Theorem 3.4. Let $x^* \in S$ be a KKT point for the GKproblem in (2). Let the row rank of $\bar{C}_0(x^*)$ be equal to $m$. Let $\bar{Z}_0 \in \mathbb{R}^{n \times (n-m)}$ be the null space matrix of $\bar{C}_0(x^*)$. Then $x^*$ must be a strict optimal solution to the GKproblem in (2) if $y^T \bar{Z}_0^T A \bar{Z}_0 y < 0$ for all $y \in \mathbb{R}^{n-m}$, $y \ne 0$, i.e., if the reduced Hessian of the objective function of the GKproblem in (2) at $x^*$, $\bar{Z}_0^T A \bar{Z}_0$, is negative definite.
Note that the conditions in Theorems 3.3 and 3.4 are either necessary or sufficient, but not both. In fact, since the GKproblem in (2) is a quadratic program, it is possible to establish a second-order necessary and sufficient condition for its optimal solutions. For this purpose, we go back to the general constrained optimization problem in (5). Let $x \in \mathbb{R}^n$ be any feasible solution for the problem. We define the reduced tangent cone $T(x)$ at $x$ to be the set of vectors $d \in \mathbb{R}^n$ such that $\nabla c_i(x)^T d = 0$ for all $i \in E$ and for all $i \in I$ such that $c_i$ is strongly active at $x$, and $\nabla c_i(x)^T d \ge 0$ for all $i \in I$ such that $c_i$ is weakly active at $x$.
Then, based on general optimization theory, we know that if the general constrained optimization problem in (5) is a quadratic program, a feasible solution $x^* \in \mathbb{R}^n$ that is a KKT point will be a strict optimal solution to the problem if and only if $d^T \nabla^2 f(x^*) d > 0$ for all $d \in T(x^*)$, $d \ne 0$. Let $T_0(x^*)$ and $\bar{T}_0(x^*)$ be the null spaces of $C_0(x^*)$ and $\bar{C}_0(x^*)$, respectively, where $C_0$ and $\bar{C}_0$ are as defined in Theorem 3.1 and Theorem 3.2. Then, clearly, $T_0(x^*) \subseteq T(x^*) \subseteq \bar{T}_0(x^*)$. In particular, when all the active inequality constraints are strongly active at $x^*$, $C_0(x^*) = \bar{C}_0(x^*)$ and $T_0(x^*) = \bar{T}_0(x^*)$. It follows that if the general constrained optimization problem in (5) is a quadratic program, then $x^*$ will be a strict optimal solution to the problem if and only if $d^T \nabla^2 f(x^*) d > 0$ for all $d \in T_0(x^*) = T(x^*) = \bar{T}_0(x^*)$, $d \ne 0$. We now consider the GKproblem in (2), which is a typical quadratic program with $\nabla^2 f(x^*) = -A$. Let $Z_0$ and $\bar{Z}_0$ be the null space matrices of $C_0(x^*)$ and $\bar{C}_0(x^*)$, respectively. If all the active inequality constraints are strongly active at $x^*$, then $C_0(x^*) = \bar{C}_0(x^*)$, $T_0(x^*) = \bar{T}_0(x^*)$, and $Z_0 = \bar{Z}_0$. Let $Z = Z_0 = \bar{Z}_0$. Then $Z \in \mathbb{R}^{n \times (n-m)}$, and $T(x^*) = T_0(x^*) = \bar{T}_0(x^*) = \{d \in \mathbb{R}^n : d = Zy, \; y \in \mathbb{R}^{n-m}\}$, where $m$ is the row rank of $C_0(x^*)$ and $\bar{C}_0(x^*)$. It follows that $x^* \in S$ is a strict optimal solution to the problem if and only if $y^T Z^T A Z y < 0$ for all $y \in \mathbb{R}^{n-m}$, $y \ne 0$. More accurately, we have the following.

Theorem 3.5. Let $x^* \in S$ be a KKT point for the GKproblem in (2). Assume that the active inequalities in $S$ are all strongly active at $x^*$. Then $x^* \in S$ is a strict optimal solution to the GKproblem in (2) if and only if $y^T Z^T A Z y < 0$ for all $y \in \mathbb{R}^{n-m}$, $y \ne 0$, i.e., if and only if the reduced Hessian of the objective function of the GKproblem in (2) at $x^*$, $Z^T A Z$, is negative definite.
The second-order optimality conditions presented in this section can be useful for checking the optimality of the solutions for the GKproblems, and hence the strategies for the SEgames, beyond the conditions given in Theorems 2.1 and 2.3. In order to apply these conditions, all we need to do is find the null space matrices $Z_0$ or $\bar{Z}_0$ and the eigenvalues of the reduced Hessians $Z_0^T A Z_0$ or $\bar{Z}_0^T A \bar{Z}_0$, to see if they are negative semi-definite or negative definite. For example, suppose that we have a KKT point $x^* \in S$ for the GKproblem in (2) at which the only active constraint is the equality constraint $1 - \sum_i x_i = 0$. Then $C_0(x^*) = \bar{C}_0(x^*) = \{-e^T\}$, for which we can construct a null space matrix $Z = Z_0 = \bar{Z}_0 \in \mathbb{R}^{n \times (n-1)}$ such that $Z_{i,j} = 0$ for all $i$ and $j$, except for $Z_{i,i} = 1$ and $Z_{i+1,i} = -1$. The optimality of $x^*$ can then be tested by checking the eigenvalues of the reduced Hessian $Z^T A Z$: if any of the eigenvalues is positive, $x^*$ is not optimal, and if all the eigenvalues are negative, $x^*$ must be optimal and even strictly optimal. In both cases, $x^*$ remains an optimal strategy for the corresponding SEgame in (1); however, the stability of the solution differs, as we will discuss in greater detail in the next section.
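A sketch of this eigenvalue test, under the assumption just stated that only the equality constraint is active (the function names are ours):

```python
import numpy as np

def chain_null_space(n):
    """Null space matrix of the single row -e^T, as constructed in the text:
    Z[i, i] = 1, Z[i+1, i] = -1, all other entries zero."""
    Z = np.zeros((n, n - 1))
    for i in range(n - 1):
        Z[i, i], Z[i + 1, i] = 1.0, -1.0
    return Z

def classify_interior_kkt_point(A):
    """Second-order test at a KKT point with all components positive,
    so that Z = Z_0 = Z_0-bar and the test works in both directions."""
    Z = chain_null_space(A.shape[0])
    eig = np.linalg.eigvalsh(Z.T @ A @ Z)   # reduced Hessian spectrum
    if np.all(eig < 0):
        return "strict local maximizer of the GKproblem"
    if np.any(eig > 0):
        return "not a local maximizer (saddle or minimizer)"
    return "inconclusive (negative semi-definite only)"
```

For the 2 x 2 matrix used earlier, the reduced Hessian is the scalar $A_{1,1} + A_{2,2} - A_{1,2} - A_{2,1}$, and the classification is immediate.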
Note that the second-order necessary and sufficient conditions for the optimal solutions of the GKproblem in (2) have been discussed in great detail in [2], where, related to our discussion, there are two necessary and sufficient conditions: (1) a feasible solution $x^* \in S$ for the GKproblem in (2) is a strict optimal solution if and only if $d^T A d < 0$ for all $d \in T(x^*)$, $d \ne 0$, where $T(x^*)$ is the reduced tangent cone of the problem at $x^*$; (2) if all active inequalities for the GKproblem in (2) are strongly active at $x^*$, then $x^*$ is a strict optimal solution if and only if $Z^T A Z$ is negative definite, where $T(x^*)$ becomes a linear space defined by the matrix $Z$. In our analysis, corresponding to (1), we have a necessary condition in Theorem 3.3 and a sufficient condition in Theorem 3.4 separately. They are not equivalent to, but are in fact weaker than, the condition in (1). The reason for this is that the condition in (1) is hard to test: it is equivalent to solving a matrix co-positivity problem, which is NP-hard in general [12]. On the other hand, the condition in Theorem 3.3 is equivalent to $d^T A d \le 0$ for all $d \in T_0(x^*)$, where $T_0(x^*)$ is a smaller cone than $T(x^*)$ and is actually a linear space, defined by $Z_0$. Therefore, the condition is equivalent to $Z_0^T A Z_0$ being negative semi-definite, which can be verified in polynomial time [18]. Likewise, the condition in Theorem 3.4 is equivalent to $d^T A d < 0$ for all $d \in \bar{T}_0(x^*)$, $d \ne 0$, where $\bar{T}_0(x^*)$ is a larger cone than $T(x^*)$ and is actually a linear space, defined by $\bar{Z}_0$. Therefore, the condition is equivalent to $\bar{Z}_0^T A \bar{Z}_0$ being negative definite, which can again be verified in polynomial time. In our analysis, corresponding to (2), we have an equivalent necessary and sufficient condition in Theorem 3.5. They are equivalent because if all active constraints for the GKproblem in (2) are strongly active at $x^*$, then $T(x^*) = T_0(x^*) = \bar{T}_0(x^*)$ and $Z = Z_0 = \bar{Z}_0$, and it follows that $d^T A d < 0$ for all $d \in T(x^*)$, $d \ne 0$, is equivalent to $Z^T A Z$ being negative definite. This condition is verifiable in polynomial time, so we do not need to modify it. The second-order optimality conditions in Theorems 3.3, 3.4, and 3.5 are the basis for the later development of our second-order stability conditions in Section 5 and computational methods in Section 6.

4. Evolutionarily stable states. An important concept in evolutionary game theory is the evolutionary stability of an optimal strategy. It characterizes the ability of a population to resist small changes, or invasions, when at equilibrium. Let $x^* \in S$ be an optimal strategy, so that the population is at the equilibrium state $x^*$. Let $x \in S$ be another arbitrary strategy. Mix $x^*$ and $x \ne x^*$ so that the population changes to a new state, $\epsilon x + (1 - \epsilon)x^*$, for some small fraction $\epsilon > 0$. Then $x^*$ is said to be evolutionarily stable if it remains a better response to the new "invaded" population state. More accurately, we have the following definition.

Definition 4.1 ([16,19]). An optimal strategy $x^* \in S$ for an evolutionary game defined by a fitness matrix $A$ is evolutionarily stable if there is a small number $\bar\epsilon \in (0, 1)$ such that for any $x \in S$, $x \ne x^*$,

$x^{*T} A (\epsilon x + (1 - \epsilon)x^*) > x^T A (\epsilon x + (1 - \epsilon)x^*)$ for all $\epsilon \in (0, \bar\epsilon)$.

Usually, it is not easy to prove the evolutionary stability of the optimal strategies for an evolutionary game based on its definition. A more straightforward condition is to consider the strategies $y$ in a small neighborhood $U$ of the optimal strategy $x^*$ and check that no $y \ne x^*$ prevails over $x^*$, i.e., that $y^T A y \ge x^{*T} A y$ fails. It turns out that this condition is necessary and also sufficient.

Theorem 4.2 ([16,19]).
An optimal strategy $x^* \in S$ for an evolutionary game is evolutionarily stable if and only if there is a small neighborhood $U$ of $x^*$ such that

$y^T A y < x^{*T} A y$ for all $y \in U \cap S$, $y \ne x^*$. (18)
Note that a SEgame is an evolutionary game. Therefore, the condition in (18) also applies to a SEgame. For a SEgame, $x^{*T} A y = y^T A x^*$, since $A$ is symmetric. Then $y^T A y < x^{*T} A x^*$ for all $y \in U \cap S$, $y \ne x^*$, since $y^T A x^* \le x^{*T} A x^*$ for all $y \in S$. This implies that if $x^*$ is an evolutionarily stable strategy for a SEgame, it must be a strict local maximizer of the corresponding GKproblem. It turns out that the converse is also true. We state this property in the following theorem, and also provide a slightly different proof from those given in [16,19].

Theorem 4.3 ([16,19]). An optimal strategy $x^* \in S$ for a SEgame in (1) is evolutionarily stable if and only if it is a strict local maximizer of the corresponding GKproblem in (2).
Proof. Let x * ∈ S be an evolutionarily stable strategy for the SEgame in (1). Then, the necessary condition follows directly from Theorem 4.2, as we have discussed above.
To prove sufficiency, we assume that $x^*$ is a strict local maximizer of the GKproblem in (2). Then there must be a neighborhood $U = \{y \in \mathbb{R}^n : \|y - x^*\| < \bar\delta\}$ of $x^*$, with $\bar\delta < 2$, such that for any $y \in U \cap S$, $y \ne x^*$, $y^T A y < x^{*T} A x^*$. Let $x \in S$ be any mixed strategy, $x \ne x^*$, and let $y = \epsilon x + (1 - \epsilon)x^*$, $0 < \epsilon < 1$. Note that $\|x - x^*\| \le \|x\| + \|x^*\| < 2$ and $\|y - x^*\| = \epsilon \|x - x^*\| < 2\epsilon$. Then, for all $\epsilon < \bar\delta / 2 < 1$, $y \in U$ and $y^T A y < x^{*T} A x^*$. Note also that, by the symmetry of $A$,

$y^T A y - x^{*T} A x^* = 2\epsilon (x - x^*)^T A x^* + \epsilon^2 (x - x^*)^T A (x - x^*) < 0.$

Dividing by $2\epsilon$ gives $(x - x^*)^T A \left( \tfrac{\epsilon}{2} x + (1 - \tfrac{\epsilon}{2}) x^* \right) < 0$ for all $\epsilon < \bar\delta / 2$. Replace $\epsilon / 2$ by $\epsilon$ and $\bar\delta / 4$ by $\bar\epsilon$. Then

$x^T A (\epsilon x + (1 - \epsilon)x^*) < x^{*T} A (\epsilon x + (1 - \epsilon)x^*)$ for all $\epsilon \in (0, \bar\epsilon).$

Since the above inequality holds for all $x \in S$, by Definition 4.1, $x^*$ must be an evolutionarily stable strategy for the SEgame in (1).
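Definition 4.1 can also be probed directly by sampling mutant strategies. The following Monte-Carlo sketch is only a heuristic check, since it samples finitely many mutants at one fixed epsilon; the function and its defaults are our own choices.

```python
import numpy as np

def looks_stable(x_star, A, eps=0.01, trials=2000, seed=0):
    """Sample mutants x and test x*^T A y > x^T A y at y = eps*x + (1-eps)*x*."""
    rng = np.random.default_rng(seed)
    n = len(x_star)
    for _ in range(trials):
        x = rng.dirichlet(np.ones(n))        # random mixed strategy
        y = eps * x + (1.0 - eps) * x_star   # slightly invaded population
        if x_star @ A @ y <= x @ A @ y:
            return False
    return True

A = np.array([[1.0, 1.2],
              [1.2, 0.3]])
print(looks_stable(np.array([9/11, 2/11]), A))   # True for this example
```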
5. Second-order stability conditions. By combining Theorem 4.3 with the second-order optimality conditions for the optimal solutions to the GKproblem in (2) derived in Section 3, we can easily obtain a set of second-order stability conditions for the optimal strategies of the SEgame in (1). Let $x^* \in S$ be an optimal strategy for the SEgame in (1), and let $C_0(x^*)$ be the matrix with $\{e_i^T : x^*_i = 0\}$ and $-e^T$ as its rows, where $e_i$ is the $i$th unit vector and $e = \sum_i e_i$.

Theorem 5.1. Let $x^* \in S$ be an evolutionarily stable strategy for the SEgame in (1). Let the row rank of $C_0(x^*)$ be equal to $m$. Let $Z_0 \in \mathbb{R}^{n \times (n-m)}$ be the null space matrix of $C_0(x^*)$. Then $Z_0^T A Z_0$ must be negative semi-definite.
Proof. If $x^* \in S$ is an evolutionarily stable strategy for the SEgame in (1), then by Theorem 4.3 it must be a strict local maximizer of the GKproblem in (2). It follows from Theorem 3.3 that $Z_0^T A Z_0$ must be negative semi-definite.

Now let $x^* \in S$ be an optimal strategy for the SEgame in (1), and let $\bar{C}_0(x^*)$ be the matrix with $\{e_i^T : x^*_i = 0 \text{ and } \mu^*_i > 0\}$ and $-e^T$ as its rows, where $e_i$ is the $i$th unit vector, $e = \sum_i e_i$, and $\mu^*_i = x^{*T} A x^* - (Ax^*)_i$.

Theorem 5.2. Let $x^* \in S$ be an optimal strategy for the SEgame in (1). Let the row rank of $\bar{C}_0(x^*)$ be equal to $m$. Let $\bar{Z}_0 \in \mathbb{R}^{n \times (n-m)}$ be the null space matrix of $\bar{C}_0(x^*)$. If $\bar{Z}_0^T A \bar{Z}_0$ is negative definite, then $x^*$ must be an evolutionarily stable strategy.
Proof. If $x^* \in S$ is an optimal strategy for the SEgame in (1), then by Corollary 1 it must be a KKT point for the GKproblem in (2). Therefore, if $\bar{Z}_0^T A \bar{Z}_0$ is negative definite, $x^*$ must be a strict local maximizer of the GKproblem in (2) by Theorem 3.4, and hence an evolutionarily stable strategy for the SEgame in (1) by Theorem 4.3.
Finally, let $x^* \in S$ be an optimal strategy for the SEgame in (1). If $\mu^*_i > 0$ for all $i$ such that $x^*_i = 0$, i.e., all the active inequalities in $S$ are strongly active at $x^*$, then $C_0(x^*) = \bar{C}_0(x^*)$, and we can set $Z = Z_0 = \bar{Z}_0$.

Theorem 5.3. Let $x^* \in S$ be an optimal strategy for the SEgame in (1). Assume that the active inequalities in $S$ are all strongly active at $x^*$. Then $x^* \in S$ is an evolutionarily stable strategy for the SEgame in (1) if and only if $Z^T A Z$ is negative definite.
Proof. If $x^* \in S$ is an optimal strategy for the SEgame in (1), then by Corollary 1 it must be a KKT point for the GKproblem in (2). Therefore, $x^*$ is a strict local maximizer of the GKproblem in (2) if and only if $Z^T A Z$ is negative definite, by Theorem 3.5, and it is then an evolutionarily stable strategy for the SEgame in (1) by Theorem 4.3.
Although Theorems 5.1, 5.2, and 5.3 are simple extensions of Theorems 3.3, 3.4, and 3.5, they have great practical implications, for they can be used to check the evolutionary stability of the optimal strategies for the SEgame in (1) directly. For example, if the fitness matrix $A$ is positive definite, the reduced Hessian $Z_0^T A Z_0$ will never be negative semi-definite unless the dimension of the null space of $C_0(x^*)$ is zero, or in other words, unless the row rank of $C_0(x^*)$ is $n$. Then $x^*_i = 0$ for all but one $i$, and the optimal and stable strategies of the SEgame in (1) can only be pure strategies. On the other hand, if the fitness matrix $A$ is negative definite, the reduced Hessian $\bar{Z}_0^T A \bar{Z}_0$ will always be negative definite unless the dimension of the null space of $\bar{C}_0(x^*)$ is zero, and then all optimal non-pure strategies for the SEgame in (1) will be evolutionarily stable. Even when $\bar{C}_0(x^*)$ is only of rank one, i.e., only $\sum_i x^*_i = 1$ is active because $x^*_i > 0$ for all $i$, $x^*$ is still evolutionarily stable. Note that an optimal strategy for the SEgame in (1) must be a KKT point of the GKproblem in (2), but it need not be a local maximizer of the GKproblem in (2): it could be a local minimizer or a saddle point. Even if it is a local maximizer, it may not be evolutionarily stable unless it is a strict local maximizer. In other words, as a KKT point for the GKproblem in (2), an optimal strategy for the SEgame in (1) could be a non-strict local maximizer, a local minimizer, or a saddle point of the GKproblem in (2), and in each of these cases it is evolutionarily unstable.
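A small deterministic illustration of the negative definite case; the matrix is ours, chosen so the numbers can be checked by hand.

```python
import numpy as np

# A negative definite fitness matrix (eigenvalues -4, -1, -1).
A = -np.array([[2.0, 1.0, 1.0],
               [1.0, 2.0, 1.0],
               [1.0, 1.0, 2.0]])

# An interior equilibrium satisfies A x* = lam * e, so x* is proportional
# to the solution of A z = e; here z = -e/4, hence x* = (1/3, 1/3, 1/3).
z = np.linalg.solve(A, np.ones(3))
x_star = z / z.sum()
print(x_star)

# Reduced Hessian over the null space of -e^T: negative definite, so the
# interior strategy is evolutionarily stable, as Theorem 5.3 predicts.
Z = np.array([[1.0, 0.0], [-1.0, 1.0], [0.0, -1.0]])
print(np.linalg.eigvalsh(Z.T @ A @ Z))   # [-3, -1]
```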
Since the second-order stability conditions in Theorems 5.1 and 5.2 are derived from Theorems 3.3 and 3.4, they are also in different, weaker forms than those given in [2]. As we have mentioned at the end of Section 3, the advantage of introducing these forms is that they can be checked in polynomial time, more efficiently than the condition given in [2], which is equivalent to a matrix co-positivity problem and can be NP-hard to compute. The condition in Theorem 5.3 is equivalent to the one given in [2], and like those in Theorems 5.1 and 5.2 it can be verified in polynomial time.

6. Computational methods. As we have discussed in previous sections, in order to test the second-order optimality or stability conditions, all we need to do is form a reduced Hessian for the objective function of the GKproblem in (2) and check whether it is negative semi-definite or negative definite. The Hessian of the objective function of the GKproblem in (2) is basically the fitness matrix $A$, while the reduced Hessian is $Z_0^T A Z_0$ or $\bar{Z}_0^T A \bar{Z}_0$, where $Z_0$ and $\bar{Z}_0$ are the null space matrices of $C_0(x^*)$ and $\bar{C}_0(x^*)$, respectively, for the point $x^* \in S$ to be tested. There are three major steps in a second-order optimality or stability test: (1) compute the null space matrix $Z_0$ or $\bar{Z}_0$; (2) form the reduced Hessian $Z_0^T A Z_0$ or $\bar{Z}_0^T A \bar{Z}_0$; (3) compute the eigenvalues of the reduced Hessian. In step (1), it can be computationally expensive to find the null space matrix of a given matrix using a general approach, say the QR factorization, which typically requires $O((n-m)n^2)$ floating-point operations [18] if $Z_0$ or $\bar{Z}_0$ is an $n \times (n-m)$ matrix. In step (2), each of the reduced Hessians involves two matrix-matrix multiplications, which also require $O(2(n-m)n^2)$ floating-point operations. However, because of the special structures of $C_0(x^*)$ and $\bar{C}_0(x^*)$, the calculations in steps (1) and (2) can actually be carried out in a very simple way, with little computational cost.

First of all, the matrices $C_0(x^*)$ and $\bar{C}_0(x^*)$ do not require any computation; they can be constructed straightforwardly as follows. Form an $(n+1) \times n$ matrix with the $i$th row equal to $e_i^T$, $i = 1, \dots, n$, and the last row equal to $-e^T$, where $e_i$ is the $i$th unit vector and $e = \sum_i e_i$. Then, for $C_0(x^*)$, remove every row $i$ such that $x^*_i > 0$; for $\bar{C}_0(x^*)$, in addition to every row $i$ such that $x^*_i > 0$, remove every row $i$ such that $x^*_i = 0$ and $\mu^*_i = 0$. The resulting matrices thus consist of a subset of the unit rows $e_i^T$ together with the all-minus-ones row $-e^T$.

Next, given the simple structure of $C_0(x^*)$ and $\bar{C}_0(x^*)$, we in fact do not have to compute the null space matrices $Z_0$ and $\bar{Z}_0$ either; they can also be constructed easily. Form an $n \times n$ identity matrix with row $k$ replaced by $-e^T$, for some $k$ such that $x^*_k > 0$, and remove the $k$th column; in addition, for $Z_0$, also remove every column $j$ such that $x^*_j = 0$, while for $\bar{Z}_0$, remove only every column $j$ such that $x^*_j = 0$ and $\mu^*_j > 0$. Each remaining column is then of the form $e_i - e_k$. It is easy to see that $Z_0$ and $\bar{Z}_0$ are of full column rank $n - m$, where $m$ is the row rank of $C_0(x^*)$ or $\bar{C}_0(x^*)$, respectively.
It is also easy to verify that $C_0(x^*) Z_0 = 0$ and $\bar{C}_0(x^*) \bar{Z}_0 = 0$; therefore, $Z_0$ and $\bar{Z}_0$ can indeed be used as null space matrices of $C_0(x^*)$ and $\bar{C}_0(x^*)$, respectively. Moreover, the construction of $Z_0$ and $\bar{Z}_0$ incurs essentially no computational cost.

Finally, with $Z_0$ and $\bar{Z}_0$ as given above, the computation of the reduced Hessians $Z_0^T A Z_0$ or $\bar{Z}_0^T A \bar{Z}_0$ does not require full matrix-matrix multiplications. Let $H = Z^T A Z$ with $Z = Z_0$ or $\bar{Z}_0$. We show how $H$ can be calculated at less computational cost. Let $B = AZ$. Then $H = Z^T A Z = Z^T B$. Let $B_j$ and $Z_j$ be column $j$ of $B$ and $Z$, respectively, and note that $Z_j = e_i - e_k$ for some $i$. Then $B_j = A Z_j$ can be obtained by subtracting column $k$ from column $i$ of $A$ with $n$ floating-point operations. Since $B$ has only $n - m$ columns, the computation of $B$ requires $n(n-m)$ floating-point operations. Similarly, each row of $Z^T$ is of the form $e_j^T - e_k^T$, so each row of $H = Z^T B$ can be obtained by subtracting row $k$ from row $j$ of $B$ with $n - m$ floating-point operations. Since $H$ has only $n - m$ rows, the computation of $H$ requires $(n-m)^2$ floating-point operations. Putting the calculations for $B$ and $H$ together, the whole reduced Hessian $Z^T A Z$ takes $(n-m)(2n-m)$ floating-point operations, which is much less costly than full matrix-matrix multiplications.
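A sketch of this cheap computation (our function; the caller is assumed to pass the indices of the columns kept in Z and the pivot index k with x*_k > 0). Since every column of Z equals e_i - e_k, both products reduce to column and row differences:

```python
import numpy as np

def reduced_hessian(A, kept, k):
    """Z^T A Z without matrix-matrix products, for the Z constructed above,
    whose columns are e_i - e_k for the indices i in `kept` (i != k)."""
    cols = [i for i in kept if i != k]
    B = A[:, cols] - A[:, [k]]    # B = A Z: column differences, n(n-m) flops
    H = B[cols, :] - B[[k], :]    # H = Z^T B: row differences, (n-m)^2 flops
    return H

A = -np.array([[2.0, 1.0, 1.0],
               [1.0, 2.0, 1.0],
               [1.0, 1.0, 2.0]])
H = reduced_hessian(A, kept=[0, 1, 2], k=0)   # interior-point case, m = 1
print(np.allclose(H, H.T), np.linalg.eigvalsh(H))  # symmetric, all negative
```

This Z uses a different basis of the same null space than the chain construction, but definiteness on the null space is basis-independent, so the test gives the same verdict.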
7. Games for genetic selection. A genetic selection problem, and in particular the problem of allele selection at single or multiple genetic loci, can be formulated as a symmetric evolutionary game. Recall that the fitness of the different allele pairs, or in other words the different genotypes at a given genetic locus, can be given in a matrix with the rows corresponding to the choices for the first allele and the columns to the choices for the second allele in the allele pairs. If there are $n$ different alleles, there will be $n$ different choices for both the first and the second allele, and the fitness matrix will be an $n \times n$ matrix. With such a fitness matrix, the allele selection game can be defined with the choices of the first and second alleles as the strategies for players I and II of the game, where player I can be considered a specific individual and player II a typical individual in the given population. If there are $n$ different alleles, the strategy for player I can be represented by a vector $x \in \mathbb{R}^n$, $x \ge 0$, $\sum_i x_i = 1$, and the strategy for player II by a vector $y \in \mathbb{R}^n$, $y \ge 0$, $\sum_i y_i = 1$. Let the fitness matrix be given by $A \in \mathbb{R}^{n \times n}$, and let $S = \{x \in \mathbb{R}^n : x \ge 0, \sum_i x_i = 1\}$. The average fitness of an allele choice $x \in S$ in an allele population $y \in S$ will be $x^T A y$. We then want to find an optimal choice $x^* \in S$ such that

$x^{*T} A x^* \ge x^T A x^*$ for all $x \in S$, (19)

i.e., in allele population $x^*$, any individual with an allele choice $x$ other than $x^*$ will not have a better average fitness than allele choice $x^*$. Note that the fitness for allele pair $(i, j)$ is usually the same as that for $(j, i)$. Therefore, the fitness matrix for allele selection is typically symmetric, and the game in (19) is then a SEgame.
As we have discussed in previous sections, the selection game in (19) can be studied via a generalized knapsack problem:

$\max_x \; x^T A x / 2$ subject to $\sum_i x_i = 1$, $x \ge 0$. (20)

By Corollary 1, an optimal strategy of the selection game in (19) is equivalent to a KKT point of the GKproblem in (20), and by Theorem 4.3, if it is evolutionarily stable, it must correspond to a strict local maximizer of the GKproblem in (20), and vice versa. In addition, the optimality and stability conditions derived in the previous sections all apply to the selection game in (19). We demonstrate the application of these conditions with several example selection games, including some from the study of genetic disorders.
We first consider a genetic locus with two alleles, one dominant and the other recessive. Many genetic traits are due to genotypic differences at a specific locus with two alleles. For example, in Mendel's well-known experiment, the color of the flowers depends on the pair of alleles at a certain genetic locus, one for pink color and dominant, and the other for white and recessive. Let the dominant allele be denoted by $A$ and the recessive one by $a$. There are four possible allele pairs: $AA$, $Aa$, $aA$, and $aa$. Since $A$ is dominant, $AA$, $Aa$, and $aA$ will produce pink flowers, while $aa$ will produce white ones (see Fig. 2). According to the Hardy-Weinberg law, if pink flowers and white flowers have the same selection chance, the distributions of the genotypes $AA$, $Aa$, $aA$, and $aa$ and of the alleles $A$ and $a$ in the population will not change over generations. Otherwise, different genotypes may have different fitness, and some may be selected while others are eliminated [5].
Indeed, some alleles, either dominant or recessive, may cause genetic disorders. When they are dominant, both the homozygote and the heterozygote pairs containing the dominant allele will cause the disorders. When they are recessive, only the homozygote pairs of two recessive alleles will cause the problem. In either case, the genotypes that cause the genetic disorders will have lower fitness than those that do not. For example, cystic fibrosis is a disease caused by a recessive allele. The normal, dominant allele codes for a membrane protein that supports the transport of ions for cells; it functions normally even in the heterozygote form with one abnormal allele. However, if both alleles are the recessive ones, there will be no normal membrane protein expression, giving rise to cystic fibrosis. A further example is Huntington's disease, a degenerative disease of the nervous system, caused by a lethal dominant allele: both homozygote and heterozygote pairs of alleles containing the dominant allele will be harmful, and only the homozygote pairs of the recessive alleles will be normal. The opposite situation, in which the heterozygote pairs are the fittest, could happen for example in the study of malaria infection, where $A$ represents the wild-type gene, while $a$ represents the mutated gene. Individuals with $AA$ types are susceptible to malaria infection, while those with $Aa$ and $aA$ types appear to be able to resist the infection. However, when $aa$ types are formed, the individuals will develop a serious illness called sickle cell disease. In any case, the SEgame in (21) with such a fitness matrix has a single mixed solution

$x^* = \left( \frac{A_{1,2} - A_{2,2}}{A_{1,2} + A_{2,1} - A_{1,1} - A_{2,2}}, \; \frac{A_{2,1} - A_{1,1}}{A_{1,2} + A_{2,1} - A_{1,1} - A_{2,2}} \right).$

Since both $x^*_1 > 0$ and $x^*_2 > 0$, it is easy to construct a null space matrix $Z = (1, -1)^T$ and see that $Z^T A Z = A_{1,1} + A_{2,2} - A_{1,2} - A_{2,1} < 0$. Therefore, by Theorem 3.5, $x^*$ must be a strict local maximizer of the GKproblem in (22), and by Theorem 4.3 or 5.3, it is an evolutionarily stable state.
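A numeric sketch of this two-allele analysis; the fitness values are hypothetical, in the spirit of the heterozygote-advantage situation just described, not taken from the paper.

```python
import numpy as np

# Hypothetical fitness values with heterozygote advantage:
# AA susceptible to malaria, Aa/aA resistant, aa diseased.
A = np.array([[0.9, 1.0],
              [1.0, 0.2]])

# Interior equilibrium from (A x*)_1 = (A x*)_2 (the closed form above).
d = A[0, 1] + A[1, 0] - A[0, 0] - A[1, 1]
x_star = np.array([(A[0, 1] - A[1, 1]) / d, (A[1, 0] - A[0, 0]) / d])

Z = np.array([[1.0], [-1.0]])         # null space of -e^T, interior case
H = (Z.T @ A @ Z).item()              # = A11 + A22 - A12 - A21
print(x_star, H)                      # H < 0 => strict maximizer => stable
```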
Next, we consider a more complicated case related to genetic mutations for malaria infection. In Africa and Southeast Asia, where the human population has been exposed to serious malaria infection, certain genetic mutations have survived in a gene that codes for the hemoglobin proteins of blood cells. These mutations resist malaria infection, but may also cause other serious illnesses, such as sickle cell disease, when in homozygote form. Here we consider three well-studied allele forms of this gene, the wild type, the S-mutation, and the C-mutation, denoted by the $W$, $S$, and $C$ alleles. The normal genotype would be $WW$; subnormal ones include $WS$, $WC$, and $SC$, which may have malaria-resistance functions, while the other forms, $SS$ and $CC$, may cause other illnesses. These relations can be described with a $3 \times 3$ fitness matrix $A$, with the rows corresponding to the choices of $W$, $S$, and $C$ for the first allele and the columns to the choices of $W$, $S$, and $C$ for the second allele when forming the allele pairs, or in other words, the genotypes; such a matrix can be defined based on an estimate given in [17]. From this matrix, we see that the genotype $WS$ has good fitness, while $CC$ has the best. The genotype $WW$ is not very good because it is susceptible to malaria infection, while $SS$ is the worst because it causes sickle cell disease. We may wonder how the alleles will eventually be distributed in the population under such selection pressures. We have solved a SEgame with this fitness matrix and obtained three solutions: $x^{(1)} = (0, 0, 1)^T$, $x^{(2)} = (0.879, 0.121, 0)^T$, and $x^{(3)} = (0.832, 0.098, 0.070)^T$. The first solution suggests that the population may end up with all $C$ alleles, since the genotype $CC$ seems to have the best fitness. The second solution suggests a large portion of $W$ alleles with a small percentage of $S$ alleles, which increases the resistance to malaria infection yet does not leave a large chance for $SS$ combinations. The third solution means that the three alleles may co-exist.
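The paper's 3 x 3 fitness matrix (taken from its reference [17]) did not survive the extraction of this text. The matrix below is therefore a reconstruction, chosen so that the three reported solutions satisfy the first-order conditions of Theorem 2.1, and should be treated as an assumption rather than the paper's data. A quick verification sketch:

```python
import numpy as np

# Reconstructed W/S/C fitness matrix -- an assumption, see the note above.
A = np.array([[0.89, 1.00, 0.89],
              [1.00, 0.20, 0.71],
              [0.89, 0.71, 1.31]])

def foc_residual(x):
    """Largest violation of Theorem 2.1 at x: (Ax)_i = lam on the support,
    (Ax)_i <= lam off the support, with lam = x^T A x."""
    lam, g = x @ A @ x, A @ x
    on = np.abs(g[x > 1e-6] - lam).max()
    off = float((g[x <= 1e-6] - lam).max()) if np.any(x <= 1e-6) else -np.inf
    return max(on, off, 0.0)

for x in [np.array([0.0, 0.0, 1.0]),         # x(1)
          np.array([0.879, 0.121, 0.0]),     # x(2)
          np.array([0.832, 0.098, 0.070])]:  # x(3)
    print(x, foc_residual(x))   # residuals ~1e-3, i.e., rounding level
```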
We have also solved a corresponding GKproblem with the above matrix $A$, using a Matlab code. It turned out that we only found two local maximizers for the GKproblem, corresponding to $x^{(1)}$ and $x^{(2)}$. At least computationally, we have not found $x^{(3)}$ to be a local maximizer, which suggests that $x^{(1)}$ and $x^{(2)}$ may be evolutionarily stable, while $x^{(3)}$ may not. Indeed, at solution $x^{(3)}$, the only active constraint for the GKproblem is $\sum_i x_i = 1$. The null space matrix $Z$ for the Jacobian of this constraint can be constructed as in Section 6, with $Z_{i,i} = 1$, $Z_{i+1,i} = -1$, and all other entries zero, and the reduced Hessian of the GKproblem at $x^{(3)}$, $Z^T A Z$, turns out not to be negative definite. Based on the above analysis, we would predict that the state $x^{(3)}$, with the three alleles co-existing in the population, will never be reached, because it is unstable. The solution $x^{(1)}$ corresponds to a global maximizer of the GKproblem. Based on our simulation (not shown), it also has a large attraction region, in the sense that most solutions would converge to $x^{(1)}$ unless the initial value for the $C$ allele is very small, say less than 5%. In the current population, the $C$ allele is indeed rare, and therefore the population does not have much chance to evolve to this state. The population typically has a large percentage of $W$ alleles, a small percentage of $S$ alleles, and some rare $C$ alleles; we would therefore predict that $x^{(2)}$ will be the most likely stable state of the population in the end.
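Continuing with the reconstructed matrix (again an assumption), the reduced-Hessian computation just described can be reproduced directly:

```python
import numpy as np

A = np.array([[0.89, 1.00, 0.89],   # reconstructed matrix (an assumption)
              [1.00, 0.20, 0.71],
              [0.89, 0.71, 1.31]])

# At x(3) all components are positive, so only sum(x) = 1 is active and
# Z can be taken as the chain construction from Section 6.
Z = np.array([[1.0, 0.0],
              [-1.0, 1.0],
              [0.0, -1.0]])
print(np.linalg.eigvalsh(Z.T @ A @ Z))   # approx [-1.21, 0.39]
# One eigenvalue is positive: x(3) is a saddle point of the GKproblem,
# hence not a strict local maximizer and not evolutionarily stable.
```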
8. Concluding remarks. In this paper, we have reviewed the theory for obtaining optimal and stable strategies for SEgames, and provided some new proofs and computational methods. In particular, we have reviewed the relationship between the SEgame and the GKproblem, and discussed the first- and second-order necessary and sufficient conditions that can be derived from this relationship for testing the optimality and stability of the strategies. Some of the conditions are given in different forms from those in previous work and can be verified more efficiently. We have also derived computational methods for evaluating the conditions that are more efficient than conventional approaches, and we have demonstrated how these conditions can be applied to justifying the strategies and their stabilities for a special class of genetic selection games, including some from the study of genetic disorders. Further studies can be pursued in the following possible directions. First, novel methods can be developed for solving special types of SEgames, and especially for obtaining the evolutionarily stable strategies of the games, by solving some special classes of GKproblems. For example, if the fitness matrix of a SEgame is negative definite, then the corresponding GKproblem is a strictly convex quadratic program and can be solved efficiently using special algorithms [4]; further, the solution is guaranteed to be a strict local maximizer for the GKproblem and hence an evolutionarily stable strategy for the SEgame. A more complicated case is when the fitness matrix is positive definite: then only pure strategies may be evolutionarily stable, and a special algorithm can be developed to find only the solutions of the GKproblem that correspond to the pure strategies of the SEgame.
Second, in Theorems 3.5 and 5.3 we have stated two optimality and stability conditions. They are necessary and sufficient, but require all active constraints to be strongly active at $x^*$, in which case $C_0(x^*) = \bar{C}_0(x^*)$, $T_0(x^*) = \bar{T}_0(x^*)$, and $Z_0 = \bar{Z}_0$. In practice, however, this assumption may not hold. A more general necessary and sufficient condition, without the above assumption, is to require $d^T A d < 0$ for all $d \in T(x^*)$, $d \ne 0$, where $T(x^*)$ is the reduced tangent cone at $x^*$, as given in [2]. As we have mentioned in previous sections, this condition is not easy to test: it is equivalent to testing the co-positivity of a matrix, which is difficult in general [1,12]. Still, an efficient algorithm may be developed for SEgames and GKproblems of small size or with special structures.
Third, it is not hard to verify that the GKproblem is NP-hard in general, because the maximum clique problem can be formulated as a GKproblem [11,15]. However, how to extend this result to the SEgame is not so clear, because the SEgame is not exactly equivalent to the GKproblem. Several related questions arise: Is any maximal clique a local maximizer of the GKproblem for the maximum clique problem? If not, what condition is needed? If yes, is it a strict local maximizer? Is the maximum clique a global maximizer? Is it an evolutionarily stable strategy for the corresponding SEgame? We are interested in all these questions and are trying to find their answers.
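One concrete handle on this connection, presumably what [11,15] build on, is the Motzkin-Straus theorem: for a graph with adjacency matrix A and clique number w, the maximum of x^T A x over the simplex equals 1 - 1/w. A small sketch (the solver is local, so reaching the global value from the uniform start is expected here but not guaranteed in general):

```python
import numpy as np
from scipy.optimize import minimize

# Triangle {0, 1, 2} plus a pendant vertex 3: clique number w = 3.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

res = minimize(lambda x: -(x @ A @ x), x0=np.full(4, 0.25), method="SLSQP",
               bounds=[(0.0, 1.0)] * 4,
               constraints=[{"type": "eq", "fun": lambda x: x.sum() - 1.0}])
print(-res.fun)   # ~ 2/3 = 1 - 1/w, attained at x uniform on the clique
```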
Fourth, though the two problems are not equivalent, the correspondence between the SEgame and the GKproblem is interesting, and a similar relationship may be found between a class of nonlinear games and nonlinear optimization problems. Indeed, we can define an $n$-strategy two-player game by a fitness function $x^T \pi(y)$ with $\pi(y)$ a nonlinear function; the game then becomes a nonlinear game. If $\pi(y)$ is a gradient field, i.e., there is a function $f(y)$ such that $\nabla f(y) = \pi(y)$, then an optimal strategy $x^* \in S$ such that $x^{*T} \pi(x^*) \ge x^T \pi(x^*)$ for all $x \in S$ corresponds to a solution $x^* \in S$ such that $f(x^*) \ge f(x)$ for all $x \in S$ in a small neighborhood of $x^*$. It would then be interesting to see which additional relationships between the SEgame and the GKproblem extend to their nonlinear counterparts.
Finally, we have demonstrated the application of SEgames to allele selection at single genetic loci. They can be extended to alleles at multiple genetic loci if there is no mutation or recombination. In this case, an individual can be identified by a sequence of alleles at the multiple loci; in other words, a selection strategy will be a choice of a specific sequence of alleles. This certainly increases the strategy space substantially. For example, if there are two loci $G_1$ and $G_2$, with two possible alleles $A$ and $a$ for $G_1$ and two other possible alleles $B$ and $b$ for $G_2$, then there will be four possible sequences of alleles for the two loci, $AB$, $Ab$, $aB$, and $ab$, each corresponding to one pure strategy. In general, if there are $m$ loci $G_i$, $i = 1, \dots, m$, with $m_i$ possible alleles for $G_i$, then there will be $n = \prod_{i=1}^{m} m_i$ possible sequences of alleles. The number of pure strategies, and hence the dimension of the game, will then be $n$, which can be a large number. In practice, mutation and recombination are often not negligible, and our model would have to incorporate such effects; the topics would then include other so-called linkage disequilibrium factors, but these are all beyond the scope of this paper [17]. We will pursue these issues in our future efforts.
"Mathematics"
] |
Effect of Sodium Borate on the Preparation of TiN from Titanomagnetite Concentrates by Carbothermic Reduction-Magnetic Separation and Acid Leaching Process
Carbothermic reduction-magnetic separation and acid leaching processes were used to produce TiN and direct reduced iron (DRI) from titanomagnetite concentrates (TMCs). The effects of sodium borate on the reduction behavior of the TMCs, the magnetic separation of the reduced products, and the purification of the impure TiN by acid leaching were investigated. Results of x-ray diffraction, scanning electron microscopy, and energy-dispersive spectroscopy analysis showed that magnesium aluminate spinel (MgAl2O4) was generated in the reduced products, which could hinder the purification of the TiN. Adding sodium borate not only inhibited the formation of MgAl2O4, but also promoted the formation of TiN by decreasing the required roasting temperature and time, while only slightly affecting the separation of metallic Fe and TiN. By adding 16% sodium borate, a DRI with 94.3% Fe, 0.6% Ti, and 0.1% V was obtained by magnetic separation. After HCl + HF leaching, a TiN product containing 74.1% Ti and 2.8% V was obtained, with a Ti recovery of 94.6% and a V recovery of 58.3%.
Introduction
Titanium nitride is an important technological material due to its excellent characteristics, such as a high melting point (2950 °C), extreme hardness (8-9 on the Mohs scale), high chemical and thermal stability, gold color, and good thermal and electrical conductivity [1]. This material has been widely applied in various fields, such as wear-resistant coatings on machine tools and bearings, and nontoxic exteriors on medical implants. Several physical and chemical methods have been reported in the literature for synthesizing TiN, such as direct nitridation of metallic Ti [2], carbothermal reduction of TiO2 in N2 atmosphere [3], microwave-assisted or direct carbothermic reduction-nitridation of FeTiO3 [4], self-propagating high-temperature synthesis [5], and microwave plasma synthesis [6]. However, these processes require costly raw materials, high temperatures, long reaction times, and/or costly equipment. Therefore, affordable processes using low-cost materials for the preparation of TiN are needed.
The Panzhihua titanomagnetite deposit accounts for more than 90% of the Ti reserves in China. Titanomagnetite concentrates (TMCs) are currently used to produce molten iron through the blast furnace process in Panzhihua. Nearly all of the Ti is concentrated into the slag after smelting in the blast furnace; as a result, more than three million tons of slag containing 22-25% TiO2 are produced annually [7-9]. However, no appropriate and economical method to process this slag is available to date, because of the dispersed distribution of Ti among various fine-grained (<10 µm) mineral phases. Thus, this slag is treated as a solid waste and stockpiled, which not only wastes Ti resources, but also poses a threat to the environment [10]. Therefore, many studies have been devoted to developing cleaner production processes for comprehensively utilizing the Fe, Ti, and V compounds of TMCs, such as the direct reduction-smelting process [11] and the direct reduction-magnetic separation process [12]. In these processes, iron oxides are first reduced to metallic iron, whereas titanium oxides are rarely reduced; the reduced products are then separated by smelting or magnetic separation to produce DRI and titanium slag. However, the subsequent process of extracting titanium from the titanium slag is very complicated and not environmentally friendly, due to the slag's high impurity content and low reactivity. Moreover, several studies have been conducted on the preparation of iron-based wear-resistant materials from TMCs through the carbothermal process, in which iron oxides are reduced to metallic iron while titanium oxides are transformed into TiC or Ti(C,N) [13,14]. However, all the impurities contained in the TMCs and the reductant remain in the prepared material, which may degrade its performance.
In our previous study, a new process was proposed to prepare TiN and direct reduced iron (DRI) from TMCs [15]. In this process, the TMCs are first reduced by anthracite to metallic Fe and TiN, and then magnetically separated to produce DRI and impure TiN. This process provides a way to realize high-value utilization of iron and titanium simultaneously from the TMCs. Our results showed that the Ti component was nearly completely transformed into TiN at a reduction temperature of 1300 °C, an anthracite dosage of 26%, and a reduction time of 90 min. Moreover, the separation results revealed that metallic iron and TiN can be separated effectively through grinding and magnetic separation. Therefore, the purification of the impure TiN is the key technology of this process. Traditional beneficiation methods, including flotation, magnetic separation, and gravity separation, are unsuitable for purifying the impure TiN because the particle size of TiN is generally less than 10 µm. An acid leaching process can be used to remove impurities from the impure TiN, given that TiN has excellent acid resistance. However, large amounts of MgAl2O4, which is difficult to dissolve in acid solution [10], are observed in the roasted product. Magnesium aluminate spinel is also concentrated in the impure TiN after magnetic separation and will thus hinder the purification of TiN by acid leaching. Therefore, the formation of MgAl2O4 should be inhibited during the reduction roasting process.
Sodium roasting is a commonly used technique in extractive metallurgy to convert acid-resistant substances into soluble ones [16,17]. Sodium borate can promote the carbothermic reduction of titanomagnetite and ilmenite [18,19]. Herein, sodium borate was used as an additive to inhibit the formation of MgAl2O4 during reduction, creating favorable conditions for purifying TiN. The effects of sodium borate on the carbothermic reduction of the TMCs, the magnetic separation of the reduced products, and the acid leaching of the impure TiN were also studied.
Materials
The titanomagnetite concentrates used in this study were obtained from Panzhihua in Sichuan Province, China; 54.17% of the TMCs passed through 0.074 mm. Anthracite, used as the reductant, was obtained from Jincheng, Shanxi Province, China. It contained 0.80% moisture, 10.91% ash, 7.18% volatiles, 81.11% fixed carbon, and 0.39% S, and was crushed and ground until 100% passed through 0.1 mm.
Sodium carboxymethylcellulose (Na-CMC) was used as a binder in the pelleting process, and HCl and HF were used as lixiviants to purify the TiN. Sodium borate (Na2B4O7·10H2O) was used as the additive. All chemicals were purchased from Sinopharm (Shanghai, China), were of analytical grade, and were used as received.
Methods
As depicted in Figure 1, the experimental procedure mainly included: (1) pelleting of the mixture of TMCs and anthracite; (2) reduction roasting of the pellets; (3) grinding and magnetic separation of the reduced pellets; and (4) acid leaching of the impure TiN.
Pelleting of the mixture was conducted by hand as follows. TMCs (20 g), anthracite (26%), sodium borate (0, 4%, 8%, 12%, or 16%), Na-CMC (0.5%), and water (approximately 25%) were thoroughly mixed to produce a material that could be shaped by hand into pellets with a diameter of 6-8 mm. The dosages of anthracite, binder, and water are expressed as percentages of their mass ratios to the TMCs. The wet pellets were oven dried at 105 °C for 2 h.
Reduction roasting experiments were performed in a muffle furnace; the schematic diagram of the furnace has been described previously [20]. The reduced pellets were ground in a grinder until about 80% passed 0.074 mm and were then separated in a magnetic separator. An XCGS-50 Davis tube (Nanchang Li Yuan Mining and Metallurgy Equipment Co., Ltd., Nanchang, China) with a magnetic induction intensity of 0.04 T was used to separate the slurry. The methods of roasting, grinding, and magnetic separation were based on previous studies [15,21].
Leaching experiments were carried out in a 200 mL plastic conical flask. The impure TiN concentrates were first leached with HCl (36.0-38.0 mass %) for 12 h at a liquid-to-solid mass ratio of 10:1. The slurry was filtered, and the leach residue was washed with distilled water and dried. The residue was then leached with HF (≥40.0 mass %) at a liquid-to-solid ratio of 5:1 for 12 h. The final leach residue was repeatedly washed with distilled water and dried. The leaching agents and the leaching method were chosen based on the results of exploratory experiments.
Characterization
Leaching experiments were carried out in a 200 L plastic conical flask. The impure TiN concentrates were first leached by HCl (36.0 mass %-38.0 mass %) for 12 h using a liquid-to-solid mass ratio of 10:1. The slurry was filtered, and the leach residue was washed with distilled water and dried. Then, the leach residue was leached using HF (≥40.0 mass %) with the liquid-to-solid ratio of 5:1 for 12 h. The leach residue was repeatedly washed with distilled water and dried. The types of leaching agent and the leaching method were determined based on the results of exploration experiments.
Characterization
The Fe, TiO2, and V2O5 contents of the DRI were measured by an IRIS Intrepid II inductively coupled plasma emission spectrometer (ICP, Thermo Electron Corporation, Waltham, MA, USA). The composition of the TiN product was analyzed by an x-ray fluorescence analyzer (XRF) equipped with Omnian standardless analysis software (Axios max, PANalytical, Almelo, The Netherlands). The TiN product was analyzed by XRF for elements from Na upward in molar mass in the periodic table.
Crystal phases were identified in the powdered samples using a DX-2700 x-ray diffractometer (XRD, Hao Yuan Instrument, Dandong, China) with a Cu target over the range 10° to 80° in 0.02° intervals at a scanning rate of 5°/min. The reduced pellets were mounted in epoxy resin and polished for scanning electron microscopy (SEM) and energy-dispersive spectroscopy (EDS) analysis (MLA650F, FEI, Hillsboro, OR, USA).
Effect of Sodium Borate on the Formation of MgAl2O4
The reduced pellets with different dosages of sodium borate were examined by XRD. The roasting conditions were as follows: an anthracite dosage of 26%, a roasting temperature of 1300 °C, and a roasting time of 90 min. Figure 3 illustrates the results: the dosage of sodium borate significantly affected the formation of MgAl2O4. The intensity of the MgAl2O4 peaks decreased as the dosage of sodium borate increased, and when the dosage reached 16% the MgAl2O4 peaks disappeared, indicating that the formation of MgAl2O4 was completely suppressed. Therefore, a sodium borate dosage of 16% was selected for the subsequent experiments. In the TMCs, Mg and Al were mainly present in the magnetite and ilmenite crystal lattices in the form of isomorphism; specifically, Al 3+ replaced Fe 3+ , whereas Mg 2+ replaced Fe 2+ [22]. In the absence of sodium borate, the remaining MgO reacted with Al2O3 to form MgAl2O4 as the iron and titanium oxides were reduced. In contrast, Na2B4O7 melted at a low temperature (751 °C) and reacted with MgO, Al2O3, and other components (e.g., SiO2 and CaO) to form a molten slag. No crystalline substances other than Fe, TiN, and Fe3C were detected in the reduced pellets by XRD. This may be because the sodium-containing substances form a liquid at high temperature and then a glass phase after rapid cooling. Moreover, the Fe3C peaks increased as the dosage of sodium borate increased, revealing that the addition of sodium borate promoted the carburization of metallic iron; the role of sodium salts in promoting carburization has been widely recognized [23].
Effect of Sodium Borate on the Formation of TiN
The addition of sodium borate has been reported to promote the reduction of ilmenite and titanomagnetite to Fe and titanium oxides [18,19], but its effect on the conversion of TMCs to TiN has not previously been reported. To study the effect of sodium borate addition on the formation of TiN, the phase transformations of composite pellets without sodium borate and with 16% sodium borate, reduced under different conditions, were examined by XRD. The results are displayed in Figures 4 and 5, respectively. Figure 4 shows the XRD patterns of the pellets reduced at different temperatures for 90 min. Figure 4a shows that a small amount of TiN started to form at 1200 °C without the additive. As the reduction temperature increased, the diffraction peak intensity of the M3O5-type solid solution decreased and that of TiN gradually increased. M3O5-type solid solutions are important intermediates during the carbothermic reduction of titanomagnetite and ilmenite, with the general formula m((Ti,Mg,Mn,Fe)O·2TiO2)·n((Ti,Fe,Al,Cr,V)2O3·TiO2) [12,24]. When the reduction temperature reached 1300 °C, the M3O5 diffraction peaks disappeared, indicating that the Ti component was completely converted into TiN. In contrast, Figure 4b shows that with 16% sodium borate TiN also started to form at 1200 °C, but the Ti component was completely converted into TiN already at 1250 °C. This result implies that the addition of sodium borate promoted the formation of TiN. Moreover, MgAl2O4 started to form at 1200 °C in the absence of sodium borate, whereas with 16% sodium borate MgAl2O4 was not observed over the studied temperature range.

Figure 5 shows the XRD patterns of the pellets without sodium borate and with 16% sodium borate reduced at 1300 °C for different times. Figure 5a shows that, without the additive, TiN started to form after 20 min of reduction, and the Ti component was completely converted into TiN as the reduction time increased to 90 min. Figure 5b reveals that, with 16% sodium borate, TiN started to form at 10 min, and the Ti component was completely converted into TiN after 50 min. This result also demonstrates that the addition of sodium borate promoted the formation of TiN. In addition, MgAl2O4 started to form after 20 min of reduction without the additive, whereas it was not observed over the studied roasting time range in the presence of sodium borate. The carbothermic reduction and nitridation reactions of titanomagnetite and ilmenite can be divided into two stages.
The first stage was the reduction of titanomagnetite and ilmenite to Fe and M3O5, and the second stage was the reduction and nitridation of M3O5 to TiN/Ti(C,N) [15,25-30]. The possible reactions are given in Equations (2)-(9), where Equation (2) is the carbon gasification reaction. Titanomagnetite was regarded as Fe3O4·Fe2TiO4, since titanomagnetite is a solid solution of magnetite and ulvospinel. FeTi2O5 and Ti3O5 were selected to represent M3O5 because they are the two most important substances in the M3O5-type solid solution. Assuming that the carbon gasification reaction reached equilibrium in the temperature range of 500-1500 °C, the Gibbs free energy changes (ΔG, kJ/mol) of the possible reduction reactions of the titanomagnetite concentrates were calculated using Fact-Web [31]. Plots of the Gibbs free energy changes against temperature are shown in Figure 6. For Equation (8), the standard Gibbs free energy change is given instead of ΔG, since the reaction equilibrium constant cannot be calculated.

C + CO2 → 2CO (2)
Fe3O4 + CO → 3FeO + CO2 (3)
FeO + CO → Fe + CO2 (4)
Fe2TiO4 + CO → Fe + FeTiO3 + CO2 (5)
2FeTiO3 + CO → Fe + FeTi2O5 + CO2 (6)
3/5FeTi2O5 + CO → 3/5Fe + 2/5Ti3O5 + CO2 (7)
1/5Ti3O5 + C + 3/10N2 → 3/5TiN + CO (8)
MgO + Al2O3 → MgAl2O4 (9)

In the first stage, the reactions occurred by means of the gaseous intermediates CO and CO2, and the overall reaction rate was controlled by the gasification of coal. Sodium borate is a catalyst for carbon gasification [32]; thus, its addition could promote the formation of Fe and M3O5.

In the second stage, M3O5 was reduced by solid carbon instead of CO, and the diffusion of carbon to the surface of M3O5 is the rate-determining step for the formation of the nitride. Gou reported that when the roasting temperature is above the eutectic temperature (1154 °C) of the Fe-C binary system, liquid-phase iron and Fe3C become an important medium for transmitting carbon to the surface of M3O5 [26]. As mentioned above, sodium borate promotes the carburization of metallic iron and thereby the formation of liquid-phase iron and Fe3C; thus, sodium borate could promote the formation of TiN. The promotion mechanism of sodium borate can therefore be summarized as follows: sodium borate first promoted the formation of Fe and M3O5 by catalyzing coal gasification, and then promoted the reduction and nitridation of M3O5 to TiN by accelerating the carburization of metallic iron.
SEM Observation and EDS Analysis
Figure 7 shows the SEM images and the EDS results of the pellets without an additive reduced at 1300 °C for 90 min. Figure 7a shows that the metallic iron and TiN did not form a close association, which was conducive to the separation process. The surface scanning results in Figure 7b indicated that the enrichment regions of Mg and Al in the observed area coincided strongly, and that the Mg-Al-rich region contained less Ca and Si than the peripheral region. Combined with the XRD and EDS results, these observations indicated that the Mg-Al-rich phase was MgAl2O4. Figure 7a also shows that the TiN and MgAl2O4 particles were intimately intermixed; separating them by physical sorting technology would therefore be difficult. Moreover, the EDS results in Table 2 revealed that the TiN phase contained 3.1% V and 2.0% C, indicating that small amounts of V and C were dissolved in the TiN phase. TiN, TiC, VC, and VN have the same NaCl-type structure; the atomic radius of C is similar to that of N, and the atomic radius of V is similar to that of Ti, so these compounds can form a continuous solid solution. Introducing the right amounts of C and V could improve the properties of TiN materials [1,33,34]. Figure 8 shows the SEM images and the EDS results of the pellets with 16% sodium borate reduced at 1300 °C for 90 min.

The EDS element distribution map shown in Figure 8b indicated an evident overlap of the distribution areas of Mg, Al, Ca, Si, O, and Na, but MgAl2O4 was not detected. This finding indicates that the addition of sodium borate inhibited the generation of MgAl2O4 and instead resulted in the formation of complex compounds of Na2O-MgO-Al2O3-CaO-SiO2-FeO-TiO2. Na-containing substances are generally easily dissolved by acid; thus, the Na-containing complex compounds may be removed during acid leaching. Moreover, the EDS results in Table 3 showed that V was undetected in the metallic iron phase and the slag. These results imply that V was almost entirely transferred to the TiN phase.
Effect of Sodium Borate on the Separation of Metallic Iron and TiN
The pellets reduced at 1300 °C for 90 min were subjected to grinding and magnetic separation; the results are presented in Table 4. The addition of sodium borate had no significant effect on the magnetic separation indices. In the absence of sodium borate, DRI containing 95.3% Fe, 0.5% Ti, and 0.1% V was obtained, with recoveries of Fe, Ti, and V of 90.1%, 3.9%, and 15.5%, respectively. When 16% sodium borate was added, DRI containing 94.3% Fe, 0.6% Ti, and 0.1% V was obtained, with recoveries of Fe, Ti, and V of 91.2%, 5.1%, and 17.3%, respectively. These results show that metallic Fe and TiN could be separated effectively by grinding and magnetic separation.

Figure 9 shows the effect of sodium borate on the acid leaching of the impure TiN. Figure 9a illustrates that, without sodium borate, the main phases present in the impure TiN concentrates were TiN, MgAl2O4, Fe, and C. After HCl leaching, no significant change in the diffraction peaks was observed. After HF leaching, the diffraction peaks of MgAl2O4 were still strong, and a new impurity, CaMg2Al2F2, formed. This reveals the difficulty of obtaining pure TiN by acid leaching in the absence of sodium borate. In contrast, Figure 9b shows that the crystal phases of the impure TiN obtained with 16% sodium borate were TiN, Fe, and C, indicating that the other components were present as a glass phase in the sample. After HCl leaching, the Fe was removed; after HF leaching, only TiN and a small amount of C remained in the TiN product, and the XRD pattern of the product was very smooth. The chemical compositions of the impure TiN and the TiN product are presented in Table 5: the Ti and V contents of the impure TiN were 19.5% and 0.7%, respectively. After leaching, the Ti and V contents of the TiN product increased to 74.1% and 2.8%, respectively, and their total content was close to the theoretical Ti content of TiN (77.4% Ti). These results imply that the addition of sodium borate during the roasting stage facilitated the purification of TiN by acid leaching. The total assay of the TiN product was well below 100% because C, N, and O were not determined. A total of 1.61 g of TiN was obtained from 20 g of TMCs; on the basis of mass balance, the Ti and V enrichments in the TiN product were calculated to be 94.6% and 58.3%, respectively. The SEM image in Figure 10 reveals that the sizes of the TiN particles were below 10 µm.

Table 5. Chemical composition of the impure TiN and TiN product (mass %), as determined by XRF analysis.
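The theoretical Ti content of TiN and the reported Ti recovery can be checked by a simple mass balance (a minimal R sketch; the TiO2 grade of the TMCs is not reproduced in this excerpt, so a value of about 10.5% is assumed purely for illustration):

m_Ti <- 47.867; m_N <- 14.007; m_O <- 15.999
ti_theoretical <- m_Ti / (m_Ti + m_N)               # ~0.774, i.e., 77.4% Ti in stoichiometric TiN
ti_in_product <- 1.61 * 0.741                       # g Ti in 1.61 g of product assaying 74.1% Ti
ti_in_feed <- 20 * 0.105 * m_Ti / (m_Ti + 2 * m_O)  # assumed 10.5% TiO2 in 20 g TMCs
ti_in_product / ti_in_feed                          # ~0.95, close to the reported 94.6%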
Conclusions
The main conclusions can be summarized as follows: (1) During the carbothermal reduction of TMCs, the addition of sodium borate not only inhibits the formation of MgAl2O4, but also promotes the reduction and nitridation of TMCs.
(2) The promotion mechanism of sodium borate on the formation of TiN can be summarized as follows: sodium borate first promotes the formation of Fe and M3O5 by catalyzing coal gasification and then promotes the reduction and nitridation of titanium oxide to TiN by accelerating the carburization of iron.
(4) Sodium borate slightly affects the separation of metallic Fe and TiN. Adding 16% sodium borate resulted in DRI containing 94.3% Fe, 0.6% Ti, and 0.1% V after magnetic separation. The recoveries of Fe, Ti, and V in this DRI were 91.2%, 5.1%, and 17.3%, respectively.
(5) After HCl + HF leaching, a TiN product containing 74.1% Ti and 2.8% V was obtained, with a Ti recovery of 94.6% and a V recovery of 58.3%. In contrast, without the addition of sodium borate, the resulting TiN product contained considerable amounts of MgAl2O4 and CaMg2Al2F2.
Prediction for the Newsroom: Which Articles Will Get the Most Comments?
The overwhelming success of the Web and mobile technologies has enabled millions to share their opinions publicly at any time. But the same success also endangers this freedom of speech, as participatory sites are closed down when they are misused by individuals or interest groups. We propose to support manual moderation by proactively drawing the attention of our moderators to article discussions that most likely need their intervention. To this end, we predict which articles will receive a high number of comments. In contrast to existing work, we enrich the article with metadata, extract semantic and linguistic features, and exploit annotated data from a foreign-language corpus. Our logistic regression model improves F1-scores by over 80% in comparison to state-of-the-art approaches.
Exploding Comment Threads
In recent decades, the media and news business underwent a fundamental shift, from one-directional to bi-directional communication between users on the one side and journalists on the other. The use of social media, blogs, and the possibility to immediately share, like, and comment on digital content transformed readers into active and powerful agents in the media business. This shift from passive "consumers" to active "agents" deeply impacts both media and communication science and has many positive aspects.
However, these possibilities and powers can also be misused. Pressure groups, lobbyists, trolls, and others are effectively trying to influence discussions according to their (very different) interests. An easy approach consists in burying unwanted arguments or simply destroying a discussion by blowing it up. After such an attack, readers have to crawl through hundreds of nonsensical and meaningless comments to extract the meaningful and interesting arguments. Blowing up a thread can be achieved by injecting provocative (but not necessarily off-topic) arguments into discussions. Bystanders complete the goal of the destroyers, and they often do so unknowingly: with each (often well-intentioned) reaction to the provocation, they make it more difficult for others to follow the actual argumentation path and/or tree.

Figure 1: Integration of comment volume prediction into the newsroom workflow.
Keeping the discussion area of a news site clean from attacks like these, and monitoring users' compliance with the rules ("netiquette"), is costly in terms of personnel and time. As a reaction, many large online media sites worldwide have closed their discussion areas or downsized them significantly (prominent examples of recent years are the Internet Movie Database, Bloomberg, and the US-American National Public Radio). Other news providers and media sites, including ours, take a different approach: a team of editors reads and filters comments on a 24/7 basis. This results in a huge workload, with several thousand reader comments published each day. In its lifetime, an article receives between fewer than ten and more than 1500 comments; typical are about 100 to 150 comments. The number of published comments presumably depends to a large extent on time, weather, and season, as well as, for each article, on subject, length, style of writing, and author, among others.
Being able to predict which articles will receive high comment volume would be beneficial at two positions in the newsroom: 1. for the news director to schedule the publication of news stories, and 2. for scheduling team sizes and guiding the focus of the comment moderators and editors. Figure 1 gives an overview of how comment volume prediction can be integrated into the workflow of a modern online news site. The incoming news articles are ranked based on the estimated number of comments they will attract. The news director takes these numbers into account in the decision process when to schedule which article for publication. This can balance the distribution of highly controversial topics across a day, giving not only readers and commenters the possibility to engage in each single one, but also distribute the moderation workload for comment editors evenly. Further, knowing which articles will receive many comments can help in the moderation process.
Guiding the main focus of attention of moderators towards controversial topics not only facilitates efficient moderation, but also improves the quality of a comment thread. Our experience has shown that moderators entering the online discussion at an early stage can help keep the discussion focused and fruitful.
In this paper, we study the task of identifying the weekly top 10% of articles with the highest comment volume. We consider a new real-world dataset of 7 million news comments collected over more than nine years. In order to enrich our dataset and increase its meaningfulness, we propose to transfer a classifier trained on the English-language Yahoo News Annotated Comments Corpus (Napoles et al., 2017b) to our German-language dataset and leverage the additional class labels for comments in a post-publication prediction scenario. Experiments show that our logistic regression model based on article metadata, linguistic, and topical features significantly outperforms state-of-the-art approaches. Our contributions are summarized as (1) a transfer learning approach to learn early comments' characteristics, (2) an analysis of a new 7-million-comment dataset, and (3) an improvement of F1-score by 81% compared to the state-of-the-art in predicting the most commented articles.
Related Work
Related work on newsroom assistants focuses on comment volume prediction in pre-publication and post-publication scenarios. By the nature of news articles, the attention span after article publication is short, and in practice post-publication prediction is valuable only within a short time frame. Tsagkias et al. (2009) classify online newspaper articles using random forests. First, they classify whether an article will receive any comments at all. Second, they classify articles as receiving a high or low number of comments. The authors find that the second task is much harder and that predicting the actual number of comments is practically infeasible. Bandari et al. (2012) conclude the same, analyzing Twitter activity as a popularity indicator for news: predicting popularity as a regression task results in large errors. Therefore, the authors predict classes of popularity by binning the absolute numbers (1-20, 20-100, 100-2400 received tweets). However, predicting the number of received tweets requires modeling both the user behavior and the platform, which is problematic: it is part of a platform's business secrets how content is internally ranked and distributed to users, making it hard to distinguish cause and effect from the outside. In our scenario, we see no benefit in predicting the exact number of comments. Instead, we predict which articles belong to the weekly top 10% of articles with the highest comment volume, which is one of the tasks defined by Tsagkias et al. (2009).
In a post-publication scenario, Tsagkias et al. (2010) consider the comments received within the first ten hours after article publication. Based on this feature, they propose a linear model to predict the final number of comments. Comparing comment behavior at eight online news platforms, they observe seasonal trends. Tatar et al. (2011) consider the shorter time frame of five hours after article publication to predict article popularity. They also use a linear model and find that neither adding publication time and article category to the feature set nor extending the dataset from three months to two years improves prediction results. Their survey on popularity prediction for web content summarizes features with good predictive capabilities and lists fields of application for popularity prediction (Tatar et al., 2012). Rizos et al. (2016) focus on user comments to predict a discussion's controversiality. They extract a comment tree and a user graph from the discussion and investigate, for example, comment count, number of users, and vote score. The demonstrated improvement of popularity prediction with these limited, focused features motivates us to further explore content-based features of comments in our work.
Recently, research on deep learning (Nobata et al., 2016; Pavlopoulos et al., 2017) has addressed the (semi-)automation of the entire moderation task, but we see several issues that prevent us from putting these approaches into practice. First, the accuracy of these methods is not high enough. For example, the reported recall (0.79) and precision (0.77) at the task of abusive language detection (Nobata et al., 2016) are not sufficient for use in production: with this recall, an algorithm would let every fifth inappropriate comment (containing hate speech, derogatory statements, or profanity) pass, which is not acceptable. Pavlopoulos et al. (2017) address this problem by letting human moderators review comments that an algorithm could not classify with high confidence. Second, the acceptance of such black-box solutions is still limited in the community, and the models lack comprehensibility. A compromise can be (ensemble) decision trees, because they achieve comparable results and can give reasons for their decisions (Kennedy et al., 2017). Still, moderators and users do not feel comfortable with machines deciding which comments are allowed to be published, not least because of fear of concealed censorship or bias.
Predicting High Comment Volume
For each news article, we want to predict whether it belongs to the weekly top 10% of articles with the highest comment volume. We chose this relative amount to account for seasonal fluctuations and to even out periods of low newsworthiness. This traditional classification setting enables us to use established methods, such as logistic regression, to solve the task and to provide explanations of why a particular article will receive many comments or not.
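The label construction itself is simple; a minimal R sketch (the articles data frame with columns published_at and n_comments is a hypothetical placeholder) that derives the binary target from raw comment counts:

# Mark an article as positive if it is among the top 10% most
# commented articles of its publication week.
articles$week <- format(articles$published_at, "%Y-%U")
threshold <- ave(articles$n_comments, articles$week,
                 FUN = function(x) quantile(x, 0.9))
articles$top10 <- as.integer(articles$n_comments >= threshold)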
As a baseline to compare against, we implemented a random forest model with features from Tsagkias et al. (2009). For our approach, we extend this feature set and categorize the features into five groups. Our metadata features consist of article publication time, day of the week, and whether the article is promoted on our Facebook page. We consider temperature and humidity during the hour of publication and the number of "competing articles" as context features. Competing articles comprise the number of similar articles and the total number of articles published by our newspaper in the same hour; these articles compete for readers and user comments. Figure 2 visualizes that the number of received comments is not affected by the significantly higher number of published articles on Thursdays. The publication peak on Thursdays is caused by articles that appear in our weekly printed edition and are published online one-to-one at the same time. Further, we incorporate publisher information, such as genre, department, and which news agency served as a source for the article. We include these features in order to study their impact and performance at comment volume prediction tasks, not in order to focus on engineering complex features.
In addition, we propose to leverage the article content itself. Starting with headline features, we use n-grams of length one to three as well as author-provided keywords for the article. To capture topical information in the body, we rely on topic modeling and document embeddings besides traditional bag-of-words (BOW) features; topic distributions, document embeddings, and word n-grams thus serve as semantic representations of the articles. To model the topics of news article bodies, we apply standard latent Dirichlet allocation (Blei et al., 2003). For the document embedding, we use a Doc2Vec implementation that downsamples higher-frequency words for the composition (Mikolov et al., 2013). We choose the vector length, number of topics, and window size based on F1-score evaluation on a validation set. Despite recent advances of deep neural networks for natural language processing, there is a reason to focus on other models: for the application in newsrooms and the integration in semi-automatic processes, the comprehensibility of the prediction results is very important. A black-box model, even if it achieved better performance, is not helpful in this scenario. Human moderators need to understand why the number of comments is predicted to be high or low. This comprehensibility issue justifies the application of decision trees and regression models, which allow predictions to be traced back to their decisive factors. Table 1 lists precision, recall, and F1-score for the prediction of the weekly top 10% articles with the highest comment volume. Especially the BOW features and the topics of the article body, but also headline keywords and publisher metadata, achieve a higher F1-score than the metadata features. The highest precision is achieved with the binary feature of whether an article is promoted on Facebook, whereas author and competing articles achieve the highest recall.
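A minimal sketch of such an inspectable model in R (the feature column names are hypothetical stand-ins for the feature groups described above, and train/test are the time-wise splits):

# Logistic regression over metadata, context, publisher, and content
# features; the fitted coefficients remain directly interpretable.
model <- glm(top10 ~ publication_hour + weekday + promoted_on_facebook +
               temperature + humidity + competing_articles +
               department + genre + news_agency + topic_1 + topic_2,
             data = train, family = binomial)
predicted <- as.integer(predict(model, newdata = test,
                                type = "response") > 0.5)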
Automatic Translation of Comments
Whether the first comment is a provocative question in disagreement with the article or an off-topic statement influences the course of the further conversation. We assume that this holds not only for social networks (Berry and Taylor, 2017), but also for comment sections of news websites. Therefore, we consider the tone and sentiment of the first comments received shortly after article publication as an additional feature. Typical layouts of news websites (including ours) list comments in chronological order and show only the first few comments to readers below an article. Pagination hides comments received later, and most users do not click through dozens of pages to read all comments. As a consequence, early comments attract much more attention and, with their tone and sentiment, influence comment volume to a larger extent. Presumably, articles that receive controversial comments in the first few minutes after publication are more likely to receive a high number of comments in total.
To classify comments as controversial or engaging, we need to train a supervised classification algorithm, which requires thousands of annotated comments. Such training corpora exist, if at all, mostly for English comments, while our comments are written in German. We propose to apply machine translation to overcome this language barrier: given a German comment, we automatically translate it into English. From a classifier that has been trained on an annotated English dataset, we can then derive automatic annotations for the translated comment. The derived annotations serve as another feature for our actual task of comment volume prediction.
We reimplemented the classifier by Napoles et al. (2017a) and trained it on their English dataset. The considered annotations consist of 12 binary labels: addressed audience (reply to a particular user or broadcast message to a general audience), agreement/disagreement with a previous comment, informative, mean, controversial, persuasive, off-topic regarding the corresponding news article, and neutral, positive, negative, and mixed sentiment. We automatically translate all comments in our German dataset into English using the DeepL translation service. For the translated comments, we automatically generate annotations based on Napoles et al.'s classifier. Thereby, we transfer the knowledge that the classifier learned on English training data to our German dataset despite its different language. This approach builds on the similar content style of both corpora, which is described in the next section.
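The transfer step can be summarized as follows (translate_comment() stands in for the DeepL API call and classify_comment() for our reimplementation of the Napoles et al. classifier; both function names are hypothetical):

# Translate a German comment to English, then apply the classifier
# trained on the annotated English YNACC data; the returned 12 binary
# labels become additional features for comment volume prediction.
annotate_comment <- function(text_de) {
  text_en <- translate_comment(text_de, source_lang = "de",
                               target_lang = "en")
  classify_comment(text_en)
}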
Dataset
We consider two datasets that both contain user comments received by news articles on similar topics: first, our German 7-million-comment dataset, which we call the Zeit Online Comment Corpus (ZOCC), and second, the English 10k-comment Yahoo News Annotated Comments Corpus (YNACC) (Napoles et al., 2017b). ZOCC consists of roughly 200,000 online news articles published between 2008 and 2017 and 7 million associated user comments in German. Out of 174,699 users in total, 60% posted more than one comment, 23% more than 10 comments, and 7% more than 100 comments. For both articles and comments, extensive metadata is available, such as author list, department, publication date, and tags (for articles) and user name, parent comment (if posted in response), and number of recommendations by other users (for comments). Not surprisingly, ZOCC exhibits popularity growth, with an increasing number of articles and comments over time: while our newspaper published roughly 1,300 articles per month in 2010, each receiving roughly 20 comments on average, we nowadays publish roughly 1,500 articles per month, each receiving 110 comments on average. As both corpora's articles and comments cover similar time spans of several years and many different departments, they deal with a broad range of topics. While the majority of articles in YNACC is about economy, ZOCC's major department is politics. More than 50% of the comments in ZOCC are posted in response to articles in the politics department, whereas in YNACC culture, society, and economy each account for around 20%, with politics in fourth place at 12%. On average, an article in ZOCC receives 90% of its comments within 48 hours, while it takes 61 hours for an article in YNACC. Despite these slight differences, both corpora cover the most popular departments, which motivates the idea of transferring a classifier trained on YNACC to ZOCC. For YNACC, Napoles et al. (2017a) propose a machine learning approach to automatically identify engaging, respectful, and informative conversations. By identifying the weekly top 10% of articles with the highest comment volume, we focus on a different task. Nonetheless, ZOCC and YNACC have similar properties: both contain user comments posted in reaction to news articles across similar time spans and topics. However, only the much smaller YNACC provides detailed annotations regarding, for example, the comments' tone and sentiment.
Evaluation
We compare to the approach by Tsagkias et al. (2009, 2010) and evaluate on the same task. We consider a binary classification task, which is to identify the weekly top 10% of articles with the largest comment volume. Table 3 lists our final evaluation results on the hold-out test set. We choose F1-score as our evaluation metric, since precision and recall are equally relevant in our scenario. On the one hand, we want to achieve high recall so that no important article and its discussion is overlooked. On the other hand, we have limited resources and cannot afford to moderate each and every discussion; a high precision is crucial so that our moderators focus only on articles that need their attention. All experiments are conducted using a time-wise split, with the years 2014 to 2016 for training, January 2017 to March 2017 for validation, and April 2017 for testing. We find that our additional article and metadata features, but also the automatically annotated first comments, outperform the baseline. Due to the diversity of the different features, their combination further improves the prediction results. In comparison to the approach by Tsagkias et al., we finally achieve an 81% larger F1-score.
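Since precision and recall are weighted equally, the headline numbers reduce to the standard F1-score; for reference, a minimal R helper for the binary labels (truth and pred are hypothetical 0/1 vectors):

f1_score <- function(truth, pred) {
  tp <- sum(truth == 1 & pred == 1)   # true positives
  precision <- tp / sum(pred == 1)
  recall <- tp / sum(truth == 1)
  2 * precision * recall / (precision + recall)
}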
Automatically Translated Comments
In another experiment, we study the classification error introduced by translation. To this end, we train two classifiers with the approach by Napoles et al.: first, we train and test a classifier on the original, English YNACC; second, we automatically translate all comments in YNACC from English into German and use this translated data for training and testing of the second classifier. Comparing these two classifiers, we find that both precision and recall decrease slightly after translation, as shown in Table 4. Based on this result, we can assume that the translation of German comments into English introduces only a small error. Although YNACC and ZOCC differ in language, we can transfer a classifier that has been trained on YNACC to ZOCC. For each article, we use the labels assigned to the first four comments, which are visible on the first comment page below an article. The first four comments are typically received within very few minutes after article publication.
Number of Early Comments
As a baseline feature for comparison, we use the number of comments received in a short time span after article publication. Annotated first-page comments, but also article and metadata features, significantly outperform this baseline until 32 minutes after article publication. After 32 minutes, the number of received comments outperforms every single feature (but not the combination of all our features). This is because the difference between the final number of comments and the comments received so far converges over time.
Conclusions
In this paper, we studied the task of predicting the weekly top 10% of articles with the highest comment volume. This prediction helps to schedule the publication of news stories and supports moderation teams in focusing on the article discussions that most likely require their attention. Our supervised classification approach is based on a combination of metadata and content-based features, such as article body and topics. Further, we automatically translate German comments into English to make use of a classifier pre-trained on English data: we classify the tone and sentiment of comments received in the first minutes after article publication, which improves the prediction even further. On a 7-million-comment real-world dataset, our approach outperforms the current state-of-the-art with an over 81% larger F1-score. We hope that our prediction will help to reduce the number of cases where newspapers have no choice but to close down a discussion section because of limited moderation resources.
"Computer Science",
"Political Science"
] |
Improved Confidence Intervals for Fixed Term Survival Probabilities in a Small Two-Arm Trial
Background
The confidence interval for the survival probability at a fixed time point provides valuable information on how the subject performs in terms of survival rate. However, in a two-arm trial, when the sample size in each group is small or when the distribution of events that occurred within the group is skewed, the confidence interval can become very unstable and thus may not provide accurate information for estimating the survival rate. In addition, when other covariates are available in the dataset, it is important to select the significant variables and include them in the model. On the other hand, researchers such as physicians who pay more attention to the final result often analyze the treatment group and the control group separately, which may lead to inaccurate prediction.
Methods
In this study, the two treatment groups are combined, and the group indicator variable is considered as a covariate and included in the model for computation. Yuan and Rai's adjusted effective sample size methods are further extended, along with the Cox proportional hazards model, the Weibull model, and the log-logistic model, to compute predicted fixed-term overall survival probabilities and corresponding confidence intervals with other covariates adjusted. Simulations are conducted to obtain coverage probabilities.
The data used in this paper come from a randomized clinical trial conducted by the Radiation Therapy Oncology Group [7]. The dataset is publicly available, and therefore neither ethical approval nor informed consent is needed for our study. The entire trial contains data from 15 sites with 16 participating institutions; in this paper, only the data on three sites with the six largest institutions are used. At the beginning of the study, 193 patients were randomly assigned to two treatment groups. Group one (radiation therapy only) has 99 patients with 27 censored subjects. Group two (radiation therapy with a chemotherapeutic agent) has 94 patients with 26 censored subjects. Other variables include sex, age, condition, T-staging, and N-staging. Summary statistics can be found in Table 1.

To get a first understanding of the data, survdiff in R is used to test whether there is any difference between the two treatment groups. According to the p-value (p = 0.3), the two treatment groups are not significantly different. In this case, a closer look at the confidence interval becomes necessary. Zhu et al. [8] evaluate several test procedures for comparing survival functions when data are interval-censored and the distribution of censoring is unequal; a similar approach could be tested for right-censored data in future research.

The Kaplan-Meier curve and its confidence interval can be easily produced with R. Note that the default method in R for calculating the confidence interval is the Greenwood (log) method, which can be treated as a Wald confidence interval and has been shown not to be robust regardless of the sample size [9]. The Kaplan-Meier estimate of S(t) is

\hat{S}(t) = \prod_{t_i \le t} \left( 1 - \frac{d_i}{n_i} \right), \quad (1)

where d_i and n_i are the number of deaths and the number of patients at risk at time t_i, respectively. The R code is as follows:
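(A minimal sketch of the standard calls; the dataset name rtog and the column names time, status, and group are placeholders for the trial data described above.)

library(survival)
# Log-rank test for a difference between the two treatment groups
survdiff(Surv(time, status) ~ group, data = rtog)
# Kaplan-Meier fit; conf.type = "log" is R's default Greenwood (log) CI
km_fit <- survfit(Surv(time, status) ~ group, data = rtog,
                  conf.type = "log")
summary(km_fit, times = c(3, 6, 12, 18))  # fixed time points in months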
Agresti-Coull-Peto
Brown et al. [9] recommend the Agresti-Coull (AC) interval when the sample size is greater than 40. It is a score interval and appears to be a better way to calculate the confidence interval. Moreover, Yuan and Rai further suggest that the combination of the Agresti-Coull interval with Peto's adjusted effective sample size provides better coverage probability [1]. In its standard form, the AC confidence interval can be written as

\tilde{p} \pm z_{1-\alpha/2} \sqrt{ \frac{\tilde{p}(1-\tilde{p})}{n + z_{1-\alpha/2}^2} }, \quad (2)

where

\tilde{p} = \frac{M + z_{1-\alpha/2}^2/2}{n + z_{1-\alpha/2}^2}

and z_{1-\alpha/2} is the critical value at the 95% confidence level [10]. Here M is defined as the estimated number of events. Also, the sample size n needs to be adjusted by using Peto's effective sample size n* [11], which can be easily obtained as the number of observations that remain at risk at time t divided by the survival probability at t. Replacing n with n* in Equation (2) generates the new Peto-adjusted confidence interval.
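A minimal R sketch of the AC-Peto interval defined above (assuming, as one plausible reading of the definitions, that the interval is formed for the event proportion and mapped back to the survival scale; S is the Kaplan-Meier estimate at time t and n_risk the number at risk at t):

ac_peto_ci <- function(S, n_risk, conf = 0.95) {
  z <- qnorm(1 - (1 - conf) / 2)
  n_star <- n_risk / S               # Peto's effective sample size n*
  M <- n_star * (1 - S)              # estimated number of events
  p_tilde <- (M + z^2 / 2) / (n_star + z^2)
  half <- z * sqrt(p_tilde * (1 - p_tilde) / (n_star + z^2))
  # return the interval on the survival scale
  c(lower = 1 - (p_tilde + half), upper = 1 - (p_tilde - half))
}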
For the Weibull model, in the standard parametrization the survival probability at a fixed time point can be written as S(t) = exp(-(t/\lambda)^{\gamma}), where \lambda is the scale parameter and \gamma is the shape parameter, and a confidence interval can be derived accordingly. For the log-logistic model, the survival probability at a fixed time point can likewise be written as S(t) = (1 + (t/\lambda)^{\gamma})^{-1}, again with scale parameter \lambda and shape parameter \gamma, and the confidence interval for the survival function follows in the same way. In R, flexsurvreg can be used to obtain the survival probability and confidence interval, and the same approach used in the Cox regression and Weibull models to fix a covariate at a certain level can also be used in the log-logistic model. The R code is as follows:
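(A minimal sketch with flexsurvreg; the covariate names and the illustrative covariate profile are placeholders.)

library(flexsurv)
# Log-logistic fit with group included as a covariate
ll_fit <- flexsurvreg(Surv(time, status) ~ group + sex + age + condition,
                      data = rtog, dist = "llogis")
# Predicted survival and CI at fixed time points for a chosen profile
summary(ll_fit,
        newdata = data.frame(group = 1, sex = 1, age = 60, condition = 1),
        t = c(3, 6, 12, 18), type = "survival")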
Of the 193 patients, approximately 27% are censored (Table 1). The proportions of events within the groups are shown in Table 3. In terms of coverage probability, AC-Peto has better coverage than Kaplan-Meier but is close to Wilson-Peto in the early stage, whereas at later times AC-Peto has the best coverage among all methods (Table 4).
To better predict survival outcomes, significant covariates must be taken into consideration. The proportions are relatively close among all levels (Table 7); therefore, combining level 1 and level 2 becomes reasonable. In the variable Condition, the distribution of subjects is skewed: level 1 has around 73% of the total, but levels 3 and 4 have only 3% and 0.5%, respectively (Table 7). Taking a closer look at the distribution of events, due to the small sample size of level 4, the proportion will be either 100% or 0% in this case, which does not provide much valuable information. Based on the definition of this variable, people with a higher level of the condition tend to have a higher risk, so it is reasonable to see the proportion increase from level 1 to level 3. In this study, levels 3 and 4 are combined for computation. In group 1, log-logistic has the highest survival probabilities at all time points (Table 8). Weibull has relatively higher survival probabilities than Cox in the later stage, but vice versa in the early stage. In terms of confidence intervals, at 3 months the log-logistic interval is around 15% shorter than Cox, and about 21% shorter at 6 months (Table 9). Coverage probabilities at all tested time points are given in Table 11: log-logistic has slightly better coverage than Cox at 3, 6, and 12 months, but not at 18 months. In group 2, Weibull and log-logistic have higher survival probabilities at most time points (Table 8). Confidence intervals follow the same pattern as in group 1 (Table 9). In terms of coverage probability, log-logistic shows the best improvement at all time points (Table 11); Weibull has better coverage than Cox at earlier stages but becomes worse at later stages.
Further comparisons were made between the semi-parametric/parametric models and the AC/Wilson-Peto methods. As seen in Table 10, in both groups the semi-parametric and parametric models produce shorter confidence intervals than the AC/Wilson-Peto methods in earlier stages, but in the long term the basic models tend to perform better. In terms of coverage, the semi-parametric and parametric models produce better coverage than the basic models only at 3 months in group 1 (Table 12). Survival curves for Kaplan-Meier, AC-Peto, and Cox regression can be found in Figure 2; all methods compared to Kaplan-Meier can be found in Figures 3 and 4. In most cases, the semi-parametric and parametric models produce shorter confidence intervals in the early stage, but the pattern does not hold for later stages. Similarly, coverage is higher at early stages for the semi-parametric and parametric methods but becomes worse in the long term.
This paper illustrates the group effect with other covariates adjusted in the survival calculation. The method can also be expanded to three or more groups, and it can further be examined whether treating group as a covariate is beneficial for large sample sizes as well. The method provides more important and more accurate information to researchers as well as clinicians when making a survival prediction. Note that when the distributions of subjects and events are skewed in certain variables, it is important to determine the best way to combine levels within the variable. There are many ways to combine levels, such as making them into two blocks; the best way always depends on the dataset.

In this paper, the predictive survival probability is used rather than directly obtaining the result from the data analysis, because it is more accurate for a fixed-term estimation. For example, suppose one event occurred at 9 months and the next event occurred at 13 months, and we want to find the fixed-term survival rate at 12 months (1 year). We know that the survival estimate is the same between 9 months and 13 months because it is a stepwise function, but the interval in this case is very wide. If no event occurs exactly at 12 months, the number of events at 12 months is zero, but the size of the risk set is not. In order to make sense of the data, we need to calculate the estimated number of events at 12 months and then find the corresponding survival probability and confidence interval.
In this paper, only 3-month, 6-month, 12-month, and 18-month survival are tested. The reason for doing so is that 1-year survival is a normal clinical indicator for many terminal illnesses; thus, [...]. This could also be tested in the future.
In summary, we have examined six methods for predicting overall survival probabilities and confidence intervals. Coverage probabilities for each method are obtained through simulation. In this paper, we combined both treatment groups and labeled the group indicator variable as a covariate. We included the group covariate in all of our parametric and semi-parametric models. Our aim was to see if grouping has any impact on the model. We also wanted to see which method would provide the best predictive estimation with this improved confidence interval calculation method. Our overall aim is to provide a guideline on how basic survival data should be analyzed.
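As a hedged illustration of how coverage probabilities can be obtained through simulation, the sketch below repeatedly generates censored data, computes a 95% interval for S(t*) at a fixed horizon, and records how often the true value is covered. The exponential data-generating process, sample size, and censoring rate are assumptions chosen for illustration, not the settings used in this paper, and the km_at_horizon helper is reused from the earlier sketch.

```python
import numpy as np

rng = np.random.default_rng(2020)

def coverage_at_horizon(n_sim=2000, n=100, rate=0.08, cens=0.05, t_star=12.0):
    """Monte Carlo coverage of a 95% CI for S(t*) under exponential data.

    Each replicate draws exponential event and censoring times, evaluates
    the KM estimate and CI at t* with km_at_horizon (defined above), and
    checks whether the interval covers the truth exp(-rate * t*).
    """
    true_s = np.exp(-rate * t_star)
    hits = 0
    for _ in range(n_sim):
        t_event = rng.exponential(1.0 / rate, size=n)
        t_cens = rng.exponential(1.0 / cens, size=n)
        times = np.minimum(t_event, t_cens)
        status = (t_event <= t_cens).astype(int)
        _, (lo, hi) = km_at_horizon(times, status, t_star)
        hits += int(lo <= true_s <= hi)
    return hits / n_sim
```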
"Mathematics"
] |
CAR T Cell Therapy of Non-hematopoietic Malignancies: Detours on the Road to Clinical Success
Chimeric antigen receptor (CAR)-engineered T cells represent a breakthrough in personalized medicine. In this strategy, a patient's own T lymphocytes are genetically reprogrammed to encode a synthetic receptor that binds a tumor antigen, allowing T cells to recognize and kill antigen-expressing cancer cells. As a result of complete and durable responses in individuals who are refractory to standard-of-care therapy, CAR T cells directed against the CD19 protein have been granted United States Food and Drug Administration (FDA) approval as a therapy for pediatric and young adult acute lymphoblastic leukemia and diffuse large B cell lymphoma. Human trials of CAR T cells targeting CD19 or B cell maturation antigen in multiple myeloma have also reported early successes. However, a clear and consistently reproducible demonstration of the clinical efficacy of CAR T cells in the setting of solid tumors has not been reported to date. Here, we review the history and status of CAR T cell therapy for solid tumors, potential T cell-intrinsic determinants of response and resistance as well as extrinsic obstacles to the success of this approach for much more prevalent non-hematopoietic malignancies. In addition, we summarize recent strategies and innovations that aim to augment the potency of CAR T cells in the face of multiple immunosuppressive barriers operative within the solid tumor microenvironment. Advances in the field of CAR T cell biology over the coming years in the areas of safety, reliability and efficacy against non-hematopoietic cancers will ultimately determine how transformative adoptive T cell therapy will be in the broader battle against cancer.
INTRODUCTION
The use of genetically engineered T cells as a form of cancer therapy heralds a new era of synthetic biology and medicine. Within the past few years, clinical trials using chimeric antigen receptor (CAR) T cells to recognize and eliminate hematopoietic malignancies have demonstrated high rates of response as well as durability of remission that are unprecedented in acute lymphoblastic leukemia (ALL) (1-3), chronic lymphocytic leukemia (CLL) (4,5), and refractory B cell lymphomas (6,7). This culminated in the recent United States Food and Drug Administration approvals of CD19-directed CAR T cells for relapsed/refractory pediatric and young adult ALL and diffuse large B cell lymphoma (DLBCL). While CAR T cell therapy is poised to revolutionize the treatment of leukemias and lymphomas, the field awaits a clear demonstration of efficacy against non-hematopoietic malignancies. The key challenges for these immunotherapies are how to: (I) safely enhance the potency and sustain the function of CAR T cells in vivo and (II) develop mechanism-based strategies to increase the resistance of CAR T cells to intrinsic and extrinsic dysfunction. Advances in basic and translational research aimed at improving the safety, consistency and effectiveness of CAR T cells against tumors of non-hematopoietic origin will ultimately determine whether this approach can find wider applications in cancer as well as other diseases.
Adoptive cellular immunotherapy involves expanding T cells from a patient or donor in vitro, followed by reinfusion of tumor-specific lymphocytes as cancer therapy. Transfer of expanded tumor infiltrating lymphocytes (TILs) from a subset of individuals with metastatic melanoma has shown potent antitumor effects (8,9). It is likely that TILs target neoantigens within the broad landscape of mutant peptides encoded by de novo somatic mutations (10)(11)(12)(13)(14). In rare instances, adoptive transfer of autologous T cells targeting antigens encoded by somatically mutated genes has also resulted in clinically meaningful regressions of colon, metastatic bile duct, cervical and breast cancers (15)(16)(17)(18)(19). However, this strategy has little effect on other common epithelial malignancies that have lower mutation rates.
Transfer of genetically-redirected T cells bypasses many of the mechanisms involved in immunological tolerance by the creation of antigen-specific lymphocytes independently of intrinsic tumor immunogenicity that is driven at least in part by a high mutational burden. T cells can be directed to novel tumor antigens by introducing genes encoding new antigen receptors, including natural T cell receptors (TCRs) and CARs. CARs are synthetic molecules that combine the effector functions of T cells with the ability of antibodies to detect pre-defined antigens with a high degree of specificity in a non-major histocompatibility complex (MHC) restricted manner (20). These receptors can therefore recognize intact proteins and do not rely on endogenous antigen processing and presentation. CARs are typically comprised of an extracellular domain for tumor recognition and an intracellular signaling domain that mediates T cell activation [reviewed in (21)(22)(23)(24)]. The antigen-binding function of a CAR is usually conferred by a single chain variable fragment (scFv) containing the variable heavy (V H ) and variable light (V L ) chains of an antibody fused to a peptide linker (20,25,26). This extracellular portion of the receptor is fused to a transmembrane domain followed by intracellular signaling modules. First-generation chimeric receptors bearing CD3ζ alone were not sufficient to elicit proliferation or cytokine production in peripheral T cells (27), which likely explains their failure to consistently expand and persist in some of the earliest clinical trials of CAR T cells (28,29). However, the incorporation of co-stimulatory endodomains into CARs can recapitulate natural co-stimulation (30)(31)(32). We and others have demonstrated remarkable rates of complete and durable remission in patients with CLL (4, 5, 33), ALL (1)(2)(3), and Non-Hodgkin lymphomas (6,7,34) treated with second-generation CD19-directed CARs incorporating 4-1BB or CD28 co-stimulation. Early clinical trials of CAR T cells for the treatment of multiple myeloma have also demonstrated promising results (35)(36)(37). Thus, in the setting of hematopoietic malignancies, CAR T cells are emerging as a powerful therapy with the curative potential of allogeneic stem cell transplantation, but without the acute and chronic toxicity of graft-vs.-host disease and conditioning regimens. In contrast, CAR-modified T cells are less effective than immune checkpoint blockade and in some cases TIL-based immunotherapy in treating patients with solid tumors to date. In this review, we will discuss the history and current status of CAR T cell therapy for non-hematopoietic malignancies, outline intrinsic mechanisms of T cell potency, describe extrinsic barriers operative in the setting of treating solid tumors, and suggest strategies to enhance the effectiveness of this approach for a variety of these incurable cancers.
Initial Clinical Trials of CAR T Cell Therapy in Solid Tumors
In early clinical trials of first-generation CAR T cells for solid tumors, safety and therapeutic efficacy were difficult to determine because of the aforementioned poor in vivo expansion and persistence of the transferred lymphocytes. These studies included patients with advanced epithelial ovarian cancer or metastatic renal cell carcinoma and targeted the folate receptor or carbonic anhydrase IX (CAIX), respectively (28,29). A clinical trial of L1-cell adhesion molecule-specific (CD171) CAR T cells for the treatment of metastatic neuroblastoma demonstrated similar results of short-persisting (1-7 days) CAR T cells in individuals with bulky disease, but significantly longer persistence (42 days) in a single patient with limited tumor burden (38). Later trials of first-generation GD2-targeted CAR T cells administered to children with advanced neuroblastoma were more encouraging, with 3 of 11 patients experiencing complete remission, no substantial toxicity observed and sustained therapeutic benefit reported for several subjects (39,40). Although the results of these trials were encouraging and provided the impetus to incorporate co-stimulatory signaling motifs in addition to CD3ζ, a third-generation CAR specific to the tumor antigen Her2 and integrating CD28, 4-1BB, and CD3ζ signaling moieties resulted in death of a patient with metastatic colon cancer (41). In this case, toxicity was caused by on-target, off-tumor reactivity of the CAR T cells with Her2 on normal lung and/or cardiac tissue (41). This serious adverse event was likely attributed to the infusion of substantially higher numbers of CAR T cells following lymphodepleting chemotherapy compared to most other trials. A second-generation Her2 CAR was also tested in patients with sarcoma without evidence of toxicity (42). Although there were some indications of anti-tumor activity in this trial, T cell persistence was limited, similar to earlier clinical studies.
Recent Clinical Studies of CAR T Cell Therapy in Non-hematopoietic Malignancies
Less dramatic clinical responses have also been observed in recently conducted clinical trials designed for the treatment of solid tumors with CAR T lymphocytes. Although evaluable data are not yet available from many of these studies, there is enough proof-of-concept from successful human studies of CAR T cells in leukemia and lymphoma to establish a concrete platform to treat these other indications. A complete response to CAR T cell therapy of recurrent multifocal glioblastoma was achieved using autologous T cells genetically-redirected to the tumor-associated antigen interleukin-13 receptor alpha 2 (IL13Rα2) (43). Interestingly, multiple intracavitary and intraventricular administrations of IL13Rα2 CAR T cells induced increases in the frequencies and absolute numbers of endogenous immune cells (i.e., CD3 + T cells, CD14 + CD11b + HLA-DR + mature myeloid populations, CD19 + B cells, and few CD11b + CD15 + granulocytes) in association with the elaboration of inflammatory cytokines. This case underscores the possible role of the endogenous immune system in potentiating the anti-tumor activity of engineered CAR T cells and the potential of this approach to safely and dramatically increase quality of life in patients with malignant brain tumors (43).
We have recently generated CARs directed against the epidermal growth factor receptor variant III (EGFRvIII) and used them to gene-engineer glioblastoma multiforme (GBM)-specific T cells. We found that we can redirect GBM patient T cells to target glioma tumors via lentiviral transduction with a CAR recognizing EGFRvIII in vitro, as well as in vivo in murine models (44) and in 10 patients (45) without the systemic toxicity associated with current standard-of-care treatments. In our first-in-human trial of EGFRvIII CAR T cells, we were able to confirm that a single intravenous infusion of these modified lymphocytes resulted in T cell engraftment in the peripheral blood, trafficking to the brain and antigen-directed activity (45). However, we observed that the inhibitory tumor microenvironment ultimately hampers clinical efficacy: following CAR T cell administration, several immunosuppressive factors were upregulated in the tumor environment including programmed death-ligand 1 (PD-L1), tryptophan 2,3-dioxygenase, indoleamine 2,3-dioxygenase, and IL-10. The lack of CAR T cell anti-tumor activity was accompanied by the presence of immunosuppressive regulatory T cells (T REGS ) based on their expression of CD4, CD25, and FoxP3. Furthermore, the heterogeneity of EGFRvIII expression was a clear barrier to ongoing clinical responses in this study (45). Thus, adoptive cell therapies for non-hematopoietic malignancies will need to address how to increase both the potency and persistence of CAR T cells in the face of antigen heterogeneity and a strongly suppressive tumor microenvironment (Figure 1). This clinical report (45) presents several known obstacles to CAR T cell therapy for solid tumors, which are described below in detail.
TUNING CAR T CELL SPECIFICITY AND INTRINSIC FITNESS FOR IMMUNOTHERAPY OF SOLID TUMORS
Tumor Antigen Expression and Heterogeneity
Despite the fact that antigens such as CD19 and B-cell maturation antigen (BCMA) have been successfully targeted by CARs in the setting of hematopoietic cancer, there is an unmet need to identify similarly ideal antigens expressed by solid tumors. A major barrier to the development of CARs for solid tumor indications is, indeed, the identification of tumor antigens that can be targeted safely and effectively [reviewed in (46)]. In an optimal setting, CAR T cells should be directed against a tumor-restricted antigen to avoid on-target, off-tumor reactivity with healthy tissues. The proposed target antigen should be differentially expressed on tumor cells relative to essential normal tissues. In addition, the chimeric receptor must be highly specific for an antigen that is broadly expressed on the majority of cancer cells (46,47). A variety of tumor-specific and tumor-associated antigens that can be targeted using CAR T cell therapy in non-hematopoietic malignancies have been identified (e.g., EGFR/EGFRvIII, IL13Rα2, Her2, CD171, mesothelin (MSLN), folate receptor alpha, GD2, carcinoembryonic antigen (CEA), chondroitin sulfate proteoglycan 4, c-Met, etc.). Antigens that display high constitutive expression that is tumor-restricted (e.g., chondroitin sulfate proteoglycan 4) may permit the application of CAR T cell therapy to higher proportions of patients and reduce the likelihood of tumor escape (48). However, because most tumor-associated antigens are heterogeneously expressed in tumor tissue, the efficacy of CAR T cells is often limited. Thus, combination therapies incorporating CARs that target multiple antigens will likely be required. There is progress in more safely and specifically targeting non-hematopoietic tumors with CAR T cells, either through creating CAR T cells specific for RNA splice variants or tumor-specific glycans (49,50), or by generating CAR T cells that are conditionally specific for solid tumors. The latter is achieved by employing sensing and switching strategies in the tumor microenvironment (51)(52)(53)(54). In addition to selectively replicating in and killing tumor cells directly, oncolytic viruses armed with payloads (e.g., bispecific T cell engagers, cytokines) may further synergize with CAR T cells to overcome tumor heterogeneity, while simultaneously bolstering anti-tumor activity (55, 56) (Figure 2).
CAR T Cell Trafficking to Solid Tumors
Following infusion of CAR T cells targeting an appropriate antigen into patients, these lymphocytes are faced with the immediate obstacle of having to successfully localize to the tumor bed. This process is critically dependent on chemokine receptors expressed by the transferred cells and the chemokine gradient produced by the tumor. This presents a challenge because T cells often do not express the cognate receptors for the chemokines produced by tumors. In addition to this chemokine/chemokine receptor mispairing, tumors produce very small amounts of the chemokines needed for successful trafficking of T cells to the lesion. For example, melanoma cells do not produce sufficient amounts of CXCR3 ligands and this results in inefficient localization of CXCR3 receptor-bearing effector CD8 + T cells to metastatic sites (57). We and others have co-expressed better-matched chemokine receptors with CARs, which resulted in improved trafficking of CAR T cells and enhanced tumor elimination (58,59).
Characteristics of Intrinsic CAR T Cell Potency
Systematic evaluations of patients with hematologic malignancies responding or not responding to CAR T cell therapy have yielded insights into key determinants of T cell potency that may inform treatment of solid tumors. In CLL, CAR T cells that were particularly effective exhibited robust proliferative capacity as well as long-term persistence in vivo. Transcriptomic profiling of patient-derived cell products revealed that CAR T cells from complete-responding patients were enriched in memory-related genes, including IL-6/STAT3 signatures, whereas products from non-responding patients upregulated programs involved in effector T cell differentiation, glycolysis, exhaustion, and apoptosis (33). Unexpectedly, there was no association of typical patient-related (e.g., age, sex, prior therapy) or disease-related (prior therapies, genetic and other risk profile, tumor burden, etc.) factors with likelihood of response. This makes the important point that cell-intrinsic properties are major determinants of success and failure in CAR T cell therapy (Figure 1).
FIGURE 2 | Strategies to improve the safety (e.g., tumor-sensing strategies) as well as to augment the anti-tumor efficacy of CAR T cells are shown. Genetic engineering can be accomplished using viral (e.g., lentiviruses, retroviruses) and non-viral (e.g., CRISPR/Cas9) approaches to endow CAR T cells with gain-of-function or loss-of-function alterations. The overall aim of these approaches is to improve intrinsic T cell fitness and allow these cells to elicit optimal effector activity in the setting of several extrinsic barriers operative within solid tumors, as shown in Figure 1.
Generation of Quality CAR T Cells
The optimal "seed" population of T cells needed for the generation of CAR T cells that can sustain durable responses against cancer is still a matter of debate. One school of thought is that effector CD8 + T cells producing high amounts of interferon-gamma are most effective at eliminating tumors, while other investigators believe that naïve or early memory CD8 + T cells which differentiate and expand at the tumor site are superior for eliciting long-lasting anti-tumor immunity (60)(61)(62). If one assumes a linear model of CD8 + T cell differentiation, naïve T lymphocytes (T N ) are programmed into the earliest identifiable memory T cell stage, stem cell memory (T SCM ). This population is thought to give rise to the successive stages of differentiation: central memory (T CM ), effector memory (T EM ), terminally differentiated effector memory RA (T EMRA ), and effector (T EFF ) cells (63). Many studies have supported the idea that early memory CD8 + T cells generate the most potent CAR T cells against both liquid and solid tumors. For example, CARengineered T SCM cells directed to mesothelin were significantly more effective at regressing established solid tumors compared to T EM and T EFF cells (63). Retrospective profiling of ex vivo CD4 + and CD8 + T cells from CLL patients treated with anti-CD19 CAR T cells revealed that responding and non-responding patients did not differ in their frequencies of T N , T CM , T EM , or T EFF cells at the time of T cell collection. However, responding patients did exhibit a modest increase in T SCM cells compared to non-responders (33). More significantly, unbiased biomarker analysis revealed that the frequency of apheresed CD27 + CD45RO − CD8 + T cells from patients responding to CAR T cell therapy was significantly higher compared to non-responder T cells. Notably, this subpopulation of CD8 + T cells possessed functional characteristics of early memory as well as effector T cells (33).
Based on growing pre-clinical and clinical evidence of less-differentiated cells mediating superior anti-tumor efficacy, there is interest in developing ways to conduct large-scale T cell expansion, while simultaneously preserving the functional features of early-memory T cells. Human T cells undergo a series of profound changes with successive rounds of division in vitro and in vivo. Among these changes are the loss of certain co-stimulatory receptors (e.g., CD28, CD27) and the erosion of telomeres. Depending on the molecular design, costimulatory endodomains from these receptors may or may not be incorporated into the CAR. Therefore, culture systems that can prevent telomere loss or potentiate the maintenance of endogenous co-stimulatory receptor expression could restore proliferative potential to conventional effector T cells and presumably increase the functional lifespan of these cells following re-infusion into patients (64,65). We have recently described a culture system for the production of CAR T cells in 3-5 days, relative to a traditional 9-day process (66). This process allowed us to generate CD19-directed CAR T cells that were less differentiated and, at limited cell doses, significantly more potent against leukemia in an in vivo animal model (66). Alternative approaches for reducing CAR T cell differentiation during in vitro expansion include inhibition of signaling mediators downstream of the IL-2 pathway such as subunits of Glycogen synthase kinase 3β (60), Protein kinase B (AKT) (67), and Phosphoinositide 3-kinase (68). In addition, replacement of IL-2 with other cytokines such as IL-7 and IL-15 that signal through the γ-common chain receptor (69), but regulate survival and homeostatic T cell proliferation independently of TCR stimulation (70-72) may enhance the in vivo expansion and persistence of CAR T cells (73,74). Genetic reprogramming of induced pluripotent stem cells derived from somatic cells could also be used to generate more naïve-like CAR T lymphocytes for adoptive transfer (75). Finally, in a "bedside-to-bench" study, we demonstrated that unintentional disruption of the gene encoding the methylcytosine dioxygenase TET2 resulted in the massive clonal expansion of CAR T cells that were all derived from a single cell. Furthermore, TET2-disrupted lymphocytes exhibited a predominantly T CM phenotype at the peak of the anti-tumor response (76). These findings, along with other recent reports (77)(78)(79)(80)(81), underscore the power of epigenetic modulation in effectively re-programming T lymphocyte fate for the generation of CAR T cells with optimal anti-tumor potency (Figure 2).
SURMOUNTING TUMOR-MEDIATED BARRIERS TO CAR T CELL THERAPY OF NON-HEMATOPOIETIC CANCERS
A major issue to be addressed for improving the efficacy of CAR T cells against non-hematopoietic malignancies is determining how to effectively enhance the persistence and function of these lymphocytes in toxic tumor microenvironments. CAR T cells are vulnerable to both immunological and metabolic checkpoints as well as other suppressive factors present in the tumor bed. In pre-clinical mouse models, both CAR and TCR transgenic T cells cease to function or die shortly after entering the tumor microenvironment (82,83). Although repeated infusions of freshly engineered T cells may help to improve engraftment, this approach is not always clinically feasible. Tumor-imposed extrinsic barriers as well as strategies to overcome several of these hurdles for the generation of efficacious CAR T cells to treat solid cancers are described below.
Overcoming Physical Barriers in Solid Tumors
Unlike liquid tumors, which do not typically possess physical barriers that would prevent their interactions with CAR T cells, many solid tumors have a formidable barricade that renders these masses inaccessible to invasion by immune cells. This landscape includes stromal cells, immune cells, cancer cells and extracellular matrix (ECM) components (i.e., proteins and glycans). Collagens, fibronectin, laminin, hyaluronan, and proteoglycans heavily contribute to the proliferation of fibrous or connective tissue (desmoplasia). The fibrotic tumor stroma of many solid malignancies, including pancreatic, breast and ovarian cancer, is thought to impede effective drug delivery (84)(85)(86) and may also prevent infiltration by CAR T cells (Figure 1). Accordingly, diffusion of CAR T cells into tumor tissue has been shown to be blocked by the ECM; the cells are therefore often trapped (87) and unable to deeply penetrate tumor tissue (88). Desmoplasia combined with high interstitial fluid pressure and rapid tumor cell proliferation also contributes to the collapse of vasculature, which may further impede CAR T cell infiltration from vessels into tumor tissue (89). Tumor vessels may also not possess the receptors necessary for T cell homing and extravasation, including E- and P-selectins, VCAM-1, and ICAM-1 (87). Furthermore, following in vitro culture, CAR T cells often lack normal expression of the enzyme heparanase, which degrades matrix proteoglycans and potentiates extravasation (90).
Administration of collagenases or hyaluronidase into solid tumors has been shown to enhance ECM breakdown, rendering the tumor more penetrable and thus susceptible to drug and cell-based therapies. Collagenase or hyaluronidase treatment has aided in increased antibody diffusion and chemotherapy uptake in pre-clinical in vivo and in vitro models of disease (91)(92)(93)(94). Alternatively, reprogramming of myeloid cells, which naturally traffic and infiltrate into solid tumors, can effect anti-fibrotic activity and ECM breakdown (95). Depletion of ECM-producing cells (e.g., cancer-associated fibroblasts) can also render solid tumors more susceptible to therapy (96). In this regard, targeting stromal fibroblasts with anti-fibroblast activation protein (FAP) CAR T cells significantly stalls the growth of multiple types of solid tumors (97). In addition, administration of CAR T cells engineered to overexpress heparanase leads to partial ECM degradation, enhanced T cell infiltration and anti-tumor activity (90). Although these strategies seem promising, the potential negative impact of tumor ECM depletion should not be overlooked. In some studies, ECM reduction can paradoxically accelerate disease progression (98,99). To avoid this potential negative outcome, direct intracavitary or intratumoral injection relative to intravenous infusion of CAR T cells may circumvent many of the physical barriers described above. In this vein, Klampatsa et al. used intracavitary methods to eliminate mesothelioma cell lines with some success (100), and Adusumilli and colleagues demonstrated that intrapleural administration of CAR T cells was significantly more successful at eliciting antitumor activity than the intravenous route (101).
Targeting the Tumor Vasculature and Immune Stimulatory CAR T Cell Modifications
In addition to tumor antigens, CARs can be targeted to the tumor vasculature in an effort to restrict blood flow and nutrient supplies to the tumor, which impedes malignant growth and simultaneously increases T cell localization (102). A strategy based on regional infusion of IL-12 secreting CAR T cells directed against VEGFR-2 which is expressed on angiogenic endothelial cells resulted in enhanced accumulation of these lymphocytes and tumor regression in multiple pre-clinical models (103). "Armored CARs" or "TRUCKs" (T cells Redirected for Universal Cytokine Killing) delivering other cytokines such as IL-15 (104,105) or IL-18 (106) to the tumor microenvironment have also demonstrated superior anti-tumor activity compared to conventional CAR T cells (Figure 2). Furthermore, echistatin CARs targeting the angiogenic integrin αvβ3, which is commonly expressed on vascular endothelium of solid tumors (107), increased nanoparticle deposition in tumors (108). These findings indicate that the use of vasculature-targeted CAR T cells may be a potential "lead-in" strategy to enhance delivery of drugs or other adoptively transferred immune cells.
Overcoming Cell-Mediated Immunosuppression in the Solid Tumor Microenvironment
Along with physical barriers, the tumor microenvironment is composed of multiple cellular components and molecular factors that can abrogate the elicitation of effective endogenous anti-tumor immune responses. This immunosuppressive milieu can also severely inhibit the effector functions of adoptively transferred CAR T cells. However, CAR T cell hypofunction is tightly dependent on the tumor microenvironment and in some instances removal of engineered T cells from the tumor restores their functional activity (109). This report as well as other studies (110)(111)(112) suggest that favorably altering the toxic tumor microenvironment by directly targeting immunosuppressive cells or engineering T cells to resist tumor-specific inhibitory mechanisms may provide new opportunities to improve CAR T cell function.
Tumor-associated macrophages (TAMs) are an immunosuppressive cell type commonly found in solid tumors, and these cells aid in tumor cell survival and growth. While the phenotype of macrophages is pliable and these cells can be programmed to be either tumor-promoting or tumor-suppressive, macrophage function is ultimately dictated by signals from the surrounding tissue-specific niche (113). The tumor microenvironment often pushes macrophages toward a tumor-promoting phenotype (114), and this aids in angiogenesis, growth, immune evasion and metastasis. Therefore, targeting TAMs may improve the efficacy of CAR T cells against solid tumors. Ruella and colleagues recently devised a strategy to deplete tumor-promoting macrophages with macrophage-targeted CAR T cells. This approach was efficacious in a mouse model of Hodgkin lymphoma and led to the establishment of long-term immunological memory (115).
Myeloid-derived suppressor cells (MDSCs) are another immunosuppressive cell type found in solid tumors that can dampen CAR T cell function. MDSCs express arginase and indoleamine 2,3-dioxygenase, which metabolize amino acids that are essential for effector T cell activation and proliferation (116). Accordingly, Burga et al. demonstrated that depletion of GR1 + cells (targeting immunosuppressive tumor-associated neutrophils and MDSCs) augmented the ability of anti-carcinoembryonic antigen CAR T cells to reduce colorectal cancer liver metastases (117). MDSCs also produce high levels of reactive oxygen species, which may impair the cytotoxic ability and proliferative capacity of CAR T cells (118). To overcome this oxidative stress, CAR T cells have been modified to express the anti-oxidant enzyme catalase, reducing oxidative stress in the local environment, and this modification significantly improves their anti-tumor activity (119).
T REGS are well-documented suppressors of T cell function capable of inhibiting anti-tumor activity through multiple mechanisms, including cell-cell contact inhibition, sequestration of IL-2 and the production of immunosuppressive cytokines such as TGF-β and IL-10 (120). Although these cells promote the growth and metastasis of tumors, they are difficult to directly deplete due to the lack of specificity of targeting agents, and the potential to induce autoimmune diseases when global disruption approaches are used (121). Given the high level of TGF-β produced by T REGS , MDSCs, and tumor cells, blocking TGF-β signaling through overexpression of a dominant-negative TGF-β receptor on adoptively-transferred T cells may improve their anti-tumor potency (122,123). Overexpression of dominant-negative TGF-β receptor II on CAR T cells results in enhanced T cell proliferation, cytokine production, in vivo persistence and ability to eradicate tumors in mouse models of aggressive human prostate cancer (124).
Many types of cells including tumor cells, fibroblasts, endothelial cells and immune cells produce the lipid-signaling molecule prostaglandin E2 (PGE 2 ) by activation of cyclooxygenase (COX)-2 and prostaglandin E synthase. PGE 2 enhances tumor progression by stimulating multiple pathways, including those that mediate angiogenesis and immunosuppression (125). For example, PGE 2 plays a significant role in the suppression of effector T cells and the attraction of T REGS and MDSCs. PGE 2 and adenosine activate protein kinase A (PKA), which then inhibits antigen receptor-triggered T cell activation. PGE 2 is also known to cooperate with adenosine in the dampening of immune responses mediated by T REGS (126). Recently, Newick et al. engineered CAR T cells to produce a small peptide that inhibits the association of PKA with ezrin, thus reducing the negative effects of PKA on TCR activation (127). This PKA inhibitor ameliorated the immunosuppressive actions of both adenosine and PGE 2 , resulting in increased CAR T cell trafficking, tumor cell cytotoxicity, and pro-inflammatory cytokine production (127).
Enhancing the Metabolic Fitness of CAR T Cells
Immune cell function and metabolism are impacted by the solid tumor microenvironment. Glucose utilization is heterogeneous within the tumor and associated with perfusion, with lesser-perfused regions of the tumor displaying higher glucose metabolism (128). Both proliferating tumors and effector T cells responding to antigen challenge rely primarily on aerobic glycolysis to fuel expansion, creating competing demands for metabolites within nutrient-poor regions of the tumor (129). This competition for nutrients, metabolites and oxygen (O 2 ) is thought to impact T cell metabolism, limit T cell-mediated anti-tumor efficacy and contribute to T cell exhaustion and cancer progression (130)(131)(132). Stabilization of HIF-1α drives glucose uptake, induces production of S-2-hydroxyglutarate (S-2HG) and consequential epigenetic remodeling as well as increased expression of IL-2, which potentiates CD8 + T cell mediated anti-tumor activity (133,134). However, under O 2 and glucose limiting conditions, reduction of HIF-1α expression may enhance T cell function (135). In a recent study, CD8 + TILs isolated from clear cell renal cell carcinoma (ccRCC) were shown to exhibit an impaired ability to consume glucose, mitochondrial fragmentation and hyperpolarization, as well as increased production of ROS (136). Because ccRCC develops a unique pathological pseudo-hypoxic response [reviewed in (137)], with increased aerobic glycolysis and vascularization, it is tempting to speculate that the altered tumor microenvironment in ccRCC may have contributed to these observed defects in ccRCC CD8 + TIL metabolism (136). Likewise, hypoxic areas within solid tumors are often negatively correlated with patient survival and thought to promote tumor metastasis and resistance to radiotherapy (138)(139)(140). Another metabolic checkpoint in the tumor microenvironment regulating immune modulation is amino acid limitation (129). For example, degradation of L-arginine by MDSCs in the tumor microenvironment can lead to reduced expression of CD3ζ and impaired T cell responses (141). In contrast, increased levels of arginine shift T cell metabolism to oxidative phosphorylation and increase central memory differentiation (142).
Activation, growth, proliferation, effector and memory function, and return to homeostasis are linked to the metabolic profile of the T cell (131). T cell subsets differently metabolize nutrients and regulation of nutrient availability can influence T cell differentiation as well as fate (129). Naïve T cells are metabolically quiescent and rely on glucose, fatty acids and amino acids as fuel sources for oxidative phosphorylation (143,144).
T CM cells maintain spare respiratory capacity through oxidation of fatty acids in mitochondria which allows for a rapid recall of the memory response upon antigen re-challenge (145, 146). In contrast, effector T cells, like tumor cells, rely on aerobic glycolysis to provide energy, metabolic intermediates for rapid cell growth and NAD + /NADH to maintain redox balance (147); although under metabolically challenging conditions CD8 + TILs can partially preserve effector function by catabolizing fatty acids (135). Glutamine is also essential for effector function (148). After conversion to α-ketoglutarate, glutamine can serve as a TCA intermediate or contribute to the citrate pool. Similarly, altering metabolism can impact T cell phenotype; restraining glycolysis, AKT, and mTOR activity or enhancing STAT3 or Wnt/β catenin signaling can arrest T cell development and retain T CM differentiation, which are associated with enhanced T cell persistence and may promote the efficacy of adoptive cell therapy (60,(149)(150)(151)(152).
Different types of co-stimulatory endodomains incorporated into a CAR can differentially program T cell metabolism and mitochondrial biogenesis (153). This indicates that the fate of CAR T cells toward memory or effector differentiation can be directed, as cells expressing CARs with 4-1BB signaling domains have enhanced mitochondrial biogenesis and fatty acid oxidation, while CARs with CD28 signaling domains have enhanced aerobic glycolysis (i.e., Warburg metabolism) (153). Therefore, in addition to being able to direct CARs to virtually any cell surface structure on tumor cells, we also have the potential to engineer these lymphocytes to be resistant to the tumor microenvironment by specifying their metabolic program. Alternatively, host preconditioning strategies involving the treatment of tumors with HIF blocking agents or metabolic enzymes may represent a promising strategy to limit the metabolic flexibility of tumors as well as the localization of inhibitory immune cells (154). This would allow CAR T cells to function in a more nutrient replete and less suppressive tumor microenvironment.
Engineering CAR T Cell Resistance to Immune Checkpoint Inhibitors
Tumor cells can also directly modulate effector T cell activation by expression of inhibitory signals that block T lymphocyte activation and function, thus preventing immune control of tumor growth (155). In addition to secreting immunosuppressive cytokines, tumor cells or other cells in the tumor microenvironment express a number of proteins on their surface that are capable of inactivating CAR T cells. These include PD-1 ligands, PD-L1 (B7-H1), and PD-L2 (B7-DC), all belonging to the B7 receptor superfamily. Other B7 family members, such as B7-H3 and B7-H4, and the unrelated receptors herpes virus entry mediator (HVEM) and inhibitory receptor Ig-like transcript-3 and -4 (ILT3 and 4) are also abundantly expressed in the solid tumor microenvironment [reviewed in (156)]. Furthermore, by providing a persistent source of antigen while avoiding clearance, tumors potentially promote T cell exhaustion. As discussed above, checkpoint blockade has been a successful approach to sustain T cell function, and blockade of inhibitory receptors such as T-cell membrane protein-3 (TIM-3), lymphocyte-activation protein-3 (LAG-3), T cell Ig and ITIM domain (TIGIT), cytotoxic T lymphocyte-associated antigen 4 (CTLA-4), and programmed death-1 (PD-1) or their cognate ligands are being tested in clinical trials to reverse or prevent exhaustion [reviewed in (47)]. The upregulation of these receptors has been previously reported to abrogate the persistence and activity of the anti-tumor response of CAR T cells (155). Accordingly, John et al. reported that combining anti-Her2 CAR T cells and PD-1 blocking antibodies enhances tumor growth inhibition in association with decreased frequencies of GR1 + CD11b + MDSCs (157). Strategies in which CAR T cells are engineered to secrete immune checkpoint inhibitors such as anti-PD-L1 (110) and anti-PD-1 (158) antibodies or PD-1-blocking single-chain variable fragments (112) possess the advantage of increasing the local delivery of these agents to the tumor microenvironment, while avoiding toxicities associated with systemic checkpoint blockade. Co-expression of a dominant-negative PD-1 receptor with mesothelin-targeted CAR T cells has also been shown to render these cells resistant to PD-1-induced inhibition and to significantly improve their in vivo anti-tumor efficacy following a single administration (155). The Clustered Regularly Interspaced Short Palindromic Repeats (CRISPR)/CRISPR-associated protein 9 (Cas9) system provides a robust and multiplexable genome editing tool that permits knock-out of inhibitory receptors (Figure 2). This system can be used to knock out PD-1 and CTLA-4 on allogeneic universal CAR T cells (159). Finally, it is intriguing to consider the possibility of directing CAR transgenes to specific genomic loci encoding inhibitory receptors using recently developed viral and non-viral technologies (160,161).
CONCLUDING REMARKS
Many pre-clinical studies indicate that adoptive cell transfer therapy with autologous T cells is a powerful approach for the treatment of cancer. In contrast to the recent FDA approvals of CAR T cells in hematologic malignancies, the effectiveness of this approach for a variety of more common non-hematopoietic cancers is much lower. As was underscored in this review, CAR T cells may hold great promise for the treatment of solid tumors; these malignancies have a high unmet medical need and are generally considered incurable with present therapies. However, the achievement of complete and durable remissions for patients with non-hematopoietic cancers will require optimization of CAR T cells in the areas of improving antigen targeting, enhancing T cell trafficking, bolstering intrinsic T cell potency and arming these lymphocytes to do battle in the face of multiple immunosuppressive barriers imposed by the solid tumor microenvironment. Both current and future advances in cellular engineering, site-specific genome editing and synthetic biology will undoubtedly bolster the safety, reliability and efficacy of CAR T cell therapy for a variety of diseases. Thus, while there are currently some detours on the road to clinical success, CAR T cells are on the fast track to becoming a potentially curative modality for many different cancers.
AUTHOR CONTRIBUTIONS
KL, RY, and JF conceptualized, wrote, and edited the manuscript. AB, MD, JM, SL, DD, and BL provided feedback and edited the manuscript.
FUNDING
Publication of this open-access article was supported through internal funding from the Center for Cellular Immunotherapies at the University of Pennsylvania.
"Medicine",
"Biology"
] |
Conditions for equivalence of Statistical Ensembles in Nuclear Multifragmentation
Statistical models based on canonical and grand canonical ensembles are extensively used to study intermediate energy heavy ion collisions. The underlying physical assumptions behind canonical and grand canonical models are fundamentally different, and in principle the two agree only in the thermodynamical limit, when the number of particles becomes infinite. Nevertheless, we show that these models are equivalent in the sense that they predict similar results if certain conditions are met, even for finite nuclei. In particular, the results converge when nuclear multifragmentation leads to the formation of predominantly nucleons and low mass clusters. The conditions under which the equivalence holds are amenable to present day experiments.
I. INTRODUCTION
In the disintegration of a nuclear system formed by the collision of two heavy-ions at intermediate energy, it is assumed that a statistical equilibrium is reached. This facilitates the use of statistical models [1][2][3] in order to obtain the yields of the composites at the freezeout volume. In such models of nuclear disassembly the populations of the different channels are solely decided by their statistical weights in the available phase space. One can use different ensembles (microcanonical, canonical or grand canonical) in order to account for the fragmentation of the nucleus into different channels. The partitioning into available channels can be solved in the canonical model [1] where the number of particles in the nuclear system is finite (as it would be in experiments). Even when the number of particles is fixed one can replace a canonical model by a grand canonical model where the particle number fluctuates but the average number is constrained to a given value [4,5]. Both canonical and grand canonical models have been extensively used to study the physics of intermediate energy heavy ion collisions [1,2,6,7] and results for different observables have been routinely compared to experimental data [8][9][10][11][12].
It is well known that results from different statistical ensembles agree in the thermodynamical limit [5], that is, when the number of particles becomes infinite. For example, for one kind of particle (nucleons) and an arbitrarily large nuclear system (which therefore approximates the thermodynamical limit) [13], it was observed that the results agree with each other under certain conditions. This equivalence is generally known not to be valid for nuclear systems of finite size.
The main result of this work lies in showing that results from the canonical and grand canonical models can agree even for finite nuclei. This equivalence is observed when nuclear multifragmentation leads to the formation of predominantly nucleons and low mass clusters. This condition can be achieved by increasing the temperature, the freeze-out volume of the fragmenting nucleus, or the source size, or by decreasing the asymmetry of the source. In fact, when all four conditions are satisfied one obtains the best agreement between the two models. We have confined our study to the observables and conditions that can be easily accessed by present day experiments.
Specifically we investigate the multiplicity of the fragments leading to charge and mass distributions from the canonical and grand canonical distributions under varying conditions and identify the underlying reasons behind the differences. This led us to identify the conditions under which results from both the models converge. For example by comparing charge distributions of fragments obtained from both models under varying temperature, freeze-out volume, fragmenting source size and asymmetry, it becomes possible to obtain the conditions under which the models give rise to similar results.
II. THEORETICAL FORMALISM
In this section we describe briefly the canonical and the grand canonical models of nuclear multifragmentation. The basic output from the canonical or grand canonical model is the multiplicity of the fragments. This allows one to obtain the charge or the mass distribution of the fragments. By multiplicity we mean the average number of fragments produced for each proton number $Z$ and neutron number $N$. Assuming that the system with $A_0$ nucleons and $Z_0$ protons at temperature $T$ has expanded to a volume higher than the normal nuclear volume, and that thermodynamical (statistical) equilibrium is reached at this freeze-out condition, the partitioning into different composites can be calculated according to the rules of equilibrium statistical mechanics.
In a canonical model [1], the partitioning is done such that all partitions have the correct $A_0$, $Z_0$ (equivalently $N_0$, $Z_0$). The canonical partition function is given by
$$Q_{N_0,Z_0} = \sum \prod_{N,Z} \frac{\omega_{N,Z}^{\,n_{N,Z}}}{n_{N,Z}!} \qquad (1)$$
where the sum is over all possible channels of break-up (the number of such channels is enormous) satisfying $N_0 = \sum N\, n_{N,Z}$ and $Z_0 = \sum Z\, n_{N,Z}$; $\omega_{N,Z}$ is the partition function of the composite with $N$ neutrons and $Z$ protons and $n_{N,Z}$ is its multiplicity. The partition function $Q_{N_0,Z_0}$ is calculated using a recursion relation [1]. From Eq. (1) and the recursion relation, the average number of composites is given by [1]
$$\langle n_{N,Z} \rangle = \omega_{N,Z}\, \frac{Q_{N_0-N,\,Z_0-Z}}{Q_{N_0,Z_0}}.$$
It is necessary to specify which nuclei are included in computing $Q_{N_0,Z_0}$. For $N$, $Z$ we include a ridge along the line of stability. The liquid-drop formula gives neutron and proton drip lines and the results shown here include all nuclei within the boundaries.
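The recursion relation referred to above can be implemented directly. The sketch below assumes the standard form $Q_{n,z} = \frac{1}{n}\sum_{N,Z} N\,\omega_{N,Z}\,Q_{n-N,\,z-Z}$ (with the analogous proton-number form when no neutrons remain); the small omega table is a placeholder for illustration, not the liquid-drop values used in this work.

```python
import numpy as np

def canonical_Q(N0, Z0, omega):
    """Tabulate Q[n, z] via Q_{n,z} = (1/n) sum_{N,Z} N w_{N,Z} Q_{n-N,z-Z}.

    omega maps (N, Z) -> one-fragment partition function w_{N,Z}.
    Rows with n = 0 use the analogous recursion on proton number.
    """
    Q = np.zeros((N0 + 1, Z0 + 1))
    Q[0, 0] = 1.0                      # the empty system
    for n in range(N0 + 1):
        for z in range(Z0 + 1):
            if n == 0 and z == 0:
                continue
            if n > 0:                  # recurse on neutron number
                Q[n, z] = sum(N * w * Q[n - N, z - Z]
                              for (N, Z), w in omega.items()
                              if 0 < N <= n and Z <= z) / n
            else:                      # n == 0: recurse on proton number
                Q[0, z] = sum(Z * w * Q[0, z - Z]
                              for (N, Z), w in omega.items()
                              if N == 0 and 0 < Z <= z) / z
    return Q

def mean_multiplicity(N, Z, N0, Z0, Q, omega):
    """<n_{N,Z}> = w_{N,Z} * Q_{N0-N, Z0-Z} / Q_{N0, Z0} (see text)."""
    return omega[(N, Z)] * Q[N0 - N, Z0 - Z] / Q[N0, Z0]

# placeholder one-fragment weights for n, p, d and alpha only
omega = {(1, 0): 4.0, (0, 1): 4.0, (1, 1): 9.0, (2, 2): 60.0}
Q = canonical_Q(6, 6, omega)
n_alpha = mean_multiplicity(2, 2, 6, 6, Q, omega)
```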
In the grand canonical model [4], if the neutron chemical potential is $\mu_n$ and the proton chemical potential is $\mu_p$, then statistical equilibrium implies [5] that the chemical potential of a composite with $N$ neutrons and $Z$ protons is $\mu_n N + \mu_p Z$. The average number of composites with $N$ neutrons and $Z$ protons is given by [4]
$$\langle n_{N,Z} \rangle_{gc} = e^{\beta(\mu_n N + \mu_p Z)}\, \omega_{N,Z}.$$
The chemical potentials $\mu_n$ and $\mu_p$ are determined by solving the two equations $N_0 = \sum N e^{\beta(\mu_n N + \mu_p Z)}\, \omega_{N,Z}$ and $Z_0 = \sum Z e^{\beta(\mu_n N + \mu_p Z)}\, \omega_{N,Z}$. This amounts to solving for an infinite system, but we emphasize that this infinite system can break up into only certain kinds of species, as are included in the above two equations. We can look upon the sum on $N$ and $Z$ as a sum over $A$ and a sum over $Z$. In principle $A$ goes from 1 to $\infty$ and for a given $A$, $Z$ can go from 0 to $A$. Here for a given $A$ we restrict $Z$ by the same drip lines used for the canonical model.
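In practice the two constraint equations are solved numerically for $(\mu_n, \mu_p)$. A minimal sketch with a generic root finder is given below; the starting guess and the omega table are again placeholders rather than values taken from the paper.

```python
import numpy as np
from scipy.optimize import fsolve

def solve_mu(N0, Z0, beta, omega, guess=(-8.0, -8.0)):
    """Find (mu_n, mu_p) from N0 = sum N w e^{beta(mu_n N + mu_p Z)} and
    the analogous Z0 constraint; omega maps (N, Z) -> w_{N,Z}."""
    def residuals(mu):
        mu_n, mu_p = mu
        boltz = {(N, Z): w * np.exp(beta * (mu_n * N + mu_p * Z))
                 for (N, Z), w in omega.items()}
        return [sum(N * b for (N, Z), b in boltz.items()) - N0,
                sum(Z * b for (N, Z), b in boltz.items()) - Z0]
    mu_n, mu_p = fsolve(residuals, guess)
    return mu_n, mu_p
```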
In both the models, the partition function of a composite having $N$ neutrons and $Z$ protons is a product of two parts: one is due to the translational motion and the other is the intrinsic partition function of the composite:
$$\omega_{N,Z} = \frac{V}{h^3}\,(2\pi m A T)^{3/2} \times z_{N,Z}(\mathrm{int})$$
where $A = N + Z$ is the mass number of the composite, $m$ is the nucleon mass, and $V$ is the volume available for translational motion.
Note that $V$ will be less than $V_f$, the volume to which the system has expanded at break-up. In general, we take $V_f$ to be equal to three to six times the normal nuclear volume. We use $V = V_f - V_0$, where $V_0$ is the normal volume of a nucleus with $Z_0$ protons and $N_0$ neutrons. For nuclei in isolation, the internal partition function is constructed as follows: for mass number $A = 5$ and greater we use the liquid-drop formula for calculating the binding energy, and the contribution of excited states is taken from the Fermi-gas model. The properties of the composites used in this work are listed in detail in [14].
III. RESULTS
We compare the total charge distribution $n_Z = \sum_N n_{N,Z}$ obtained from both ensembles at different temperatures (3.8 MeV, 5 MeV and 8 MeV) from the disassembly of a particular source ($Z_0 = 25$, $A_0 = 60$) at a fixed freeze-out volume $3V_0$ (Fig. 1). The difference in results is maximum at the lowest temperature, 3.8 MeV, where fragmentation is less and the disassembly of the nucleus results in more 'liquid-like' fragments, i.e., higher mass fragments. As one increases the temperature, fragmentation increases, the number of such higher mass fragments decreases (at the expense of the lower mass ones), and the results from the canonical and grand canonical ensembles begin to converge. This is easily seen at the two higher temperatures. At 8 MeV the results from both ensembles are very close to each other, since fragmentation is maximum at this temperature, with nucleons and the lower mass fragments dominating the distribution. The effect of increasing the freeze-out volume (decreasing the density) is equivalent to that of increasing the temperature, and this is seen in Fig. 2. Here we have repeated the same calculation for the same source at T = 5 MeV for three different freeze-out volumes. It is seen that the results from both ensembles agree with each other as one increases the freeze-out volume, when the nucleus fragments more into smaller pieces. A similar effect is also seen if we vary the source asymmetry $y = (N_0 - Z_0)/(N_0 + Z_0)$ keeping the temperature fixed at 5 MeV, the freeze-out volume at $3V_0$ and the source size at $A_0 = 60$. Fig. 3 shows the charge distribution for three nuclei having y = 0.33, 0.17 and 0, respectively. We observe that the difference in results between the two ensembles is maximum when the asymmetry is greatest (Fig. 3(a)) and least for the symmetric nucleus (Fig. 3(c)).
The reason behind the differences is the same as in the case of temperature variation. When the nucleus is more asymmetric, fragmentation (breaking of the nucleus) is less and the fraction of higher mass fragments is greater as compared to the more symmetric case, as will be shown later. This effect is also seen if we keep the temperature, freeze-out volume and asymmetry parameter fixed but increase the source size (mass), as shown in Fig. 4. The difference in results between the two ensembles is maximum when the source size is minimum, as expected, and the results become close to each other for a large nucleus. We can say that the nucleus fragments more and more as one increases the source size (keeping other parameters fixed), and the effect is similar to that of increasing the temperature keeping the source size fixed.
In order to investigate the effect further, we have calculated the (normalized) ratio of higher mass fragments formed to the total number of fragments (total multiplicity). Fragments whose size is more than 0.8 times $A_0$ (more than 80% of the source in size) are considered higher mass fragments, i.e., the ratio is defined as
$$\eta = \frac{\sum_{A > 0.8 A_0} \langle n_{N,Z} \rangle}{\sum_{N,Z} \langle n_{N,Z} \rangle}.$$
This criterion for choosing the higher mass fragments is not very rigid and can be relaxed; we have checked that even if we make the cut 0.75 or 0.85 instead of 0.8, the trend of the results remains the same. We have done this calculation in both the canonical and grand canonical models and the results are similar. We show the results in Fig. 5 from the grand canonical model. In Fig. 5(a) we show the variation of this ratio as a function of temperature (keeping source size, freeze-out volume and asymmetry fixed), and it is seen that the ratio decreases with increase in T. This shows that for a source at lower values of T, the fraction of higher mass fragments formed as a result of fragmentation is greater than at higher T values. We emphasize that the difference in the charge distributions from the canonical and grand canonical ensembles is mainly caused by the presence of the higher mass fragments in the distribution: the smaller the fraction of higher mass fragments, the smaller the deviation between the two ensembles, and this is exactly what we saw in Fig. 1. A similar effect is seen when one plots this ratio (Fig. 5(b)) as a function of $V_f/V_0$ keeping the other parameters fixed. It is clearly seen that with increase in the freeze-out volume, the fraction of higher mass fragments decreases, and this causes the results from the two ensembles to be very close when $V_f$ is maximum, as shown in Fig. 2. We also plot $\eta$ as a function of the asymmetry parameter y of the source, with the source size ($A_0 = 60$), temperature (5 MeV) and freeze-out volume ($3V_0$) kept fixed, and it is seen that the ratio increases with y (Fig. 5(c)). So the smaller the asymmetry of the source, the smaller the number of large fragments, and hence the greater the fragmentation of the nucleus. In this scenario, when the nucleus is more symmetric the results from the two ensembles agree to a much better extent than when it is less symmetric, as seen in Fig. 3. The same effect is seen (Fig. 5(d)) if one increases the source size keeping the other parameters fixed, and we assert that the effect of increasing the source size is similar to that of increasing the temperature or freeze-out volume, or decreasing the asymmetry of the source, as far as convergence between the two ensembles is concerned. What we wish to convey is that the differences in results between the canonical and the grand canonical ensemble arise mainly because of the presence of higher mass fragments in the fragmentation of a nucleus. If the conditions are such that fragmentation is extensive and there are only lower mass clusters, then the results from the two ensembles agree to a much better extent. The same condition is also valid for convergence between the microcanonical and canonical ensembles, where energy plays the role of the extensive variable instead of the total number of particles. The more the nucleus disintegrates, the less will be the fluctuation in energy and the better will be the convergence between the microcanonical and the canonical ensembles.
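As a small illustration, under the same placeholder conventions as the sketches above, the ratio can be computed directly from a table of average multiplicities:

```python
def eta(multiplicities, A0, cut=0.8):
    """Normalized ratio of higher mass fragments to total multiplicity.

    multiplicities maps (N, Z) -> <n_{N,Z}>; a fragment counts as 'higher
    mass' when A = N + Z exceeds cut * A0 (0.8 in the text, relaxable).
    """
    total = sum(multiplicities.values())
    heavy = sum(n for (N, Z), n in multiplicities.items()
                if N + Z > cut * A0)
    return heavy / total
```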
IV. SUMMARY AND CONCLUSION
This Letter analyzes the charge distributions of fragments formed in nuclear multifragmentation in both the canonical and grand canonical versions of the multifragmentation model. Both models are typically used to study experimental data from heavy-ion collisions at intermediate energies. We have shown that results from both models are in agreement for finite nuclei provided the nucleus fragments predominantly into nucleons and low mass clusters. We have seen that this is achieved under certain conditions of temperature, freeze-out volume, source size and source asymmetry. The main message that we wish to convey in this work is that while the canonical and grand canonical models have very different underlying physical assumptions, the results from both models can be in agreement with each other provided the contribution of higher mass fragments in the nuclear disassembly is insignificant. This condition can be achieved by increasing the temperature or freeze-out volume of the fragmenting nucleus, by increasing the source size, or by decreasing the asymmetry of the source. In fact, when all these four conditions are satisfied one obtains the best convergence between the two models. On the other hand, when the temperature and freeze-out volume are low and the nucleus is small and more asymmetric, fragmentation of the nucleus is least; in these cases higher mass fragments dominate the distribution and the results from the two ensembles will be very different. We would like to add that convergence between the microcanonical and the canonical ensemble will also be achieved under similar conditions as those between the canonical and the grand canonical ensembles.
V. ACKNOWLEDGEMENT
We would like to thank Prof. S. Das Gupta (McGill University) for introducing us to this subject.
"Physics"
] |
Hard-Rock Coastal Modelling: Past Practice and Future Prospects in a Changing World
: This paper reviews the history of conceptual and numerical modelling of hard rock coasts (mean annual cliff erosion typically < 1 mm up to 1 cm) and its use in studying coastal evolution in the past and predicting the impact of the changing climate, and especially rising sea level, in the future. Most of the models developed during the last century were concerned with the development and morphology of shore-normal coastal profiles, lacking any sediment cover, in non-tidal environments. Some newer models now consider the plan shape of rock coasts, and models often incorporate elements, such as the tidally controlled expenditure of wave energy within the intertidal zone, beach morphodynamics, weathering, changes in relative sea level, and the role of wave refraction and sediment accumulation. Despite these advances, the lack of field data, combined with the inherent complexity of rock coasts and uncertainty over their age, continue to inhibit attempts to develop more reliable models and to verify their results.
Introduction
Rock coasts serve as: depositories for paleo-environmental evidence; sediment sources for commercially valuable and environmentally sensitive estuaries, marshes, and beaches; tourist destinations, particularly when rocky foreshores are covered by beach sands; and bulwarks that protect increasingly densely populated coastal hinterlands from erosion and flooding [1][2][3]. Researchers need to acquire a better understanding of rock coast dynamics and evolution, in part to predict and mitigate the effects of rising sea level and possibly greater storminess during this century. Nevertheless, we are still poorly equipped to provide definitive answers to fundamental questions concerning ( Figure 1): a) The nature and relative efficacy of the physical, chemical, and biological processes operating on rock coasts and how they are influenced by the climate, wave regime, tidal range, rock structure and mineralogy, and other factors; b) Rates of erosion and how they vary spatially within the sub-, inter-, and supratidal zones, and temporally with changes in intertidal morphology; c) Whether intertidal shore platforms and other elements of rock coasts are essentially contemporary (Holocene), having formed since the sea rose to approximately its present level, or ancient, inherited features formed and modified when sea level permitted coastal processes to expose and reshape the features; and d) The morphodynamic response of beaches with resistant rock foundations (platform-beaches) to rising sea level, increased storminess, and other manifestations of climate change.
Figure 1.
Cliff and intertidal shore platform in Liassic limestones and shales in the Vale of Glamorgan, southern Wales, UK. Pertinent questions might concern the: age of this coast and whether it is partly inherited; nature and efficacy of the formative processes; role of geological conditions; morphodynamics of the pebble and sand platform-beach; and past and potentially future impact of changes in climate and sea level.
Effective management in populated and potentially hazardous coastal regions requires reliable predictive methodologies derived from, and tested against, field data. Slow, generally imperceptible, changes and the lack of datable deposits and suitable dating techniques make it difficult to determine rates and modes of long-term, rock coast development. Modelling is therefore a fundamental and particularly appropriate tool to study changes in rock coasts in the past and to predict likely changes in the future [4,5]. Additionally, modelling can be used to study the relative contribution of different processes suites, including mechanical wave erosion and weathering, and to account for differences in coastal morphology under various geological and environmental conditions. The purpose of this paper is to review the use of models on rock coasts to: study their development over very long periods in the past and to predict their likely development over much shorter periods in the future; identify limitations inherent in representing complex coastal systems with fairly simple numerical expressions; and suggest possible measures that can be taken to allow better models to be developed. The paper is primarily concerned with process-form modelling of shore platforms and other wave eroded surfaces rather than with mass movement process models or statistical and probabilistic models of coastal cliff retreat [6,7]. Also, despite there being a significant body of literature on soft rock coastal models, this review is restricted to hard rock coasts which, although poorly defined, are generally considered to erode fairly slowly (<1 mm up to 1-2 cm, compared with centimetres to metres per year for consolidated clay and other soft rock coasts). The term rock coast is therefore used here to refer to hard rock coasts, unless stated explicitly to the contrary. The distinction between hard and soft rock is arbitrary and often difficult to make, however, particularly where more and less resistant strata alternate in the vertical and horizontal planes.
Rock Coast Models
Most models have concentrated on the sectional shape of rock coasts rather than the plan form, and there has been little attempt to integrate changes occurring in the horizontal and vertical planes. The dominant feedbacks in the two planes are quite different [8]. Whereas the sectional shape is generally assumed to be controlled by the relationship between rates of wave attenuation and the gradient of the bottom, the plan shape is thought to reflect the effect of headland-bay morphology on wave refraction and the protection afforded by accumulating beach material in the bays.
Coastal Profiles
Geologists and other coastal workers were engaged in a debate from about the mid-nineteenth to the early twentieth centuries over the effective maximum depth of marine erosion. A related question, which also had important implications for the development of ideas regarding the long-term evolution of rock coasts, was concerned with the relative roles of marine and subaerial erosion in the formation of wide denudation surfaces [9][10][11][12]. The first planation models were conceptual and based on the assumption that marine erosion, by waves or bottom currents, is effective at depths extending from tens to hundreds of metres below sea level. Some workers opined that surfaces developing well below sea level become more gently sloping as they widen through time, whereas others surmised that they attain equilibrium, maintaining constant width and gradient as they migrate landwards [12][13][14][15] ( Figure 2). By the mid-twentieth century, other qualitative models had introduced the idea that intertidal shore platforms may also attain equilibrium states with constant morphology, due to the effect of platform gradient on rates of wave attenuation and the energy reaching the cliff foot [16][17][18]. Similarly, Focke [19] proposed that as limestone surf ledges in the southern Caribbean become wider, less spray is able to reach and corrode the cliff, eventually producing an equilibrium state.
Simple numerical models, and in a few cases physical models using plaster and cement blocks in wave tanks and flumes [20][21][22], had largely superseded conceptual models by the latter part of the twentieth century. These early numerical models were generally concerned with the two-dimensional evolution of subtidal and intertidal profiles. They were structured around a few variables and were based on the assumption that marine erosion is largely accomplished in the submarine zone, or at the water surface in tide-less seas [23][24][25][26][27]. The most effective erosional processes operate in the intertidal rather than in the subtidal zone, however, and at water surfaces that vary in elevation in the short-term, due to tides and storms, and in the long-term, due to changes in sea level and/or elevation of the land.
Mechanical wave erosion is the result of hydraulic quarrying and abrasion. Quarrying dislodges joint blocks and other rock fragments through the impact of waves, surf, and swash on rock surfaces, shock pressures generated by breaking waves, and air compression in rock cavities. Quarrying is closely associated with the water surface, and abrasive efficacy declines rapidly with depth as the strength of the wave-generated currents declines [1,20,28]. The most effective wave erosional processes therefore operate at or near the waterline which, according to most hydraulic models, also experiences the highest wave-generated pressures [1] (pp. 3-19), [2] (pp. 29-37). This conjunction between wave pressures and erosional processes has important implications for modelling, implying that wave erosion is closely related to the tide. Figure 2. (A) [13]; (B) Johnson [12]; (C) Challinor [15]; (D) examples of model runs (run parameters are listed in the original article) examining the evolution of hard rock coastal profiles with changes in sea level on tectonically stable landmasses; labels 1 to 7 alongside each profile refer to 5.4, 3, 2, 1, 0.5, 0.125, and 0 million years ago, respectively [31]. Tidal duration distributions describe the amount or proportion of time (usually a year) that the water surface occupies each intertidal elevation (Figure 3). The distributions suggest that waves operate most often at the neap high and low tidal levels and that wave action is increasingly concentrated at and between these levels as the tidal range decreases. The tidal duration concept is an important component of a wave erosional model that has been used, in a variety of forms, to study the: development of shore platforms under stable sea level conditions [29]; evolution of shore platforms and erosional continental shelves on stable and tectonically active coasts in the Quaternary and Pliocene [30,31]; formation of subaerial and submarine terraces on tectonically mobile coasts [32][33][34]; and the role of Holocene changes in relative sea level (RSL) on coastal morphology [35,36]. The tidal duration concept has also been used by Matsumoto et al. [37], who developed a numerical model for low tidal range environments.
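A tidal duration distribution of the kind just described is easy to approximate numerically. The following sketch uses a hypothetical two-constituent tide (the amplitudes are invented, not real tidal data); histogramming a year of synthetic water levels produces a distribution that peaks near the neap high and low water levels, as described above.

```python
# Minimal sketch: approximate a tidal duration distribution by histogramming
# a synthetic semidiurnal tide with a spring-neap cycle (M2 + S2 beat).
import numpy as np

t = np.linspace(0.0, 365 * 24, 1_000_000)               # one year, in hours
eta = 1.4 * np.sin(2 * np.pi * t / 12.42) \
    + 0.6 * np.sin(2 * np.pi * t / 12.00)               # water level (m), assumed amplitudes

counts, edges = np.histogram(eta, bins=np.arange(-2.0, 2.05, 0.1))
duration = counts / counts.sum()                         # fraction of the year per 0.1 m band

# The two most-occupied elevation bands sit near the neap high and low water
# levels (about +/-0.8 m here), not at the spring extremes.
top = np.argsort(duration)[-2:]
print(edges[top], duration[top])
```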
Rock coasts are often inhospitable depositional environments, due to such factors as slow and often unsuitable fine-grained sediment production, poor sediment retention on steep or high-elevation intertidal surfaces, exposed, wave-swept headlands, and structural and topographic obstructions to longshore sediment transport. Nevertheless, some rock coasts are partially or completely covered in sediment ranging in size from sand to boulders, and most have small deposits trapped in structural depressions or against upstanding beds of resistant rock [38]. There have been only a few attempts to model the morphodynamics of platform-beaches with rock foundations [39,40], or the development of shore platforms due to the abrasive or protective effect of beach sediments combined with wave quarrying and weathering by tidal wetting and drying [41] (Figure 4). Figure 4. The letters below the beach profiles (a to h), for each of the three sea levels, refer to stages in model calculations to determine whether a beach can develop on the resistant foundation under prevailing conditions.
Beach extent and morphology change during model runs according to the amount and grain size of the available sediment, the relationship between beachface gradient and the gradient of the resistant foundation, and the beach state, which depends in turn on the prevailing wave conditions. MHWS, MHWN, MT, MLWN, and MLWS refer to mean high water springs, mean high water neap, mid-tide, mean low water neap, and mean low water spring tidal levels, respectively. The shaded (yellow) parts of this figure represent beach sediment and α is the gradient of the beachface. See Trenhaile [40] for a detailed explanation of this model.
It is ironic, given the importance traditionally accorded to weathering in the rock coast literature, and the availability of much better data on surface downwearing (erosion in the vertical plane) than on wave-generated backwearing (erosion in the horizontal plane), that weathering has generally been neglected by rock coast modellers. Nevertheless, a few models have incorporated weathering and debris removal in a suite of erosional processes [37,[41][42][43][44], and others have treated it in isolation to determine whether it can produce shore platforms when acting alone [45][46][47]. Several semi-quantitative and numerical models have also been developed to consider episodic notch formation, due to abrasion and weathering, followed by cliff collapse and debris removal [48][49][50][51][52].
Coastal Plan-Shape
Headlands and bays are generally produced by differential erosion of, respectively, more and less resistant rocks (Figure 5). Differences in rock resistance often reflect changes in the type of rock, but they can also be due to, or enhanced by, more subtle variations in rock structure. In addition to inherited inequalities in rock resistance in essentially heterogeneous outcrops, some differences may be due to secondary factors that develop only when a coast has started to develop a crenulated form, including those resulting from changes in the relationship between rock strike and dip and the direction of the refracted waves [1,53]. As crenulated coasts develop, wave refraction increasingly concentrates wave energy on the headlands and dissipates it in the bays. Consequently, several conceptual models have suggested that coasts may trend towards an equilibrium plan shape, in which stronger waves erode the more resistant rocks on the headlands at the same rate as weaker waves erode the less resistant rocks in the bays [12,54,55]. A corollary of these concepts is that bay depth decreases with the distance between the headlands and increases with the difference in the resistance of the rocks in the headlands and bays. Plan-shape models also need to consider the complicating effects of: variations in such factors as rock resistance and cliff height, as coasts migrate landwards; the accumulation and redistribution of coarse-grained sediment, especially in bayhead beaches; and Quaternary changes in RSL. The development of crenulated coasts has important implications for the prediction and mitigation of cliff erosion and the impact of rising sea level. Due to a lack of reliable long-term records of episodic cliff recession, however, we do not know whether coasts trend toward equilibrium plan shapes, or if there has been enough time, in any case, for such states to have developed (Figure 6). Figure 6. In (A), an equilibrium plan shape develops, due to wave refraction, with stronger waves eroding the harder rocks on the headlands at the same rate as weaker waves erode the less resistant rocks in the bay. In (B), the amount and mobility of the sediment determine whether it plays an abrasional (d) or protective role (c). Equilibrium planform amplitude may develop if some erosion can continue in the bays (e), but complete protection in the bays would cause headland amplitude to decrease (f) and eventually allow sediment to be swept around them, thereby initiating a new cycle of erosion (g) [38].
Limber and colleagues have modelled the development of coastal plan shapes, based on the assumption that erosion is enhanced by the occurrence of small amounts of abrasive material and inhibited by the protective effect of large amounts of sediment [56]. Their model, which is primarily concerned with sand-rich coasts, suggests that longshore variations in beach width can account for the formation of crenulated coasts irrespective of any differences in rock resistance. While the decline in beach volume with increasing distance from river mouths and other source areas could account for broad, plan-shape perturbations, however, beach width along fairly regular coasts is unlikely to vary sufficiently, in quantity or frequency, to trigger the development of more intricate headland-bay sequences at larger scales. Conversely, the accumulation of sediment in pre-existing bays in weaker rock outcrops must play an important role in the subsequent development of crenulated coasts.
Limber and Murray [57] found that the cross-shore amplitude of crenulated coasts is: (a) inversely proportional to the sediment supply and the degree of wave energy convergence on the headlands and divergence in the bays; and (b) directly proportional to the alongshore distance between the headlands and the difference in the strength of the rocks in the headlands and the bays. They noted that the degree of crenulation could diminish in time, due to the erosion of the exposed headlands and the protection afforded by sediment to the back of bays. This would eventually allow sediment to escape alongshore (Figure 6f,g). Related modelling has also suggested that the amount of beach sediment can promote or inhibit the development of stacks off headlands [58].
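These tendencies can be summarized as a toy scaling relation. The function below is purely illustrative (the proportionality and its coefficient are assumptions for exposition, not Limber and Murray's published equations), but it encodes the direct and inverse dependencies listed above.

```python
# Toy scaling, assumed for illustration only: cross-shore amplitude of a
# crenulated coast grows with headland spacing and rock-strength contrast,
# and shrinks with sediment supply and wave-energy convergence [57].

def crenulation_amplitude(spacing, strength_contrast, sediment_supply,
                          convergence, k=1.0):
    return k * spacing * strength_contrast / (sediment_supply * convergence)

# Doubling both the headland spacing and the sediment supply leaves the
# amplitude unchanged: the two effects cancel in this toy relation.
print(crenulation_amplitude(1000.0, 2.0, 1.0, 1.5))
print(crenulation_amplitude(2000.0, 2.0, 2.0, 1.5))
```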
Differences in cliff height are essentially irrelevant in fine-grained rocks that produce clays and other materials that are carried offshore in suspension, but they can be important where rocks break down into coarse grains and rock fragments that accumulate at the cliff foot, thereby helping to protect it from wave erosion [16,59]. In such cases, the height of a cliff partly determines the amount of debris that is produced in an erosional event, and the time required to remove it. The effect of variations in cliff height along crenulated coasts may be superimposed upon the effects of wave refraction and differences in rock resistance, or it might also initiate crenulated forms independently in essentially homogeneous rock outcrops. This could occur because of differences in cliff height, and consequently in rates of recession, between the low cliffs around the mouths of drowned river valleys and the higher cliffs cut into the adjacent watersheds.
Changes in Relative Sea Level
The long-term evolution of rock coasts has been driven, in part, by Quaternary changes in sea level, and also, in many areas, by tectonic and glacio-isostatic changes in the elevation of the land. Changes in RSL caused intertidal zones and their associated erosional processes, to migrate landwards and seawards, producing wide continental and island shelves, contemporary shore platforms, and elevated marine terraces. Early attempts to model the evolution of rock coasts with changes in RSL, including the quantitative models of Scheidegger [24,60] and the qualitative approach of King [61], were concerned with their response to hypothetical and simplistic representations of steadily rising and falling sea level. Later modellers were able to adopt more realistic representations of sea level change incorporating deep-sea isotopic and other data on sea level changes during the Quaternary.
Sunamura [27] modelled the development of continental shelves, albeit in a non-tidal sea, during the Holocene transgression, producing a convex upward profile that was asymptotic to present sea level. A much wider range of profiles was created in subsequent models that examined the effect of a variety of Holocene relative sea-level curves characteristic of different glaciated and non-glaciated environments [35,62]. The results suggested that the fall in sea level from its mid-Holocene maximum, 1 to 2 m above its present level, promoted the formation of subhorizontal platforms in Australasia and over much of the Southern Hemisphere, whereas an asymptotic rise in sea level over much of the Northern Hemisphere was more conducive to the development of sloping platforms [35].
Modellers have also considered rock coast evolution over much longer periods of time. Trenhaile (1989) used an earlier model [53] to study the development of continental shelves and wide, subaerial terraces over five glacial-interglacial cycles [49]. In addition to Quaternary changes in sea level, Cinque et al. [63] and Anderson et al. [64] explored the effect of tectonic uplift on the formation of marine terraces. Trenhaile's [29] wave erosional model has been particularly widely applied to study the: effect of high sea levels in marine isotopic stages (MIS) 5 and 7 [65]; Quaternary evolution of shore platforms and continental and insular volcanic island shelves [30,31,33,34]; and the formation of subaerial and submarine terraces during the Quaternary on tectonically mobile coasts [32]. Using a stylized representation of sea level oscillations during the Quaternary, Trenhaile [30] found that falling sea level at the onset and rising sea level at the end of glacial stages truncates and over-steepens intertidal shore platforms. Erosion during the ensuing interglacials, largely at the high tidal level, then modifies the platforms and restores them to a state of quasi-equilibrium with a gradient that is related, in part, to the tidal range. Trenhaile [31] updated this earlier study, using more realistic, and consequently more variable, sea level data to model the development of stable and tectonically active rock coasts over the last 5.4 million years (Figure 2). Among the conclusions were: a) Some older subaerial terraces (above present sea level), especially on steeply sloping, slowly rising landmasses, were eroded or completely eliminated by the development of younger terraces at lower elevations. b) Submarine terraces (below present sea level) were modified by erosion during subsequent periods of rising and falling sea level and are best preserved on rapidly subsiding landmasses where they were quickly carried below, and therefore protected from, later glacial stage sea levels.
c) Prominent terraces formed during glacial, low sea level periods can alternate with those formed during interglacial, high sea level periods in the submarine and subaerial zones of rapidly rising or subsiding landmasses. d) The larger sea level oscillations of the mid-to late Quaternary were more conducive to erosion than the smaller oscillations in the Pliocene and early Quaternary.
Most models concerned with the effect of sea level rise in the future have been directed at soft rock cliffs which, due to higher rates of erosion, generally pose a greater threat to human lives and activities than hard rock coasts. Nevertheless, several models have considered the impact on hard rock coasts. Trenhaile [66] produced a series of predictive equations relating rates of cliff recession during the present century to those during the last. His model suggested that whereas rising sea level will promote faster rates of cliff recession in the future, the effect of increasing storminess may be less important. Young et al. [67] modified the Bruun Rule [68] to model the effect of rising sea level on the Californian coast. This model was based on the premise that the erosion of cliffed coasts fronted by sandy beaches is controlled, in part, by the local sand balance, and consequently by the amount of abrasion or protection that it affords [56,57]. Limber et al. [69] also modelled the effects of rising sea level on the coast of southern California and concluded that erosion rates could more than double with a rise in sea level of 1.5 to 2 m by the end of this century.
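For context, the unmodified Bruun Rule referred to above translates a sea level rise into shoreline retreat via the geometry of the active beach profile. The sketch below implements the classic rule only (not Young et al.'s modification), with illustrative numbers; as noted later in this paper, its assumptions are rarely satisfied on hard rock coasts.

```python
# Classic Bruun Rule: retreat R = S * L / (h + B), for a sea level rise S,
# active-profile cross-shore length L, closure depth h, and berm height B.

def bruun_retreat(S, L, h, B):
    return S * L / (h + B)

# Illustrative values only: a 1.0 m rise over a 500 m wide active profile
# with closure depth 8 m and berm height 2 m.
print(bruun_retreat(1.0, 500.0, 8.0, 2.0))   # -> 50.0 m of retreat
```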
Several models have considered the effect of sea level rise on platform-beaches. In Taborda and Ribeiro's [70] model, beach sand volume and profile shape remain constant as sea level rises, whereas in Trenhaile's [40] model, the response of platform-beaches to rising sea level varies according to the profile shape and gradient of the rock foundation, the amount and grain size of the sediment, and prevailing wave conditions (Figure 7).
Equilibrium
Most models have suggested that, under stable sea level conditions, mechanical wave erosion produces surfaces which attain states of static equilibrium with no subsequent change in profile morphology [23,24,26,27,29,53,71,72]. Modelling under oscillating sea level conditions, representing the period from the Pliocene to the present time, also indicated that wave erosion tended to decline through time on stable landmasses, although there were perturbations, due to differences in the amplitude and other parameters of each sea level oscillation [30,31]. These differences triggered erosional modifications to the profiles, ranging from a few metres to several tens of metres over the last few thousand years. Despite these adjustments, the general shape of the profiles, extending from the present high tidal level to a depth of about −130 m (the lowest sea level in the record), was maintained over the latter part of the middle and late Quaternary ( Figure 2).
Static equilibrium occurs in hard rock models when bottom gradients decrease to the point where the maximum stresses generated by the attenuated waves are lower than the threshold for rock breakdown. This implies that rock resistance thresholds are constant through time, despite the effect of such factors as: weathering; the removal of resistant strata that protect or support less resistant rocks; and possibly the accumulating impact of repeated water hammer and air compression in joints. Although wave erosion could be reactivated periodically, due to these factors, intertidal gradients would eventually become so low that wave stresses were permanently below the thresholds for rock erosion. Nevertheless, some erosion could still take place through weathering and removal of fine-grained debris by weak gravity and infragravity waves. This conclusion for sloping intertidal shore platforms in areas with moderate to high tidal range supports the hypothesis put forth earlier by Dickson et al. [73] for subhorizontal shore platforms in low tidal range environments. It emphasizes the need for weathering to be included in evolutionary models, and implies that erosional thresholds should be represented by time-dependent variables [37,47] rather than by constant values. Figure 7. Examples of simulated platform-beach responses to rising sea level and their relationship to the shape and gradient of the rock foundation, the amount and grain size of the sediment, and the height of the waves (waves 1 to 5 represent increasing height). LT (1, 2, and 3) and HT (1, 2, and 3) represent, respectively, the mean low water spring and mean high water spring tidal levels for present sea level and for two higher sea levels equal to 1/4 and 1/2 of the tidal range above today [40].
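The threshold logic behind this kind of static equilibrium can be illustrated with a deliberately minimal sketch. This is not Trenhaile's published model: the tidal levels, duration weights, attenuation coefficient, and erodibility below are placeholder values; only the behaviour (retreat continues until the attenuated, tidally weighted wave force falls below the rock threshold everywhere) reproduces the mechanism described above.

```python
# Minimal sketch (placeholder parameters, not the published model): the coast
# at each tidal level retreats while the wave force, attenuated across the
# shallow zone below it, exceeds a fixed rock resistance threshold.
import numpy as np

levels = np.array([-1.0, 0.0, 1.0])   # low, mid, and high tidal levels (m)
weights = np.array([0.3, 0.4, 0.3])   # crude tidal duration weights
X = 20.0 * (levels - levels[0])       # coast position at each level (m landward)
F0, k, T, c = 50.0, 0.05, 1.0, 0.5    # wave force, attenuation, threshold, erodibility

for step in range(200_000):
    moved = False
    for j in range(1, len(levels)):   # the lowest level is held fixed here
        d = X[j] - X[j - 1]           # width of the shallow zone crossed by waves
        F = weights[j] * F0 * np.exp(-k * d)
        if F > T + 1e-6:
            X[j] += c * (F - T)       # horizontal retreat at this tidal level
            moved = True
    if not moved:                     # static equilibrium: stresses below threshold
        break

gradient = (levels[-1] - levels[0]) / (X[-1] - X[0])
print(step, X, gradient)              # the profile flattens until erosion stops
```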
Inheritance
Rock coasts often retain morphological and sedimentary vestiges of former climates and sea level [74][75][76]. Because of the lack of dateable deposits and uncertainties over the application and interpretation of cosmogenic nuclide analysis [77], it is difficult to determine whether, or to what degree, a coastal element, such as a shore platform, has been inherited from a period, or periods, when sea level was similar to today's ( Figure 1). Few models have considered the possible role of inheritance on hard rock coasts although, based on the degree of profile change between consecutive interglacials, Trenhaile [30,31] opined that it was important in the development of submarine shelves and intertidal shore platforms during the middle and late Quaternary (Figure 2). Modelling has further suggested that contemporary shore platforms in some places, including in much of the Southern Hemisphere, were partly inherited from the mid-Holocene, when sea level exceeded its present elevation by up to several meters [35].
Modelling Constraints and Limitations
Although models have provided important insights into modes of coastal development, they cannot replicate natural conditions. For example, while efforts to record wave transformation across sloping and subhorizontal shore platforms facilitate better modelling [78][79][80], it is extremely difficult to convert energy dissipation rates into corresponding rates of rock breakdown. Japanese workers have developed models that relate rates of cliff recession to a variety of factors, including wave height at the cliff foot, compressive and impact strength of the rocks, and longitudinal sound-wave velocity in the rock body [81][82][83]. Nevertheless, attempts to measure and quantify the driving and resisting forces that determine rates of cliff erosion are confounded by the myriad factors that operate on inherently complex rock coasts. They include, in addition to primary environmental elements such as rock type, climate, wave regime, and tidal type and range: bores breaking prematurely against upstanding strata in the surf and swash zones; breaker characteristics and stresses; the generation of shock pressures by breaking waves; the gradient of the bottom; rock strength (hardness), bed thickness, and the dip and strike of the rocks in relation to the incoming waves and to the orientation of the cliff face; the amount, grain size, location, and mobility of beach sediment and erosional debris under variable wave and tidal conditions; the susceptibility of the rocks to various types of weathering and bioerosion; the effect of multiple sea level oscillations that differ in amplitude and wavelength; and tectonic and isostatic changes in the elevation of the land. A further uncertainty over the relationship between the driving and resisting forces concerns the degree to which erosion occurs as a result of one or perhaps a few storms generating stresses well above the threshold resistance of the rocks, or the accumulating effect of multiple, lower stress events that contribute, along with weathering, to a gradual reduction in the resistance threshold [77]. To accommodate process uncertainty, several recent studies, albeit largely concerned with soft rock coasts, have applied Monte Carlo simulations and statistical analyses to model output [69,84,85].
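A minimal version of that Monte Carlo strategy is sketched below; the recession relation and the parameter distributions are invented stand-ins, and the point is only the workflow of propagating parameter uncertainty into percentile bounds on model output rather than reporting a single deterministic answer.

```python
# Sketch: Monte Carlo propagation of parameter uncertainty through a toy
# recession relation (both the relation and the distributions are assumptions).
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
wave_force = rng.lognormal(mean=2.0, sigma=0.5, size=n)   # assumed forcing (arb. units)
resistance = rng.normal(loc=5.0, scale=1.0, size=n)       # assumed threshold (arb. units)

recession = np.clip(0.01 * (wave_force - resistance), 0.0, None)  # m/yr, toy model
print(np.percentile(recession, [5, 50, 95]))   # report a range, not one number
```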
Modellers must be cognizant of a number of important factors: a) Tidal variations, including the tidal range and the tidal type, control: the way that wave energy is dissipated in the vertical plane; weathering type and efficacy related to the amount of time that rocks are exposed to the air and immersed in salt water; and the distribution and activities of biological agents [4]. Tidal influences can be represented by the tidal duration distribution or by inputting actual tidal data, although it would be impractical to use the latter option over very long, evolutionary-scale periods. b) Tidal data alone do not adequately represent the distribution of wave energy in the intertidal zone. This is because the largest and most effective waves crossing sloping shore platforms are tidally modulated [79,80,86]. The highest waves also operate under storm surge conditions, when the tidal level can be elevated by up to several metres by wind shear and other weather-induced elements (Figure 3). c) Soft rock models have often been based on semi-empirical equations derived from the consolidated till coasts of the lower Great Lakes of North America. These equations are site specific and emphasize the erosional effect of wave-generated bottom currents on profile development. They are inappropriate for hard rock coasts because bottom currents are generally considered to be too weak to be effective in these environments, and erosion is dominated by processes operating around the still water level. d) Models used to predict the effect of rising sea level and possibly increased storminess in sediment-rich areas must consider changes in beach morphology and volume, and consequently such factors as the degree of cliff exposure and whether beach material is protecting or abrading rock surfaces. The underlying assumptions of the Bruun Rule are rarely satisfied on hard rock coasts, and the rule should not be used to represent the effect of sea level change on beaches with rigid foundations. e) Most models have been concerned exclusively with wave-generated backwearing, but they also need to incorporate the effect of weathering and other factors that operate primarily in the vertical plane. f) It is possible that cryogenic weathering, involving physico-chemical processes that operated during glacial periods, when sea level was much lower than today, played an important role in the development of rock coasts in mid- to high latitudes [48,87,88] (pp. 305-308). Therefore, rates of coastal development may have varied considerably in the past, due to the effect of the changing climate on erosional processes and efficacies. g) The increasing computational and numerical sophistication of rock coast models must not mask a commensurate growth in model assumptions or obfuscate the continuing lack of reliable information on erosion rates and processes. The lack of reliable field data is a crucial problem which has hindered model development and limited models' predictive capabilities, particularly in wave-dominated environments. For example, while we have some data on rates of downwearing by weathering and occasionally by abrasion [89][90][91], we have almost no comparable data on the temporally episodic and spatially sporadic dislodgment of larger rock fragments by wave quarrying [77,92,93,94]. h) Model calibration is hampered by the lack of comparable, long-term field data and by limited numerical modelling of actual field morphology.
Consequently, only limited verification is possible, usually by reference to contemporary erosion rates and morphology in the field. The ability of models to simulate contemporary conditions could be an expression of equifinality, however, concealing the occurrence of marked disparities in the past and their likely existence in the future.
Conclusions
Despite the inevitable presence of unknown coefficients, rock coast models provide a useful means to investigate relationships between various formative influences, although the lack of reliable, long-term field data imposes important constraints. We lack information on rates of erosion by various mechanical, chemical, and biological processes and how their efficacy varies spatially and temporally in the sub- to supratidal zones. More specifically, while micro-erosion meters and other techniques have provided useful data on abrasion and weathering, we have almost no comparable information on wave quarrying. Modelling is also hindered by the frequent lack of dateable deposits and problems with the use of cosmogenic nuclide analysis in many areas, with the result that workers are unsure of the role of inheritance and equilibrium in long-term coastal development. Significant advances in coastal modelling in the future therefore depend in part on parallel advances in the identification and measurement of erosional processes in different environments, and in determining the age of possibly inherited coastal elements.
"Environmental Science",
"Geology"
] |
Acceleration of high charge-state target ions in high-intensity laser interactions with sub-micron targets
We have studied laser acceleration of ions from Si3N4 and Al foils ranging in thickness from 1800 to 8 nm with particular interest in acceleration of ions from the bulk of the target. The study includes results of experiments conducted with the HERCULES laser, with pulse duration 40 fs and intensity 3 × 10^20 W cm^-2, and corresponding two-dimensional particle-in-cell simulations. When the target thickness was reduced, the distribution of ion species heavier than protons transitioned from being dominated by carbon contaminant ions of low ionization states to being dominated by high ionization states of bulk ions (such as Si 12+) and carbon. Targets in the range 50-150 nm yielded dramatically greater particle number and higher ion maximum energy for these high ionization states compared to thicker targets typifying the Target Normal Sheath Acceleration (TNSA) regime. The high charge states persisted for the thinnest targets, but the accelerated particle numbers decreased for targets 35 nm and thinner. This transition to an enhanced ion TNSA regime, which more efficiently generates ion beams from the bulk target material, is also seen in the simulations.
Introduction
Among the prime interests in studying relativistic laser-plasma interactions are compact beam sources of high-quality energetic ions having desirable parameters, namely energy extending to tens of MeV nucleon^-1, micrometer-scale source size, directionality, and sub-picosecond source duration. The mechanism for ion acceleration accessible with the previous generation of lasers has been restricted to Target Normal Sheath Acceleration (TNSA) [1,2]. TNSA takes place when a foil is irradiated by a laser intense enough to produce 'hot' electrons with enough energy to transit the target. These hot electrons concentrate opposite the irradiated side of the target, forming a sheath and associated field that can reach strengths of order TV m^-1, causing atoms located in the substrate and in surface contaminant layers of hydrocarbons and H2O to become ionized and subsequently accelerated. However, the TNSA mechanism suffers from a slow scaling of maximum ion energy, E_max, with laser intensity, I, of I^(1/3) to I^(1/2) for relativistically intense lasers [3,4], although it has been observed [5] to scale as much as linearly with power for ultrashort pulses at intensity > 10^21 W cm^-2. Furthermore, the sheath field in TNSA is confined to a Debye length in the dimension perpendicular to the target surface and can thus only accelerate those ions initially residing a few nanometers from the initial target/vacuum interface, i.e. H+, C(1-6)+, and O(1-8)+, and the produced ion beams are typically characterized by large divergence and broad continuous energy spread.
As the target is made thinner, the hot electrons reaching the target rear are greater in population and energy and they recirculate [6], enhancing the sheath strength and increasing ion acceleration. For example, the maximum energy and conversion efficiency of proton beams have been observed in experiments [7,8] to scale inversely with thickness for targets on the multi-μm scale. However, the mechanism becomes ineffective if the target becomes thin enough that a shock, triggered by any unwanted pre-pulses present in the rising edge of the main laser pulse, breaks out from the rear surface before peak laser intensity [9,10]. Further, for a target nearly as thin as the relativistic skin depth, l_s = √γ c/ω_pe, the laser field itself can reach the target rear surface directly, which could change the effectiveness of TNSA and/or enable other mechanisms. For targets thinner than the non-relativistic skin depth, c/ω_pe, ions undergo Coulomb explosion [11], which also results in large divergence and broad continuous energy spread but greater maximum energy.
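An order-of-magnitude comparison of these skin depths with the target thicknesses studied here is simple to make. In the sketch below, the electron density is an assumed round number for a fully ionized solid and the γ factor uses a standard cycle-averaged estimate; neither value is taken from the paper.

```python
# Order-of-magnitude sketch: plasma skin depths vs. target thickness.
# n_e is an assumed round value for an ionized solid, not a measured one.
import math

c = 2.998e8                                   # speed of light (m/s)
e, me, eps0 = 1.602e-19, 9.109e-31, 8.854e-12
n_e = 1e30                                    # assumed electron density (m^-3)
omega_pe = math.sqrt(n_e * e**2 / (eps0 * me))

a0 = 0.85 * math.sqrt(3e20 / 1e18) * 0.8      # normalized amplitude (I in W/cm^2, lambda in um)
gamma = math.sqrt(1 + a0**2 / 2)              # cycle-averaged, linear polarization

skin = c / omega_pe                           # non-relativistic skin depth
skin_rel = math.sqrt(gamma) * skin            # relativistic skin depth
print(f"a0 ~ {a0:.0f}; skin ~ {skin*1e9:.0f} nm; relativistic ~ {skin_rel*1e9:.0f} nm")
# Both come out comparable to the thinnest (8-35 nm) targets in this study.
```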
For the intermediate transitional region, where the target thickness is comparable to the relativistic skin depth, several mechanisms have been proposed and observed experimentally, including radiation pressure acceleration [12,13], breakout afterburner [14], relativistic transparency [15], directed Coulomb explosion [16], magnetic vortex acceleration [17], and collisionless shock acceleration [18]. These regimes may be accessed using a combination of an ultraintense, ultrahigh-contrast laser and sub-micron thick targets, and are expected to give superior performance in terms of energy scaling, laser-ion conversion efficiency and narrow-band energy spectrum. Additionally, these mechanisms hold promise for accelerating the bulk of the target, thereby extending the options for ion beam constituents to any solid that can be made thin enough. While the boundaries and capabilities of these mechanisms have been explored for protons and carbon ions [19][20][21][22], few experimental works with laser intensity > 10^20 W cm^-2 have focused on the acceleration of heavier ions (atomic number Z > 6), because these ion species are less prevalent than the protons. The notable exceptions are an early investigation with the 400 J VULCAN petawatt laser [23] in which 56Fe ions were accelerated to 10 MeV nucleon^-1, a recent paper showing acceleration of Al 11+ ions from aluminum targets with a peaked spectrum, narrow charge state distribution and fluence comparable to that of the protons [24] using an 80 J laser, and acceleration of Au to intermediate charge states using a similar but lower intensity laser compared to this work [25].
In TNSA of contaminant ions, the species that are accelerated to high energy are typically those ionized to a fully or nearly fully stripped state, producing few charge states in the beam, e.g. H+, C(5-6)+, and O(7-8)+. However, acceleration of high Z target ions has an additional layer of complexity due to field and collisional ionization, which lead to space- and time-dependent charge state distributions. Efficient acceleration relies on producing highly charged species overlapped in space with the strongest accelerating field; it has been shown that matching the target material with a given laser intensity is key to high energy acceleration for this reason [26]. This additional layer of complexity means the best parameters for accelerating protons are not necessarily best for accelerating high Z ions. Therefore we have conducted experiments and particle-in-cell simulations of ultraintense laser-solid interactions in the transitional regime, with particular attention paid to the acceleration of ions from the target bulk material.
In this work, we show that laser contrast and target thickness, which are well-known key parameters in laser acceleration of protons, are even more critical in the acceleration of ions from the bulk of the target. The production of multi-MeV ions from the target bulk material was markedly increased for thickness <200 nm. We established an optimum target thickness range of 50-150 nm for the acceleration of contaminant and bulk ions including a Si beam of predominantly Si 12+ charge state.
Experimental setup
The experiments were carried out using the HERCULES Ti:sapphire laser system [27] with pulse duration 40 fs full-width at half maximum (FWHM), peak intensity 3 × 10^20 W cm^-2, and energy contrast of 10^-6 between the main pulse and the amplified spontaneous emission (ASE) on the nanosecond timescale. Two parallel plasma mirrors enhanced the energy contrast before the arrival time of the main pulse by an additional factor of ~3 × 10^-4, as reported previously [28]. The laser focal spot was optimized using a wavefront flattening routine and a deformable mirror immediately preceding the f/3 off-axis paraboloid. Compared to previous experiments with the same laser and a similar setup [28,29], the focal spot size, 4.1 μm FWHM, was 3.4× larger and the intensity was lower by an order of magnitude. It was hypothesized that the reduced transverse gradients would prevent target breakup and acceleration disruption. The linearly polarized laser was incident normal to flat targets of Si3N4 with thickness 8, 15, 35, 50, 150, 200, or 500 nm, or Al with thickness 200, 500, 800, or 1800 nm.
The laser contrast on each shot was monitored in two ways. First, a fast photodiode (250 ps rise-time) and 2 GHz oscilloscope recorded the pulse shape transmitted through the first turning mirror shown in figure 1, preceding the plasma mirrors. Example traces are shown in figure 2(a). Second, the mid-field of the laser was monitored downstream of the plasma mirrors by partially focusing the portion transmitted through a turning mirror. Shots with no measurable prepulse and smooth mid-field (see figure 2(c)) were presumed to be free of prepulses capable of target damage on the nanosecond timescale. For some shots the mid-field was uneven with spatial modulations (see figure 2(b)). When a measurable prepulse on the photodiode was also observed, this was presumed to be due to early breakdown of the plasma mirrors, and other times it may have been due to stochastic damage on the plasma mirror from neighboring shots. These shots were rare and more likely to occur after several hours of laser operation due to laser drift issues. The analyzed data presented in this work do not include these shots. A Thomson parabola (TP) ion spectrometer with detector arrangement consisting of a microchannel plate, scintillator, and optical CCD camera was aligned to the target rear normal to record the spectra of ion species with distinct charge:mass ratios (q/m).
Ion acceleration measurements from sub-micron targets
An example raw datum from the TP is shown in figure 3 for a Si3N4 target with thickness 150 nm. The bright quadrant at the top-right shows the straight-through or infinite energy point, and energy spectra for different q/m ions fan out in parabolic traces to the lower left. The top spectrum is that of protons, H+, followed by 12C6+ or other fully stripped ions with equal q/m ratio. Ions which are not fully stripped fall below this trace. Signals from H+ and 12C(1-6)+ were prevalent in all shots, as no effort was made to remove contaminants from the target in situ. Several more spectra are visible from 28Si and 14N, which overlap with the even Si traces. The circular aperture defining the lowest detectable energy is due to the size of the circular microchannel plate. The maximum energy observed on the TP is not necessarily a hard cut-off, just the maximum energy before the signal cannot be distinguished from noise. Detection of ions was strongly dependent upon laser contrast. Figures 4(a), (b) and (d), (e) compare the TP data for two different target types when prepulse was detected (poor contrast as defined above) or was below the detectable level. When contrast was poor, proton maximum energy, ion charge and number of charge states were all reduced substantially. These outcomes may be the result of prepulse directly disrupting the target or triggering the plasma mirrors early, leading to degradation of the main pulse focus on target.
The ion beam charge and species distribution were strongly dependent upon target thickness. As shown in figure 4(c), relatively thick Al (1800 nm) samples resulted in only protons and very low charge states of C contaminants. As the target was changed to thinner Al (d) and even thinner Si3N4 (e) and (f), the H+ signal first increased down to 50 nm then stabilized, while the signals from Si and N increased.
Contaminant species
Among Si3N4 targets with thickness between 8 and 500 nm, the proton and C contaminant ion beam parameters were maximum for foil thickness 35-50 nm. Proton and carbon species traces from the TP data for individual trials at various thicknesses are plotted in figure 5. The maximum detected energy was greatest for target thickness of 35 nm, with 4 MeV for protons. C ion energy peaked for 50 nm thickness: 1.2 MeV nucleon^-1 for C 6+ and 0.7 MeV nucleon^-1 for C 4+. The total contaminant ion energy and particle numbers were greater than those from thicker Al foils; energy was greatest for 50 nm targets and particle number decreased sharply as thickness was reduced below 50 nm.
High charge-state ions from the target bulk
For ions from the bulk, Si q+ and N q+, where q is any charge state, an optimal target thickness was also observed. The signals from intermediate charge states, such as Si 7+, were about the same for thickness 500-50 nm and diminished for 35 and 8 nm, as shown in figure 6(a). Si 6+, which does not overlap with any carbon or oxygen states, displayed the same trend. The higher charge states, however, were significantly less abundant and energetic for the thickest targets, as shown in figure 6(b). The higher charge states have a higher minimum detected energy (due to the circular detector shape) and higher maximum energy due to their higher q/m ratio. The maximum energies were 0.3 MeV nucleon^-1 for Si 7+ and 0.8 MeV nucleon^-1 for Si 11+, which are slightly lower than the values reported above for C 4+ and C 6+, which have slightly higher q/m ratios. Note that the Si 11+ charge state has been selected for presentation in figure 6 because it has a q/m ratio distinct from any nitrogen, carbon, or oxygen state. The experimental data show that bulk ions such as Si q+ and N q+ can be accelerated if the target is sufficiently thin.
Species distributions as derived from the TP data analysis are shown in figure 7. Trace positions for each species of interest were determined by manually adjusting parabolas for each species and using the same position for all shots. The trace data, which have dimensions of counts MeV^-1 steradian^-1, were integrated over energy between the lower cutoff and a chosen maximum energy. The fractional signal plotted in figure 7 is the fraction of a particular trace's integrated signal divided by the sum over all traces of the same element. In most cases the distributions were indistinguishable whether the upper integral bound was chosen to be 5 MeV or infinite energy.
As target thickness was decreased, the distribution of charge states shifted from low to high for both bulk (Si) and contaminant (C) ions. As shown in the left column of figure 7, for 500 nm thickness the distributions for silicon and carbon were nearly symmetrically centered on charge states 10+ and 4+, respectively. For 35 nm thickness, only the silicon states Si(10-12)+ were present with >10% of the fractional signal, and the Si 12+ state (He-like) dominated. Si 13+ was not observed.
The reason for this single dominant charge state is the large step in ionization potential between the Si12+ and Si13+ states, causing a bottleneck in the ionization progression. Table 1 lists the ionization potentials [30] and the laser field and intensity required to reach each charge state of silicon through above-threshold field ionization [31]. The intensity of the laser, the strongest field present in the system and corresponding to an electric field of 46 MV μm⁻¹, is much greater than the field-ionization threshold intensity for reaching Si12+ (2 × 10¹⁸ W cm⁻²), but far below the threshold for Si13+ (8 × 10²⁰ W cm⁻²). Therefore, on the front side of the target, generation of Si12+ is expected to be widespread. The TNSA field on the rear side of the target has a maximum electric field strength of 4.8 MV μm⁻¹ based on the maximum proton energy according to estimates with a plasma expansion model [2,32]; the maximum energies of C4+, C6+, Si7+, and Si11+ indicate that they experienced maximum longitudinal fields in the range 1.0-2.2 MV μm⁻¹. With these field strengths on the rear side, Si12+ can just barely be produced. However, the 35 nm targets are thinner than the relativistic skin depth, so evanescent transmission of the laser can increase the number of high charge states born at the back of the target, in position to be accelerated by the sheath. For thick targets all charge states of carbon were present, but the C6+ signal increased as thickness was reduced. In combination with this bottleneck effect, ions ionized to the highest charge state receive the strongest acceleration from the fields and are therefore more likely to reach the detector. For silicon this leads to dominance of the He-like state. Carbon can be fully stripped with a threshold intensity of 6 × 10¹⁸ W cm⁻², and the states C(5-6)+ are prevalent in the data.
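The thresholds quoted above can be reproduced with the standard barrier-suppression estimate for above-threshold field ionization, I ≈ 4 × 10⁹ (Ip/eV)⁴/Z² W cm⁻², where Ip is the ionization potential of the state being stripped and Z the charge state produced. A minimal sketch follows; this is the common textbook estimate and not necessarily the exact expression used in [31]:

```python
def bsi_threshold_intensity(ip_eV, z_final):
    """Barrier-suppression field-ionization threshold intensity in W/cm^2.

    ip_eV:   ionization potential of the state being stripped (eV)
    z_final: charge state produced by the ionization event
    """
    return 4e9 * ip_eV**4 / z_final**2

# NIST ionization potentials for silicon reproduce the thresholds in the text:
print(f"Si12+: {bsi_threshold_intensity(523.4, 12):.1e} W/cm^2")   # ~2e18
print(f"Si13+: {bsi_threshold_intensity(2437.6, 13):.1e} W/cm^2")  # ~8e20
```

The large jump between the two thresholds reflects the closed K-shell of the He-like Si12+ state, which is what produces the ionization bottleneck.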
Particle-in-cell simulations
Simulations of the short pulse interaction were conducted with a two-dimensional particle-in-cell (2D PIC) code written at the Naval Research Laboratory [33]. A laser with parameters chosen to match the HERCULES experiments (40 fs, 3.0 × 10²⁰ W cm⁻², 4.1 μm FWHM spot, 0.8 μm wavelength) was injected into a 100 × 128 μm (length × width) box and incident normally onto a 3.2 g cm⁻³ slab of Si3N4. The target thickness was varied from 10-500 nm, and 5 nm thick layers of CH contaminant were added directly on both the front and rear sides of all targets. The cells were 20 nm squares with 38/50/42 particles per cell for species Si/N/C.
In order to correctly predict the ion charge distribution, an ionization dynamics model was added to the PIC code, which works as follows. At the beginning of the simulations, all computational particles representing the various species of the target (Si, N, etc) are initialized with charge +1. A corresponding number of electron computational particles is added to conserve quasineutrality. During the simulations, the charge of each species except for hydrogen is dynamically incremented due to tunneling and collisional ionization. Tunneling ionization is modeled using the Ammosov-Delone-Krainov ionization rate equation [31,34]. It is applied for each ion using the electric field strength at the location of the (ion) computational particle. The collisional ionization rates are calculated using cell-averaged electron density, energy, and velocity, and an ionization cross section based on the Lotz formula [35]. The latter has the advantage of being universal and computationally efficient, although the degree of accuracy may vary depending on the ion charge state and atomic number [36].
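A schematic of how such rates are typically turned into discrete ionization events (the Monte Carlo test described in the next paragraph) might look as follows; the NRL code is not public, so the function and variable names here are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def ionization_step(charge, rate_tunnel, rate_coll, dt):
    """One Monte Carlo ionization test for an ion computational particle.

    charge:      current charge state of the particle
    rate_tunnel: ADK tunneling rate at the particle position (1/s)
    rate_coll:   cell-averaged collisional (Lotz) ionization rate (1/s)
    dt:          PIC time step (s)
    Returns (new_charge, electron_born): electron_born flags that a new
    electron computational particle must be added at the ion's location.
    """
    p_ionize = 1.0 - np.exp(-(rate_tunnel + rate_coll) * dt)  # P(>=1 event in dt)
    if rng.random() < p_ionize:
        return charge + 1, True
    return charge, False
```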
Once computed, the tunneling and collisional ionization rates are tested for 'ionization events' for every computational particle at every time step using a standard Monte Carlo scheme [36,37]. If the test is positive, i.e. a new ionization event occurs, the ion charge is incremented and a new electron computational particle is added at the location of the ion. Following this procedure, every computational particle acquires its own charge, which evolves in time. By counting the number of charges for each species, the charge distribution function is built. Figures 8(a) and (b) show the time-histories of laser intensity and energy in the system for a 50 nm Si3N4 target. About 80% of the laser energy is reflected and eventually leaves the domain. This value is high because there is no assumed preplasma and this 50 nm thick target remains opaque throughout the interaction. The energy that gets absorbed is given to particles, most rapidly to electrons, which peak at about 10% of the laser energy at time t = 60 fs. The electron energy then decreases as energy is given to ions. Protons gain energy most rapidly, while the other ions gain energy later and more slowly. Carbon and nitrogen gain energy concurrently, and Si ions gain energy least rapidly. This behavior is consistent with sheath acceleration, in which the highest q/m ratio species lead the expansion and shield trailing species, reducing the field they experience. Figures 8(d)-(f) plot the spectra of electrons 40 fs into the simulation (when the intensity on target is greatest) and four different ion species at the end of the simulations (240 fs). In total, protons are the most numerous single species in the forward direction (2.3 × 10¹¹ ions sr⁻¹), but the target ions Siq+ (1.8 × 10¹¹ ions sr⁻¹) and Nq+ (3.3 × 10¹¹ ions sr⁻¹) combined have higher charge and total energy than the contaminants p+ and Cq+ (1.4 × 10¹¹ ions sr⁻¹). The charge state distributions of Si and C ions traveling within 10° of the forward target normal direction for three target thicknesses are shown in figure 9. As in the experiment, the degree of ionization increases as target thickness is reduced for both bulk target (Si and N) and contaminant (O or C) ions. For the thick target simulation, the Si charge states are broadly distributed while the C ions have mostly reached He-like to fully stripped states. As the target thickness decreases to 150 nm, the Si distribution shifts slightly higher, and then for 35 nm thickness it becomes dominated by the He-like state. The charge states Si10+ and Si14+ disappear from the forward direction. The fractional populations of C(4-5)+ ions decrease and the fraction reaching full ionization increases to nearly 90% for both the 150 and 35 nm thickness cases. This behavior is in agreement with the changes observed experimentally in figure 7, whereby the Si ionization is bottlenecked at the He-like state and the C reaches full ionization.
In the simulations of a 150 nm target, Si12+ ions from the front side do not reach the rear of the target. In the simulation of a 35 nm target, the front of the target is observed to be pushed by hole-boring [38,39]. Si ions ionized and accelerated by the laser are pushed forward and are able to reach the TNSA field at the rear. Because these ions felt the laser field directly, many of them (∼90% of those within the focal spot) attain the maximum ion charge allowed by field ionization (Si12+), and by reaching the TNSA sheath they are able to be accelerated directionally toward the detector. A secondary effect can also be seen in the simulation results: there is a small fraction of Si(13-14)+ ions present for the 500 and 150 nm cases, whereas none were observed experimentally. As discussed earlier, these states cannot be produced by field ionization. In the simulation they arise from collisional ionization within the target, taking place after the laser has been reflected. When collisional ionization was disabled, the Si(13-14)+ states disappeared from the forward direction. For thinner targets, there are fewer collisions in the target and therefore fewer chances for Si12+ ions to be further ionized. This leads to further narrowing of the charge state distribution for the thinnest target.
Conclusions
The ion behaviors are similar to previously discovered trends and expectations from theoretical treatments of the TNSA mechanism for protons. For example, the maximum energy and conversion efficiency of proton beams have been observed in many experiments [7,8,40] to scale inversely with thickness for targets on the multi-μm scale and to reverse sharply below some optimum thickness. For typical experiments this reversal is due to a shock wave from the nanosecond-scale ASE prepulse that can transit through a multi-μm target prior to the main pulse arrival and disrupt the rear target surface [9], reducing the sheath strength and ultimately reducing the signal and maximum energy of any sheath-accelerated ions. For experiments believed to be free of ASE (e.g. [40] and this work), the optimal thickness can be expected to be sub-μm.
In our work targets had thicknesses of the order of the relativistic skin depth and thinner, so the optimal thickness was determined by the tradeoff between maximum sheath strength and laser transmission [41]. For heavy ions, this optimum is convolved with the trend of increasing achievable ionization state with thinner targets. The electron density in Si3N4 is 1000× the classical critical density. The interaction volume, which is a flat disk for these thin targets, would have to expand in thickness to several microns in order to become transparent; such expansion cannot occur on the sub-picosecond timescale. However, the relativistic skin depth is 45 nm for the Si3N4 targets at the intensity in the experiment. For targets thinner than this, the intense laser can deliver a strong electric field that interacts throughout the bulk of the target, sustaining the ionization process, but for the thinnest targets (8 nm) laser energy is wasted. This explains the observed trends (1) that increased charge state continued down to the thinnest target in both experiment and simulation and (2) that the maximum energy and particle number decreased by as much as factors of 2 and 10, respectively, for both target and contaminant species below 50 nm in the experiment (see figures 5(a) and 6).
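For orientation, the relativistic skin depth can be estimated from the normalized laser amplitude a0. Conventions for the relativistic correction vary in the literature; the sketch below uses one common choice, ℓs ≈ γ c/ωp with the peak-field γ, which lands near the ~45 nm quoted in the text but should be read as an order-of-magnitude estimate only:

```python
import numpy as np

def relativistic_skin_depth_nm(intensity_W_cm2, wavelength_um, ne_over_nc):
    """Rough relativistic skin depth estimate; conventions for gamma vary."""
    # Normalized vector potential for linear polarization
    a0 = 0.85 * np.sqrt(intensity_W_cm2 / 1e18) * wavelength_um
    gamma = np.sqrt(1.0 + a0**2)  # peak-field Lorentz factor (one convention)
    c_over_wp_nm = wavelength_um * 1e3 / (2.0 * np.pi * np.sqrt(ne_over_nc))
    return gamma * c_over_wp_nm

# 3e20 W/cm^2, 0.8 um light, n_e = 1000 n_c  ->  ~48 nm (text quotes ~45 nm)
print(relativistic_skin_depth_nm(3e20, 0.8, 1000))
```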
Regarding the broader charge distribution observed in the experimental data, we surmise that the Gaussian distribution of laser intensity in the transverse direction prescribed in the simulations leads to a sharp focal spot having exclusively Si12+ and just a few ions with lower charges in the fringes of the focal spot. That is why in the simulations we observe mostly Si12+ and some Si(10-11)+. The experimental profile had a more complex shape in the transverse direction, including two partial Airy rings with larger annular area than the circular main spot but much lower intensity (see figure 1 inset). Therefore, it is fitting that in the experiment more ions with lower charges were observed than in the simulation. The species Si(13-14)+ could only be produced by Si12+ ions from the front side undergoing collisional ionization while traveling in the forward direction in the target. These species were detected in the simulations only when collisional ionization was modeled, and they had low abundance due to the limited thickness available for collisions to occur.
In summary, we have shown that laser contrast and target thickness are critical parameters in the acceleration of ions from the bulk of modest-Z targets. While these are well known to be critical in proton acceleration, acceleration of target ions also depends on matching the target material to the field strength of the laser and rear sheath. The production of multi-MeV target ions was markedly increased for thickness <200 nm. The optimum target thickness was 50-150 nm for the acceleration of both contaminant and bulk target ions. In this range, the ion beam particle number and maximum energy were maximized. The optimal range occurs when the target is thin enough to recirculate hot electrons, producing a strong target normal sheath, yet thick enough that laser energy is not wasted by evanescent transmission. The degree of isolation of the C6+ and Si12+ states was, however, greatest for 35 nm thickness, less than ℓhb and ℓs (the hole-boring and skin depths), when the laser can push ions and ionize them all the way to the rear of the target.
"Physics"
] |
The relationship between body mass index, thickness of subcutaneous fat, and the gluteus muscle as the intramuscular injection site
An intramuscular injection (IMI) is an injection given directly into the central area of a specific muscle. Certain medicines need to be administered by the gluteal route to be effective. The aim of this study was to determine the influence of body mass index (BMI), subcutaneous fat, and muscular thickness at the dorsogluteal IMI site among healthy Japanese women. There were 39 healthy female subjects who volunteered and met the criteria. Their ages ranged from the 40s to the 60s (mean 50.82 ± 6.04 years). From the data collected using B-mode ultrasound images of the dorsogluteal site, it was found that the distance from the epidermis to the under-fascia (DEUF) of the gluteus maximus differed between the subjects' right and left buttocks. The distance from the epidermis to the iliac bone (DEI) was significantly greater on the right than on the left buttock. For an adult Japanese woman with a BMI of 21 or more, the DEUF of the gluteus medius was found to be about 30 mm, and the DEI was approximately 50 mm or more. Based on these findings, it is recommended that a needle length of 38 mm (1.5 inches) can be safely used to administer IMIs to the gluteus medius muscle to effectively and efficiently deliver medications through the IMI route.
INTRODUCTION
An intramuscular injection (IMI) is a method of administering medications deep into the central area of specific muscle tissues. This route of administration provides rapid systemic absorption of medications, thus enhancing their effects.
IMI drugs are generally given through the deltoid muscle and/or the gluteal muscles [1-3]. The administration of long-acting injectable medications in the deltoid muscle is viewed by a considerable number of patients as an alternative to injection using the gluteal muscles. Nevertheless, some patients experience increased injection site pain through this application, while others perceive the change as beneficial in terms of practicability [4].
The most common reasons for preferring the deltoid muscle for IMIs instead of other muscles include easier access, enhanced privacy causing less embarrassment due to exposure of body parts, and faster administration with the expectation of less pain than injection through the gluteal route. Moreover, a larger volume of fluid may be the predisposing reason for greater pain when medications are administered through the deltoid muscle. The gluteal muscles are thick enough to permit the injection of larger volumes of fluid, whereas the volume limit for IMI into the deltoid muscle is only about 1 mL [5,6]. Therefore, a long-acting injection is better administered intramuscularly using the gluteal muscles, and the volume, properties of the drug, and its safety and absorption must be considered, including potential damage to tissues such as the subcutaneous layer, blood vessels, or peripheral nerves.
In a previous study, Brahm NC, et al. [7] declared that determining the optimum needle length for administration of intramuscular formulations based on individual patient variables is critical, although this has not been extensively reported in patients receiving specific medications, e.g., haloperidol and fluphenazine decanoate IM administration. Therefore, needle length may play an important role in assuring accurate medication administration through IMI. Nevertheless, Brahm NC et al. stated that when anticipated results of intramuscular antipsychotic medication administration are not realized, practitioners are urged to consider specific patient variables, notably the thickness of adipose tissue, which influences the depth of needle insertion required for accurate muscle delivery of the medication.
In a separate study, Choi DW, et al. [8] assessed optimal needle length for IMI into the gluteal muscles using the simple skinfold thickness method. In this study, 190 healthy adults were recruited and grouped into eight groups according to gender and body mass index (BMI, kg/m²). For each participant, the skinfold thickness of the dorsogluteal and ventrogluteal sites was measured using a caliper, while subcutaneous tissue thickness was acquired through ultrasonic images. The study examined men in the overweight and obese groups who received IMI at the dorsogluteal site, the obese group at the ventrogluteal site, and women of normal weight, overweight, and obesity, who were injected at both IMI sites. The results illustrated that the mean subcutaneous tissue thickness of these men exceeded 1.84 cm. Relative to the skinfold thickness measured by caliper, the optimal intramuscular needle length was 1.4 times in women and 1.0 times in men at the dorsogluteal site, and 1.3 times in women and 0.9 times in men at the ventrogluteal site. They concluded that skinfold thickness is a reliable index to determine optimal needle length with minimal effort prior to IMI. Moreover, Zaybak A, et al. [9] reported on their study, which measured subcutaneous tissue thickness at the dorsogluteal and ventrogluteal sites to determine optimal needle length for dorsogluteal and ventrogluteal IMIs in adults with BMI of more than 24.9 kg/m². They found that problems can arise if drugs designed to be absorbed from muscle are only delivered into subcutaneous tissue. Increasing obesity in all developed and many developing countries makes this an increasing concern. Ultrasound measurements of the subcutaneous tissue were made in overweight, obese, and extremely obese people at the dorsogluteal and ventrogluteal sites with the probe held at a 90-degree angle to the plane of the injection site. Subcutaneous tissue thickness was measured in 119 adults whose BMI was ≥25 kg/m². Mean subcutaneous tissue thickness at the dorsogluteal site was 34.5 mm for overweight adults, 40.2 mm for obese adults, and 51.4 mm for extremely obese adults; at the ventrogluteal site it was 38.2 mm for overweight adults, 43.1 mm for obese adults, and 53.8 mm for extremely obese adults. An injection would not reach the desired muscle in 98% of women and 37% of men at the dorsogluteal site, and in 97% of women and 57% of men at the ventrogluteal site. A needle longer than 1.5 inches should be used in women whose body mass index is more than 24.9 kg/m²; the dorsogluteal site may be used in all overweight and obese men, and the ventrogluteal site in overweight men only.
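For illustration only, the Choi et al. multipliers translate into a simple needle-length estimate; the function below is a sketch of that published rule of thumb with hypothetical inputs, not clinical guidance:

```python
# Optimal needle length as a multiple of caliper skinfold thickness,
# per the Choi et al. [8] ratios quoted above (illustrative only).
MULTIPLIER = {
    ("dorsogluteal", "female"): 1.4,
    ("dorsogluteal", "male"): 1.0,
    ("ventrogluteal", "female"): 1.3,
    ("ventrogluteal", "male"): 0.9,
}

def optimal_needle_length_mm(skinfold_mm, site, sex):
    """Estimate needle length (mm) from skinfold thickness -- not clinical guidance."""
    return MULTIPLIER[(site, sex)] * skinfold_mm

# A hypothetical 30 mm dorsogluteal skinfold in a woman suggests ~42 mm,
# i.e. longer than the 32 mm (23 gauge) needles commonly used in Japan.
print(optimal_needle_length_mm(30.0, "dorsogluteal", "female"))
```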
Chan VO et al. [10] radiologically determined whether or not IMI was truly intramuscular, and they found that the majority of assumed IMIs were actually subcutaneous injections.
Kikuchi K et al. [11,12] reported that the most frequently used assessment method for IMI into the gluteus medius is the "four- and three-way split method," which accounted for 85.5% of all IMIs in Japan. Approximately 90% of nurses use the 23 gauge needle (32 mm) for IMI. Furthermore, it was found that the depth of needle insertion for IMI by clinical nurses was based on their experience in judging whether the needle had reached the muscle, rather than on credible evidence. The thickness of subcutaneous tissues at gluteal IMI sites was measured by an adipometer and by ultrasonography. In their study, the distance from the epidermis to the under-fascia (DEUF) of the gluteus maximus and the DEUF of the gluteus medius were measured, respectively. However, the distance from the epidermis to the iliac bone (DEI) was not measured. Therefore, little is known about the relation between BMI, DEI, and DEUF when administering IMI.
The aim of this study was to determine the relationship between BMI, subcutaneous fat, and DEI among selected women in Japan.
Study Design
The design used was a descriptive study to determine the relationship between BMI, subcutaneous fat, and DEI among selected women in Japan. Ultrasonography was used to collect the data.
Subjects
There were 39 adult healthy female subjects who volunteered and met the criteria. Their height was 160.00 ± 5.48 cm and their weight was 51.30 ± 6.87 kg. Their ages ranged from the 40s to the 60s (mean 50.82 ± 6.04 years). Of these subjects, 38 had right-foot dominance, while 1 had left-foot dominance.
Data Collection
This study was conducted between August 2011 and September 2011. To identify the injection site by the "four- and three-way split," the buttocks area was divided into four imaginary quadrants. The proper injection site was located in the upper outer quadrant [13], at one third of the distance from the iliac crest on an imaginary 45-degree line (see Figure 1).
DEUF of the gluteal muscles and DEI were measured by ultrasonography. All ultrasonographic measurements were performed by an experienced sonographer using a 7.5 MHz linear and convex array transducer and an ultrasound diagnostic system (Hitachi Medical Corporation, Japan). Ultrasound images were made at the dorsogluteal injection site. The gluteus maximus, medius, and minimus muscles are commonly used as regions for IMI. DEUF and DEI measurements were made above and outside a line drawn from the posterior superior iliac spine to the greater trochanter of the femur. The ultrasound probe was held at a right angle to the skin at the gluteal region. These measurements were performed by an experienced female sonographer and reviewed by two sonographers (registered sonographers of the Japan Society of Ultrasound in Medicine) and one physician (registered neurosonographer of the Academy of Neurosonology) for consistency in reading the results of the sonography.
Hypothesis
Three research hypotheses were tested: 1) measurements of the DEUF of the gluteus medius muscle, the DEUF of the gluteus maximus muscle, and the DEI differ significantly with BMI; 2) there is a significant difference between the right and left buttocks for each of the parameters; 3) there is a significant correlation between BMI, DEUF, and DEI.
Data Analysis
The subjects' height and weight measurements were collected from the clinical records. The median BMI was 20.7 kg/m². Collected data were divided into two groups based on the median: Group A (BMI < 21 kg/m²) and Group B (BMI ≥ 21 kg/m²). The Mann-Whitney U-test was used to determine significant differences between the two groups. The Wilcoxon signed-rank test was used to determine significant differences between the right and left DEUF of the gluteal muscles. Spearman's rho correlation coefficient was used to determine the correlation between BMI and the DEI and DEUF used in this study. The significance level was set at p < 0.05 (two-sided test). Statistical analysis of the data was performed using SPSS (Ver. 18.0J).
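The same analysis can be reproduced outside SPSS; the following sketch uses SciPy with small hypothetical measurement arrays in place of the real per-subject ultrasound data:

```python
import numpy as np
from scipy import stats

# Hypothetical values (mm); the real inputs are the per-subject ultrasound data.
dei_right_A = np.array([62.1, 65.4, 70.2, 68.8, 66.0])  # Group A (BMI < 21)
dei_right_B = np.array([71.5, 74.9, 69.8, 78.2, 73.0])  # Group B (BMI >= 21)
dei_right = np.concatenate([dei_right_A, dei_right_B])
dei_left = dei_right - np.array([4.1, 3.0, 5.2, 2.8, 3.9, 4.4, 2.5, 6.0, 3.3, 4.7])
bmi = np.array([19.5, 20.1, 20.8, 19.9, 20.5, 21.3, 22.0, 21.1, 23.4, 21.8])

# Between-group difference (unpaired): Mann-Whitney U-test
print(stats.mannwhitneyu(dei_right_A, dei_right_B, alternative="two-sided"))
# Right vs. left within subjects (paired): Wilcoxon signed-rank test
print(stats.wilcoxon(dei_right, dei_left))
# Monotonic association between BMI and DEI: Spearman's rho
print(stats.spearmanr(bmi, dei_right))
```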
Ethical Considerations
Approval from the Ethics Committee of Tokushima University Hospital was obtained. Both verbal and written informed consent were obtained from the 39 subjects prior to commencement of the study.
RESULTS
The subjects were divided into two groups: Group A (n = 23) and Group B (n = 16). There was a significant difference between Group A (67.46 ± 6.36 mm) and Group B (73.34 ± 8.32 mm) in the DEI of the right gluteal region (p < 0.01). There was also a significant difference between Group A and Group B in the DEUF of the gluteus medius muscle on both sides of the buttocks (right side: p < 0.05, left side: p < 0.01). DEUF measurements of the gluteus medius muscle for Group A were 37.95 ± 6.22 mm (right side) and 36.71 ± 6.74 mm (left side), while for Group B they were 43.25 ± 8.86 mm (right side) and 43.41 ± 9.61 mm (left side). Moreover, there was a significant difference between Group A and Group B in DEUF measurements of the gluteus maximus muscle on the left side (15.45 ± 3.36 vs. 19.74 ± 5.20, p < 0.01). Furthermore, the DEI of the right side was significantly greater than that of the left side (70.29 ± 7.90 vs. 66.39 ± 9.32, p < 0.01) (see Tables 1 and 2).
DISCUSSION
Kikuchi K. et al. [12] declared that in their study there was a strong correlation between BMI and the thickness of subcutaneous tissues. However, the DEI was not measured using ultrasonography. Regarding the first hypothesis, the right-side DEI, both sides of the DEUF of the gluteus medius, and the left side of the DEUF of the gluteus maximus were significantly greater in Group B (BMI ≥ 21 kg/m²) than in Group A (BMI < 21 kg/m²).
Femoral-gluteal subcutaneous adipose tissue and femoral-gluteal intramuscular adipose tissue distribution varies by gender and race [14]. In the current study, the DEUF of the gluteus maximus differed between the subjects' right and left sides.
Regarding the second hypothesis, the DEI of the right side was significantly greater than that of the left side. In this study, there was a bilateral difference in DEI, with notably greater dispersion on the non-dominant-foot side. This was considered to be related to foot dominance and the size of the muscle mass in the specified area. Sanchis-Moysi J et al. [15] pointed out that asymmetry of the gluteal muscles is a possible reason for differences in gluteal muscle mass. They found that the muscles of the dominant foot were more developed among those with greater gluteal muscle mass. Therefore, it is important to identify the dominant foot and assess the subcutaneous fat before administering an IMI.
Regarding the third hypothesis, the correlation between DEI and BMI was confirmed to be positive. A notable trend was also observed between DEI and the DEUF measurements of the gluteus maximus and medius on both sides of the buttocks. There was no significant correlation between BMI and the DEUF of the gluteus medius based on data collected on the right side of the buttocks. The right-side DEI of Group B was significantly greater than that of Group A; however, on the left side, there was no significant difference between the groups.
On both sides of the buttocks, the DEUF of the gluteus medius of the Group B subjects was significantly greater than that of the Group A subjects. There was also a positive correlation between the DEI and BMI data, suggesting that the DEI may be approximately proportional to BMI. In addition, the DEI of the right side was significantly greater than that of the left side. These findings support that a higher BMI is correlated with a greater DEI, and that it is important to identify the dominant foot and to assess subcutaneous fat when considering the administration of IMI.
Generally in Japan, it is common to use the 23 gauge needle (32 mm) for IM injection [1]. However, the DEUF of the gluteus medius in Group A (BMI < 21 kg/m²) was about 30-40 mm (right buttock) and 35-50 mm (left buttock). In Group B (BMI ≥ 21 kg/m²), the DEUF was 30-40 mm (right buttock) and 43-52 mm (left buttock). Therefore, there is a great need to consider using gauge 20 or 21 (38 mm) needles when administering IMI.
LIMITATIONS
Only adult women in Japan comprised the subjects of the study; some measurements could not be obtained for every subject at every injection site, so some data could not be matched or paired accordingly. On B-mode findings, the subcutaneous adipose tissues showed high echogenicity, while the gluteal muscles showed relatively low echogenicity or could not be identified correctly. In some cases, the fascia of the gluteal muscles could not be detected on B-mode scans. It was also difficult to precisely detect the separation between the gluteus maximus and gluteus medius.
IMPLICATIONS FOR FUTURE RESEARCH
The findings of the study provide data to support possible revisions in the procedures for administering medications via IMI. Future research can be conducted to determine the effect of administering IMI using various needle lengths on the pain reaction of patients. Other procedures that may decrease pain reactions during IMI could also be examined, for example, the use of music preferred by the patient during IMI with various needle lengths. Furthermore, qualitative research focused on describing the experience of patients during IMI will provide increased awareness of the value of various interventions that may decrease the perception of pain and reactions to painful stimulation.
CONCLUSION
The right-side DEI, both sides of the DEUF of the gluteus medius, and the left side of the DEUF of the gluteus maximus were significantly greater in Group B (BMI ≥ 21 kg/m²) than in Group A (BMI < 21 kg/m²).
The DEI of the right side was significantly greater than that of the left side. In this study, there was a bilateral difference in DEI, with notably greater dispersion on the non-dominant-foot side. It is important to identify the dominant foot and assess subcutaneous fat before administering an IMI.
The correlation between DEI and BMI was confirmed to be positive. A notable trend was also observed between DEI and the DEUF measurements of the gluteus maximus and medius on both sides of the buttocks. There was no significant correlation between BMI and the DEUF of the gluteus medius based on data collected on the right side of the buttocks. On both sides of the buttocks, the DEUF of the gluteus medius of the Group B subjects was significantly greater than that of the Group A subjects. There was also a positive correlation between the DEI and BMI data, suggesting that the DEI may be approximately proportional to BMI.
There was a positive correlation between BMI and DEI data from both sides of the buttocks. This finding supports that a higher BMI is correlated with a greater DEI, and that it is important to identify the dominant foot and assess subcutaneous fat at the IMI site.
Figure 1. IMI site in the buttocks: "upper, outer quadrant" and "four- and three-way split".
Table 1. BMI-based differences in DEI, DEUF of the gluteus medius, and DEUF of the gluteus maximus by the four- and three-way split. *p < 0.05, **p < 0.01, n.s.: not significant. BMI: body mass index; DEI: distance from the epidermis to the iliac bone; DEUF: distance from the epidermis to the under-fascia.
Table 2. Difference between right and left in DEI and DEUF (n = 39).
"Medicine",
"Biology"
] |
Application of an Escherichia coli triple reporter strain for at‐line monitoring of single‐cell physiology during L‐phenylalanine production
Abstract Biotechnological production processes are sustainable approaches for the production of biobased components such as amino acids for food and feed industry. Scale‐up from ideal lab‐scale bioreactors to large‐scale processes is often accompanied by loss in productivity. This may be related to population heterogeneities of cells originating from isogenic cultures that arise due to dynamic non‐ideal conditions in the bioreactor. To better understand this phenomenon, deeper insights into single‐cell physiologies in bioprocesses are mandatory before scale‐up. Here, a triple reporter strain (3RP) was developed by chromosomally integrating the fluorescent proteins mEmerald, CyOFP1, and mTagBFP2 into the L‐phenylalanine producing Escherichia coli strain FUS4 (pF81kan) to allow monitoring of growth, oxygen availability, and general stress response of the single cells. Functionality of the 3RP was confirmed in well‐mixed lab‐scale fed‐batch processes with glycerol as carbon source in comparison to the strain without fluorescent proteins, leading to no difference in process performance. Fluorescence levels could successfully reflect the course of related process state variables, revealed population heterogeneities during the transition between different process phases and potentially subpopulations that exhibit superior process performance. Furthermore, indications were found for noise in gene expression as regulation strategy against environmental perturbation.
PRACTICAL APPLICATION
Genetically encoded fluorescent reporter strains express fluorescent proteins together with cellular events of interest. Triggering of these events can then be monitored via fluorescence measurement. Developing multiple reporter strains, instead of the single reporter strains commonly applied today, raises these noninvasive tools for bioprocess monitoring to the next level, as they enable simultaneous monitoring of different single-cell characteristics, which can then be correlated to each other and to process performance on the population level. This information leads to an increased understanding of bioprocess events and heterogeneities, and enlightens cell-cell and cell-bioreactor interactions, which would be masked when solely considering population-level physiology. In bioprocesses, potential subpopulations with superior properties for improved productivity can be uncovered. Thus, multiple reporter strains shall support future bioprocess design and supervised up-scaling by revealing coexisting phenotypes before productivity loss is experienced due to unspecific population heterogeneities induced by non-ideal process conditions.
INTRODUCTION
Biotechnological production processes are a sustainable alternative to chemical production industries, as microorganisms are capable of producing a great variety of products for the food, feed, and pharmaceutical industries [1]. An example is the production of L-phenylalanine (L-phe), an important building block for sweeteners in the food industry, which can be produced by Escherichia coli from glycerol [2]. Despite promising concepts, a rather modest number of bioprocesses has resulted in industrial-scale production and marketed products [3]. One major reason is that the scale-up of bioprocesses to production scale often results in lowered yields and productivity compared to the respective well-mixed lab-scale processes due to the omnipresent phenomenon of population heterogeneity [4,5]. Although the producing cells in the bioreactor originate from isogenic cultures, the phenotype of single cells can differentiate significantly, especially during large-scale bioprocesses, so that potentially even distinct subpopulations arise [6]. The reason is that fluctuating environmental conditions with gradients in process state variables occur due to mixing insufficiencies and mass transfer limitations in large-scale bioprocesses. As a consequence, each cell experiences a random order of microenvironments (lifelines) and thus exhibits a single-cell physiology matched with its lifeline in the bioreactor. This leads to the formation of a heterogeneous culture with potentially deviating single-cell productivity [7,8]. Even though the consequences of population heterogeneity in bioprocesses can nowadays be studied with different available experimental tools, mechanistic understanding of this phenomenon is still comparably low [9].
One prominent approach to gain insights into the physiology of cells is the utilization of fluorescent reporter strains [10,11]. These are strains in which fluorescent proteins are integrated into specific operons or loci of interest, so that the triggering event for their expression can be monitored by measurement of the fluorescence levels of the reporter strain. These fluorescence levels are almost exclusively measured with flow cytometry, as it allows at-line high-throughput single-cell data acquisition during cultivations [12,13]. For the design of reporter strains, a great diversity of fluorescent proteins is available, which differ in characteristics such as brightness, maturation times, and oligomeric state [14]. However, mostly fast-maturating fluorescent proteins with bright detectability and no cytotoxic effects on the host strain are desirable [13,14]. There are various single reporter strains already available, allowing monitoring of different single-cell characteristics such as growth, stress responses of different kinds, and cellular fitness [15-18]. There are also reporter strains that can sense nutrient or oxygen limitation as well as intracellular stress factors like imbalances in redox state, intracellular pH, or accumulation of oxygen radicals [19-22]. Product formation reporter strains can identify the best producing cells in a bioprocess or detect loss in productivity during scale-up [23,24].
More efficient, however, is the application of multiple reporter strains combining reporter molecules for monitoring of several cellular characteristics, whose response can then be directly correlated to each other [25]. This raises the level of understanding of cellular interactions and comprises a powerful alternative tool to omics technologies, which are time-consuming and not yet capable of providing data on single-cell level [26].
To the best of our knowledge, only a few multiple reporter strains exist [27,28]. One example is the E. coli triple reporter strain (3RP) described by Heins et al. (2020), in which three fluorescent proteins were chromosomally integrated into the rrnB operon, the narGHIJ operon, and downstream of the rpoS gene for monitoring of growth, oxygen availability, and the general stress response of single cells [25]. In the present study, the general reporter strain concept was adapted to generate a 3RP based on the previously well-characterized L-phe producing E. coli strain FUS4 (pF81kan) [29]. The aim was to apply this 3RP in fed-batch processes for L-phe production in a well-mixed stirred-tank bioreactor at lab-scale to uncover single-cell phenomena and the potential formation of subpopulations that contribute to the process performance of FUS4 (pF81kan). A special focus was put on investigating physiological changes in different process phases, as, for instance, during the product formation phase a decline in product formation coupled to loss in cellular activity was consistently found in previous studies with FUS4 (pF81kan) [29].
Escherichia coli strains
The recombinant E. coli strain FUS4 (pF81kan) was used for cultivation in the L-phe production process [29,30]. FUS4 is a derivative of E. coli K-12 with deletion of the chromosomal genes pheA, aroF, and tyrA along the aromatic biosynthesis pathway. Consequently, cells are auxotrophic for L-phe and L-tyrosine (L-tyr). FUS4 harbors the pF81kan plasmid encoding the genes aroF, pheA, aroB, and aroL under the control of an inducible Ptac promoter system. This allows overexpression of the deleted enzymes along the aromatic biosynthesis pathway for production of L-phe. The plasmid further provides kanamycin resistance, which can be used as a selection marker [29]. In this study, FUS4 (pF81kan) was transformed into a 3RP by a series of knock-in recombination reactions for site-specific insertion of three synthetic cassettes using λ-red recombination and a subsequent FLP/FRT-mediated recombination reaction [31]. Each cassette carried a single copy of the coding sequence (CDS) of a fluorescent protein. A synthetic copy of mEmerald was inserted into the ribosomal rrnB promoter complex with a synthetic ribosomal binding site (RBS) 5′-AAAGAGGAGAAA-3′ according to Elowitz and Leibler (2000) and a transcriptional terminator downstream of the CDS [32]. This cassette was integrated into the rhamnose operon with simultaneous deletion of the native genes rhaB and rhaS. Subsequently, the mTagBFP2 gene was inserted downstream of the rpoS gene in conjunction with its own RBS. Last, the CyOFP1 gene was inserted into the narGHIJ gene cluster downstream of all native genes, together with the synthetic RBS mentioned above. These insertions allow single-cell growth, oxygen limitation, and the general stress response of single cells to be followed via the fluorescence of rrnB-mEmerald, nar-CyOFP1, and rpoS-mTagBFP2, respectively [25].
Preliminary culture preparation
Cryopreserved cells of E. coli FUS4 (pF81kan) and E. coli 3RP (pF81kan) were streaked on minimal medium agar plates with glycerol as carbon source, prepared according to Weiner et al. (2014) [29], and incubated at 37 °C for at least 66 h. One single colony was then used for inoculation of a 100 mL shake flask with 10 mL minimal medium with 7 g/L glycerol as carbon source, prepared according to the same protocol as the agar plates [29]. After cultivation at 37 °C and 150 rpm for approximately 24 h in an orbital shaker (Multitron, Infors HT, Switzerland), the optical density at 600 nm (OD600) was measured (Genesys 10UV, Thermo Fisher Scientific, USA). A defined volume of cell suspension was transferred to two 500 mL shake flasks, each with 100 mL minimal medium, to yield a starting OD600 of 0.01. These cultures were further cultivated at 37 °C and 250 rpm for at least 24 h. When the cells reached the exponential growth phase with an OD600 above 0.5, the cultures were centrifuged at 3260×g for 10 min at 4 °C. The supernatant was discarded and the cell pellets were suspended in fresh minimal medium. Bioreactor cultivations were inoculated with washed cells to a starting OD600 of 0.1.
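The transfer volume needed to reach a target starting OD600 follows from a simple dilution balance; a minimal sketch with hypothetical numbers:

```python
def inoculum_volume_mL(od_culture, od_target, v_final_mL):
    """Preculture volume needed to reach a target starting OD600 (simple dilution)."""
    return od_target * v_final_mL / od_culture

# e.g. an OD600 = 2.0 preculture diluted into 100 mL to start at OD600 = 0.01:
print(inoculum_volume_mL(2.0, 0.01, 100.0))  # -> 0.5 mL
```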
Bioreactor cultivation
Fed-batch processes for L-phe production with E. coli FUS4 (pF81kan) and E. coli 3RP (pF81kan) on minimal medium with 4 g/L glycerol as sole carbon source, as described by Weiner et al. (2014) [29], were conducted in a 3.6 L stirred-tank bioreactor (Labfors 5, Infors GmbH, Germany), which was equipped with three baffles and two six-blade flat-blade turbines. Prior to inoculation, the medium was pumped into the bioreactor under sterile conditions to a starting volume of 1 L. During the process, the temperature was kept at 37 °C, and a pH electrode (EasyFerm Plus PHI Arc 325, Hamilton, USA) was used for pH control at 7.0 ± 0.1 with 42% phosphoric acid and 25% ammonia. Dissolved oxygen (pO2) levels were monitored by a pO2 probe (VisiFerm DO Arc 325 H0, Hamilton, USA) and were maintained above 30% by stepwise increase of either the stirrer speed (maximum: 1500 rpm) or the aeration rate (maximum: 5 L/min). Both sensors were calibrated according to standard procedures using a two- or one-point calibration, respectively (pH 4.0 and 7.0; calibration for 100% pO2). An antifoam probe allowed the controlled addition of antifoam solution (AF204, Sigma-Aldrich, USA) to prevent excessive foaming. Online analysis of offgas oxygen (O2) and carbon dioxide (CO2) was performed using a gas sensor (BlueVary, BlueSens, Germany).
The process strategy was adapted from Weiner et al. (2014) [29]. The fed-batch process for L-phe production can be divided into three distinct phases: a batch phase, a biomass production phase, and a product formation phase. After the batch phase, whose end was characterized by glycerol depletion and recognized by a steep increase in pO2 levels in the bioreactor, the biomass production phase was started. In this phase, an exponential feed with a growth rate of μset = 0.1 h⁻¹ was applied with two consecutive feed media, in which the second feed medium was applied after the first feed medium was exhausted. Feed medium one contained 120 g/L glycerol, 2.5 g/L L-phe, 3.6 g/L L-tyr, 60 g/L ammonium sulfate, and 0.1 g/L kanamycin, whereas feed medium two consisted of 400 g/L glycerol, 1.11 g/L L-phe, 3.8 g/L L-tyr, 25 g/L ammonium sulfate, and 0.1 g/L kanamycin. Both media were titrated with either 25% ammonia or 5 M potassium hydroxide to allow the complete dissolution of L-tyr. Provision of a biomass concentration of at least 20 g/L indicated the transition to the product formation phase, in which the cells were induced with 0.3 mM IPTG. Additionally, feed medium three, which contained 800 g/L glycerol, 8 g/L ammonium sulfate, 8 g/L ammonium phosphate, and 0.1 g/L kanamycin, was constantly applied at a rate of 0.18 g glycerol per g biomass per hour. At the start of supply of feed media one and two, 4.8 or 9.6 mL of minimal medium without amino acids and glycerol was added, whereas at the start of supply of feed medium three, 8.8 mL of a four-times-concentrated minimal medium solution without amino acids and glycerol was injected into the cultivation broth [29]. Samples for high-performance liquid chromatography (HPLC), cell dry weight measurements, and flow cytometry analysis were withdrawn frequently during all process phases.
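A common form of such an exponential feed profile, assuming a constant biomass yield and negligible maintenance, is F(t) = (mu_set/Y_XS) * X0*V0/S_feed * exp(mu_set*t). The sketch below uses placeholder values for the yield and the biomass at feed start, since the text does not report the feed law in this form:

```python
import numpy as np

def exponential_feed_rate_L_h(t_h, mu_set=0.1, y_xs=0.5, x0_g_L=5.0,
                              v0_L=1.0, s_feed_g_L=120.0):
    """Exponential feed profile F(t) = (mu_set / Y_XS) * X0*V0 / S_feed * exp(mu_set*t).

    mu_set is the 0.1 1/h set point from the text; the yield Y_XS, starting
    biomass X0, and volume V0 are placeholders (not reported in this form).
    """
    return (mu_set / y_xs) * (x0_g_L * v0_L / s_feed_g_L) * np.exp(mu_set * t_h)

print(exponential_feed_rate_L_h(np.array([0.0, 5.0, 10.0])))
```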
Sample analysis
For measuring the cell biomass, empty 2 mL centrifuge tubes were dried at 80 °C for at least 24 h and weighed. After collecting a 2 mL cell sample, the tubes were centrifuged at 21,130×g and 4 °C for 20 min. The cell pellet was dried at 80 °C for at least 24 h. The weight difference between the dried empty tubes and the tubes with cell pellets, divided by the sample volume, yielded the biomass concentration. Samples for the quantification of extracellular metabolite concentrations were prepared by filtration (pore size 0.2 μm) of the supernatant from the dry cell weight measurements and stored at 4 °C until analysis.
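The gravimetric calculation reduces to a one-line mass balance; a sketch with hypothetical weights:

```python
def biomass_concentration_g_L(m_empty_g, m_dry_g, v_sample_mL):
    """Cell dry weight concentration from the gravimetric measurement."""
    return (m_dry_g - m_empty_g) / (v_sample_mL / 1000.0)

# Hypothetical weights for a 2 mL sample: 0.0582 g of dry pellet -> 29.1 g/L
print(biomass_concentration_g_L(1.0213, 1.0795, 2.0))
```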
L-phe and L-tyr concentrations were analyzed using a Smartline HPLC (Knauer, Germany) coupled to a derivatization protocol with 0.04 M bicine (pH 10.2 titrated with sodium hydroxide) as buffer solution as described by Weiner et al. (2014) but with higher sample volumes of 11 μL to enhance detectability [29].
Organic metabolites such as glycerol, acetate, and lactate were quantified by the Prominence-I LC-2030C HPLC (Shimadzu, Japan) equipped with an ion-exchange column (Aminex HPX-87H 300 mm × 7.8 mm, Bio-Rad, USA). An isocratic flow of 0.6 mL/min of 5 mM sulfuric acid and a constant temperature of 60 • C were applied during separation. 10 μL of sample were injected and the quantification of the components was done with a RID-20A refractive index detector (Shimadzu, Japan).
Samples for flow cytometry analysis of fluorescence were prepared by centrifugation at 21,130×g and 20 °C for 3 min. The cell pellets were suspended in phosphate saline buffer (0.2 g/L potassium chloride, 0.24 g/L monopotassium phosphate, 8 g/L sodium chloride, and 1.44 g/L di-sodium phosphate). Fluorescence of the cells at different process stages was measured with a FACSMelody (BD, USA). This device is equipped with three lasers allowing excitation at 405 nm (36 mW), 488 nm (16 mW), and 640 nm (36 mW) and nine detection filters. A sorting nozzle with a diameter of 100 μm was applied. FACSFlow (BD, USA) was used as sheath fluid, and measurement was done at a rate of 1000 events per second, recording 100,000 events. Background noise signals were excluded by applying a threshold on the side scatter (SSC). Photomultiplier tube voltages for forward scatter (FSC) and SSC as well as the detection filters 448/45 nm for mTagBFP2 (excited by the 405 nm laser), 527/32 nm for mEmerald, and 586/42 nm for CyOFP1 fluorescence (both excited by the 488 nm laser) were set to 250, 335, 500, 500, and 600 mV, respectively. Distinct signal detection of mTagBFP2, mEmerald, and CyOFP1 fluorescence without overlap was confirmed in preliminary experiments (see Supplementary material, Figures S1-S3).
Autofluorescence measurements were done by measuring the fluorescence of the reference strain E. coli FUS4 (pF81kan) in the above-mentioned filters, which allowed detection of mTagBFP2, mEmerald, and CyOFP1 fluorescence.
Data analysis
Fluorescence measurements were conducted based on the pulse area and saved with FACSChorus (BD, USA), and the raw data were exported as FCS 3.1 files. Data analysis was conducted with FCS Express 7 (De Novo Software, USA) and Matlab (Mathworks, USA). This includes the calculation of the median, skewness, and coefficient of variation (CV) of the fluorescence distributions for the three reporter proteins. The CV was calculated by dividing the standard deviation of the distribution by its mean and is a measure of the level of population heterogeneity. Pearson's first coefficient of skewness is calculated by dividing the difference between the mean and the mode of the distribution by its standard deviation. Since distributions are shown on a logarithmic scale, all distributions are right-skewed; however, a decrease in skew might indicate left-skewed distributions. Stacked offset histogram plots were generated to visualize changes in fluorescence distributions during the L-phe production process. Furthermore, density plots were created for chosen process time points.
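These distribution metrics are straightforward to compute from the exported FCS events; a minimal sketch follows, in which the mode is estimated from a histogram, since flow-cytometry intensities are effectively continuous:

```python
import numpy as np

def heterogeneity_metrics(intensities):
    """Median, CV, and Pearson's first skewness coefficient of a distribution."""
    x = np.asarray(intensities, dtype=float)
    mean, std = x.mean(), x.std()
    cv = std / mean  # level of population heterogeneity
    counts, edges = np.histogram(x, bins=256)
    mode = 0.5 * (edges[np.argmax(counts)] + edges[np.argmax(counts) + 1])
    skew_pearson = (mean - mode) / std  # Pearson's first coefficient of skewness
    return np.median(x), cv, skew_pearson

# Hypothetical right-skewed single-cell intensities:
sample = np.random.default_rng(1).lognormal(mean=6.0, sigma=0.4, size=100_000)
print(heterogeneity_metrics(sample))
```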
Influence of genomic engineering on process performance
Since genomic integration of several fluorescent proteins into a production host potentially results in metabolic burden, the process performance of the triple reporter strain in fed-batch cultivations for L-phe production was evaluated based on population-level physiology in comparison to the reference strain without fluorescent proteins. Both the reference strain E. coli FUS4 (pF81kan) and the modified strain E. coli 3RP (pF81kan) were cultivated in a well-mixed stirred-tank bioreactor at lab-scale for L-phe production. The process was adapted from Weiner et al. (2014) and consisted of three phases: a batch phase, followed by a biomass production phase, and finally a product formation phase in which the cells were induced with 0.3 mM IPTG [29].
After the initial batch phase of around 15.0-16.0 h, both E. coli FUS4 (pF81kan) and E. coli 3RP (pF81kan) showed a linear increase of biomass in the subsequent feeding phase until 40.7 h and 42.2 h of process time, respectively (Figure 1A and B). During that phase, concentrations of the auxotrophic amino acids L-phe and L-tyr were below 0.4 g/L. The steeper slope of the biomass increase at the end of this phase is due to the application of a differently concentrated feeding solution from 25.2 h onwards. With provision of a sufficiently high biomass concentration of over 20 g/L after 40.7 h and 42.2 h of process, respectively, the cells were induced with 0.3 mM IPTG, marking the start of the product formation phase (Figure 1, third vertical line). Eight hours later, L-tyr was fully depleted and the biomass concentration remained level at a maximum of 29.05 ± 0.43 g/L for E. coli FUS4 (pF81kan) and 30.78 ± 0.75 g/L for E. coli 3RP (pF81kan). Simultaneously, both strains started to produce L-phe and achieved maximum product concentrations of 16.6 g/L and 17.6 g/L at the end of the cultivation with E. coli FUS4 (pF81kan) and E. coli 3RP (pF81kan), respectively. After 70 h and 74 h of process, respectively, product formation declined in both strains, accompanied by a preceding accumulation of by-products such as lactate and acetate (Figure 1C and D) that were not produced in earlier process stages. Nevertheless, glycerol as sole carbon source was still fully consumed by the cells (Figure 1C and D).
The oxygen uptake rate (OUR) and carbon emission rate (CER) (Figure 1E and F) increased during the biomass production phase, reaching their highest values between 40 and 42 h of process. Both decreased after induction and remained level after product formation started. With declining product formation, their levels gradually decreased further until the end of the process for both strains.
At-line monitoring of cellular characteristics by averaged single-cell fluorescence

Next, median fluorescence values of the distributions for growth, oxygen availability, and general stress response of single cells (rrnB-mEmerald, narGHIJ-CyOFP1, and rpoS-mTagBFP2, respectively) were correlated with the growth rate on population level, the dissolved oxygen level in the bioreactor, and the product formation rate during the L-phe production process, respectively.
Following the growth rate of the 3RP on population level (Figure 2A), three distinct levels were visible. The highest growth rate of 0.22 h⁻¹ was achieved at the end of the initial batch phase, at around 15 h of process time. Afterwards, during the biomass production phase, the growth rate decreased to around 0.1 h⁻¹ due to the controlled feeding strategy. In the product formation phase, the growth rate declined further to almost no growth after 60 h of process time. Expectedly, these three growth levels were mirrored by the corresponding median fluorescence of mEmerald. The highest median fluorescence level of 624 ± 5.2 was reached at the end of the initial batch phase, while an intermediate median fluorescence of 454.8 ± 61.7 was monitored during the biomass production phase. During the product formation phase, the median fluorescence further decreased within 10 h; thereafter, it only gradually declined further until reaching a value below 300 at the end of the process.
With the increase of biomass during the biomass production phase, dissolved oxygen levels in the bioreactor continuously decreased from approximately 75% to 30% (Figure 2B). From 18 h of process time onward, which correlated with a dissolved oxygen level in the bioreactor of around 70%, the median fluorescence of CyOFP1 correlated to oxygen limitation constantly increased from 412.4 ± 6.1 to 821.1 ± 13.9, displaying its peak values at the end of the biomass production phase and the beginning of the product formation phase, respectively. With the induction of product formation at around 42 h of process time, the dissolved oxygen level rose after a short delay of 1-2 h and remained level at around 50%. Simultaneously, the median fluorescence dropped before staying constant at 577.5 ± 20.9.

Figure 1. Fed-batch process for L-phenylalanine production with Escherichia coli FUS4 (pF81kan) (A, C, E) and the triple reporter strain E. coli 3RP (pF81kan) (B, D, F) in a 3.6 L stirred-tank bioreactor. A and B show concentrations of biomass (black circles), L-phenylalanine (gray triangles), and L-tyrosine (white diamonds) in the course of the bioprocess, whereas C and D display concentrations of the substrate glycerol (black circles) and the by-products acetate (gray triangles) and lactate (white diamonds). E and F provide the oxygen uptake (OUR, black line) and carbon emission rates (CER, gray line). The first vertical line in each graph indicates the start of the biomass production phase (16 h or 15 h) after a batch phase, while the second vertical line marks the switch to higher concentrated feed media at 25 h. The product formation phase (third vertical line at 40 h or 42 h) starts upon induction with 0.3 mM IPTG. Both strains were cultivated using minimal medium with glycerol as carbon source. CER, carbon emission rate; OUR, oxygen uptake rate; 3RP, triple reporter strain.
Median fluorescence values of mTagBFP2 coupled to expression of the rpoS gene for at-line monitoring of the general stress response of single cells showed two peaks during the L-phe production process (Figure 2C). In the biomass production phase, an increase of the median fluorescence from 1107.2 ± 1.7 at 18 h to its highest value of 1649.2 ± 14.8 at 24.6 h was detected. Afterwards, the median remained about level before starting to decline 5 h before induction of the cells. With induction at around 42 h of process, median fluorescence levels were lowest at 965.5 ± 14.5. Afterwards, they constantly increased, reaching a maximum of 1593.8 ± 14.2 after around 75 h of process, before declining again until the process was stopped. In comparison with the biomass-specific product formation rate on population level, it is interesting that after 52 h of process time the cells reached their highest product formation rate of 16.8 ± 1.9 mg L-phe per g biomass per hour, which remained constant until 75 h of process, when the rate started to decrease. Consequently, the highest general stress response levels coincided with the start of the product formation decline.

Figure 2. Median fluorescence measurements during the L-phenylalanine production process with the E. coli triple reporter strain (3RP) (pF81kan) in a 3.6 L stirred-tank bioreactor. Median fluorescence of mEmerald is coupled to the expression of the rrnB operon for monitoring of single-cell growth and is therefore plotted together with the growth rate on population level (A). B shows the dissolved oxygen levels in the bioreactor together with the median fluorescence of narGHIJ-CyOFP1 as a marker for oxygen availability. mTagBFP2 is coupled to the expression of the rpoS gene and provides a correlation to general stress response levels. These data are plotted together with the biomass-specific product formation rate (C). The first vertical line in each graph indicates the start of the biomass production phase (15 h) after a batch phase, while the second vertical line marks the switch to higher concentrated feed media at 25 h. The product formation phase (third vertical line at 42 h) starts upon induction with 0.3 mM IPTG. The strain was cultivated using minimal medium with glycerol as carbon source.
Single-cell fluorescence distribution during the L-phenylalanine production process
To consider potential unequal behavior of single cells, histogram distributions plots for growth, oxygen limitation, and general stress response following the L-phe production process were generated ( Figure 3) as well as skewness and CV for the respective distributions plotted (Figure 4). Additionally, respective autofluorescence measurements, taken during the L-phe production process with E. coli FUS4 (pF81 kan ), are depicted.
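For reference, the skewness and CV used throughout this section are standard summary statistics of a single-cell fluorescence distribution. A minimal Python sketch is given below; the log-normal test data and the choice of bias-corrected estimators are illustrative assumptions, since the paper does not state which estimators its analysis software applies.

```python
import numpy as np
from scipy.stats import skew

def distribution_stats(intensities):
    """Summarize one single-cell fluorescence distribution.

    intensities: 1-D array of per-cell fluorescence values
    (one cytometry channel at one sampling time point).
    """
    x = np.asarray(intensities, dtype=float)
    cv = np.std(x, ddof=1) / np.mean(x)   # coefficient of variation: spread
    g1 = skew(x, bias=False)              # sample skewness: tailing direction
    return cv, g1

# Synthetic right-tailed (log-normal) data for illustration only
rng = np.random.default_rng(0)
cv, g1 = distribution_stats(rng.lognormal(mean=6.0, sigma=0.3, size=50_000))
print(f"CV = {cv:.2f}, skewness = {g1:.2f}")
```

A rising CV then reflects a broadening distribution, while positive or negative skewness reflects tailing toward higher or lower intensities, respectively.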
Single-cell growth (rrnB-mEmerald) followed a mono-modal distribution until 37 h of process time in the biomass production phase (Figure 3A), with a low, constant CV value of 0.96 ± 0.32 (Figure 4A). Between 37 and 45 h of the process, during the transition from the biomass production phase to the product formation phase, the distributions broadened with tailing toward lower fluorescence intensities, which resulted in declining skew and exponentially increasing CV values (Figure 4). The effect became more pronounced the longer the transition period lasted. From 50 h onwards, the distributions narrowed again at lower fluorescence intensity, accompanied by decreasing CV values (Figure 4A). Concurrently, tailing toward higher fluorescence intensities (Figure 3A) was confirmed by increased skew (Figure 4B). Moreover, toward the end of the process, a gradual broadening became apparent. Median autofluorescence was low, with an average value of 111.3 ± 19.2, and thus only marginally overlapped with the fluorescence signals of E. coli 3RP (pF81kan) (Figure 3A).
The overlap between autofluorescence signals and CyOFP1 fluorescence distributions of E. coli 3RP (pF81kan) was more pronounced (average median autofluorescence: 423.2 ± 70.0; Figure 3B). However, the main part, covering 71.8% ± 6.3%, of the oxygen limitation distributions (narGHIJ-CyOFP1) of E. coli 3RP (pF81kan) settled at higher fluorescence intensities compared to autofluorescence values and exhibited mono-modal distributions with relatively low CV values (Figure 4A) until 45 h of process time. They tailed toward lower fluorescence intensities, which, however, became less pronounced with time. From 52 h of process time until the process was stopped, the distributions significantly increased in skew (Figure 4B) and reached almost double the width of the distributions at the end of the batch phase, which caused an increase in CV values (Figure 4A). Starting from 63 h of process time, a small, higher-fluorescent subpopulation of 22.6% adjacent to the main population appeared, which increased in significance until 75 h of process time, after which it gradually declined again.

Figure 3. Fluorescence distributions as cell count against fluorescence intensity for the L-phenylalanine production process with E. coli triple reporter strain (3RP) (pF81kan) in a 3.6 L stirred tank bioreactor. Stacked histograms show fluorescence distributions for rrnB-mEmerald (A), narGHIJ-CyOFP1 (B), and rpoS-mTagBFP2 (C) following the bioprocess. The gray area indicates the maximum of autofluorescence distributions during the cultivation of the E. coli FUS4 (pF81kan) reference strain. The time points and process phases the distributions originate from are depicted on the left side of the graphs. The strain was cultivated using minimal medium with glycerol as carbon source. *Biomass production phase with feed media 1, **biomass production phase with feed media 2, and ***product formation phase.
General stress response levels of single cells (rpoS-mTagBFP2) revealed more distinct changes over the course of the L-phe production process (Figure 3C). At the end of the batch phase, at 15-18 h of process time, two adjacent distributions were visible, with 40%-60% of cells in the higher-fluorescent subpopulation. However, this changed into a mono-modal distribution within 6 h, which caused a decrease in CV values (Figure 4A). Afterwards, during the transition from biomass production to product formation phase between 37 and 45 h of process time, the distributions broadened with increased tailing toward lower fluorescence intensities, which decreased their skew and increased their CV values (Figure 4). After 50 h, the distributions slowly became less skewed (Figure 4B) and more uniform (Figure 4A) at higher fluorescence intensities until 75 h of process time, and then gradually broadened again. The average median autofluorescence was 88.7 ± 36.2 and thus only marginally overlapped with the fluorescence levels of E. coli 3RP (pF81kan) (Figure 3C).
Figure 4. Skewness (A) and coefficient of variation (B) for fluorescence distributions of rrnB-mEmerald, narGHIJ-CyOFP1, and rpoS-mTagBFP2 (detected at 527/32-A and 586/42-A) during the L-phenylalanine production process with E. coli 3RP (pF81kan) in a 3.6 L stirred tank bioreactor. The first vertical line in each graph indicates the start of the biomass production phase (15 h) after a batch phase, while the second vertical line marks the switch to higher concentrated feed media at 25 h. The product formation phase (third vertical line at 42 h) starts upon induction with 0.3 mM IPTG. The strain was cultivated using minimal medium with glycerol as carbon source.
Correlation between the fluorescent markers of the triple reporter strain
When applying multiple instead of single reporter strains, the distributions of different fluorescent markers can be pairwise correlated with each other in biplots, allowing deeper insights into their interconnection (Figure 5). As the fluorescence distributions (Figure 3) and the skewness and CV values (Figure 4) indicated significant changes in single-cell fluorescence at the end of the batch phase (15 h), during the transition from biomass production phase to product formation phase (37-43 h), and at the end of the process (94 h), the focus was set on these process stages.
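A pairwise biplot of this kind, together with a simple rectangular gate to quantify a subpopulation fraction, could be sketched as follows; the channel names, thresholds, and log-normal test data are hypothetical placeholders, not values from the study.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical per-cell intensities of two markers measured on the
# same events; real data would come from the cytometer export.
rng = np.random.default_rng(1)
memerald = rng.lognormal(6.5, 0.30, 20_000)   # growth marker
mtagbfp2 = rng.lognormal(7.0, 0.25, 20_000)   # general stress marker

fig, ax = plt.subplots()
ax.hexbin(memerald, mtagbfp2, gridsize=80, bins="log",
          xscale="log", yscale="log")
ax.set_xlabel("rrnB-mEmerald (a.u.)")
ax.set_ylabel("rpoS-mTagBFP2 (a.u.)")

# Rectangular gate (thresholds are arbitrary for this sketch)
gate = (memerald > 2_000) & (mtagbfp2 < 800)
print(f"subpopulation: {100 * gate.mean():.1f}% of events")
plt.show()
```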
Correlating the general stress response with single-cell growth at 15 h of process time showed two adjacent populations exhibiting two different levels of general stress response but the same growth characteristics. These transformed into a uniform population during the transition from the biomass production phase (37 h) to the product formation phase (43 h). At the end of the process, two populations were visible, one showing higher mEmerald levels (10%) than the majority of the cells (Figure 5A-D).
When plotting single-cell oxygen availability in relation to the general stress response, one population appeared at the end of the batch phase, with a slight difference in general stress response levels within the population. Interestingly, a minority of cells (13%) showed increased levels of oxygen limitation at the end of the biomass production phase at 37 h. This subpopulation even increased to 26.5% at the beginning of the product formation phase at 43 h but shrank again to 11% at the end of the process. Some cells of this subpopulation showed not only higher oxygen limitation levels but also lower general stress response levels (Figure 5E-H).
The correlation of single-cell oxygen availability with single-cell growth revealed a uniform distribution at 15 h of process time. This population broadened over time, with a small subpopulation in the oxygen availability marker whose fluorescence intensity gradually increased until the end of the process. Furthermore, this subpopulation exhibited weaker growth characteristics than the main population (Figure 5I-L).
DISCUSSION
Reporter strains have proven to be useful noninvasive tools to monitor cellular characteristics in bioprocesses in real time [33-35]. In the present study, the previously well-characterized L-phe producing strain E. coli FUS4 (pF81kan) was transformed into a 3RP by chromosomally integrating the fluorescent proteins mEmerald, CyOFP1, and mTagBFP2, so that they are expressed together with the ribosomal promoter rrnB, the narGHIJ operon, and the alternative sigma factor gene rpoS, respectively. The resulting strain was successfully applied to monitor growth, oxygen limitation, and general stress responses of single cells in L-phe production processes in a well-mixed lab-scale bioreactor. The general functionality of the 3RP concept was already demonstrated in an earlier study [25], but the fluorescent proteins for monitoring oxygen availability and general stress response, TagRFP657 and mStrawberry, respectively, were exchanged for CyOFP1 and mTagBFP2 in the present strain. Furthermore, it was unclear whether the concept would be applicable to a strain that is already metabolically challenged by production of L-phe. Comparing key process performance parameters, such as maximum biomass and product concentrations and growth behavior, during the L-phe production processes of the newly generated E. coli 3RP (pF81kan) with those of the reference strain E. coli FUS4 (pF81kan) without fluorescent proteins, no significant impact on process performance was found. Consequently, a metabolic burden from expression of the fluorescent proteins alongside L-phe production was ruled out. This was expected, as previous investigations with E. coli showed that disruption of a considerable portion of genes did not affect cell viability and metabolism [36-40].
Furthermore, the changes in distributions and median fluorescence signals of the 3RP in bioprocesses for L-phe production, related to cellular growth, oxygen limitation, and general stress response, matched the respective time course of the process state variables with a maximum signal delay of 1-2 h.
Monitoring of cell growth was realized by integration of mEmerald into the rrnB operon, which encodes ribosomal RNA. Growing cells exhibit higher concentrations of ribosomes, which should trigger rrnB expression [41]. Therefore, higher growth rates should lead to higher fluorescence intensities. Indeed, considering the median fluorescence levels of mEmerald, the growth rate on population level was closely reflected in all process phases.
CyOFP1 was integrated into the narGHIJ operon, which encodes a nitrate reductase. Whenever cells experience dissolved oxygen levels below 40% in the bioreactor or are surrounded by high nitrate concentrations, the narGHIJ operon is active [20,42]. Consequently, an inverse correlation between oxygen availability and median CyOFP1 fluorescence was expected. This could be confirmed especially during the biomass production phase, in which dissolved oxygen levels decreased with increasing biomass concentration, as well as at the beginning of the product formation phase.
mTagBFP2 was integrated downstream of the promoter controlling the expression of the alternative sigma factor gene rpoS. This gene is generally known as a global stress response factor. Triggering this gene leads to the expression of rpoS-dependent genes encoding more specific stress response pathways [43]. As a consequence, general stress response levels can be reflected, but the underlying reasons remain unclear. A well-known strong trigger for the general stress response is the lack of nutrients leading to starvation [44-46]. Indeed, elevated general stress response levels were visible via blue fluorescence during the biomass production phase. Though substrate was fed at this stage of the process, fed-batch processes normally provide a minimum of substrate, which is immediately consumed by the cells, to control the cells' growth rate [47]. This uncertainty of nutrient availability could potentially induce the cellular stress response. Besides nutrient availability, the rpoS gene is induced by various other suboptimal conditions such as high or low pH values or temperatures and accumulation of toxic metabolites [43]. Especially the latter is often a major factor when it comes to production decline in recombinant strains [49,50]. This may have happened in this study, as the highest blue fluorescence was seen where the L-phe concentration reached its maximum. Afterwards, general stress response levels decreased simultaneously with the onset of the decline in product concentration. Consequently, the decline in L-phe production might be related to a metabolic overload of the cells, possibly triggered by accumulation of toxic by-products. Hence, the measurement of the general stress response was informative; however, for a deeper investigation of the stress response, future reporter strains should be equipped with fluorescent proteins coupled to more specific stress responses.
The selection of appropriate fluorescent proteins enables distinctive detection of their fluorescence. Even though there are plenty of selectable characteristics [14,50,51], the choices here focused on high brightness, for robust detection even at low concentrations of the fluorescent protein, and on low maturation times, to provide a rapid response to triggering events. Although the delay between expression and complete maturation of the fluorescent proteins might be a drawback, it could be neglected in the present study, as the maturation times of mEmerald, CyOFP1, and mTagBFP2 were all shown to be <20 min in previous studies [14,52-54] and are thus short compared to the overall process duration. The in vivo degradation characteristics of the fluorescent proteins are unknown. Very few related techniques are described in the literature [55,56]; these could be used in upcoming research. One possibility to control degradation is the implementation of degradation tags on the fluorescent protein, which are recognized by intracellular protease systems such as ClpXP and Lon [57-59]. With degradation tag sequences that differ in their degradation time, the lifetime of each fluorescent protein could be determined. Nevertheless, it is important to adapt the degradation times to the measuring interval to avoid complete loss of signal. Despite the uncertainty regarding in vivo degradation of the reporter molecules, the median fluorescence data still followed the changes in process state variables almost synchronously. Indeed, it is necessary to mention that autofluorescence for CyOFP1 was only marginally lower than the fluorescence intensities measured for the 3RP. This might be related to generally weak expression levels of the nar operon. Similar findings were described by Heins et al. (2020), although a slower-maturing fluorescent protein was used [25]. Therefore, substitution of the narGHIJ operon as reporter for oxygen limitation might be advisable.
The reason for applying a 3RP in the L-phe production process was to uncover single-cell behavior that is masked when only population-level physiology is considered. This includes the appearance of subpopulations, the quantification of population heterogeneity in different process phases, and the characterization of single-cell behavior during declining product formation, which was consistently seen in earlier studies but could not be explained [29].
Whereas single cells exhibited low, uniform levels of oxygen limitation, two temporary subpopulations with similar single-cell growth characteristics were seen for the general stress response at the beginning of the process. These might have arisen because part of the population in the bioreactor took a longer time to adapt to the fed-batch mode and exhibited a distinct general stress response upon reactivation, whereas the remaining part of the population was still prepared to grow. Consistently, earlier studies revealed that cells in stationary phase exhibited different response times to the addition of fresh medium [60]. When the second feed was applied in the biomass production phase, population heterogeneity levels rose notably, with a strong dispersion of the distributions, and some cells appeared to be less vital and more strongly metabolically challenged by the higher concentrated feed. The reason was possibly a nutrient overload and the subsequent accumulation of metabolites. This finding clearly extends the information gained by considering population-level physiology alone, revealing that the cells react diversely to potentially harmful changes in their environment. This might not have been expected, as on the population level no change in cellular physiology was seen in this process phase. In future studies, cells from opposite sides of the distributions should be functionally characterized in more depth to uncover the underlying reason for this strong dispersion.
During the transition from the biomass production phase to the product formation phase, this effect initially persisted, as the heterogeneity level was high for single-cell growth, with the majority of cells at first maintaining their growth level even though the culture was induced to start product formation and stop growing. Concurrently, the cells exhibited broader general stress responses. This phenomenon might be explained by a bet-hedging strategy to cope with the transition. Bet-hedging describes the development of coexisting phenotypes within one population, of which some may be currently disadvantageous but prepare these cells for future environments they might encounter [35,61,62].
In the product formation phase, single-cell growth and general stress response levels became more uniform, probably because the metabolic overload that had lowered the vitality of some cells could be relieved by product formation. However, some more actively growing cells with high stress response levels seemed to exist, which raises the question of whether these were able to grow and produce simultaneously. Furthermore, some cells seemed to be more strongly challenged during L-phe production. They exhibited a less effective general stress response, together with higher oxygen limitation levels and lower growth levels. These cells probably tried to counter the accumulation of potentially toxic by-products, which was seen on the population level, by using alternative parts of the metabolism [37,38]. Another strategy the cells could have applied is noise in gene expression. Noise in gene expression is known to be one of the major sources of population heterogeneity in bioprocesses and appears even in stable environments [63,64]. Noise in gene expression for single-cell growth and general stress response was only elevated during the transition from the biomass to the product formation phase. Surprisingly, noise levels in gene expression for the general stress response were lower than for the other markers, which was unexpected, as rpoS expression is known to be noisy, especially in comparison to the expression of genes related to the growth rate [65]. Therefore, this finding needs further investigation in future experiments. Other than that, the cells seemed to employ noise in gene expression in the nar operon during the product formation phase. However, this strategy did not seem to be sustainable against metabolic stress, as after around 75 h product formation collapsed and cell activity gradually declined. All subpopulations found should be further characterized by proteome analysis after sorting to identify factors that contribute to the cell robustness of E. coli FUS4 (pF81kan). It should generally be mentioned that none of the subpopulations that appeared during the L-phe production process was clearly resolved from the main population or permanent. Rather, they arose temporarily in certain process phases and reverted afterwards. However, given the well-mixed conditions provided in the laboratory-scale stirred-tank bioreactor applied in this study, this might not be surprising.
CONCLUDING REMARKS
Overall, the application of the 3RP during the L-phe production process provided additional insights beyond population-level physiology. The level of heterogeneity was mostly elevated during the transitions between different process phases. This is likely related to the homogeneous conditions in well-mixed lab-scale stirred-tank bioreactors, in which the cells mostly experience the same ideal conditions and therefore behave similarly [16]. Therefore, future studies should aim to induce dynamic conditions, for example by applying a multicompartment system to simulate conditions that would arise during scale-up [66-68]. Such a setup would probably induce higher levels of heterogeneity and thus more permanent subpopulation formation. Furthermore, investigations regarding the product formation decline appear worthwhile. Here, advancing the strain with another fluorescent protein for monitoring single-cell product formation could be interesting. This protein could be integrated on the plasmid that carries the genes for finalizing L-phe synthesis, so that its expression would be correlated with product formation.
In the long-term perspective, it is desirable to influence the level of population heterogeneity in the L-phe production process so as to induce the appearance of process-beneficial subpopulations.
"Biology",
"Engineering"
] |
Antiparasitic Activities of Acridone Alkaloids from Swinglea glutinosa (Bl.) Merr.
Eleven acridone alkaloids isolated from Swinglea glutinosa (Bl.) Merr. were evaluated for their in vitro activities against the chloroquine-sensitive Plasmodium falciparum strain 3D7, Trypanosoma brucei rhodesiense STIB900, and Leishmania donovani L82. Assays with KB cells were also performed to determine the degree of toxicity of the substances active against the parasites. Nine of the compounds showed IC50 values between 0.3 and 11.6 μM against P. falciparum. In contrast, a small number of compounds showed significant activity against T. brucei rhodesiense, and none showed activity against L. donovani. Among the alkaloids, three had IC50 < 1.0 μM against P. falciparum, while five showed IC50 < 10 μM against T. b. rhodesiense. The characterization of the alkaloids 1,3,5-trihydroxy-4-methoxy-10-methyl-2,8-bis(3-methylbut-2-enyl)acridin-9(10H)-one (1), 2,3-dihydro-4,9-dihydroxy-2-(2-hydroxypropan-2-yl)-11-methoxy-10-methylfuro[3,2-b]acridin-5(10H)-one (2), and 3,4-dihydro-3,5,8-trihydroxy-6-methoxy-2,2,7-trimethyl-2H-pyrano[2,3-a]acridin-12(7H)-one (3) is discussed here. The structure-activity relationship for all tested compounds is also discussed. The isolation and spectral data of alkaloids 1-3 are described here for the first time, although their cytotoxic activities were reported in earlier work.
Introduction
Parasitic protozoa are the causative agents of human and livestock diseases, infecting hundreds of millions of people every year, and are collectively one of the most important causes of human misery.1 Human African trypanosomiasis (HAT, or sleeping sickness), malaria, Chagas' disease, and leishmaniasis are major health problems in many countries. HAT, caused by Trypanosoma brucei rhodesiense and T. b. gambiense, is endemic in over 30 African countries, threatening over 60 million people. HAT has reached epidemic proportions in some countries, such as Angola, southern Sudan, Uganda, and the Democratic Republic of Congo.2,3 Malaria has re-emerged as a major public health problem over the past three decades, mainly because of the development of worldwide resistance of Plasmodium falciparum to chloroquine, a drug which formed the basis for cheap and effective treatment and prophylaxis of this disease.4 Each year, approximately 300 to 500 million malaria infections lead to over one million deaths. In many endemic countries, malaria is responsible for economic stagnation, lowering the annual economic growth in some regions by up to 1.5%.5,6 Leishmaniasis is a disease caused by protozoa of the genus Leishmania. According to the WHO, 88 countries are affected, with 350 million people at risk; 90% of cases of visceral leishmaniasis occur in India, Sudan, Bangladesh, and Brazil.7 Present chemotherapy for these diseases is inadequate or toxic, or is becoming ineffective due to an increase in resistance.8 The family Rutaceae contains many secondary metabolites, such as alkaloids, flavonoids, coumarins, limonoids, and lignans, with a large spectrum of biological activities.9 Studies showed that acridone alkaloids are compounds with promising activity against P. falciparum,10,11 and they also have antiviral12 and antiproliferative effects on cancer cell lines.13,14 The Asian genera Citrus and Swinglea are members of the Rutaceae and are included in the subfamily Aurantioideae. Citrus species have been investigated and are characterized by possessing acridone alkaloids. These data stimulated an investigation of Swinglea glutinosa (Bl.) Merr. in a search for lead acridones.
Compound 2 was isolated as an amorphous powder; the molecular formula C20H21NO6 was determined on the basis of HRESIMS, which exhibited an [M+H]+ peak at m/z 372.1447. The UV and IR spectra were identical to those of 1; however, some differences were observed in the 1H and 13C NMR spectra. In the 1H NMR spectrum, the characteristic signal of a hydrogen-bonded hydroxy proton at δ 14.42, exchangeable with D2O, suggested the presence of a hydroxyl group.[18-22] The 1H NMR spectrum of 2 showed an ABX-type aromatic spin system at δ 7.15 (1H, t, J 7.8 Hz, H-7), 7.29 (1H, dd, J 7.8, 1.4 Hz, H-8), and 7.80 (1H, dd, J 7.8, 1.4 Hz, H-6), this last proton being deshielded by the 5-carbonyl group. The spectrum also showed one N-methyl group at δ 3.85 and one O-methyl group at δ 3.89. The presence of a hydroxyisopropyldihydrofuran moiety was suggested by an oxymethine proton at δ 4.88 (1H, dd, J 9.4, 7.8 Hz, H-2), methylene protons as two dd in an AB system at δ 3.20 (1H, dd, J 15.5, 7.8 Hz, H-3a) and 3.26 (1H, dd, J 15.5, 9.4 Hz, H-3b), and two methyl groups at δ 1.33 (3H, s, H-2') and 1.29 (3H, s, H-3'). The signals at δ 93.0 and 71.5 in the 13C NMR spectrum supported the presence of this substituent in the acridone nucleus.23 Table 2 shows the 1H and 13C NMR data for 2 and the correlations observed in the HMBC.
Compound 3 was isolated as an amorphous powder; the molecular formula was determined as C20H21NO6 by HRESIMS, showing an [M+H]+ peak at m/z 372.1447.
The eleven acridone alkaloids isolated from S. glutinosa were tested for in vitro activity against P. falciparum, T. b. rhodesiense, and L. donovani. An assay with KB cells indicated in vitro cytotoxicity. The results are summarized in Table 4. To facilitate the discussion, IC50 values are denoted IC50T, IC50P, and IC50K for T. b. rhodesiense, P. falciparum, and KB cells, respectively.
Nine out of the eleven acridone alkaloids showed IC50P below 10 µM, four showed IC50T below 10 µM, and none displayed significant activity against L. donovani.
Related acridone alkaloids from Thamnosma rhodesica (Bak. f.) showed activity against promastigote and amastigote forms of Leishmania major. These alkaloids have a methyl 2,3-dihydroxypropanoate chain at C-3 and C-4,23 indicating that the substitution pattern on the acridone skeleton is important for activity in this class of compounds. According to the data, compound 5, with one prenyl group at C-2, was the most active against P. falciparum, with an IC50P of 0.3 µM. Comparison of 5, 1 (IC50P 2.6 µM), and 4 (IC50P 2.6 µM) indicates that the second prenyl group at C-8 was responsible for reducing the activity of this series. This was also observed by Weniger et al.,10 who assayed four alkaloids from S. glutinosa against a Nigerian chloroquine-sensitive strain of P. falciparum. Analysis of the results for compounds 2, 3, 9, 10, and 11 suggests that the presence of a pyran ring is important for activity against P. falciparum, and the position of this group, angular pyrano[2,3-c] (9) or linear pyrano[3,2-b] (10), did not alter the IC50P results. The activity against P. falciparum observed for 6, 7, and 8 (IC50P 8.9, 29.9, and 6.1 µM, respectively) shows that the presence of an O-methyl group at C-2 improves the activity.
Of the 11 alkaloids tested against T. b. rhodesiense, compound 9 was the most active, with an IC50T of 1.0 µM. The cytotoxicity of alkaloids 1-3 to cancer cells was described in an earlier paper;30 however, here we disclose for the first time their isolation, spectral data, and structure elucidation.
General
Optical rotations were measured using a Perkin Elmer polarimeter. IR spectra were recorded on a Bomem M-B Series spectrophotometer. UV absorptions were recorded using a Varian 500 SCAN UV-Vis-NIR spectrophotometer. 1H and 13C NMR data were recorded on Bruker ARX-200 and Bruker DRX-400 spectrometers. Spectra were recorded in acetone-d6 and DMSO-d6 with TMS as internal standard. All 2D NMR data were recorded at 400 MHz (Bruker DRX-400), with HSQC optimized for J 145 Hz and HMBC for J 8 Hz. HRMS data were recorded on a Micromass Q-Tof (QqTOF) spectrometer. Column chromatography was performed on silica gel 60 (Merck) and Sephadex LH-20 (Pharmacia). Preparative HPLC was performed on a Shodex Asahipak GS-310 2G column. TLC was carried out using Merck aluminum-backed silica gel 60 F254 plates.
Plant material
Leaves, stem, and root bark were collected in Campinas (SP) at the Instituto Agronômico de Campinas and dried in the shade. The plant was identified by Prof. Dr. Maria Inês Salgado. A voucher specimen is deposited at the Herbarium of the Departamento de Botânica of the Universidade Federal de São Carlos (HUFSCar) under number 7110.
Biological assays
Stock solutions of the compounds, plus control drugs, were prepared at a concentration of 20 mg/mL in DMSO (Sigma, UK) and diluted to appropriate concentrations prior to the assays. IC50 values were calculated with MSXLFIT (IDBS, UK).
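MSXLFIT is proprietary, so its exact routine is not reproduced here; IC50 values from a dilution series are commonly estimated with a four-parameter logistic (Hill) fit. A minimal Python sketch with invented response data follows, for illustration only.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(c, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response model."""
    return bottom + (top - bottom) / (1.0 + (c / ic50) ** hill)

# Invented threefold dilution series (µM) and % of untreated control
conc = 30.0 / 3.0 ** np.arange(8)                     # 30, 10, 3.3, ...
resp = np.array([8, 12, 20, 35, 55, 75, 90, 97.0])    # growth (%)

p0 = [resp.min(), resp.max(), np.median(conc), 1.0]   # rough initial guess
popt, _ = curve_fit(four_pl, conc, resp, p0=p0, maxfev=10_000)
print(f"IC50 = {popt[2]:.2f} µM (Hill slope = {popt[3]:.2f})")
```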
P. falciparum
The chloroquine-sensitive P. falciparum strain 3D7 was maintained in human A+ erythrocytes in RPMI 1640 medium (Sigma, UK) supplemented with Albumax II at 37 °C in a 5% CO2-air mixture. Asynchronous intraerythrocytic cultures of P. falciparum (65-75% ring stage) were set up as above, with 1% parasitemia and 2.5% hematocrit, in triplicate in 100 µL of medium in 96-well, flat-bottomed Microtest III tissue plates. Drugs were added in a threefold dilution series, and cultures were incubated for a total of 48 h at 37 °C in a 5% CO2-air mixture. After 24 h, [3H]hypoxanthine (0.2 mCi) was added to each well.25,26 At the end of the assay, plates were rapidly freeze-thawed and harvested using a Tomtec Mach III cell harvester (Tomtec, CT) onto a 96-well format filtermat, and Meltilex solid scintillant (both Wallac, Finland) was added prior to reading in a Microbeta 1450 scintillation counter (Wallac, Finland) at 1 min per well.
T. brucei rhodesiense
T. b. rhodesiense STIB900 bloodstream-form trypomastigotes were maintained in HMI-18 medium27 with 15% heat-inactivated fetal calf serum (Harlan Sera Lab, UK) at 37 °C in a 5% CO2-air mixture. Prior to drug exposure, trypomastigotes were washed and resuspended in fresh medium at a concentration of 2 × 10^5 trypomastigotes/mL, and 100 µL of this suspension was added to the drug dilutions. The top concentration for the test compounds was 30 µg/mL. Pentamidine was included as the standard drug. Plates were incubated for 72 h at 37 °C in a 5% CO2-air mixture.28 At 72 h, AlamarBlue was added to the plates. Plates were read after 4-5 h on a Gemini fluorescent plate reader (SoftMax Pro 3.1.1, Molecular Devices, UK) at EX/EM 530/585 nm with a filter cut-off at 550 nm.
L. donovani
L. donovani L82 amastigotes were harvested from an infected hamster (Mesocricetus auratus) spleen and used to infect murine peritoneal exudate macrophages (PEM) at a ratio of 7:1. In brief, infected cells were exposed to drug for a total of 5 days.29 The percentage of infected cells was evaluated microscopically, and the percentage inhibition in comparison with untreated controls was calculated.
Cytotoxicity assays
96-well plates were seeded with KB cells at 4 × 10^4 cells/mL (100 µL per well). Drugs at 300, 30, 3, and 0.3 µg/mL were added in fresh overlay after 24 h, in triplicate at each concentration. Plates were incubated for 72 h at 37 °C in a 5% CO2-air mixture. At 72 h, AlamarBlue was added to the plates. Plates were read after 4-5 h on a Gemini fluorescent plate reader (SoftMax Pro 3.1.1, Molecular Devices, UK) at EX/EM 530/585 nm with a filter cut-off at 550 nm. IC50 values were calculated against the blanks and control samples.
[...] 0 Hz, H-5'' (Z)] were observed, which suggested the presence of two prenyl groups (3-methylbut-2-enyl). Their positions at C-2 and C-8 were confirmed by HMBC. The position of a prenyl group at C-2 was confirmed by the correlation of the methylene protons at δ 3.40 (H-1') with C-1 (δ 158.1), C-2 (δ 109.4), C-3 (δ 156.1), C-2' (δ 123.6), and C-3' (δ 131.4) in the HMBC spectrum. In addition, the correlation of 1-OH at δ 14.42 with C-1 and C-9a (δ 107.6) confirmed that the first prenyl group was attached to C-2. The position of the second prenyl group was confirmed by the correlation of the deshielded methylene protons at δ 4.01 (H-1'') with C-7 (δ 125.4), C-8 (δ 135.0), C-2'' (δ 125.4), and C-3'' (δ 131.2), confirming that the second prenyl group was attached to C-8. The N-methyl protons showed correlations with C-4a and C-5a at δ 140.2 and 139.4, respectively. The O-methyl protons showed a correlation with C-4 at δ 128.8.
Table 3. 1H and 13C NMR and HMBC data for compound 3 (acetone-d6). a Recorded at 100 MHz; b recorded at 400 MHz; * assignments may be exchanged.
Table 4. In vitro activity against P. falciparum 3D7, T. b. rhodesiense STIB900, L. donovani L82, and KB cells. a Mean IC50 values of the test compounds and standard drugs (n = 3, ± σ, where n is the number of tests performed in three series). Drugs used as positive controls: b P. falciparum; c T. brucei rhodesiense; d L. donovani; e toxicity.
"Chemistry",
"Medicine"
] |
Teaching English at Sekolah Agama Rakyat (People's Religious Schools) in Northern Peninsula Malaysia: Methodology Development and Preliminary Observations
This research article is based on a pilot study that we carried out to gain preliminary insights into how English is taught at 'Sekolah Agama Rakyat' ('SAR', literally translated as 'People's Islamic Religious Schools') in a state in northern Peninsula Malaysia. In the process of carrying out the study, we tested data collection instruments that we developed to understand the complexities of English language teaching in this interesting educational milieu. Questionnaires were distributed to 30 English language teachers from three schools to collect data on their educational background and teaching experience. Classroom observations were also carried out in one of the schools to examine whether the classrooms adhere to the general principles of Communicative Language Teaching (CLT), as required by the Malaysian Ministry of Education. Finally, interview sessions were conducted to examine how the schools' management personnel contribute to teachers' performance as a whole. It was found that almost all the teacher participants that we came into contact with were not certified as English as a Second Language practitioners, and some had never received any form of formal teacher training. The classroom observations that we carried out generally show a climate that is not conducive to supporting English language learning. In addition, the interview sessions revealed that SAR teachers rarely attend professional development courses. We hope that these preliminary observations from our pilot study will lead to more research efforts to understand the realities (and complexities) of teaching English within the Malaysian SAR educational context.
This pilot study was conducted to examine how the English language subject is taught at three privately funded SARs in the state of Perak in northern Peninsula Malaysia. It must be mentioned from the outset that Islamic education has played a special role within the Malaysian education system since well before the Federation of Malaya's independence from British colonial rule in 1957 (see Adnan & Smith, 2001; Adnan, 2001). The history of Malaysia's brand of Islamic education, and the establishment of religious schools on a large scale that integrate academic and religious subjects for the Malay majority, are topics that have been well documented by scholars (Adnan, 2013).
As Harper (1999) notes, "in 1953 there were only 26,215 Malays in English schools ... a quarter of the total pupils" (p. 234). At the same time, although quite small in number, Malay schools or 'sekolah rakyat/Melayu' assisted rural Malays in gaining access to formal education. Even if these schools were faced with problems like shortages of teachers and funds, the setting up of these schools, mostly by members of local communities, proved that the Malays were open to learning and not insular (Melebek & Moain, 2006). Another avenue for young Malays to learn came in the form of Islamic religious schools or traditional 'sekolah pondok'. Although initially perceived as less formally organised compared to government-funded schools, Islamic schooling also provided opportunities for the Malays to access education, a practice that continues in Malaysia today (Rosnani, 1996). At present, according to the Malaysian Advisory Board for the Coordination of Islamic Education or 'Lembaga Penyelaras Pendidikan Agama Islam Malaysia' (LEPAI, 1998, as cited in Ahmad Kilani, 2003), there are three types of schools that are allowed to subscribe to the Islamic Education curriculum. The first is called National Islamic Secondary Schools ('SMKA' or 'Sekolah Menengah Kebangsaan Agama'). These schools are under the direct control of the Malaysian Ministry of Education at the federal level. The second is called State Islamic Schools ('SAN' or 'Sekolah Agama Negeri'). These schools are fully managed by the Departments of Religious Affairs of the different Malaysian states. The third type of school is the Sekolah Agama Rakyat (SAR, or literally translated, People's Islamic Religious Schools). These schools are unique as they are operated by non-governmental organizations and other non-profit-making entities.
Within the Malaysian education system, all schools are expected to fully subscribe to the national academic curriculum that is provided by the Malaysian Ministry of Education (Adnan, 2013). This is to allow students to sit for national standardized examinations such as the Lower Secondary Evaluation (PT3), the Malaysian Certificate of Education (SPM), and the Malaysian High School Certificate (STPM).
English language performance in Islamic Religious Schools
The 'problem' of English language teaching and learning within Malaysian borders is not a new topic, nor will it be easy to find a solution to the perennial Malaysian 'English language dilemma' (for further discussion, see Adnan, 2005 and 2012). Although the Malaysian government has instituted many policies to support the teaching and learning of English, even to the point of using the mass media and popular media as English teaching 'tools' (see Adnan, 2010), there seem to be many institutional and socioeconomic problems that hinder these positive efforts. Therefore, it is not surprising that lately certain quarters within Malaysian society have raised the issue of the moderate performance in the English language subject in SARs (within the national standardized examinations framework), fuelling the debate further. While there is no doubt that a small minority of SAR students have been able to show commendable performance in the said subject, as a whole, the English language performance of students within this educational setting is declining rapidly compared to other schools within the Malaysian system.
According to Ahmad Zabidi (2005), a number of SARs have very limited facilities in terms of teaching materials, laboratories, and libraries. In addition, some of the teachers in these schools do not possess basic teacher training, and they are not able to attend teaching courses that could elevate their professionalism. Most of these schools also do not offer promotion and permanent service schemes for their teachers, or the opportunity to undergo tertiary training and pursue university-level education. A number of Sekolah Agama Rakyat are also plagued by insufficient funding and poor safety maintenance records (see Sufean Hussin, 2005; Zaharah Hussin, 2005).
The Ministry of Education, through its Educational Training and Research Division, relates the poor academic performance of students in SARs to the lack of quality teachers, slow infrastructural development, and ineffective management (Ministry of Education, Malaysia, 1989). In its 1989 report on basic education in Malaysia, the ministry stated that 90 percent of teachers in 49 randomly selected SARs were without professional qualification and training, 73.2 percent did not graduate from university or college, and 26.5 percent were merely SPM-educated secondary school leavers. Analysing SPM results in 2002, only three out of ten SAR students passed all their academic subjects, compared to six out of ten students from the national school system. Strangely enough, only around 10 percent of SAR students scored an 'A' grade in Islamic Education, a subject in which they were expected to shine (for further discussion, see Ministry of Education, Malaysia, 2014).
Table 1 below outlines the results of the English language subject in the Malaysian Certificate of Education (SPM) recorded in nine SARs in the state of Perak, Malaysia, from 2009 to 2012 ('n.a.' in the table denotes data not available at the time of going to press). This national-level examination is the Malaysian equivalent of the GCE/GCSE, and it is the school-leaving examination undertaken by all secondary-level students in the country.
Table 1. SPM English language performance in nine SARs within the state of Perak, Malaysia
From the table, we can see that there is an overall decline in the performance of SAR students in the English language subject at the SPM (Malaysian Certificate of Education) level. This is a cause for concern, given that these secondary school-leavers are expected to continue their studies at the tertiary level, where English continues to be taught and used. Questions arise as to their competence as tertiary-level students and their readiness to study at a higher level when they failed to demonstrate their English language abilities at a lower (i.e., secondary) level.
Guiding questions for this pilot research
The above factors compelled us to conduct a pilot study to identify possible causes that contribute to the weak performance in the English language subject by SAR students. Our goal is to later carry out large-scale fieldwork to see how English is taught within these schools and how the schools' management affects the performance in the English language subject, as measured through the national standardized examinations framework and school-level assessment regimes. It is hoped that our effort will shed some light on improving the English language teaching and learning situation within these schools. Towards these aims, we operationalised three guiding/research questions. First: How is English taught at three randomly selected Sekolah Agama Rakyat (SAR or People's Islamic Religious Schools) in the state of Perak, Malaysia?
Second: Are English language teachers in these schools equipped with professional knowledge on English language teaching and learning?
Third: How does the schools' management impact upon the overall teaching performance of English language teachers within this educational context?
Research methods
With the primary aims of executing a pilot study on Sekolah Agama Rakyat and gauging English language teachers' professional knowledge and experiences in teaching English within this educational context, we adopted a mixed-methods design for this study. We worked with teacher participants from three secondary-level SARs in a state in northern Peninsula Malaysia (i.e., the state of Perak). Three types of data collection instruments were employed for our pilot study:
1. Classroom observations
2. Questionnaires
3. Semi-structured interviews
The study that we carried out involved all English language teachers who, at that time, were teaching in three privately funded SARs in Perak. General details regarding the schools are provided in Table 2 below.
Classroom observations
The observation checklist below was used to examine the pre-teaching, while-teaching, and post-teaching activities of teachers in the three schools. We outlined several instructional elements to be assessed and presented them in the form of an observation checklist, as seen in Figure 1.
Questionnaires
The second instrument employed in this pilot study was a standardised questionnaire. We constructed this instrument so that it could inductively shed some light on the second research question: "Are English language teachers in these schools equipped with professional knowledge on English language teaching and learning?" Following that, the actual survey items were constructed and categorized into three broad categories, namely demographic data (Part A), professional development (Part B), and teacher readiness (Part C). Figure 2 is an abridged version of our second data collection instrument.
Methodology development and preliminary observations
At the present time, we have managed to fully pilot the survey instrument and have also carried out a number of classroom observations and semi-structured interview sessions. As for the questionnaires, all 30 teacher participants in the three research sites completed and returned our survey forms. We met each one of them personally to discuss the questionnaire from the outset and then collected the forms from them when they were ready. The Cronbach's alpha values for the questionnaire are as follows.
For Part B, which relates to the subject knowledge and professional development of the teacher participants, Cronbach's alpha was .974 (20 items). For Part C, which relates to the readiness and experience of the teacher participants to teach in the research settings, Cronbach's alpha was .892 (16 items).
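For reference, Cronbach's alpha is alpha = k/(k - 1) * (1 - sum of item variances / variance of the summed scores), for k items. A minimal Python sketch of the computation follows; the 30 x 20 synthetic item matrix merely mimics the Part B layout and is not the authors' data.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    X = np.asarray(scores, dtype=float)
    k = X.shape[1]
    item_var_sum = X.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = X.sum(axis=1).var(ddof=1)        # variance of summed scores
    return k / (k - 1) * (1.0 - item_var_sum / total_var)

# Synthetic 30 respondents x 20 Likert-type items (Part B layout)
rng = np.random.default_rng(2)
latent = rng.normal(size=(30, 1))                # shared trait drives items
items = np.clip(np.round(3 + latent + 0.5 * rng.normal(size=(30, 20))), 1, 5)
print(f"alpha = {cronbach_alpha(items):.3f}")
```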
In terms of the population make-up within the research sites, we found that all 30 respondents had different educational experiences, ranging from the Malaysian Certificate of Education (SPM) to degree-level qualifications. Interestingly for us, even though all three schools have set the SPM certificate as the minimum qualification for English teachers, the schools did not explicitly require prospective English teachers to have any kind of formal training in English language teaching and learning. Out of the 30 teacher participants in our pilot study, only one was qualified as a teacher of English as a Second Language (ESL). The other 29 'English' teachers were also first-degree holders, albeit with specialisations in areas that are not related to English language teaching and learning. In fact, all the teacher participants in our pilot study admitted to having low to moderate levels of proficiency in the English language. In addition, nearly all of them had never attended any kind of formal English language teaching and learning course to raise their professionalism in the last two years to date.
This dire reality was reflected in our classroom observations. In one observation session that typifies the style of teaching and learning of English in these schools, we observed a class that ran for a full hour (from 2 p.m. to 3 p.m.). Our observation showed that this particular teacher did not 'teach English' per se; instead, she spent the entire hour teaching her students techniques for answering English language exam questions. At the start of the class, no induction activities were done to stimulate students' interest. Then, for the duration of that period, about 90% of 'teacher talk' was in the students' first language, which is 'Bahasa Melayu' or the Malay language. Moreover, the teaching style of the teacher was quite traditionally teacher-centred, and her students were not given the time and space to interact in the target language (i.e., English). Nevertheless, we are both trying to remain optimistic for our actual research effort. Perhaps further longitudinal observations will show positive aspects of English teaching and learning within the research sites.
With reference to the interviews, the preliminary session that we conducted with this particular teacher revealed that she strongly felt that she was not fully proficient in the subject she had been asked to teach. She also acknowledged that the use of the Malay language was extremely important as a way to ensure that her students understood what she was teaching. She reasoned that her students all come from a background in religious education that stresses Arabic language proficiency more than English. The teacher also told us that it was quite difficult for English teachers in schools like hers to get the opportunity to improve their abilities to teach English, let alone attend professional-level English language teaching courses, due to financial constraints and negative perceptions of this international language among administrative personnel. In the end, she accepted that these factors are likely to have contributed to the overall less-than-average quality of her teaching and that of other colleagues in her department.
Preliminary conclusions and tentative recommendations
We believe that it is important for English language teachers to have adequate knowledge about the teaching of English, no matter where they are teaching. English teachers must still try to equip themselves with adequate skills to teach the target language because they are role models for their students, who will try their best to emulate them.
To master English, students should be exposed to the target language as much as possible. Hopefully, this will enhance their receptive skills (i.e., listening and reading). However, the opposite is happening in Malaysian Sekolah Agama Rakyat or People's Islamic Religious Schools, where English is being taught almost exclusively in the mother tongue of the students.
Students in SARs should be given the opportunity to participate actively in the target language to improve their productive skills too (i.e., speaking and writing). Teachers in these schools should strive to create interactive teacher-student and student-student 'learning moments' that will give their students the chances they desperately need for much-needed practice. Again, on the contrary, English teachers within this educational context prefer to teach in the traditional manner from the front of the classroom. This might hinder the promotion of English and even be counterproductive in the long run.
At the same time, as mentioned in the previous section, we remain optimistic that changes can happen in Malaysian SARs, given that there are still a number of privately run SARs that have managed to record total passes in the SPM examinations every year, including for the English language paper. Moreover, given that ours is a pilot effort to learn
Table 2. Basic details of the three sites of research that we chose.

Figure 2 (abridged questionnaire items): Years of teaching experience / Previous schools taught / Educational background / Teaching certification / Language proficiency level / Other languages spoken besides English / Reasons for choosing English teaching as a career / Formal qualifications required by your institution to teach English / Hours of teaching per week / Number of students / Hours of English language teaching training received.
"Education",
"Linguistics"
] |
Cannabinoid CB2 Receptor Functional Variation (Q63R) is Associated with COVID-19 Severity: from Human Study to Molecular Docking
Background: Evidence supports a role for host genetic diversity in the variation of the clinical course of coronavirus disease 2019 (COVID-19). Variation in the cannabinoid CB2 receptor gene (CNR2) could affect the regulatory actions of endocannabinoids on the immune system, resulting in an increased risk of various inflammatory diseases. The present study investigated the relationship between the CNR2 rs35761398 (Q63R) functional variation and COVID-19 severity. Results: A total of 200 Iranian COVID-19 patients (100 expired and 100 discharged) were enrolled in the study and genotyped through a TaqMan assay. The co-dominant, dominant, recessive, over-dominant, and additive inheritance models were analyzed using SNPStats software. In silico molecular docking was also performed to simulate the effects of the Q63R variation on CB2 binding with a ligand and with G-protein. A significant difference in the Q63R allele and genotype distributions was found between COVID-19 expired and discharged patients in the co-dominant (OR: 3.33, 95% CI: 1.25-8.88, p = 0.043), recessive (OR: 2.92, 95% CI: 1.16-7.33, p = 0.017), and additive inheritance (OR: 1.62, 95% CI: 1.06-2.48, p = 0.025) models. The molecular docking results showed that the predicted structure of the mutant CB2 (63R type) could not bind to G-protein in the correct position. Conclusions: The data imply the involvement of the CNR2 gene in the severity of COVID-19 in Iranian patients. Identification of genes related to susceptibility and severity of COVID-19 may lead to specific targets for drug repurposing or development. Abbreviations: endocannabinoid, EC; cannabinoid receptor 2, CB2; cannabinoid CB2 receptor gene, CNR2; reverse transcription-polymerase chain reaction, RT-PCR; Hardy-Weinberg equilibrium, HWE; 3-dimensional, 3D; protein data bank, PDB; odds ratio, OR; confidence interval, CI; Akaike information criterion, AIC; root-mean-square deviation, RMSD; angiotensin-converting enzyme 2, ACE2; respiratory syncytial virus, RSV; G-protein-coupled receptors, GPCRs.
Background
Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), a newly emerging virus, causes mild-to-severe respiratory disease, which has been named coronavirus disease 2019 (COVID-19) (1,2). All people are susceptible to infection with the virus, but there is considerable variation in the disease course and outcome among infected individuals (3). While many infected cases do not exhibit any symptoms, others proceed to develop COVID-19; however, severe illness and death occur only in a small minority of patients (4). Although our understanding of SARS-CoV-2 and COVID-19 is still in its infancy, there is now strong evidence supporting the role of host genetic diversity, alongside other host, viral, and environmental factors, in the variation of the clinical course (5-12). Host genetic diversity could dictate the clinical response to respiratory viruses through susceptibility to viral infection and propensity to develop harmful pulmonary inflammation (13). Finding a relationship between host genetics and the clinical outcome of SARS-CoV-2 infection may be necessary for identifying high-risk individuals.
The endocannabinoid (EC) system is a biological system composed of endogenous cannabinoids and their respective receptors, CB1 and CB2 (14). The system has been identified as a critical endogenous regulator of immune system homeostasis due to its effects on immune cell development, migration, proliferation, and effector functions (15). Cannabinoids have been proposed as promising immunomodulators to reduce SARS-CoV-2 immunopathology (16). Variations in the cannabinoid CB2 receptor gene (CNR2) could affect intracellular signaling and reduce EC function, which has been associated with an unbalanced immune response and an increased risk of various inflammatory diseases (17-23). The CB2-Q63R polymorphism is a missense mutation of the second and third bases at codon 63 of the CNR2 gene, which leads to a Q/R substitution, causing a different polarization state of the protein (24). This variation has been shown to affect the response of CB2 to cannabinoids and to differently modulate the EC-induced inhibition of lymphocyte proliferation (25). While evidence indicated that the mutation does not affect receptor-ligand binding (24), the exact mechanism behind this action is still unknown.
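The codon-level change can be illustrated briefly. The sketch below assumes the commonly reported CAA (Gln) to CGG (Arg) substitution for rs35761398; the specific codons are an assumption for illustration, as the text only states that the second and third bases of codon 63 are substituted.

```python
# Q63R at the codon level; CAA -> CGG is an assumed illustration,
# consistent with a change of the second and third codon bases.
CODON_TABLE = {"CAA": "Q", "CAG": "Q", "CGA": "R", "CGG": "R"}

wild_type, variant = "CAA", "CGG"
changed = [i + 1 for i, (a, b) in enumerate(zip(wild_type, variant)) if a != b]
print(f"{wild_type} ({CODON_TABLE[wild_type]}) -> "
      f"{variant} ({CODON_TABLE[variant]}), bases changed: {changed}")
# -> CAA (Q) -> CGG (R), bases changed: [2, 3]
```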
Focusing on the immunopathogenesis of SARS-CoV-2 and the effects of ECs on the immune system, here we describe how variability in the CNR2 gene could conceivably explain variability in the COVID-19 clinical phenotype. In addition, in silico molecular docking was performed to simulate the effects of the CB2-Q63R variation on receptor-ligand and receptor-G-protein interactions. The data imply the involvement of the CNR2 gene in the severity of COVID-19 in Iranian patients.
Human study
All patients were confirmed COVID-19 cases hospitalized in the central hospital for COVID-19 patients in Gorgan city (Sayyad Medical and Educational Center). Details of the demographic, gender, age, and clinical data of all cases are presented in Table 1. The difference in age distribution between expired patients (mean, 62.08 years) and discharged patients (mean, 54.45 years) was significant (p < 0.05). Moreover, the age distributions of female (mean, 63.12 years) and male (mean, 61.04 years) expired patients differed significantly from those of the discharged subjects (female mean, 55.64 years; male mean, 53.26 years) (p < 0.05). Of all patients enrolled in the study, the most frequently observed symptoms were dyspnea (66.5%), cough (66%), fever or chills (59.9%), sore throat (20.5%), myalgia (16.5%), ageusia and anosmia (14.5%), nausea or vomiting (10.5%), diarrhea (7%), headache (6%), chest pain (5%), and fatigue (3.5%). The allelic frequencies and genotype distributions in expired and discharged patients are shown in the results table.
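The inheritance-model results reported in the abstract are odds ratios with 95% confidence intervals, as produced by SNPStats. A minimal Python sketch of the underlying 2x2 computation for, e.g., a recessive model (RR vs. QQ+QR) is shown below; the genotype counts are hypothetical placeholders, not the study's data.

```python
import numpy as np

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI for a 2x2 table:
    a = exposed cases, b = unexposed cases,
    c = exposed controls, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    lo, hi = np.exp(np.log(or_) - z * se), np.exp(np.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: RR carriers among expired vs. discharged patients
or_, lo, hi = odds_ratio_ci(a=28, b=72, c=12, d=88)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```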
Molecular docking
The I-TASSER server predicted five models for the submitted sequences, and we selected the best predicted 3D structure of mutant CB2 (63R), with a c-score of -0.9 and a TM-score of 0.74. The c-score reflects the quality of a model predicted by I-TASSER; a higher c-score signifies a higher-confidence prediction. The TM-score measures the similarity between the predicted model and the template (6PT0); a TM-score above 0.5 indicates that the predicted model has the correct topology. The best homology model, with 100% confidence and 94% identity, was selected. The 3D structures of the predicted 63R models and wild-type CB2 (PDB ID 6PT0, chain R) are shown in Figure 1. Ramachandran plots are shown in Figure 2.
Molecular docking was performed to examine the binding affinity and possible binding pocket of 2-AG with both the built CB2-63R models and wild-type CB2-Q63. The binding energies between 2-AG and CB2-Q63, 2-AG and the I-TASSER-built model, and 2-AG and the Phyre2-built model were -7.5, -6.9, and -7.0 kcal/mol, respectively. Lower binding energy indicates better interaction, higher affinity, and greater stability between ligand and protein.
The binding position, hydrogen bonds, alkyl bonds, van der Waals forces, and binding residues are shown in Figure 3.
The HDOCK server was used for protein-protein docking to assess the possibility of interaction between CB2 and G-protein. The docking results for wild-type CB2 and G-protein were a docking score of -452.97 and a root-mean-square deviation (RMSD) of 1.06 Å, with a binding affinity of -13.3 kcal/mol. These data showed the correct binding position of G-protein and CB2-Q63 after docking compared with the CB2 structure coupled with G-protein (PDB ID: 6PT0). The docking results for the I-TASSER model and G-protein were a docking score of -350.14 with a ligand RMSD of 80.32 Å, and for the Phyre2 model and G-protein a docking score of -322.83 with a ligand RMSD of 192.66 Å. These results show that neither predicted structure of mutant CB2 can bind to G-protein in the correct position. The interactions of both the CB2-Q63 and CB2-63R models with G-protein are shown in Figure 4.
Discussion
As SARS-CoV-2 is a newly emerging virus, all people are susceptible to infection, though the nature and severity of COVID-19 vary significantly among cases. Notably, reported disease burden and case fatality rates differ considerably from one country to another (36). The exact influence of host genetic makeup on this variation has remained mostly unknown. The importance of the contribution of host genetics to differential responses to SARS-CoV-2 is highlighted by a modeling study revealing that 50% of the variance of the 'predicted COVID-19' phenotype is due to genetic factors (6). For example, studies reported that the variable expression pattern of, and genetic variation in, angiotensin-converting enzyme 2 (ACE2), a functional receptor for SARS-CoV-2, might be associated with susceptibility to infection and severity of the disease (37). Notably, a recent genome-wide association study revealed that critical illness in COVID-19 is related to host antiviral defense mechanisms (IFNAR2 and OAS genes) and mediators of inflammatory organ damage (DPP9, TYK2, and CCR2) (38). Thus, given the immunopathology of SARS-CoV-2, genes related to immune responses are of particular interest for our understanding of predisposition to severe COVID-19.
The EC system has received much attention due to its regulatory roles in the immune response and its effects on the progression of immune-associated diseases (39). There is evidence supporting the system's specific involvement in respiratory virus-associated immunopathology and in modulating inflammation following infection. We previously provided evidence that the EC system plays an essential role during respiratory syncytial virus (RSV) infection in humans and mice (17). Other studies also showed that the lack of cannabinoid receptors can increase inflammation and tissue damage following influenza virus infection, and that their activation can impair virus-induced immune responses (40-42). Such data support the idea of using cannabinoids as a potential therapeutic approach in COVID-19 patients (16,43). The rationale for the current study was based on SARS-CoV-2 immunopathology, together with the available data on the immune regulatory role of CB2 signaling.
The CB2-Q63R polymorphism is caused by two missense mutations in the CNR2 gene changing CAA to CGG, which lead to the substitution of an uncharged polar amino acid (glutamine) with a positively charged polar amino acid (arginine) at position 63, located in the first intracellular loop of the CB2 receptor (24). Studies indicate that the CB2-63R variant is less functional than 63Q in the modulation of immune responses, especially T-cell proliferation (25). While the exact mechanism is unknown, one study reported that the signal intensity caused by 63R activation is relatively weaker than that caused by 63Q activation (44). The present study reports, for the first time, an association between the CB2 receptor and COVID-19 severity. A significant difference in Q63R allele and genotype distributions was found between COVID-19 expired and discharged patients (Table 2). The co-dominant, recessive, and additive inheritance models showed a significant association between Q63R and COVID-19 severity (Table 3). According to the co-dominant model, RR subjects had a risk of developing severe COVID-19 more than three times that of QQ subjects.
Associations between the CB2-Q63R variation and autoimmune conditions such as thrombocytopenic purpura (45), celiac disease (23), juvenile idiopathic arthritis (19), inflammatory bowel disease (46), and rheumatoid arthritis (22) have been reported. Data reported by the current authors have implied the involvement of the Q63R variation in susceptibility to multiple sclerosis in Iranian patients (18). Interestingly, our previous study showed that the inflammatory response to the virus is more strongly inhibited in cases with QQ variants, allowing the virus to replicate and induce severe infection (17). In the case of SARS-CoV-2 infection, while a robust innate immune response is essential to eliminate viral pathogens, a prolonged or dysregulated/exuberant response can damage the respiratory tract (47). The current results are consistent with previous studies showing reduced EC-induced modulation of the immune system in human subjects carrying the RR variant of CB2 compared with those having the QQ variant (2,19,23,45,46).
Importantly, experimental data from a previous competition ligand-binding assay showed that the binding affinity of 2-AG for CB2-63R is similar to that for CB2-Q63, but CB2-63R had a significantly lower maximum response after binding 2-AG compared with the Q63 type (24). Our molecular docking results confirm that CB2-63R produces a weaker response to ligand binding. The biological effects of cannabinoids are mediated through the activation of G-protein-coupled cannabinoid receptors (48). G-proteins act as adaptors that link G-protein-coupled receptors (GPCRs) to other signaling and regulatory proteins to operate or modulate intracellular signaling pathways (15). The molecular docking results showed that the predicted structures of mutant CB2 could not bind to G-protein in the correct position, resulting in EC signaling dysfunction. These data are consistent with the finding of Wang et al. that CB2-63R induces lower signal transduction than CB2-63Q in human primary T-cells (44). The CB2 receptor is predominantly expressed in immune and immune-derived cells, and its activation indirectly affects viral infections by altering host immune responses, particularly inflammation, along different signaling pathways (49,50).
Conclusion
Data from the current study point toward host genetic involvement in the severity of COVID-19. Host genetics affects the balance of immune responses during any viral immunopathogenesis, leading to different clinical phenotypes. The results indicate that people with the CB2-63R variant are more prone to developing severe COVID-19. Considering the potential of this polymorphism as a biomarker of COVID-19 severity, there is an urgent need to deepen these findings through further studies. Given the sample size limitation and the different genetic backgrounds of various populations, further studies using whole-genome sequencing in large cohorts across multiple populations would be required. Identification of genes related to susceptibility to and severity of COVID-19 may lead to specific targets for drug repurposing or development. We hope that, with great efforts, scientific support, and information sharing, COVID-19 will soon be overcome.
Human study
A total of 200 Iranian COVID-19 patients were included in this study. The case group consisted of 100 expired cases (50 women and 50 men) with a mean age of 62.08 years, and the control group consisted of 100 discharged cases (50 women and 50 men) with a mean age of 54.45 years. All COVID-19 patients were confirmed by a real-time RT-PCR assay targeting the SARS-CoV-2 nucleoprotein (N) and ORF1ab genes (Pishtazteb, Iran). A clinical questionnaire developed for this study was used to collect data from all patients, including gender, age, and clinical symptoms at admission (Additional file 1). Nasopharyngeal samples were collected from all patients, who were divided into two groups according to their disease outcome. All subjects in this study were from Golestan Province, had the same geographical origin, and were unrelated.
Genomic DNA was extracted from the collected nasopharyngeal samples using a DNA extraction kit following the manufacturer's instructions (GeneAll, South Korea). Extracted samples were genotyped for the CNR2 rs35761398 (Q63R) variant using a TaqMan assay with commercial primers/probes (Thermo Fisher, USA). The reaction conditions were as follows: 95 °C for 4 min, followed by 50 cycles of 95 °C for 15 s and 60 °C for 90 s. Both PCR and post-PCR allelic discrimination were performed on an ABI PRISM 7300 system (Applied Biosystems, USA). The genotypes of ten percent of randomly selected samples per group were confirmed by direct PCR sequencing as described previously (18).
The demographic and clinical data were analyzed with SPSS 23 software (IBM, Chicago, IL). Hardy-Weinberg equilibrium (HWE) and differences in allele and genotype frequencies were calculated using the SNPStats software (a web tool for the analysis of association studies: http://bioinfo.iconcologia.net/SNPstats) (26). Inheritance models, including co-dominant, dominant, recessive, over-dominant, and log-additive, were also analyzed with SNPStats. The OR was adjusted for age in the logistic regression model. The power of the test was calculated using G*Power 3.1.9.4 software (Universität Kiel, Germany). Odds ratios (OR) and 95% confidence intervals (CI) were calculated, and p-values less than 0.05 were considered statistically significant.
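For readers who want to reproduce the flavor of these computations outside SPSS and SNPStats, the sketch below shows the core association statistics in Python: allele frequency, an HWE chi-square test, and an odds ratio with a Woolf 95% confidence interval. The genotype counts are hypothetical placeholders, not the study's data.

```python
# Illustrative sketch of the core association statistics (allele frequency,
# Hardy-Weinberg equilibrium, odds ratio with 95% CI). Genotype counts below
# are hypothetical placeholders, not the study's data.
import math
from scipy.stats import chi2

def allele_freq(n_QQ, n_QR, n_RR):
    """Frequency of the R allele from genotype counts."""
    n = n_QQ + n_QR + n_RR
    return (2 * n_RR + n_QR) / (2 * n)

def hwe_chi2(n_QQ, n_QR, n_RR):
    """Chi-square test of Hardy-Weinberg equilibrium (1 df)."""
    n = n_QQ + n_QR + n_RR
    q = allele_freq(n_QQ, n_QR, n_RR)            # R allele frequency
    p = 1 - q
    exp = [p * p * n, 2 * p * q * n, q * q * n]  # expected QQ, QR, RR counts
    obs = [n_QQ, n_QR, n_RR]
    stat = sum((o - e) ** 2 / e for o, e in zip(obs, exp))
    return stat, chi2.sf(stat, df=1)

def odds_ratio(exposed_cases, exposed_ctrl, unexposed_cases, unexposed_ctrl):
    """OR and Woolf 95% CI for carrying a risk genotype."""
    or_ = (exposed_cases * unexposed_ctrl) / (exposed_ctrl * unexposed_cases)
    se = math.sqrt(sum(1 / x for x in (exposed_cases, exposed_ctrl,
                                       unexposed_cases, unexposed_ctrl)))
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, (lo, hi)

# Hypothetical counts: RR vs QQ in expired (cases) and discharged (controls)
print(hwe_chi2(40, 45, 15))
print(odds_ratio(25, 10, 30, 40))
```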
Molecular docking
To predict the 3-dimensional (3D) structure of mutant CB2 (63R), the CB2 protein sequence (NCBI Reference Sequence: NP_001832.1), with a change at position 63, was submitted to the I-TASSER server (https://zhanglab.ccmb.med.umich.edu/I-TASSER), which uses a hierarchical approach to protein structure prediction (27). The best model was selected from the output based on the confidence score (c-score). The Phyre2 server was also used to predict the 3D structure of 63R based on homology (http://www.sbg.bio.ic.ac.uk/phyre2/html/page.cgi?id=index) (28). The best model based on confidence and identity was selected as the homology model. Both models were submitted to ModRefiner (https://zhanglab.ccmb.med.umich.edu/ModRefiner) for atomic-level, high-resolution protein structure refinement and energy minimization (29). Ramachandran validation was performed using the MolProbity server (http://molprobity.biochem.duke.edu/) for backbone and structure validation of the built 3D models (30).
Consent for publication
Not applicable.

Figure 3. Visualization of the docking analysis of 2-arachidonoylglycerol (2-AG) with (A) wild-type CB2, (B) the I-TASSER-built mutant CB2 model, and (C) the Phyre2-built mutant CB2 model. The binding position, hydrogen bonds, alkyl bonds, van der Waals forces, pi-sigma and carbon bonds, and binding residues are shown.

| 3,788.4 | 2021-01-25T00:00:00.000 | ["Medicine", "Biology"] |
Multi-Dimensional Cloud Model-Based Assessment and Its Application to the Risk of Supply Chain Financial Companies
The multi-dimensional cloud model is proposed as an extension of the one-dimensional cloud model. It accounts for the ambiguity and stochasticity that characterize complex information situations; the optimized model can thus be used for multiple value classification and ordering, through which the physical and social attributes of objects can be reflected, and it has consequently been widely applied. This paper provides a knowledge graph by reviewing the theoretical research on the multi-dimensional cloud model and its related bibliography, using CiteSpace to visualize the conclusions. In recent years, a multitude of theories and methods have emerged to address the challenges posed by fuzzy and stochastic uncertainty in various domains, such as image segmentation, data mining, prediction techniques, and the comprehensive evaluation of multiple metrics and dimensions using uncertain linguistic variables.
INTRODUCTION
The cloud model has been applied in various domains, including decision-making, pattern recognition, data mining, and expert systems. It allows for the modeling of, and reasoning over, uncertain and imprecise information, enabling more accurate and robust analysis of complex problems. Many concepts in real-world problems need to be described by multiple metrics, i.e., they are multi-attribute, multidimensional problems. The traditional cloud model suffers in the evaluation process: as the size of the data set increases, its operational efficiency decreases, and biased evaluation results may arise when the scales of the evaluation level intervals differ greatly. To solve such problems, a multidimensional cloud model can be considered (Li & Du, 2017).
Further, CiteSpace visualization is used here to present the structure and distributional characteristics of research on the multidimensional cloud model. The CiteSpace information visualization software can reveal the emerging dynamics of a scientific field and its future developments (Chen, 2006), drawing visual analysis charts of author collaboration, research institution collaboration, and keyword co-occurrence in the literature. By analyzing the size and number of nodes in the graph, as well as the density of the connecting lines between nodes, the current research hotspots and future research trends in this field can be identified.
Keywords are clusters of natural-language words with substantial meaning that express the thematic characteristics of an article's content. When reading the literature, locating the keyword section reveals the article's theme, research object, research methodology, and so on; likewise, searching by keywords allows a paper's information to be found and summarized. Thus, the node information in CiteSpace is set to keywords and visualized as a graph; a series of intuitive knowledge graphs is then used to show the hot keywords of the multidimensional cloud model and the direction of their evolution in foreign research. In the keyword co-occurrence analysis graph for multidimensional cloud modeling in Web of Science (WoS) (see Figure 1), intricate solid lines come together at dots (nodes) that indicate how often keywords appear in the literature. The larger the dot, the higher the frequency of the keyword; the thickness of the solid line connecting two dots indicates the strength of the link between the keywords, with a thicker line signifying that the keywords appear together in the same articles more often (Chen, 2016).
Figure 1 demonstrates the co-occurrence of the terms cloud model, cloud computing, multidimensional, multidimensional cloud, and data mining. Other keywords center on the cloud model and spread out in all directions to form a mesh; each node connects through other nodes and extends to groups composed of the multidimensional model, the multidimensional cloud, and the degree of affiliation, with the connections between nodes forming a related whole. Regarding the prominence of nodes, Figure 1 shows that the dots for keywords including cloud model, field framework, big data, cloud computing, and algorithm are significantly larger than those of other keywords, indicating that these keywords appear relatively frequently. In addition, the network connecting these words is intricate and dense, which indicates close connections and leads to the conclusion that these words belong to the hot vocabulary in the current research field of cloud modeling. Next, keyword clustering analysis of the literature obtained from the WoS database visualizes the aggregation of the multidimensional cloud model. Figure 2 shows that the keywords mined from the WoS database are clustered into multiple word clouds, each of which describes a main research direction of the multidimensional cloud model from 2005 to 2022.
According to the above mapping analysis, foreign researchers prefer introducing the cloud model into meteorological prediction, proposing new methods to construct planetary models of persistent sunny-to-cloudy transitions so that the cloud model can mine information from parts of a meteorological region. Fuzziness (or vagueness) results from the imprecise boundaries of fuzzy sets (FSs); nonspecificity (or imprecision) relates to the sizes (i.e., cardinalities) of relevant sets of alternatives. Zhu and Li (2016) considered not only the difference between the membership degree and the non-membership degree but also the hesitation degree. Combining the cloud model with type-2 fuzzy sets to deal with the problem of image segmentation is another important direction of optimization. The cloud model is frequently used in web services to create distributed cloud applications based on efficient quality-of-service awareness techniques. In conclusion, beyond life safety evaluation, the cloud model is used to advance societal progress through scientific and technological research and development. Multidimensional cloud models based on fuzzy mathematics and probability theory have been widely used in natural language processing, data mining, decision analysis, intelligent control, and image processing.
CONCEPTS AND THEORIES OF THE MULTIDIMENSIONAL CLOUD MODEL

Definition 1
Let $U$ be a universal set described by precise numbers, and let $C$ be a qualitative concept related to $U$. If $x \in U$ is a single random realization of the concept $C$, and the determinacy $\mu(x) \in [0,1]$ of $x$ with respect to $C$ is a random number with a stable tendency, then the distribution of $x$ in the universe $U$ is called a cloud model, and each $x$ is defined as a cloud drop (Li et al., 2004). The quantitative value $x$ reflects the randomness of the quantitative value that represents the concept, while $\mu(x)$ reflects the degree of certainty that the quantitative value $x$ is affiliated with the qualitative concept $C$.
Definition 2
The concept $C$ on the quantitative domain $U$ is characterized by three numerical features $(Ex, En, He)$. If $x \in U$ is a single random realization of the concept such that $x \sim N(Ex, {En'}^2)$ with $En' \sim N(En, He^2)$, and the determinacy $\mu(x)$ satisfies

$$\mu(x) = \exp\left(-\frac{(x - Ex)^2}{2{En'}^2}\right),$$

then the distribution of the cloud drops $x$ is called a normal cloud model (Li et al., 2004). The multidimensional normal cloud model is developed from the one-dimensional normal cloud model and can reflect multidimensional qualitative concepts. It is defined as follows.
Definition 3
Let $U$ be an $m$-dimensional universal domain and $C$ a qualitative concept on $U$. The affiliation degree $\mu$ of an element $x = (x_1, \ldots, x_m) \in U$ with respect to $C$ is a random number with a stable tendency: each $x_i \sim N(Ex_i, {En_i'}^2)$ with $En_i' \sim N(En_i, He_i^2)$, and the multidimensional normal cloud affiliation function can be expressed as

$$\mu(x) = \exp\left(-\sum_{i=1}^{m} \frac{(x_i - Ex_i)^2}{2{En_i'}^2}\right),$$
where $Ex_i$ and $En_i$ are the expectation and entropy of concept $C$ in the $i$-th dimension of the universal domain $U$ (Liu et al., 2021). The numerical characteristics of the multidimensional cloud model and the types of cloud generators are as follows.
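As a concrete illustration of Definition 3, the following minimal Python sketch computes the determinacy $\mu(x)$ of a point under an $m$-dimensional normal cloud. The parameter values in the example are arbitrary, chosen only for demonstration.

```python
# Sketch of the determinacy mu(x) of an m-dimensional normal cloud,
# following Definition 3: for each dimension a per-drop entropy En'_i is
# drawn from N(En_i, He_i^2), then mu is the product-form Gaussian.
import numpy as np

def certainty_degree(x, Ex, En, He, rng=np.random.default_rng(0)):
    """x, Ex, En, He are length-m arrays; returns mu(x) in (0, 1]."""
    x, Ex, En, He = map(np.asarray, (x, Ex, En, He))
    En_prime = rng.normal(En, He)                 # En'_i ~ N(En_i, He_i^2)
    return float(np.exp(-np.sum((x - Ex) ** 2 / (2 * En_prime ** 2))))

# Example: a 2-dimensional concept with arbitrary numerical features
print(certainty_degree([2.1, 3.0], Ex=[2, 3], En=[0.5, 0.4], He=[0.05, 0.05]))
```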
Numerical Feature Model
The numerical feature model characterizes the vagueness and randomness of concepts through three numerical features: expectation ($Ex$), entropy ($En$), and super-entropy ($He$).
• Expectation ($Ex$) is the point in the number-field space that best represents the qualitative concept, reflecting the center of the corresponding concept cloud.
• Entropy ($En$) provides a combined measure of the vagueness and probability of a qualitative concept and illustrates the relationship between vagueness and randomness; the number of points it governs indicates the probability of the concept, i.e., its randomness.
• Super-entropy ($He$) is an uncertainty measure of entropy, i.e., the entropy of entropy, which reflects the agglomeration of cloud droplets; its magnitude indirectly represents the dispersion and thickness of the cloud.
The subjectivity of the prototype multidimensional cloud model in determining the numerical features leads to bias in the evaluation results, limiting its application. More and more researchers are engaging in cloud-model-theoretic study and in extending the model by merging it with other decision-making methods applied to real-world problems, including entropy weighting (Wu et al., 2022), the CRITIC method (Demir et al., 2022), the TOPSIS method (Khodamipour & Shahamabad, 2022), the combined assignment method, and the Bayesian network method (Yu et al., 2004). Owing to its comparative advantage in describing the transformation between deterministic and uncertain situations, the multidimensional cloud model provides a new way of thinking to remedy the deficiencies of the multidimensional normal cloud model. In order to determine the numerical characteristics of the cloud model more objectively and to increase the accuracy and reliability of evaluation outcomes, the multidimensional normal cloud model is investigated based on statistical approaches frequently employed in cloud models.
The significance of this paper is to illustrate the advantages and importance of multidimensional cloud models. It is therefore appropriate to turn to a multidimensional cloud model to solve multi-attribute decision-making problems. In this paper, the advantages of multidimensional cloud models over one-dimensional cloud models are synthesized, and several decision-making methods combined with multidimensional cloud models are introduced in detail. One of these methods is then selected and applied to a specific example to demonstrate that the multidimensional cloud model can provide accurate indicator evaluation results when solving multi-attribute decision-making problems.
On the basis of existing research results, this paper summarizes the current research status and development trends of multidimensional cloud model theory, so that readers can establish a basic framework for the multidimensional cloud model system. Secondly, the procedure of decision-making methods based on multidimensional cloud models is summarized. Finally, perspectives on the frontiers of multidimensional cloud modeling are presented. This work provides value for research involving big data and cloud model theory.
The rest of the paper proceeds as follows: Section 2 furnishes concepts and theories related to the multidimensional cloud model; Section 3 introduces the theoretical study of the cloud model; applied research on the cloud model is presented in Section 4; an application of the multidimensional cloud model is illustrated by examining the risk of supply chain financial companies in Section 5; Section 6 discusses frontier issues in multidimensional cloud modeling research, followed by concluding remarks in Section 7.
Cloud Generator
In many practical circumstances, the cloud generation algorithm is called a cloud generator (Ma et al., 2022), and it is divided into the forward cloud generator and the inverse cloud generator (CG). In response to the inability of traditional cloud generation methods to effectively process high-dimensional data, some experts suggest using multidimensional Gaussian cloud generators (MCG). The numerical features of the most basic 1D cloud model are extended to the multidimensional case, as shown in Figure 5.
The inverse cloud generator works in the opposite direction: it transforms quantitative data into the digital features of the cloud by estimating the three numerical characteristics of the cloud model from the cloud drops, for example taking the sample mean $\bar{x}$ of the drops as the estimate of the expectation $Ex$.
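A minimal Python sketch of both generators follows, assuming the standard one-dimensional normal cloud algorithms: forward generation of drops, and the common moment-based backward (inverse) estimation. It is an illustration of the general technique, not the specific MCG implementation referenced above.

```python
import numpy as np

def forward_cloud(Ex, En, He, n, rng=np.random.default_rng(0)):
    """One-dimensional forward normal cloud generator: n drops (x_i, mu_i)."""
    En_prime = rng.normal(En, He, n)         # per-drop entropy En' ~ N(En, He^2)
    x = rng.normal(Ex, np.abs(En_prime))     # drop x ~ N(Ex, En'^2)
    mu = np.exp(-(x - Ex) ** 2 / (2 * En_prime ** 2))
    return x, mu

def backward_cloud(x):
    """Moment-based inverse generator: estimate (Ex, En, He) from drops."""
    Ex = x.mean()
    En = np.sqrt(np.pi / 2) * np.abs(x - Ex).mean()   # first absolute moment
    He = np.sqrt(max(x.var(ddof=1) - En ** 2, 0.0))   # variance decomposition
    return Ex, En, He

drops, _ = forward_cloud(Ex=3.0, En=0.5, He=0.05, n=5000)
print(backward_cloud(drops))   # should recover roughly (3.0, 0.5, 0.05)
```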
THEORETICAL STUDY OF THE CLOUD MODEL
The multidimensional cloud model is widely utilized in numerous fields, such as intelligent control, data mining, system evaluation, signal processing, image processing, and knowledge modeling (Yang et al., 2018). In this section, the theoretical study of cloud modeling is elaborated in terms of how decision-making methods and cloud modeling are combined, the value generated by applying them in practice, and the prospects.
The TOPSIS-Cloud Model

TOPSIS (technique for order preference by similarity to an ideal solution) is a multi-objective decision-making method (Zeng et al., 2020); based on the concept of the relative-closeness coefficient in TOPSIS, auxiliary nonlinear programming models have been constructed to solve MADM problems (Li, 2010). The basic idea is to normalize the original data matrix, find the positive and negative ideal solutions using the TOPSIS method, compute the distance of each alternative to the positive and negative ideal solutions, and rank the alternatives by their relative closeness as the basis for evaluating their advantages and disadvantages, so as to select the optimal one. Chen (2000) studied TOPSIS in a fuzzy environment and proposed a linguistic decision process for solving multi-criteria problems. The TOPSIS-cloud model is a new computational method for handling interval decision information based on the cloud model. It combines the cloud distance measurement algorithm with TOPSIS to address the uncertainty and stochasticity of the information contained in evaluation schemes (Lu et al., 2022), giving a method for determining the positive and negative ideal clouds together with a distance measurement formula for the cloud model; the resulting cloud TOPSIS method has been applied in many fields, such as engineering design, economic management, and the military. The main steps are as follows.

Step 1: Transforming the Traditional Evaluation Decision Matrix Into a Cloud Model Evaluation Decision Matrix
The matrix given by the decision maker is transformed into a cloud evaluation decision matrix, where $a_{ij}^{L}$ and $a_{ij}^{U}$ are the minimum and maximum boundary values of the interval, respectively, and the three numerical parameters of each cloud in the evaluation decision matrix are defined from them.

Step 2: Cloud Distance

Following Chen (2000) and Zhou et al. (2018), arbitrarily take two clouds $C_i(Ex_i, En_i, He_i)$ and $C_j(Ex_j, En_j, He_j)$. The Hamming, Euclidean, and Manhattan distances between $C_i$ and $C_j$ are then computed from the differences of their numerical characteristics, where the variable $r$ measures the uncertainty of a cloud as a function of its entropy $En$ and super-entropy $He$.
Step 3: Determining the Positive and Negative Ideal Solutions

According to the cloud model matrix, the positive and negative ideal solutions of the schemes under the different attribute indicators are determined.

Step 4: Calculating Objective Weight Values

If the weight information $w_i$ is known, the integrated cloud model is obtained by weighting against the positive and negative ideal schemes. If the weight information $w_i$ is unknown, the objective weights are determined using the idea of minimizing the cloud uncertainty $r$, and the corresponding objective programming function is constructed.
By the method of constructing the Lagrangian function, the objective weight values are obtained.

Step 5: Calculating the Relative Cloud Distance

Depending on the case, the distance formulas above are used to solve the weighted integrated cloud model distance values $d(C_i, C^{+})$ and $d(C_i, C^{-})$ between each scheme and the positive and negative ideal schemes, and the relative cloud distance is calculated (Yu et al., 2019).
A larger $P_i$ indicates a better solution; the ranking of the alternatives is computed from these values, and the best alternative is selected.
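The sketch below strings these steps together in Python under stated simplifications: the interval-to-cloud conversion uses the common $Ex = (a^L + a^U)/2$, $En = (a^U - a^L)/6$ convention, a plain Euclidean distance over the $(Ex, En, He)$ triples stands in for the cloud distance of Lu et al. (2022), the ideal clouds are taken element-wise, and the weights and intervals are hypothetical.

```python
import numpy as np

# Each alternative i and criterion j is a cloud (Ex, En, He) built from the
# interval [aL, aU]: Ex=(aL+aU)/2, En=(aU-aL)/6, He=k (a small constant).
def interval_to_cloud(aL, aU, k=0.01):
    return np.array([(aL + aU) / 2, (aU - aL) / 6, k])

def cloud_dist(c1, c2):
    # Simple Euclidean distance over the numerical characteristics;
    # stands in for the cloud distance formula of Lu et al. (2022).
    return np.linalg.norm(c1 - c2)

def cloud_topsis(clouds, weights):
    """clouds: (n_alt, n_crit, 3) array; returns relative closeness P_i."""
    pos = clouds.max(axis=0)                 # positive ideal cloud per criterion
    neg = clouds.min(axis=0)                 # negative ideal cloud per criterion
    d_pos = np.array([sum(w * cloud_dist(c, p)
                          for c, p, w in zip(alt, pos, weights))
                      for alt in clouds])
    d_neg = np.array([sum(w * cloud_dist(c, q)
                          for c, q, w in zip(alt, neg, weights))
                      for alt in clouds])
    return d_neg / (d_pos + d_neg)           # larger P_i => better alternative

intervals = [[(2, 4), (3, 5)], [(1, 3), (4, 5)]]   # 2 alternatives, 2 criteria
clouds = np.array([[interval_to_cloud(*ij) for ij in row] for row in intervals])
print(cloud_topsis(clouds, weights=[0.6, 0.4]))
```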
The Bayesian Network-Cloud Model
Bayesian networks are a network topology of directed acyclic graphs, an uncertainty-processing model that simulates the causal relationships in human reasoning, with nodes representing random variables $\{X_1, X_2, \ldots, X_n\}$. If there is only a single arrow between two nodes, one of the nodes is the cause and the other is the effect. The strength of the association between nodes along the connecting lines, i.e., the weights, is represented by conditional probabilities.
Bayesian networks are widely used in real life. A Bayesian-network-based runway risk assessment model for transport aircraft accurately evaluates the levels of various indicators and effectively evaluates the over-wheel-speed risk, helping airlines take reasonable measures to control that risk, which is of great significance for safe operation. A missile status assessment method based on the cloud model and a Bayesian network assesses the state of a missile so that regular repair, testing, and maintenance can be scheduled according to the monitoring results. The Bayesian network model improved with cloud model theory is applied to practical problems through the following main steps.
Step 1: Constructing the Bayesian Network

Let $G = (I, E)$ denote a directed acyclic graph, where $I$ denotes the set of all nodes and $E$ the set of directed edges. The joint probability of the random variables $X = (X_i)_{i \in I}$ is

$$P(X) = \prod_{i \in I} P\big(X_i \mid X_{pa(i)}\big),$$

and $X$ is then called a Bayesian network relative to the directed acyclic graph $G$, where $pa(i)$ denotes the parents of the $i$-th node (Wu et al., 2016).
Step 2: Generating the Integrated Cloud

According to the Bayesian network in cloud computing and the actual situation, the indicator system is established, the indicators are discretized and processed, and the cloud digital features are determined to generate a comprehensive cloud.

Step 3: Network Parameter Learning

Conceptual certainty is computed from the numerical characterization of the cloud model; after the mutual transformation of certainty and probability, the Bayesian network structure is constructed, network parameters are learned, and network inference yields the posterior probabilities of the nodes. Finally, forward assessment and reverse inference are performed, producing the indicator assessment results.
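To make the factorization in Step 1 concrete, here is a minimal Python sketch that evaluates the joint probability of a two-node network; the network structure and conditional probability tables are hypothetical.

```python
# Minimal sketch of the Bayesian-network factorization in Step 1:
# P(X1..Xn) = prod_i P(Xi | parents(Xi)). The CPTs here are hypothetical.
parents = {"Risk": [], "Indicator": ["Risk"]}
cpt = {
    ("Risk", ()): {"high": 0.3, "low": 0.7},
    ("Indicator", ("high",)): {"bad": 0.8, "good": 0.2},
    ("Indicator", ("low",)): {"bad": 0.1, "good": 0.9},
}

def joint(assignment):
    """P(assignment) as the product of node-wise conditional probabilities."""
    p = 1.0
    for node, pa in parents.items():
        pa_vals = tuple(assignment[q] for q in pa)
        p *= cpt[(node, pa_vals)][assignment[node]]
    return p

print(joint({"Risk": "high", "Indicator": "bad"}))   # 0.3 * 0.8 = 0.24
```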
The Combination Weighting Method-Cloud Model
The combination weighting method combines subjective and objective weights to obtain optimal weights. Subjective assignment is based on the decision maker's subjective information, giving relatively appropriate weights according to the importance of the indicators; this approach is effective but highly subjective. Common subjective weighting methods include the Delphi method and the analytic hierarchy process (AHP). Objective weighting cannot reflect the degree of importance that decision makers attach to different indicators, but its weights can be determined from relationships within the original data, giving it a strong theoretical basis; the entropy weighting method is usually used to calculate objective weights.
The combination weighting method is an important element in the evaluation of rational allocation programs; it balances the subjective judgment and objective evaluation of decision makers and makes the ranking results of multi-indicator evaluation more scientific. Generally, the combination assignment-cloud model first calculates subjective weights by the AHP, then calculates comprehensive weights by combining the subjective and objective assignments. Finally, using cloud model theory, the numerical values are transformed into qualitative language, and the evaluation cloud diagram is used to visualize the degree of deviation of the indicators. This combined assignment evaluation makes the assessment results more scientific.
Step 1: Calculating the Subjective Weights

Taking the AHP method as an example (Saaty, 2004), the evaluation system is set up as a matrix of $n$ indicators scored by $m$ experts, and the subjective weight $W_i$ of each indicator is calculated. Each weight is then normalized to obtain the subjective weight of each indicator,

$$W_i^{s} = \frac{W_i}{\sum_{i=1}^{n} W_i}, \quad i = 1, 2, \ldots, n,$$

where $n$ is the number of evaluation indicators and $m$ is the number of experts.
Step 2: Calculating the Objective Weights

The entropy weight method is used for the objective assignment (Wei et al., 2016). It determines the entropy weight of each indicator through information entropy, according to the dispersion of the indicator data, and obtains more objective weights by correcting the entropy weights. There are $n$ evaluation samples for each evaluation indicator; $x_{ij}$ denotes the evaluation value of sample $R_i$ on evaluation indicator $I_j$, and the original data matrix is written $(x_{ij})_{m \times n}$. The steps for determining the weight of each indicator are as follows.
First, standardize the data and compute the proportion of sample $i$ under indicator $j$:

$$p_{ij} = \frac{x_{ij}}{\sum_{i=1}^{m} x_{ij}}.$$

Then calculate the information entropy of each indicator:

$$e_j = -\frac{1}{\ln m} \sum_{i=1}^{m} p_{ij} \ln p_{ij}.$$

Then calculate the indicator weights:

$$W_j = \frac{1 - e_j}{\sum_{j=1}^{n} (1 - e_j)}, \quad j = 1, 2, \ldots, n,$$

where $e_j$ is the entropy value of evaluation indicator $I_j$ and $W_j$ represents the weight of that indicator.
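A compact Python sketch of the entropy weight method, assuming the standard formulas given above; the data matrix is hypothetical.

```python
import numpy as np

def entropy_weights(X):
    """Entropy weight method for an (m samples x n indicators) data matrix."""
    P = X / X.sum(axis=0)                        # column-normalized proportions
    m = X.shape[0]
    # Treat 0*log(0) as 0 for indicators with zero entries
    with np.errstate(divide="ignore", invalid="ignore"):
        logP = np.where(P > 0, np.log(P), 0.0)
    e = -(P * logP).sum(axis=0) / np.log(m)      # information entropy e_j
    d = 1 - e                                    # degree of divergence
    return d / d.sum()                           # objective weights W_j

X = np.array([[0.2, 0.9, 0.4],
              [0.5, 0.8, 0.6],
              [0.9, 0.7, 0.5]])
print(entropy_weights(X))                        # weights sum to 1
```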
Step 3: Determining the Portfolio Weights

Let the portfolio weights be

$$\omega = \alpha W^{s} + (1 - \alpha) W^{o},$$

where $\alpha$ and $1 - \alpha$ are the proportions of the subjective and objective weights in the portfolio weights, respectively, and $W^{s}$ and $W^{o}$ are the subjective and objective weights of the indicators.
Step 4: Cloud Digital Features Under the Weighting Combination

Based on the three numerical characteristics of the cloud and the combined weights $w_j$, the per-indicator clouds $C_j(Ex_j, En_j, He_j)$ are aggregated into the integrated cloud $C(Ex, En, He)$. In the weighted-average form, the numerical characteristics of the integrated cloud are

$$Ex = \sum_j w_j Ex_j, \qquad En = \sum_j w_j En_j, \qquad He = \sum_j w_j He_j.$$

The cloud model digital features are obtained from the above equations, and the visualization of the integrated cloud is then produced.
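The following Python sketch combines Steps 3 and 4 under the same assumptions as above (a linear combination of subjective and objective weights, and the weighted-average cloud aggregation); all numbers are hypothetical.

```python
import numpy as np

def combine_weights(w_subj, w_obj, alpha=0.5):
    """Portfolio weights: omega = alpha*W^s + (1-alpha)*W^o (Step 3)."""
    return alpha * np.asarray(w_subj) + (1 - alpha) * np.asarray(w_obj)

def integrated_cloud(Ex, En, He, w):
    """Weighted aggregation of per-indicator clouds into one integrated cloud."""
    Ex, En, He, w = map(np.asarray, (Ex, En, He, w))
    return (w @ Ex, w @ En, w @ He)   # simple weighted-average form

w = combine_weights([0.5, 0.3, 0.2], [0.4, 0.4, 0.2], alpha=0.6)
print(integrated_cloud([2.5, 3.1, 4.0], [0.4, 0.5, 0.3], [0.05, 0.04, 0.06], w))
```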
Discussion of Methods
All of the above methods can account for the weights of, and correlations between, different indicators. However, TOPSIS ranks alternatives based on their proximity to an idealized goal (Li, 2010), and there is a certain degree of subjectivity in calculating the indicator weights. The main purpose of the Bayesian network is to solve problems of uncertainty and incompleteness. The combination weighting method combines subjective and objective weights and has the strongest applicability.
APPLIED RESEARCH OF THE CLOUD MODEL
The TOPSIS-Cloud Model

The TOPSIS-cloud modeling approach is characterized by simplicity and ease of use in solving interval-based decision information problems. Gong (2022) used the cloud-similarity TOPSIS method for water resource management and allocation scheme problems, effectively resolving the fuzzy uncertainty of indicator weights (Wei, 2016; Gong et al., 2018), and the assessment of relay protection status has been optimized by combining the cloud modeling method, grey relational analysis, and TOPSIS. In addition, the TOPSIS-cloud model has validated the applicability of its metrics and models in construction safety, disaster risk assessment, and building engineering.
Bayesian Network and Cloud Model
The Bayesian network-cloud model is well known for its powerful reasoning ability and is commonly used in security risk analysis, threat assessment, reliability analysis, and the like. This extension is especially preferred in the fields of railroad transportation, automation technology, the weapons industry and military technology, and Internet technology. Yu (2023) established a novel multi-objective decision model for grading network security situations under multi-source information. Wu et al. (2016) utilized rough sets and a Bayesian network for risk assessment of the safety of bridges adjacent to subway shield construction. Shen et al. (2019) applied this model to the construction safety of lifting prefabricated housing components; the model can effectively reduce housing safety hazards that might otherwise escape attention.
Portfolio Empowerment and the Cloud Model
The empowerment cloud model fully considers the subjective and objective information of the indicator weights and realizes the mutual transformation of the qualitative and the quantitative, the subjective and the objective. Numerous scholars have transformed real-life linguistic uncertainty variables into precise quantitative data based on the combinatorial assignment cloud model, proposing reasonable evaluations and finding suitable solutions. The method was common in the field of mathematics in its early days; later, experts and scholars extended it to road and waterway transportation for solving problems such as high-speed railroad system experiments and subway construction risk evaluation. In water conservancy and hydropower engineering, the operation and management of long-distance water transfer projects, as well as integrated problems related to distribution network planning, are assessed more accurately and effectively with the combined-empowerment cloud model.
EMPIRICAL ANALYSIS
Fuzzy multi-attribute decision-making (MADM) problems are widespread in real-life situations. However, it may not be easy to identify an exact value for the membership degree of an element in a given set; a range of values may be a more appropriate measurement to accommodate uncertainty, imprecision, or vagueness (Li, 2011). In the traditional financing environment, small and medium-sized enterprises (SMEs) normally find it hard to obtain from banks the loans that may be necessary for their daily operation, because of their credit ratings or other reasons. This dilemma is partly mitigated by newly founded supply chain finance platforms. A Chinese enterprise, QR, is chosen as an example to assess the risk level of supply chain finance (because of China's privacy policy regulations, the enterprise's name is given only by the initials QR).
Supply Chain Finance Risk Indicator System
Supply chain finance risks are usually influenced by environmental and human factors, which can damage the interests of some industries. Starting from the actual situation of QR's supply chain finance operation, five first-level indicators, consisting of the qualification of the financing applicant enterprise, the qualification of core enterprises, the status of enterprise assets, macro and industry risks, and supply chain risks, are selected following the principles of science, feasibility, and representativeness, as shown in Table 2.
Data Sources: Expert Scoring Method
In this case, the expert scoring method was adopted, and 10 experts with knowledge and experience in the automotive and financial industries were invited to evaluate the importance of the above 20 indicators. Scoring uses a five-point scale; $x_{ij}$ denotes the score given to the $j$-th indicator by the $i$-th expert, normalized to $[0, 1]$ (Hwang et al., 1981).
Benefit-type and cost-type indicators are normalized separately, using the vector transformation method: the norm of each column vector of the decision matrix is denoted $A_j$, and each benefit-type entry is normalized as $r_{ij} = x_{ij} / A_j$, with cost-type indicators inverted before normalization so that larger values are always preferable. The initial matrix in Table 3 is normalized according to these equations, and the resulting normalized matrix is shown in Table 4.
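A short Python sketch of this normalization step, assuming the vector-norm convention described above and a reciprocal transformation for cost-type indicators (one common choice; the paper's exact cost-type formula is not recoverable from the extracted text). The score matrix is hypothetical.

```python
import numpy as np

def normalize(X, benefit):
    """Vector-normalize an (experts x indicators) score matrix.

    benefit: boolean array per indicator; cost-type columns are inverted
    first so that larger normalized values are always preferable.
    """
    X = X.astype(float).copy()
    X[:, ~benefit] = 1.0 / X[:, ~benefit]        # one common cost-type handling
    return X / np.linalg.norm(X, axis=0)         # divide by column vector norms

scores = np.array([[4, 2, 5],
                   [3, 1, 4],
                   [5, 2, 3]])                   # hypothetical expert scores
print(normalize(scores, benefit=np.array([True, False, True])))
```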
Based on the normalized matrix, the entropy weight of each indicator $W_j^{o}$ ($j = 1, 2, \ldots, n$) is calculated, as shown in Table 5. According to the information entropy in Table 5, the weights of the primary and secondary indicators can be calculated, as shown in Table 6. Although the two first-level indicators of core enterprise qualification and enterprise asset status account for relatively small proportions, the characteristics of the pledge among the second-level indicators under enterprise qualification status carry a larger weight. A pledge usually refers to real estate, movable property, or rights owned by the debtor and provided to the creditor. Here, the company's pledge liquidity and its ability to be converted into cash serve as second-level evaluation indicators for judging the key factors of the company's supply chain financial credit risk: the higher the pledge liquidity and the stronger the ability to convert it into cash, the lower the supply chain risk faced by the company.
Macro and industry risks are the most important factors affecting the development of supply chain finance, with exchange rate risk accounting for the highest proportion, followed by interest rate risk. The main reason is that when QR trades with foreign countries, changes in the economic strength of each country and in the choice of macroeconomic policies determine the trend of exchange rate changes. In recent years, the COVID-19 pandemic hit the global economy hard, making the exchange rate highly volatile; foreign banks, in order to maintain economic stability and avoid the adverse impact of exchange rate changes on the domestic economy, often intervene in the market. Exchange rate changes are therefore the key risk facing supply chain finance.
Standard Risk Cloud Graph
According to the principle of the five-point scoring system, the corresponding ranges of the theoretical domain values are $[0,1]$, $[1,2]$, $[2,3]$, $[3,4]$, and $[4,5]$, and the super-entropy is a constant $k$, generally taken as 0.1. The results of the calculations are shown in Table 7.
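To illustrate how such grade clouds can be built, here is a minimal Python sketch assuming the common convention $Ex = (a+b)/2$ and $En = (b-a)/6$ for a grade interval $[a, b]$ (the paper's exact formula appears in Figure 6 but is not recoverable from the extracted text), with $He = k = 0.1$ as stated above.

```python
# Sketch of the grade-cloud construction described above: each five-point
# grade interval [a, b] is mapped to a cloud, assuming Ex=(a+b)/2,
# En=(b-a)/6 (a common convention), and He=k (k = 0.1, as in the text).
intervals = {"CVL": (0, 1), "CL": (1, 2), "CM": (2, 3), "CH": (3, 4), "CVH": (4, 5)}

def grade_cloud(a, b, k=0.1):
    return ((a + b) / 2, (b - a) / 6, k)

for name, (a, b) in intervals.items():
    print(name, grade_cloud(a, b))
```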
The cloud digital eigenvalues from the above table are entered into the cloud generator, and a risk level cloud map is generated, as shown in Figure 7. The numerical characteristics of the secondary indicators were calculated as shown in Table 8, and the characteristic values of the primary indicators were calculated according to Tables 6 and 8, as shown in Table 9.
Analysis of Results and Suggestions for Countermeasures
According to the calculated eigenvalues of the first-level indicators, the five first-level indicators (financing applicant enterprise qualification risk $U_1$, core enterprise qualification risk $U_2$, enterprise asset status risk $U_3$, macro and industry risk $U_4$, and supply chain risk $U_5$) are randomly paired into groups of two. From the two-dimensional diagram drawn for core enterprise qualification risk $U_2$ and supply chain risk $U_5$, the respective one-dimensional floor plans are obtained by observing from the perspectives of $U_2$ and $U_5$; comparative analysis of the floor plans against the standard risk cloud diagram shows that the risk of $U_5$ is lower than that of $U_2$, i.e., $U_5 < U_2$. Likewise, from the two-dimensional diagram drawn for supply chain risk $U_5$ and financing applicant enterprise qualification risk $U_1$, the respective one-dimensional floor plans show that the risk of $U_1$ is lower than that of $U_5$, i.e., $U_1 < U_5$. The case introduces a multidimensional cloud model into supply chain finance to explore the risks that exist in the company's supply chain finance operation mode. Through the comprehensive cloud characteristic value of QR's supply chain finance, the risks faced by the company are found to be controllable in general, and the following suggestions and improvement measures are put forward.
1. QR faces significant core enterprise qualification risks. It is suggested that the company construct a more complete credit evaluation system, pay attention to the credit ratings of core enterprises with banks and their industry status, and implement a strict reward and punishment system that gives preferential policies, such as preferential interest rates, to enterprises with higher credit ratings. Credit risk can be further managed by blacklisting defaulting enterprises and refusing to provide them with related financial services.
2. Improve the transparency of information and the company's internal management system. The supply chain can leverage leading financial technology to power supply chain development, enhance information flow and business cooperation among participating companies, and reduce supply chain collaboration and management risks.
3. To strengthen the regulatory system for all aspects of supply chain finance operations, the government needs to actively improve the relevant policies and regulations, refine the supply chain finance mechanism, implement risk prevention and control measures, protect the rights and interests of stakeholders, raise investor enthusiasm, purify the market environment, and ensure stable market operation, thereby reducing macro and industry risks and achieving the healthy development of supply chain finance.
FRONTIER ISSUES IN THE MULTIDIMENSIONAL CLOUD MODEL
Throughout the existing literature, there are fruitful research results on cloud modeling. Thanks to these research efforts, the cloud model has been extended from one dimension to two-dimensional, three-dimensional, and higher-dimensional cloud models; the research field is growing ever wider and the research methods continue to improve, but the research difficulty is also increasing. The following focuses on research into the multidimensional cloud model's handling of cutting-edge issues in image segmentation, evidence theory, connection number theory, and time series analysis, and on the prospects for its application.
Image Segmentation Processing Techniques
Images generated by introducing the cloud model into image segmentation are better than those of analytical-model-based simulated image generation in terms of realism, universality, and rapidity. Because many uncertainties exist in image processing, and because the color, shape, and size of the image to be presented must be consistent with the actual image, the complete and accurate transmission of image information is the key difficulty in achieving automatic image segmentation via the cloud transformation process with the region-growing method. Similarly, for the segmentation of images involving more concepts and for the refinement of image simulation, whether one can transition naturally from basic cloud model image segmentation to multidimensional cloud model image segmentation, and whether the results after the transition remain the same, are highly valued issues for applying the model in this field.
Cloud Model-Evidence Theory Fusion Approach
With the development of information technology, experts have improved uncertainty inference rules in more and more ways, and different people have different ideas. Complex management systems suffer from a variety of elements, the coexistence of subjective and objective information, and difficulties in quantitative evaluation. A comprehensive performance evaluation model with improved Dempster evidence theory has been proposed (Beynon et al., 2000), and the combination of the cloud model and evidence theory has become a new research hotspot. In updating D-S evidence, the results obtained vary with the differences in the combination weights produced by the various methods, and there is still room to further explore the methods used and the variables relied upon.
Cloud Model and Connection Number Coupling Techniques
In engineering applications, the distributional forms of many indicators are restricted to the normal cloud model. However, the actual situation cannot fully achieve this ideal state; a model combining the cloud model with connection numbers can be applied to actual indicators whose distributional forms do not satisfy the normal distribution, solving problems of stochasticity and ambiguity in practical applications. The coupling of the one-dimensional cloud model with connection numbers can intuitively evaluate interval decisions through the trend of the connection numbers; when the dimensionality rises, the intrinsic connections between indicators must be considered, along with whether a single indicator will exert excessive influence or be eliminated (Wang et al., 2020). Therefore, finding the optimal method of coupling between indicators still needs deep investigation, and the validity and practicality of the model must be verified.
Time Series Forecasting Techniques
Time series represent an important class of complex data whose distributional properties change with time. Time series analysis based on the cloud model has become a research hotspot in China, where it is mainly used for data mining of time series knowledge, and there are many fields into which it can expand; the division and representation of information granulation at different time granularities is a difficult point in time series problems. In real life, the development of things is governed by a variety of factors; how to decompose a multivariate time series into one-dimensional time series for information granulation, so that the method becomes ever simpler and easier to operate, remains to be further studied.
CONCLUSION
As the application areas of cloud modeling become more extensive, more and more methods are being combined with it. Early research dealt with uncertainty and two-way cognition problems through cloud modeling alone, with a single computational method that could not adequately handle the transformation between data and models. This paper describes the theoretical study of multidimensional cloud models, and the example applies one of the reviewed approaches to calculate supply chain risk. The paper acknowledges that the data source, the expert scoring method, is subjective; a future method should be both objective and subjective, and we will further highlight the advantages of multidimensional cloud models compared with other methods. Cloud modeling is moving from theoretical research to the technical problems that must be solved for practical applications. The complexity of multidimensionality, the feasibility and operability of the constructed models, and the uncertainty of the parameters of the multidimensional cloud model are the research directions that remain to be addressed.
Figure 2. Multidimensional cloud model keyword clustering graph.

Figure 3. Cloud model and its three numerical features.

Figure 6. Multidimensional forward cloud generator algorithm.

Figure 7. Standard risk level graph. Note: the horizontal coordinate is the score and the vertical coordinate is the affiliation degree. Orange-red circles indicate cloud very low (CVL); sky-blue triangles, cloud low (CL); grass-green stars, cloud medium (CM); ginger squares, cloud high (CH); plum-red inverted triangles, cloud very high (CVH).

Figure 12. QR corporation supply chain finance risk integration cloud.
Table 1. Keyword clustering scenarios

| Cluster Number | Cluster Size | Tag Words (First Five) |
| … | … | …, dispersal assembly, tropical forest, recruitment strategies, biodiversity maintenance | multidimensional compressible flow |
| 13 | 10 | atmospheric sulfuric acid, galactic cosmic ray, particle formation, nucleation, climate |
| 15 | 9 | crystalline rock, system, connectivity, tracer transport, behavior | multidimensional compressible flow |
| 18 | 6 | photogrammetry, structural geology, neotectonics, 3D surface modelling, structure-from-motion |

The scoring results of each expert are shown in Table 3 below. Based on 10 experts' scores on QR supply chain finance's metrics,

| 8,838.2 | 2023-11-21T00:00:00.000 | ["Business", "Computer Science"] |
Prevention of LPS-Induced Microglia Activation, Cytokine Production and Sickness Behavior with TLR4 Receptor Interfering Peptides
The innate immune receptor Toll-like receptor 4 (TLR4) is the receptor activated by lipopolysaccharide (LPS), and the TLR4-LPS interaction is well known to induce an innate immune response, triggering sickness behavior. Within the brain, TLR4 is highly expressed in microglia, and excessive inflammation resulting from activation of this pathway in the brain has been implicated in depressive disorders and neurodegenerative pathologies. We hypothesized that blocking LPS-induced activation of TLR4 would prevent downstream immune signaling in the brain and suppress the induction of sickness behavior. We used interfering peptides to block TLR4 activation and confirmed their efficacy in preventing the second messenger activation and cytokine production normally induced by LPS treatment. Further, these peptides blocked the morphological changes in microglia that are typically induced by LPS. We also demonstrated that intraperitoneal (i.p.) injection of Tat-TLR4 interfering peptides prevented LPS-induced sickness behavior, as assessed by home cage behavior and the intracranial self-stimulation paradigm. These newly synthesized peptides inhibit TLR4 signaling, thereby preventing the changes in behavior and motivation caused by inflammatory stimuli. They highlight the role of TLR4 and microglial morphology changes in sickness behavior, and thus may be of therapeutic value in limiting the deleterious impact of excessive inflammation in specific CNS pathologies.
Introduction
Small amounts of lipopolysaccharide (LPS) from invading bacteria are one of the first signals detected by the body upon infection, and detection of LPS primes the immune system to mount a defence. Following the onset of a typical infection, individuals display a coordinated set of behavioral conditions, known collectively as sickness behavior [1,2,3], that reflect a normal acute response to inflammation. The profound changes which constitute sickness behavior include loss of motivation for food and drink, diminished social interaction, fatigue, irritability, depression and cognitive impairment [3]. The expression of sickness behavior relies on motivational reorganization of priorities, which are dependent on the biological state of the animal and therefore can lead to diverse behavioral outcomes. Separate from this, sickness behavior also includes an element of altered motility, which is characteristic in sick animals. Under some circumstances, the initial inflammatory response can become uncontrolled and ultimately lead to other deleterious effects including prolonged inflammation and cytokine release which is known to contribute to CNS dysfunction, chronic depressive disorders and neurodegenerative processes [2,4,5]. Although the CNS actions of cytokines have been implicated in sickness behavior [6,7,8], the mechanisms in the brain that trigger this behavioral response are not well understood.
The Toll-like receptor 4 (TLR4) and its potent ligand LPS represent one of the first and best characterized ligand and receptor combinations of the innate immune system [9,10]. TLR4 receptors are expressed on microglia in the CNS and on cells of the immune system throughout the body [2,11]. Systemic LPS acts on the CNS through several parallel pathways (reviewed in [11]) including: 1) activation of TLR4 on microglia in regions where the blood brain barrier (BBB) is permeable (e.g. area postrema and circumventricular organ [12]); 2) activation of perivascular cells and endothelial cells of blood vessels in the brain [13]; 3) stimulation of the afferent vagal nerves; and 4) transport across the BBB of cytokines generated by peripheral cells [14]. There is, however, disagreement in the literature as to what extent each of these pathways contributes to the effects of the LPS-driven inflammatory cascade.
Here we show that Tat-coupled interfering peptides block TLR4 signaling to second messengers and subsequent cytokine production normally induced by LPS, block morphological changes in microglia induced by LPS, and also prevent LPS-induced sickness behavior. We used multiple indices of sickness behavior including various measures of motor performance (open field and modified SHIRPA screen), as well as indices of motivation including titrated intracranial self-stimulation (ICSS). Remarkably, these newly synthesized peptides prevent changes in behavior and motivation normally caused by inflammatory stimuli by inhibiting TLR4 signaling. These peptides highlight the role of TLR4 and microglia in sickness behavior, and thus may be of therapeutic value in limiting the deleterious impact of excessive inflammation in specific CNS pathologies.
Results
Our goal was to manipulate the impact of LPS binding to TLR4 in vivo, and ultimately to impact the pathways involved in sickness behavior. Considering that LPS may be acting directly on TLR4 receptors in accessible regions of the CNS and in peripheral immune cells [2,11], we established a technique that could target CNS TLR4 receptors. Accordingly, we developed interfering peptides coupled to a truncated Tat carrier sequence [15] in an attempt to block TLR4 signalling in brain slices, and subsequently examined their efficacy in preventing TLR4 activation in vivo. Specifically, the peptides were designed to block the TLR4-MyD88 binding via the intracellular TIR domain induced by LPS activation of TLR4 (Figure 1A). We based our sequences on hepta-peptides directed against the BB-loop within the TIR domains of TLR4 (Tat-MyD88) and MyD88 (Tat-TLR4) [16,17].
We first determined whether these interfering peptides entered cells in brain slices after bath application in vitro or crossed the BBB and entered CNS cells after i.p. injections in vivo. Dansylated Tat-MyD88 was injected intraperitoneally (i.p.; 6 mg/kg) into mice and 30 minutes later acute brain slices were prepared for immediate examination using two-photon laser scanning microscopy (TPLSM). Strong dansyl fluorescence was detected within cells in the hippocampus, in contrast to vehicle-injected controls (Figure 1B,C), indicating that the Tat-fused peptides could cross the BBB and permeate CNS cells. Similar fluorescence was observed within cells when brain slices were incubated in aCSF containing dansylated Tat-MyD88. These observations of labelled Tat-MyD88 peptides in CNS cells show that these peptides can enter cells where they potentially have access to the intracellular binding site of MyD88 and TLR4 receptors.
We next tested whether Tat-MyD88 effectively blocked the interaction between TLR4 and MyD88 under conditions where we saw that the dansylated peptides entered cells. We tested their efficacy in the whole brain by assessing their ability to prevent protein-protein interactions via co-immunoprecipitation. Mice were injected (i.p. 6 mg/kg) with either vehicle (control), Tat-MyD88 peptide, or a scrambled version of the MyD88 sequence coupled to Tat (Tat-scram), and whole brain lysates were prepared 30 minutes later. Western blots of immunoprecipitated brain lysate prepared from mice injected with Tat-MyD88 showed a reduction in the intensity of the MyD88 band co-immunoprecipitated using anti-TLR4 antibody (62.03±2.73 a.u.) compared to untreated control (100.00±12.45 a.u., p = .041) and Tat-scram treated (103.94±6.67 a.u., p = .004; Figure 1D,E). Likewise, the reverse co-immunoprecipitation of TLR4 using the MyD88 antibody was also diminished in mice injected with Tat-MyD88 (50.63±7.53 a.u.) compared to untreated control (100.00±3.58 a.u., p = .004) and Tat-scram treated (98.92±3.84 a.u., p = .005). No change was observed in the co-immunoprecipitation of either MyD88 with TLR4 (p = .794), or TLR4 with MyD88 (p = .847) when Tat-scram treated animals were compared to untreated controls (Figure 1D,E). These data reveal that i.p. injections of the Tat-conjugated interfering peptide Tat-MyD88 are capable of blocking interactions between MyD88 and TLR4 in the brain in vivo.
We next tested the efficacy of these peptides on the production of the cytokine TNF-α, which is further downstream from the second messenger activation [18], in both brain slices and in vivo. Brain slices were pre-incubated with either vehicle, Tat-MyD88, Tat-TLR4 or Tat-scram 1 hr before a 2 hr LPS treatment (40 μg/mL). ANOVA of ELISA measurements of TNF-α in brain slice supernatant showed that the LPS-induced increase in cytokine production (17.38±1.00 a.u., p = .001) was blocked by Tat-MyD88 (p = .532) and Tat-TLR4 (p = .287) but not by Tat-scram (14.70±2.17 a.u., p = .003) (Figure 2G). After determining that these Tat interfering peptides were effective in brain slices, we next investigated whether they could effectively prevent in vivo activation of TLR4 receptors by LPS. We tested their efficacy in vivo by i.p. injections (6 mg/kg) of Tat-scram, Tat-MyD88 or Tat-TLR4 1 hr before i.p. injection of LPS (0.5 mg/kg), then measured TNF-α levels in whole brain lysate using ELISA after 45 min. Recent studies have demonstrated that i.p. injections of LPS increase cytokines in the brain by two principal mechanisms: first, LPS directly triggers cytokine release from microglia by diffusing into the brain at the circumventricular organs that lack a functional BBB, and second, the peripheral actions of LPS trigger the release of cytokines that are then transported into the brain [2,19]. In our experiments LPS injections triggered robust increases in TNF-α levels in control CNS tissue after 45 min (53.83±7.94 a.u., p < .001). In contrast, the increase in brain cytokines from LPS injections was significantly depressed by pre-treatment with either Tat-MyD88 (p = .030) or Tat-TLR4 (p < .001) but not by Tat-scram (Figure 2H). Thus, we were able to demonstrate the efficacy of blocking TLR4-MyD88 interactions by Tat-interfering peptides by monitoring the second messenger activation and the subsequent formation of the cytokine TNF-α.
The stimulation of TLR4 receptors in microglia has been shown to change their morphology from the resting ramified appearance to an amoeboid shape [20]. We imaged dynamic changes in live EGFP-positive microglia [21] prior to and during LPS application to determine the impact of blocking TLR4 signalling on the dramatic morphology changes normally induced in microglia by LPS. Microglia in the deeper, healthy parts of brain slices were observed to have normal morphology using TPLSM, with ramified processes. Careful handling of acute slices ensured that only cells at the surface (<10 μm) of the acute slice appeared to be affected by the slicing process; at the depths at which we imaged, neurons and astrocytes were healthy, and microglia did not appear activated. Under control conditions, microglia in acutely prepared brain slices exhibit the typical ramified morphology of resting microglia with numerous long branches and multiple filopodia [22] (Figure 3A), similar to their appearance in vivo [23]. Staining of fixed tissue has shown that microglia in vivo acquire an amoeboid shape in response to brain injuries or to immunological stimuli such as LPS [24]. The morphological changes in microglia reflect profound functional changes in these cells because it is known that the release of cytokines and other signalling factors into the surrounding tissue [25] is enhanced when microglia acquire amoeboid morphology [24]. Using time-lapse TPLSM, we observed the progression of LPS-induced morphology changes in large fields of view where multiple microglia were visible (Movie S1). Within 10 min we observed the first indications that LPS application (t = 0) changed microglia morphology from the typical branched and ramified morphology (Figure 3A), and by 40 minutes, the majority of branches were lost and the cells were amoeboid (Movie S1). The amoeboid morphology of microglia persisted throughout the remainder of imaging (80 minutes). In comparison, when slices were preincubated with Tat-MyD88 or Tat-TLR4, the transition from ramified to amoeboid characterized by branch loss was not observed at either 40 or 80 minutes following LPS treatment (40 μg/mL; Figure 3B). The inhibitory effects of the Tat-interfering peptides on microglia morphology changes were quantified in a separate set of experiments by analysing the number of branches in three-dimensional reconstructions of individual microglia using Imaris software (n = 21 cells per group). One-way ANOVA demonstrated a significant main effect of treatment (F 4,104 = 212.88, p < .001).
[Figure 2 caption: Tat-MyD88 and Tat-TLR4. A. Representative blots showing P-p38 MAPK and P-JNK rapidly increased in brain tissue following LPS treatment. GAPDH was monitored as a loading control. B,C. Quantification of the increased P-p38 MAPK and P-JNK levels over 60 minutes following LPS treatment. D-F. P-p38 MAP kinase and P-JNK increases from LPS were attenuated by Tat-MyD88 and Tat-TLR4. D. Representative blots of kinase activation following various treatments. E. Quantification of P-p38 MAPK normalized to GAPDH levels. F. Quantification of P-JNK normalized to GAPDH levels. G,H. LPS treatment increased TNF-α levels, and this increase was blocked by Tat-TLR4 and Tat-MyD88. Quantification of TNF-α levels using ELISA in acute brain slice (G) parallels results found in whole brain lysates of injected animals (H). doi:10.1371/journal.pone.0060388.g002]
The number of branches in microglia was significantly reduced by LPS (Figure 3C, D; control = 187.5±29.5 branches, LPS = 78.5±17.5 branches; p = 0.011). This morphological transformation was blocked by pre-incubation with either Tat-MyD88 or Tat-TLR4, and was not significantly different from control (Figure 3C, D; p = 0.722 and p = 0.369 respectively). In contrast, microglia in brain slices preincubated with Tat-scram showed a change in branch number induced by LPS that was similar to LPS-treated slices (Figure 3D; Tat-scram = 43.5±4.5 branches; p = 0.04).
The ability of Tat-MyD88 and Tat-TLR4 to prevent many of the cellular actions of LPS such as second messenger stimulation, cytokine formation and transformations of microglia to amoeboid shapes encouraged us to test their effectiveness at treating LPS-induced sickness behavior. We began by assessing mice given LPS (0.5 mg/kg) or LPS (0.5 mg/kg) plus peptide treatments (6 mg/kg) on a number of basic behavioral indices of sickness including reflexive or motor and motivational or hedonic functions (Table S1). Mice were scored for the extent to which they displayed each of the 11 indices of sickness and a cumulative score was calculated (n = 10 mice per group). One way ANOVA demonstrated a significant main effect of treatment (F 5,59 = 597.53, p < .001). Control mice scored in the lowest category on each of the measures of sickness, with an average cumulative score of 0.50 (±0.40; Figure 4A). In contrast, mice assessed 30 minutes after LPS treatment scored high on each of the measures, with a cumulative score averaging 19.90 (±0.84) that differed significantly from control mice (p < .001). Similar results were observed for mice pre-treated with the Tat-scram peptide and LPS (21.67±0.24) compared to control mice (p < .001). When mice were pre-treated with either Tat-MyD88 or Tat-TLR4 peptides, we observed a remarkable prevention of LPS-induced sickness as reflected in the cumulative behavioral scores (Tat-MyD88: 0.70±0.26; Tat-TLR4: 1.80±0.20), which did not differ from control mice (p = .752 and p = .107).
To evaluate LPS-induced sickness behavior and the effectiveness of the Tat-fused interfering peptides further, we observed mouse behavior in a novel home cage. Mice were i.p. injected with LPS (0.5 mg/kg) and returned to their home cage for 30 min, with a subset treated with Tat-MyD88, Tat-TLR4 or Tat-scram (6 mg/kg) 30 min prior to LPS treatment. Mice were then singly placed into a novel home cage environment and allowed to explore freely for 30 min. Representative path plots from each of the groups tested illustrate striking differences among the groups (Figure 4B). Using Noldus Ethovision software for quantification, average speed (Figure 4C), cumulative distance traveled (Figure 4D), and total number of rears (Figure 4E) were measured, revealing marked LPS-induced reductions that were prevented by pre-treatment with Tat-MyD88 or Tat-TLR4. Given the prevalence of decreased motivation in sickness behavior, we examined the effectiveness of the TLR4-MyD88 interfering peptides on the behavioral response of rewarding intracranial self-stimulation (ICSS). ICSS accesses self-motivated behaviors to acquire brain-stimulation reward related to activation of brain dopamine neurons [26], and consequently may be used to assess the direct effect of LPS on brain function in the absence of peripheral effects [27,28]. The titrated protocol of this assay allows us to examine the effects of sickness and peptide treatment on underlying motivational states. Two-way repeated measures ANOVA revealed a significant interaction (intensity × treatment; F 24,272 = 2.60, p < .001) between intensity (F 12,272 = 11.11, p < .001) and treatment (F 2,272 = 10.79, p = .002). In LPS-treated animals (0.5 mg/kg), a significant reduction in the number of responses per 5 minutes (least square mean 207.40±92.03) was observed compared to baseline (least square mean 773.25±92.03; p = .003) and post-treatment (least square mean 674.79±92.03; p = .004) sessions (Figure 4F). In contrast, when animals were pretreated with Tat-MyD88 (6 mg/kg) prior to LPS (0.5 mg/kg), this reduction in the number of responses was not observed (Figure 4G). This result indicates that peptides interfering with TLR4-MyD88 can reduce the effects of sickness on motivation for self-stimulation. The behavioral data demonstrate that peptides interfering with TLR4-MyD88 can affect both the motoric and hedonic components of sickness behavior, suggesting that the peptide is capable of influencing multiple systems in the brain.
Discussion
Taken together, our results show a remarkable ability of two different Tat-interfering peptides to prevent the downstream actions of TLR4 receptor stimulation at the molecular, cellular and behavioral levels. Although similar peptides have previously been developed, we are the first to show a behavioral impact of blocking the TLR4-MyD88 interaction, likely mediated by a rescue of the microglia morphology changes and cytokine production that are normally induced by LPS. These peptides cross the BBB and enter cells where they disrupt the protein-protein interactions between TLR4 and MyD88. The peptides mimicked the key sequences necessary for dimerization and interaction of MyD88 and TLR4 TIR domains. Natural mutations of this key sequence in the TLR4 receptor were previously discovered to explain the unresponsiveness of a specific mouse strain to LPS [29]. We found that interfering peptides that mimic either the sequence of TLR4 receptors or the sequence of the recognition site on MyD88 prevented the co-immunoprecipitation of these proteins and the ability of LPS to activate second messengers and increase cytokine formation in intact tissue in brain slices and in vivo. Using two-photon imaging we have further shown the dynamic morphological changes that microglia can undergo in response to LPS and that these changes can be mediated by TLR4 signaling. In addition, these Tat-interfering peptides were remarkably effective at preventing the behavioral syndrome that accompanies sickness caused by LPS. When mice were administered LPS by peripheral i.p. injection, a series of behavioral changes occurred within a one hour period, as previously reported [2]. Following treatment with either Tat-MyD88 or Tat-TLR4, but not a Tat-scrambled peptide, there was a complete absence of motoric (behavioral screen) and motivational (behavioral screen and titrated ICSS) effects of LPS-induced sickness, which we attribute to the direct action of the peptide on brain function.
The sickness behavior of animals, triggered by the inflammatory release of cytokines, mirrors the well known symptoms of sickness in humans which include fatigue, loss of appetite and cognitive changes. It is also becoming evident that sickness and inflammation are important contributors to the occurrence of depressive episodes [2]. Therefore, these peptides represent a novel means of blocking the behavioral impact of sickness, and potentially an effective strategy for alleviating symptoms of depression induced by chronic inflammation and sickness. Our detailed description of a molecular target linking inflammation and sickness to motiva-tional states provides valuable insight into pathways involved in the cross talk between the CNS and the immune system. In addition to acute sickness and depression, TLR4 activation in microglia has also been linked to neurodegeneration [30]. Therefore, Tat-interfering peptide strategies may also be useful for investigations of potential therapeutic interventions in CNS diseases in which activated microglia may be a causal factor in the underlying pathology. Ultimately, these findings allow for the further translation of our knowledge about the communication between the brain and the immune system and will create new therapeutics to increase the quality of life for sick patients.
Peptide Design
The generated peptides (AnaSpec, San Jose, CA) were based on hepta-peptides [17] with sequences mimicking the BB-loop of the TLR4 and MyD88 TIR domains, each preceded by a truncated Tat sequence. Tat-TLR4 and Tat-MyD88 peptides were dissolved in saline to a concentration of 20 mg/mL and were either bath applied or injected depending upon the experiment. In experiments where both LPS and peptide treatments were used, peptide injection preceded LPS by 30 min.
Slice preparation and solutions
Brain slices were obtained as described previously [31] from CX3CR1-EGFP [21] transgenic mice aged 21-40 days postnatal. Slices were stored at room temperature (20 to 23 °C) for 1 hr before imaging in an oxygenated artificial cerebrospinal fluid (aCSF) containing (in mM): NaCl 126, KCl 2.5 or 4.2, NaHCO3 26, glucose 10, MgCl2 2, NaH2PO4 1.25 and CaCl2 2. Slices were transferred to a recording chamber and perfused with oxygenated aCSF at a rate of 1-3 mL/min, maintained at 25 °C with an inline heater (Warner Instruments).
Two-photon imaging
We performed imaging with a two-photon laser-scanning microscope (Zeiss LSM510-Axioskop-2 fitted with a 40X-W/0.80 numerical aperture objective lens) directly coupled to a 10 W Chameleon ultrafast laser. EGFP was typically excited at 820 nm and epifluorescence was detected with external detectors. For acquiring images, laser intensities were <25 mW at the tissue, and there was no photobleaching nor any evidence of cellular damage during extensive scanning to obtain time-lapse images. The laser intensity was carefully monitored in all instances and kept comparable between all experiments. Imaging was done at depths in brain slices >50 μm and up to 100 μm; the mean imaging depth was 75 μm. Z-stacks were taken in 0.5 μm steps and covered a field of 64.5 μm × 64.5 μm. The mean scan time per z-stack was approximately 1 min and 18 sec.
3D reconstruction of microglia and automated assessment of the number of branches were performed using Imaris 5.0 (Bitplane AG, Zurich, Switzerland). The Gaussian filter was set to 0.5 μm, in accordance with the dimensions of the point spread function (PSF) of the microscope.
Western Blotting and Co-Immunoprecipitation
All samples were boiled in SDS-PAGE sample buffer with DTT for 10 min. After SDS-PAGE and transfer, nitrocellulose membranes were probed with anti-TLR4 and anti-MyD88 followed by Alexa 680 and IRD800 conjugated secondary antibodies for detection using an Odyssey imager (LI-COR), or HRP-conjugated secondary antibodies for ECL detection.
For co-immunoprecipitation, brain tissue from either Tat-TLR4 injected, Tat-scram injected, or untreated mice was rapidly harvested and homogenized in TEEN buffer (50 mM Tris-HCl, 1 mM EGTA, 150 mM NaCl) plus 0.2% SDS and 0.8% Tween, supplemented with PMSF and a protease inhibitor tablet (1 per 10 mL; Roche Applied Science). Following a 30 min lysis, samples were spun at 14,000 rpm for 10 min at 4 °C, and the supernatant was transferred to a clean tube. Lysed protein was incubated with either anti-TLR4 (Cedarlane) or anti-MyD88 (Cedarlane) for 1 hr at 4 °C. Sepharose beads (company) were washed 3× in TEEN buffer and added to tubes containing lysed protein-antibody complexes, and incubated at 4 °C for 1 hr. Following incubations, beads were washed 3× in the above described TEEN buffer and subjected to SDS-PAGE as described above.
ELISA
Enzyme-linked immunosorbent assays (ELISA) were performed according to manufacturer instructions (R & D Systems, Minneapolis, MN). In brief, hippocampal brain slices were incubated and treated in a homemade chamber (using 12 multi-well plates) equipped with continuous aeration with 95% O2/5% CO2. Slices were treated with LPS (40 μg/mL) in the absence or presence of Tat peptides, which were applied 30 min prior to LPS. The cell-free supernatants were used for analysis of TNF-α production. For in vivo experiments, mice were i.p. injected with LPS (0.5 mg/kg) and hippocampi were surgically removed for protein assay and mouse TNF-α ELISA (eBioscience, San Diego, CA). Tat peptides (6 mg/kg of body weight) were intraperitoneally injected 30 min prior to LPS administration.
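As a minimal illustration of the quantification step behind such ELISA readouts, the sketch below interpolates sample optical densities against a standard curve; the standard-curve values and the Python helper are our own illustrative assumptions, not data or code from this study.

import numpy as np

# Hypothetical ELISA standard curve (pg/mL vs. optical density); values
# are invented for illustration only.
std_conc = np.array([0, 31, 62, 125, 250, 500, 1000])
std_od = np.array([0.05, 0.11, 0.19, 0.34, 0.61, 1.05, 1.80])

def conc_from_od(od):
    # Linear interpolation along the (monotonic) standard curve.
    return np.interp(od, std_od, std_conc)

print(conc_from_od(0.50))  # TNF-alpha estimate for a sample OD of 0.50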
Cumulative Behavioral Score
The behavioral screen was based on the modified SHIRPA protocol employed by the European Mouse Phenotyping Resource of Standardised Screens (EMPReSS), designed to evaluate phenotypes of mouse strains [32,33].
Novel Home Cage Behavior
The behavior of mice in the novel home cage was assessed using the Noldus Ethovision automated Video Tracking system (Noldus, Wageningen, The Netherlands). Parameters including total distance travelled, average speed, and number of rears were assessed. Results from tracking analysis were analyzed using ANOVA to compare means.
Intracranial Self-Stimulation
Surgery: Rats were anaesthetized with xylazine (7 mg/kg i.p.) and ketamine hydrochloride (100 mg/kg i.p.), and placed in a standard stereotaxic apparatus. The dorsal surface of the skull was exposed and a single hole was drilled to allow implantation of a stainless-steel bipolar electrode. Electrodes were directed at a site in the medial forebrain bundle corresponding to the level of the posterior lateral hypothalamus (AP, −0.5 mm from bregma; ML, +1.7 mm; DV, −8.3 mm from dura; tooth bar, 5.0 mm above the interaural line). Electrodes were secured to the skull with surgical stainless-steel screws and dental acrylic cement. All animals were allowed to recover from surgery for at least 1 wk before starting ICSS training.
ICSS Training: Training and testing were conducted in 8 Plexiglas boxes (30 × 30 × 24 cm), housed within sound-attenuating chambers. Depression of a lever delivered a sine-wave current (60 Hz) of fixed duration (200 ms), via a flexible lead connected to the chronically implanted intracranial electrode assembly. During the initial training period, the current was set at 16 μA, and only those animals that maintained consistent lever pressing were used for the second stage of training. This stage consisted of training subjects on an ascending-series rate-intensity protocol, whereby current intensities were preset by a computer (Nova-3; Manx software) and increased in 2 μA steps, from an initial value of 8 to 28 μA. Five priming pulses of stimulation were delivered to each animal at the beginning of the first minute of testing at a given current level. The number of bar presses was recorded for the subsequent 4-min period, after which the current intensity was set at the next level. Data collection was controlled by the computer and individual rate-intensity curves were plotted daily for each subject, from which three measures were calculated: the current at which responding was half maximal (M50), the minimum current required to maintain a threshold level of responding of 30 presses per minute, and the asymptotic level of responding.
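The three rate-intensity measures named above can be extracted from a session's data roughly as follows. This is a hedged sketch with invented example numbers; a logistic curve is one common choice for the rate-intensity function, though the study does not specify its fitting procedure.

import numpy as np
from scipy.optimize import curve_fit

def sigmoid(I, r_max, m50, k):
    # Logistic rate-intensity curve: responding rises from 0 to r_max,
    # reaching half-maximum at current m50.
    return r_max / (1.0 + np.exp(-(I - m50) / k))

currents = np.arange(8, 30, 2)                      # ascending series (uA)
presses = np.array([2, 5, 15, 40, 90, 160, 230,     # presses per 4-min bin
                    280, 305, 315, 318])            # (illustrative values)

popt, _ = curve_fit(sigmoid, currents, presses,
                    p0=[presses.max(), currents.mean(), 2.0])
r_max, m50, k = popt                                # asymptote and M50
# Threshold: lowest current sustaining 30 presses/min (120 per 4-min bin).
threshold = currents[np.argmax(presses >= 120)]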
After stable levels of ICSS responding had been achieved (M50 within ±10% for 3 consecutive days), animals were rank ordered based upon their level of responding over the previous three baseline sessions. Rats were then assigned sequentially to 4 groups, from the highest to the lowest M50 values. Two groups (n = 7, 8 per group) were assigned to receive administration of LPS (0.5 mg/kg), while two groups (n = 7 per group) received vehicle.
Electrode placements were verified at the conclusion of the experiment by cutting 50 μm coronal sections of the brain at the level of the lateral hypothalamus. Brain sections were stained with Cresyl Violet and the locations of the electrode tips were recorded.
Statistical Analysis
Data were expressed as mean ± s.e.m. and the statistical significance of differences in mean values was assessed by t-test, one- or two-way analysis of variance (ANOVA), or two-way repeated measures ANOVA; Student-Newman-Keuls post hoc comparison was used as appropriate. Differences among means were considered significant at values of *: p ≤ .05, **: p ≤ .01, ***: p ≤ .001.
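A minimal sketch of these conventions in SciPy is shown below; the group arrays are synthetic placeholders, and because Student-Newman-Keuls is not available in SciPy, Tukey's HSD (a closely related post hoc test) stands in for it.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
ctrl, lps, pep = (rng.normal(m, 1.0, 10) for m in (0.0, 2.0, 0.3))

F, p = stats.f_oneway(ctrl, lps, pep)       # one-way ANOVA
posthoc = stats.tukey_hsd(ctrl, lps, pep)   # pairwise post hoc comparisons
stars = "***" if p <= .001 else "**" if p <= .01 else "*" if p <= .05 else "n.s."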
Supporting Information
Table S1 Table outlining the behavioural assessments used and whether they inform us about movement and motility (motoric) and/or the brain reward and motivational (hedonic) systems. Check marks indicate that the behaviour contains elements of these parameters. (DOCX)
Movie S1 Live 2-photon imaging of the effects of LPS on microglia morphology, and blockade of LPS-induced microglia morphology changes by treatment with Tat-MyD88. Under control conditions, microglia in acutely prepared brain slices exhibit the typical ramified morphology characterized by numerous long branches, and multiple filopodia. Within 10 min following LPS treatment, we observed the first indications of morphology change, and by 40 minutes, the majority of branches were lost and the cells were amoeboid. The amoeboid morphology of microglia persisted throughout the remainder of imaging (80 min). In comparison, when slices were preincubated with Tat-MyD88, the transition from ramified to amoeboid characterized by branch loss was not observed at either 40 or 80 minutes following LPS treatment.
(MOV)
Movie S2 Simultaneous viewing of an untreated control mouse (blue), and mice treated with either LPS alone (green), or LPS plus Tat-MyD88 (red) demonstrates the striking differences in behavior. Mice treated with LPS show very little exploratory behavior, take on a hunched posture, and show pronounced piloerection. In contrast, mice treated with LPS plus Tat-MyD88 show a smooth coat, normal posture, and high levels of exploratory behavior, comparable to untreated control animals. (MOV)
Author Contributions
Conceived and designed the experiments: DJH BAM. Performed the experiments: DJH HBC RMH. Analyzed the data: DJH. Contributed reagents/materials/analysis tools: DJH AGP BAM. Wrote the paper: DJH BAM. | 6,634.6 | 2013-03-28T00:00:00.000 | [
"Biology",
"Psychology"
] |
Emergent reconfigurable mechanical metamaterial tessellations with an exponentially large number of discrete configurations
• Modular metamaterials are proposed with an exponentially large number of configurations.
• Each unit cell can switch into 4 patterns with multiple acoustic properties.
• Cellular metamaterial can be in situ set as wave filter, guide, lens and cloak.
• Extrinsic effect is reconceptualized as a design opportunity to form reconfigurable metamaterials.
impact energy absorption, and wave attenuation [26,27]. For example, Zhu et al. [28] proposed 2D metamaterial tessellation based on kirigami units with subwavelength flexural wave manipulation for nondestructive evaluations and structural health monitoring. While the unit cell design was made by cutting and folding a thin metallic plate, this application highlights a key feature of metamaterial engineering that makes it highly attractive for industrial applications. Specifically, bulk-scale response functions of metamaterials are encoded into properties intrinsic to individual units such as geometric angles and lengths without making chemical or molecular modification.
In making this observation about the nature of metamaterial response functions being encoded into the unit cell's properties, we used the term "intrinsic" in the way it is used in thermodynamics, where intrinsic properties depend on the material in question rather than the amount of material involved. The classic example often used to explain the concept of an intrinsic property is density, whereas its extrinsic equivalent is mass, which depends critically on the amount of material involved. Interestingly, factors extrinsic to individual units in a metamaterial structure, such as unit-unit interactions, can arise and obscure the intended unit-scale design. These extrinsic effects are often structure-specific, driven by self-interactions, and can depend on shape, size, orientation, dimensionality, and topology of the bulk material [2,[4][5][6][7]14,18,19,25]. This sensitivity to bulk-scale details makes extrinsic properties difficult to predict, prescribe, or plan when designing mesoscale units [19], and therefore they present a critical obstacle to circumvent when developing general-purpose metamaterial technologies.
A potential solution to the problem of extrinsic effects comes from the introduction of multistability to the individual units of a metamaterial structure. Multistability is an important characteristic of mechanical metamaterials arising in structures that exhibit multiple energetic minima when deformed. This phenomenon can be used to trap energy with snap-through mechanisms [29], morph surfaces [30], and give rise to enhanced force sensitivity through non-linear response [31]. In some mechanical metamaterials, strategies based on buckled beams have been shown to achieve multistability, but this method's applications are limited due to the low magnitude of forces involved [29]. Tan et al. [29] proposed multistable mechanical metamaterials incorporating magnets, which exhibited a larger force response and reusability in impact protection applications. Similarly, Dudek et al. [32,33] designed multistable mechanical metamaterials using appropriately distributed magnetic inclusions, which exhibit pronounced negative stiffness. Based on the multistability of individual origami units, Yasuda et al. [34] designed a truss-like metamaterial with tunable stability and stiffness that can be used as a non-volatile mechanical memory storage device. Moreover, multistability was used to tune band gaps and wave directionality for 1D metamaterials [35] and 2D spring-mass lattices [36]. These examples demonstrate how multistability generally enables the design of metamaterials consisting of multiple pixel-like units with multiple programmed reconfigurations. Thus, returning to the challenges of extrinsic metamaterial properties, a potential avenue for defusing these challenges is to fix the overall dimensions of a multistable mechanical metamaterial and rely on multistability as a means of selecting the desired metamaterial properties.
Although there is typically an infinite design space of continuous geometric parameters [18,[37][38][39][40][41]] for metamaterial design, a multistable metamaterial has an exponential number of discrete configurations. Here, we propose to utilize a Popping and Emergently-Reconfigurable Metamaterial Tessellation (PERMuTE) to study this finite design space of N tessellated multistable units. If each unit has M stable configurations, then, symmetries aside, the bulk metamaterial has ~M^N discrete configurations [6,17,42]. This design strategy allows us to choose metamaterial properties by selecting a specific configuration for each of the N units while holding the bulk material's shape, size, orientation, dimensionality, and topology constant. Thus, rather than eliminating extrinsic properties, the mechanical contributions of extrinsic effects are incorporated into an exponentially large menu of possibilities that can be selectively chosen from when configuring the metamaterial. Key benefits of this design strategy are: (i) avoiding designing/characterizing new case-specific mesoscale units, and (ii) the ability to dynamically change the mechanical properties in situ. Therefore, we shift effort toward a deeper characterization of a single general-purpose structure and its various configurations for a variety of applications. We demonstrate that design solutions for a 1D wave filter, 2D waveguide, 2D wave lens, and 2D wave cloak exist within the large menu of possible structural patterns.
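A quick arithmetic check of the size of this design space (the function below is ours, written only to reproduce the counts quoted in this paper):

# Size of the discrete configuration menu for N tessellated units with
# M stable states each, symmetries ignored.
def menu_size(M: int, N: int) -> int:
    return M ** N

print(menu_size(4, 3))   # 1x3 strip of 4-state units  -> 64
print(menu_size(4, 9))   # 3x3 tessellation            -> 262,144
print(menu_size(4, 81))  # 9x9 tessellation            -> ~5.8e48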
PERMuTE geometry, kinematics, mechanics, and computation
The basic PERMuTE unit (Fig. 1a-c) is a planar geometric shape cut from a thin foldable material and assembled according to the crease pattern with edge CD joined to C′D′. As designed, this modular unit has five parameters consisting of three lengths and two angles α, γ (Fig. 1a); these parameter values were held fixed throughout all our modeling and experiments (SI text). Additionally, we utilize (θ1, θ2) as the two "weakly-coupled Degrees of Freedom (DOF)" to describe each unit's configuration. To be precise, the PERMuTE geometry has two uncoupled linearly-independent DOF, (Θ1, Θ2); the easiest way to show these relations is to perform a coordinate transformation on (θ1, θ2) and rotate the plane by 45°. Practically, the convenience offered by (θ1, θ2) is to express a formulation that matches hands-on intuition for the PERMuTE unit's physical behavior, especially regarding the motion of the flanges. Mathematically, this convenience means we use variables that are linearly coupled to one another according to Eq. 2. If we insert the contours defined by θ1 = θ2 and θ2 = 360° − θ1 into the expressions for Θ1 and Θ2, we indeed confirm that (Θ1, Θ2) form an orthogonal basis. Of course, these geometric relations for the DOF become more complicated when material bending is introduced. This complication leads to a situation where the practical benefits and conceptual conveniences of (θ1, θ2) ultimately outweigh the mathematical formulations of (Θ1, Θ2).
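One consistent way to write the 45° rotation just described is sketched below; since the paper's Eq. 2 is not reproduced in this excerpt, the exact normalization is our assumption, chosen so that the two crease-only contours map onto the new axes.

\Theta_1 = \frac{\theta_1 - \theta_2}{\sqrt{2}}, \qquad
\Theta_2 = \frac{\theta_1 + \theta_2 - 360^{\circ}}{\sqrt{2}}
% The contour \theta_1 = \theta_2 maps to \Theta_1 = 0, and the contour
% \theta_2 = 360^{\circ} - \theta_1 maps to \Theta_2 = 0, so the two
% crease-only deformation paths are orthogonal in the (\Theta_1, \Theta_2) plane.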
Detailed analytic derivations supplementing Finite Element Method (FEM) simulations of unit-scale PERMuTE mechanical properties are provided in the Supplementary Information (SI). Throughout the simulation and experimental analysis of single-and multi-unit structures, we select a variety of frequency ranges to highlight notable acoustic properties. While these frequency ranges are generally non-overlapping, the overall design strategy proposed here is insensitive to these differences because it prioritizes: (i) generating metamaterial structures that can be reconfigured, and (ii) performing a computational search on this "menu of configurations" to identify which configuration has desired metamaterial properties. Even though computational efficiency is a separate question beyond the scope of this work that relates to details of software implementation, we note the benefits of this design strategy are cumulative over time as the number of analyzed configurations (and therefore the size of the "menu") increases. This strategy is generalizable and can be applied to any reconfigurable metamaterial structure, thereby bypassing the challenges of extrinsic properties that arise in any metamaterial pattern.
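The "menu of configurations" search described above can be organized roughly as follows; this is a hypothetical sketch in which simulate_response stands in for an FEM evaluation (e.g., a batch COMSOL run) and is not part of any real API.

from itertools import product

STATES = ["00", "01", "10", "11"]  # the four stable unit patterns

def search_menu(n_units, target, simulate_response, max_evals=10_000):
    # Exhaustive enumeration is ~4^N configurations, so cap the number of
    # evaluations; smarter search or cached results would replace this cap.
    best, best_err = None, float("inf")
    for i, config in enumerate(product(STATES, repeat=n_units)):
        if i >= max_evals:
            break
        err = abs(simulate_response(config) - target)
        if err < best_err:
            best, best_err = config, err
    return best, best_err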
Simulated PERMuTE frequency filter mechanics
Vibrational dynamics for the 1 × 3 PERMuTE material (Figs. 2 and S3) were calculated in COMSOL 5.1 using a Normal-size free tetrahedral mesh with the geometry of each PERMuTE unit in variable configurations. We chose free boundary conditions to mimic the effect of an applied force on an unconstrained structure, and fixed material properties to mimic fibrous pulp materials. Specifically, we set the Poisson's ratio ν = 0.3, Young's modulus Y = 3.64 GPa, mass density ρ = 871 kg/m^3, sheet thickness to 2 mm, and geometric parameters (SI text) ℓm = 15 mm, ℓn = 17.5 mm, and ℓq = 50 mm. Mass damping was introduced and set to 100 to avoid unphysical exponentially-growing strains at resonant frequencies. The ratio of input oscillation amplitude to output response amplitude is a function of this damping and can be tuned accordingly across a wide range of physically-plausible values. The initial displacement field and velocity field were all zero. The boundary loading type for the input was set to "face excitation" (Figs. 2 and S3, input force applied at " * "). We use a time-dependent solver in COMSOL from 0 to 0.1 s, with a step size of 5 × 10^−4 s. The resulting amplitude is obtained from the "total displacement" value (Figs. 2 and S3, output measured at " * * "). While time-dependent transient oscillations appear early on, they are damped out by t = 0.05 s, and FEM computations produce steady oscillations for the remainder of the simulation. Thus, in plots we show results from 0.05 ≤ t ≤ 0.10 s.
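The post-processing implied by this windowing can be sketched as below (our helper, assuming a displacement trace u sampled at the stated 5 × 10^−4 s step):

import numpy as np

def steady_amplitude(u, dt=5e-4, t_min=0.05):
    # Discard the transient (t < t_min) and report the amplitude of the
    # remaining steady oscillation as half its peak-to-peak excursion.
    tail = np.asarray(u)[int(t_min / dt):]
    return (tail.max() - tail.min()) / 2.0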
Simulated PERMuTE 9 × 9 bulk mechanics
The PERMuTE 9 × 9 material (Figs. 3 and S4) was analyzed in COMSOL 5.1 using the same methods as the 1 × 3 PERMuTE frequency filter. However, these results were computed in the frequency domain with vibration deformations incorporated.
Experimental prototype fabrication
Prototype PERMuTE structures were fabricated using Strathmore 500 Series 3-ply Bristol card stock that was laser cut using a PERMuTE design pattern generated in Mathematica 10.2 (Fig. S5). To join edges for each unit's assembly, additional card stock was mounted with super glue (LOCTITE 431) to the facets so that the crease mechanics were identical to folds elsewhere in the structure. To join units into a 3 × 3 tessellation, a long card stock strip was used along the perimeter. The intersection of creases at vertices is often found to be mechanically complex due to the presence of material stretching. We therefore removed a small circular domain at each vertex to avoid such effects and allow for creasing- and bending-driven material properties to dominate the measured response.
Experimental compression measurements
In compression measurements, force was applied to various PERMuTE structures at a constant loading speed of 0.5 mm/s. Longitudinal force-displacement measurements were performed with the force applied only to the central PERMuTE unit in a 3 × 3 tessellation (Fig. 4a). Transverse force-displacement measurements were performed on an isolated PERMuTE unit (Fig. 4b) as well as a 3 × 3 structure (Fig. 4c). In transverse compression of an isolated PERMuTE unit, the stress-free size of the unit was 49 mm across, and compression was increased until the unit was 34 mm. In all cases, experiments were repeated three consecutive times for each configuration and averaged. Error estimates reported in the main text (Figs. 4a-c) are the minimum and maximum values across all repeated measurements, demonstrating a high degree of reproducibility in bench-top PERMuTE prototypes. The experimentally accessible range for the folding angle ϕ was different between an isolated unit (45° ≤ ϕ ≤ 75°) and the 3 × 3 tessellation (30° ≤ ϕ ≤ 60°). When units are combined and inserted into the testing apparatus, ϕ decreases under compression from the structure's own weight. Nevertheless, in both cases, we were still able to probe ≈30° of compression. Simultaneous theoretical fits to all compression data were produced using an elastic model (Fig. 4b, lines; SI text; Fig. S9). Parameter values were extracted and found to be mutually self-consistent.
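The averaging and error-band convention described above amounts to the following small helper (a sketch; the array layout is our assumption):

import numpy as np

def summarize(trials):
    # trials: array of shape (3, n_points), one row per repeated
    # force-displacement trace for a given configuration.
    trials = np.asarray(trials)
    return trials.mean(axis=0), trials.min(axis=0), trials.max(axis=0)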
Experimental frequency-sweep measurements
Frequency-dependent mechanical experiments were performed with an arbitrary waveform generator (Agilent 33220A, 20 MHz) and a vibrator used to apply transverse oscillations to a 3 × 3 PERMuTE structure in various configurations (Figs. 4d and e). Displacement responses were measured with an IL-065 laser displacement detector. Frequency sweeps from 1 to 1000 Hz were performed within 5 s for an input wave with an amplitude of 0.5 mm, so that the full range of displacement was 1 mm. Displacement measurements were Fourier transformed and the results plotted in frequency space.
Emergence of extrinsic properties in the PERMuTE unit
We demonstrate our strategy for addressing unintended extrinsic metamaterial effects by first introducing a mesoscale unit comprised of a thin foldable material bonded at the edges (Figs. 1a-c) (SI text; Fig. S1). The unit's design was created using standard origami and kirigami techniques previously developed for modular metamaterial construction. While seemingly similar to other origami and kirigami structures [3,22], we note this present design is different in a number of important respects including: (i) different symmetries, (ii) distinct topologies, (iii) different number and direction of creases, (iv) different number of degrees of freedom, (v) different 3D tessellation patterns, and (vi) different geometric compatibilities (see extended discussion in SI). This unit has two symmetric flanges that actuate using weakly-coupled degrees of freedom (Fig. 1c, θ1 and θ2; θ1 is the dihedral angle of facets BCFG and BCF'G'; θ2 defined similarly) (SI). Each flange has two extreme states, resulting in four configurations: both flanges up (Fig. 1c, configuration [1,1], black square), both flanges down (Fig. 1c, configuration [0,0], black triangle), and a symmetric pair of configurations with one flange up and one flange down (Fig. 1c, [0,1] and [1,0], black diamond and circle). Because the current work is primarily interested in emergent extrinsic properties in tessellations of this unit, we will address unit-level mechanics insofar as it advances us toward a better understanding of the more general problem central to this work (see SI text for additional unit-level details).
In accordance with convention and empirical observations on bench-top prototypes (Methods; SI text), we model folding creases as linear torsional hinges and compute the energetics of geometrically-allowed configurations assuming ideal rigid facets (Fig. 1d, black diagonal lines θ2 = θ1 and θ2 = 360° − θ1). These crease-only deformations are a subspace in a larger energetic landscape where the facet material is allowed to bend (Fig. 1d, colour corresponds to elastic potential energy from folding plus bending; see SI text for derivations) [3,14,15,18,21]. Regardless of whether facet bending is permitted, an isolated unit is intrinsically tristable with: (i) a strong energetic minimum for the [1,1] configuration; (ii) weak energetic minima for the [0,1] and [1,0] configurations; and (iii) no minimum corresponding to the [0,0] configuration (Fig. 1e; Movie S1). However, when a unit is embedded in an N = 3 × 3 = 9 unit tessellation where it interacts with its 8 neighbors, extrinsic unit-unit interactions stabilize the [0,0] configuration as well, making embedded units effectively quadstable (SI text). Beyond statics, unit-unit interactions also have implications in a dynamic regime. To demonstrate the consequences of extrinsic interactions, we performed vibrational analysis of the mesoscale unit using FEM simulations of a thin fibrous pulp material (see SI text for FEM details including consideration of other thin sheet materials). These results show the band structure resulting from the extrinsic multistability has a gap around 2.9 kHz in the ΓX direction that can be reversibly opened or closed depending on whether the unit is in [1,1] or [0,0] (Fig. 1h, red and blue lines; band gap highlighted by gray rectangle) [10-12,16,17,43]. This bandgap exhibits directional dependence, consistent with the unit's orthotropic construction. Examining a broader frequency range shows this configuration-specific opening and closing of band gaps repeatedly occurs throughout the 1-10 kHz range (Fig. S2). Going one step further and integrating the band structure shows the Density of States (DOS) also exhibits a high degree of sensitivity to the module's configuration (Fig. 1i, red and blue lines). Since both directionality and density of vibrational modes are so strongly configuration-specific, the extrinsic interactions of PERMuTE provide an opportunity to develop dynamic metamaterial-based devices from its exponentially large (~4^N = 2^(2N)) menu of configurations.
Linear PERMuTE structures have reconfigurable vibration transmission
Examining the properties of a 1 × 3 PERMuTE material illustrates its potential as a platform for developing general-purpose reconfigurable metamaterials. Again using FEM, we input a time-dependent longitudinal force generated by 11 equally-spaced frequencies, F(t) = F_0 ∑_{κ=5}^{15} sin[2π(κ·100 Hz)t] (Fig. 2a; SI text; Fig. S3). Whereas continuous frequency ranges are well-suited for characterizing band structure, discrete input functions such as the frequency comb used here are better able to highlight the potential functionality of this device.
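For concreteness, the 11-tone input comb can be generated as below (a sketch; the sampling rate is our choice, and F_0 is arbitrary):

import numpy as np

F0, fs = 1.0, 20_000                  # amplitude and sampling rate (Hz)
t = np.arange(0.0, 0.1, 1.0 / fs)     # 0-0.1 s, matching the FEM window
# Equally spaced components at 500, 600, ..., 1500 Hz (kappa = 5..15).
F = F0 * sum(np.sin(2 * np.pi * 100 * k * t) for k in range(5, 16))

spectrum = np.abs(np.fft.rfft(F)) / len(t)   # 11 equal peaks
freqs = np.fft.rfftfreq(len(t), 1.0 / fs)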
When the PERMuTE material is in the [0,0][0,0][0,0] configuration (Fig. 2b, top), we find it functionally behaves as a vibrational filter (Fig. 2c, top) that suppresses 10 of the 11 input frequencies (Fig. 2d, top). We then reconfigure the PERMuTE material to [0,0][1,1][0,0] (Fig. 2b, middle) by popping its middle unit, and find it now transmits a more complex waveform (Fig. 2c, middle) with two well-pronounced frequencies mixed with low-amplitude side-band contributions (Fig. 2d, middle). Popping another unit (Fig. 2b, bottom) of the same PERMuTE structure to generate the [1,1][1,1][0,0] configuration then leads to another new output waveform (Fig. 2c, bottom) that again consists of two well-pronounced frequencies (Fig. 2d, bottom). However, in this third configuration, the output frequency composition has substantially changed relative to the previous two settings. With only 3 of the 4^3 = 64 configurations examined (SI text; Fig. S3), these results already illustrate how extrinsic factors affecting the PERMuTE material's multistability lead to configuration-specific frequency filtering.
Interestingly, some portion of this 1 × 3 PERMuTE structure's frequency response may itself be an extrinsic effect separate from the configuration-dependent aspects already highlighted. We see this in the reciprocity properties of the structure's transfer function [8]. Such non-reciprocal phenomena in the transfer function are known to arise from nonlinear interactions, though whether in this case they stem from the same non-additive unit-unit interactions enabling multistability is an open question.
Noting extrinsic properties are sensitive to the number of modules and their macroscopic assembly, we recognize the specific band structure for this 1 × 3 tessellation is distinct from that of an isolated unit or another size structure. We therefore expect a 2D PERMuTE material to have similar functional properties, but with a new degree of design freedom.
Planar PERMuTE structures as reconfigurable devices
Using similar methods to those applied to the linear PERMuTE structure, we analyzed vibrational properties of a 9 × 9 PERMuTE material to determine what types of devices can be found among the configurations of this structure. Of the 4^81 ≈ 5.8 × 10^48 possibilities available (SI text; Figs. S4 and S5), we focus here on three. For the first device, we popped a "+" shape of units to [1,1] and set the remainder to [0,0] (Fig. 3a, top). This pattern was chosen to produce a 1-input/3-output waveguide that we tested by oscillating the input edge unit at various frequencies f (Fig. 3a, bottom; orange unit) while measuring the response amplitude throughout the structure (Fig. 3a, bottom; red heatmap). Defining the response signal efficiency η(f) as the average output amplitude (Fig. 3a, bottom; green units) divided by the average perimeter amplitude (Fig. 3a, bottom; black units), we find η(f) can be quite large (Fig. 3a, bottom; η(1,007 Hz) ≈ 14). These large efficiencies arise when input oscillations excite the specific configuration's band structure resonances.
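The efficiency metric defined here reduces to a one-line ratio; in the sketch below, amps is assumed to map unit indices to steady-state response amplitudes at the drive frequency (the data structure is our assumption).

import numpy as np

def efficiency(amps, output_units, perimeter_units):
    # eta(f) = mean output amplitude / mean perimeter amplitude.
    out = np.mean([amps[u] for u in output_units])
    ref = np.mean([amps[u] for u in perimeter_units])
    return out / ref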
For the second device, we reset all units to [0,0] and then popped a triangular-shaped region into [1,1] (Fig. 3b, top). This reconfiguration programmed the PERMuTE material to function as a vibrational wave lens that focuses a distributed line of input oscillation (Fig. 3b, bottom; orange units) onto a single output unit (Fig. 3b, bottom; green unit). Again sweeping frequency while measuring the response signal efficiency, we found a range of functional values where η(f) > 1 (Fig. 3b, bottom; η(658 Hz) ≈ 4).
For the third device, we reset the configuration to a rectangular annulus of [0,0] units with the goal of creating an interior region isolated from vibrations (Fig. 3c, top; central region is targeted for vibration isolation). Measuring η(f), we found values generally <1 with the best performance leading to ≈80% vibration suppression (Fig. 3c, bottom; η(89.9 Hz) ≈ 0.2). In this type of cloaking device, a resonant mode confines input oscillations to the edge of the structure and therefore prevents resonant frequency waves from propagating through the annulus. Because the wave cloaking capabilities are tied to the band structure's resonances, reconfigurations of the device affecting the band gaps can be used to tune the cloaking properties. Inspired by electromagnetic metamaterials that cloak sensors [44], we can foresee this type of structure being useful for protecting sensitive equipment within the annulus from external driving at potentially harmful frequencies.
While the functional range of frequencies for these three devices vary, the basic unit's geometry remains the same. However, this does not mean the functional properties cannot be further adjusted from their current baseline. For example, modifications to the three configurations by changing various units will affect the band structure in difficult-to-predict-but-possibly-useful ways. Deeper computational exploration would be required to know exactly what these changes are, and whether the device's effects fall within the application's desired frequency range. Similarly, as with other classes of metamaterials, the mesoscale unit geometry is independent from the base material's composition, which makes its density and modulus free parameters for additional control over the resonances affecting η(f) (SI text). While the three devices shown here provide specific examples over specific frequency ranges, the ultimate limits are unknown and would make for an interesting computationally-driven material discovery exploration.
Bench-top experiments with PERMuTE structures and configuration-specific extrinsic properties
In light of the insights gained by these FEM studies (Figs. 1-3), we fabricated PERMuTE devices from laser cut cardstock (Fig. S5) and tested the mechanical properties in bench-top experiments (Fig. 4). We first verified an isolated unit is intrinsically tristable (Movie S1), whereas units in a 3 × 3 tessellation are quadstable (Movie S2). We then verified the mechanical properties were dependent on extrinsic interactions with a longitudinal force-displacement measurement of the central unit (Fig. 4a, photos) that demonstrated a sensitivity to the configuration of adjacent units (Fig. 4a, dark and light green data). We also found the extrinsic multistability separating [0,0] from [1] vanished from this force-displacement measurement when testing an isolated unit (Fig. 4a, gray data). Additional transverse compression measurements of a single unit in the [0,0], [0,1], [1,0], and [1] configurations verified the predicted symmetry between [0,1] and [1,0] (Figs. 1e and 4b).
The same transverse compression on a 3 × 3 tessellation with varying number of units in the [0,0] configuration showed the zero-frequency mechanical properties could be easily and reversibly configured due to the extrinsic multistability of the individual units ( Fig. 4c; SI text; Fig. S6). Collectively, these experiments demonstrate two distinct examples for how emergent extrinsic effects can be repurposed. First, they stabilize new configurations of the PERMuTE tessellation's units, and second, they contribute to the bulk metamaterial's mechanical properties.
Transitioning to frequency-dependent measurements, we tested a 3 × 3 PERMuTE structure to verify the anticipated functionality of extrinsic properties. Performing a frequency sweep on the tessellation in five distinct configurations showed resonant peaks that shifted up and down while maintaining a nearly constant central frequency (Fig. 4d, peaks marked P1, P2, and P3; SI text; Figs. S7 and S8). These measurements demonstrate a range of extrinsic behavior depending on how many units were set to [0,0], with P3 vanishing in one configuration, while P2 jumped 5-fold between its extremes. At higher frequencies, we also found that varying configurations caused resonance peaks to discretely shift their center frequency by Δf1 = (9 ± 1) Hz and Δf2 = −(9 ± 1) Hz (Fig. 4e). Interestingly, this shift groups configurations so that 0 and 1 units in [0,0] have similar resonance peaks (Fig. 4e, gray and orange lines), while configurations with 7 and 9 units in [0,0] are almost identically shifted (Fig. 4e, purple and black lines shifted by Δf1 and Δf2). On one hand, these configuration-specific shifts experimentally realize an extrinsic frequency-filtering metamaterial (Fig. 2d). On the other hand, the reversible generation, enhancement, and alteration of resonance peaks (combined results of Fig. 4d and e) are the critical ingredients necessary for constructing the waveguide, wave lens, and wave cloak devices (Fig. 3). Even though these physical experiments only explored 5 of the 4^9 = 262,144 possible configurations over a limited frequency range, the experimental evidence validates the existence of extrinsic properties stemming from extrinsic multistability and demonstrates how their realization is practically implemented with PERMuTE.
Discussion
Metamaterial design focuses on developing geometric patterns that can be embedded in conventional materials. An advantage to engineering materials this way is that the pattern's parameter space provides wide flexibility in the downstream properties. The cost of this flexibility is a challenging multi-objective parameter optimization problem when developing real-world applications of metamaterials due to emergent and extrinsic effects. In response to this challenge, we proposed, and subsequently demonstrated with PERMuTE designs as a specific example, how extrinsic effects can be embraced and repurposed into a menu of properties that can be selectively chosen from for device functionality. Actuating between these configurations is an application-specific extension of the concept, but can include motors, pneumatics, shape memory alloys, or even manual manipulation. By emphasizing post-fabrication reconfiguration of a structure, this metamaterial design approach contrasts with current and conventional methodologies, which instead rely on fixing gradients in unit geometry to statically program bulk-scale mechanics [5,6,8,14,15,18]. While this commonly-used approach allows one to achieve metamaterial functionality, it has little ability to respond to changing user needs or to circumvent the emergence of non-additive effects.
Real-world use-cases of our approach therefore involve: (i) choosing a base material to enhance with metamaterial properties; (ii) determining tolerances of the intended fabrication method in order to set the maximum allowable metamaterial pattern density; (iii) pre-computing response functions over a range of frequencies and configurations; and (iv) selecting desirable configurations for a given application. Because the band structure generally varies with material, tessellation size, and configuration, these steps must be repeated should any of these details change. Additional simulations and experimental results studying these variations with the PERMuTE geometry serve to reinforce how the extrinsic design approach is applied, though such effort likely offers little new generalizable insight beyond what's already presented here. For example, a 2.9 kHz gap in an infinite tessellation (e.g., Fig. 1h) may not necessarily appear in a 1 × 3 or 9 × 9 tessellation (e.g., Figs. 2 and 3), making material- and structure-specific response both an open challenge and an opportunity for extrinsic metamaterial design. The cumulative benefit of this computationally intensive design strategy is apparent: a single well-characterized structure can be used in numerous applications through simple reconfigurations, which circumvents the challenges of conventional metamaterials that have to be re-designed, re-computed, and re-fabricated if new properties are desired outside their original application. As such, the extrinsic design strategy enhances the predictability, flexibility, and programmability of metamaterials, increasing their potential impact in future applications.
Conclusion
In this work, we proposed the PERMuTE mechanical metamaterial unit cell and studied its properties in 1D bar-like structures and 2D tessellations. We investigated the multistability and vibrational band gap of a unit cell, as well as the emergence of a new stable state when the basic unit was tessellated into a larger structure. These findings inspired us to design the 1D wave filter, 2D waveguide, 2D wave lens, and 2D cloak, where non-additive extrinsic effects were essential for stabilizing the configuration. These results show that the multistability of a single unit is enhanced when it is embedded in a 2D tessellation through extrinsic unit-unit interactions, and that this multistability can be utilized to design mechanical metamaterial devices. Importantly, the proposed PERMuTE metamaterial possesses exponentially many programmable patterns built from different "0/1" combinations of units, which opens up new avenues for designing smart structures and novel devices in the fields of mechanics, energy, optics, aerospace, and electronics. While our findings are specific to the PERMuTE geometries, the concepts we used to direct the research are more general. The notions of "intrinsic" and "extrinsic" properties can be found in the earliest work on thermodynamics and find new relevance with mechanical metamaterials, where "intrinsic" properties can be attributed directly to the unit cell's geometric design, whereas "extrinsic" properties arise at the bulk scale due to non-linear and non-additive interactions. Whether in the context of orthotropic periodic tessellations such as ours or aperiodic patterns such as a Penrose tiling, the distinction between intrinsic and extrinsic effects will be critical for developing metamaterial applications.
Author contributions
N.Y. and J.L.S. designed research; N.Y. and C.C. performed research; N.Y., J.K., and J.L.S. analyzed data; N.Y. and J.L.S. wrote the paper; J.K. and J.L.S. supervised the research. The authors declare no conflict of interest. N.Y. and J.L.S. contributed equally to this work.
Data and materials availability
All data, code, and materials used in this work are freely available upon request.
Credit author statement
The corresponding author (N.Y.) ensures that the descriptions are accurate and agreed by all authors.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
"Materials Science"
] |
Design actions with resilient local communities: Goals, drivers and tools
This is an open access article distributed under the terms of the Creative Commons Attribution 4.0 International licence (CC BY 4.0), which permits reproduction, adaptation, and distribution provided the original author and source are credited.

Abstract

Since resilience is identified as the capacity of communities and institutions to manage environmental, economic and social urgencies in an effective and innovative way, design research and actions can create the right conditions to engage resilient processes. This paper offers a critical reading of some research projects developed at the Design Department of the Politecnico di Milano (Italy) through goals, drivers and tools presented as relevant for design actions with resilient local communities.
Introduction
In this paper we tackle the topic by looking first of all at the definition of resilience and at how design can be exploited in the explanation and pursuit of resilience. Then we point out the main goals that design for resilience has to take into consideration. We also identify what we call drivers for resilience, that is, specific areas of intervention particularly relevant both for participatory design and for the activation of resilience processes. Then, we analyze some tools that we find relevant for the implementation of resilience activities for local communities. Finally, we propose a grid to analyze, in light of the identified goals, drivers and tools, some research projects that have been developed by the Design Department of the Politecnico di Milano (Italy). These research projects put in place similar design strategies in order to act at different scales of intervention and reach specific kinds of objectives.
Design and the pursuit of resilience
Since resilience is identified as the capacity of communities and institutions to manage environmental, economic and social problems in an effective and innovative way, design research can be pointed out as an activator of the right conditions to engage resilient processes (Walker et al., 2004; Colucci, 2012; Graziano, 2012; Pisano, 2012; Rodin, 2014; Pinto, 2015).
We are especially interested in resilience on a small scale. According to the literature, resilience on a small scale has mainly to do with the maintenance and improvement of the quality of life of individuals, which can be achieved thanks to the creation of desirable contextual conditions. Many intangible elements, which have more to do with social capital than with economic capital, have to be taken into consideration, and design methods are very useful in highlighting and making explicit resources, tools, relationships, etc. that usually remain hidden. Narratives, participation, and co-design are suitable approaches for creating and facilitating visible connections, which on a small scale can help resilience (Fassi and Sedini, in press).
In our view, designing for resilience means to: (i) interpret resilience in a more positive way: according to Manzini, resilience has to be understood as "a deeper expression of the human character and, at the same time, as ground for a possible reconciliation between human beings and nature, between human beings and the irreducible complexity of our world" (2015, p. 22); and (ii) take into account four different features of the socio-technical system: diversity, efficiency, adaptability, and cohesion (Fiksel, 2003). In the following pages we will develop our understanding through the identification of goals, drivers and tools for resilience.
Goals
In line with the previous paragraph and the soft levers which design is able to activate and use, we have identified three main goals that can be reached thanks to the use of design research and actions in the matter of resilience (Fassi and Sedini, in press): the engagement of people, the development of long-term economic strategies and the influence on policy agendas. These three goals are important for resilience processes and are able to influence the activation of forward-looking strategies in order to grant the socio-economic sustainability of communities.
In the next paragraphs, we are going to explore these three main goals in more depth.
Engagement of people

Design works with people, thus allowing people to design by and for themselves (Brown, 2009). Looking specifically at resilience issues, engagement is becoming the keyword and a necessary practice for the achievement of high standards in quality of life. Through design practices, which actually can be seen as resilience practices, it is possible to build or re-build a substratum of social capital, which is one of the most valuable capitals for regions and communities (Bourdieu, 1980; Granovetter, 1983; Putnam, 1993). Looking for example at the city of Milan, Italy, the construction and establishment of a renewed social capital is happening in public spaces, through collective moments, supported by both physical and virtual networks. These activities of engagement are not necessarily framed outside the market and promote social change through inclusion and relationships, for example: interventions of recovery and functional redefinition of farmsteads (cascine); social housing projects to give access to the real estate market and to improve deprived areas of the city; and the use of kitchen gardens as educational tools in schools and as instruments of empowerment, collaboration and improvement of urban areas.
Development of long-term economic strategies
The sum of repeated and cyclical events is able to set in place practices which might not have an immediate result in terms of economic improvement but which, through time, are able to activate other kinds of economies that are hidden. We can talk about the creation of a certain kind of environment (Marshall, 2013 [1890]; Becattini, 1979; Santagata and Bertacchini, 2011). Aside from infrastructures, building a creative climate or people climate, as Richard Florida (2002) calls it, turns out to be even more important. This "climate" is nurtured by several soft factors which include, for example, an attractive residential environment, tolerance and alternative lifestyles, a lively cultural scene, and the presence of meeting places for business and leisure purposes where the flow of knowledge and information takes place (Musterd et al., 2007). Other theories, such as that of field-configuring events (Lampel and Meyer, 2008; Sedini, 2011), stress that recurrent practices, activities and events are able to influence and prolong the consolidation of economies that are not directly connected with business but that have economic spillovers in terms of facilitation and empowerment. Usually, policies which try to give a new image of a city (even a fake one) are organized as short-term and task-oriented projects. However, those policies usually soon become disembedded and deterritorialized, very far from what can be defined as authentic (Peck, 2005). This influence on policy agendas is going to be discussed in the next section.
Influence on policy agendas
With the shift to the so-called Experience Economy (Pine and Gilmore, 2011), the interdependent relationship between the production system (economy) and the urban cultural environment (culture and territory) became part of the agenda of policymakers around the world (Scott, 2006).
There are two main ways in which design can contribute to the development of specific policy agendas. The first is direct and is explicit in the cooperation of institutions in some research projects with the clear intention of gaining insights and recommendations for policy implementation. The European Commission document "Implementing an Action Plan for Design-Driven Innovation" (2013) states that aesthetics can be a strategic means to foster innovation. In order to exploit these capabilities of design, events, projects and initiatives are needed across Europe, with particular attention to the involvement of public-sector policy-makers. The goal is to make them aware of the possibilities and capabilities of design in generating new economic and social value (Sedini, 2015).
The second is indirect and is carried out through the capacity of certain research projects to throw light on and increase attention to specific scenarios that, if institutions are careful enough, can become part of policy agendas.
Drivers
In this section we are going to analyze what we call drivers for resilience: thematic frames within which participatory design activities can be organized and developed in order to activate territorial resilience practices. These drivers (Craft and DIY, Communities and Social Innovation, Arts and Cultural Heritage) also have much to do with the development and implementation of new business initiatives.
Craft and DIY
Particularly important for resilience are:
• the capacity of enterprises for innovation;
• the ability of the entrepreneurial environment to create new opportunities;
• the readiness of institutions and individuals to react.
In this view it is easy to understand how policies aimed at attracting creative and innovative knowledge and skills can be crucial for resilience (Sotarauta, 2005), also because the Creative and Cultural Industries seem to have the highest levels of resilience to the crisis, even though this sector too was penalized (Stumpo and Manchin, 2014).
Craft is a specific and particular sector that nowadays sits between tradition and innovation.
We have noticed a renewed attention in our societies to traditional types of work such as craft. It seems that in periods of recession the interest in craft tends to increase (Frayling, 2012). As Sennett (2008) states, there are deep connections between material consciousness and ethical values; therefore the practice of craftsmanship can be directly connected to the creation of a positive cultural environment and the growth of social networks. Craft can be defined as an agent of change, able to give shape to social relationships, since these kinds of knowledge are passed on mainly (but not only) through social interaction (Fuad-Luke, 2011). In addition, crafts can also contribute to shaping a sustainably aware future, also as a reaction to mass production (Dillon, 2012). The combination of craft and digital technology has given shape to a new era for DIY (do-it-yourself) practitioners. Makers seem to be the protagonists of a new "revolution". Technology has been able to widen access to tools and support for designing and making (Bunnel and Marshall, 2014). This trend also influences the composition and character of cities (and places in general). Think about the creation of hubs, fab labs and makerspaces, where digital technologies are accessible and quite affordable and give people new opportunities first of all to learn and then to design, test and (eventually) sell in a global community (Bunnel and Marshall, 2014). In these spaces it is possible to play, experiment and increase knowledge, and also to enlarge one's own social network and, more generally, social cohesion (von Streit and Lange, 2013; d'Ovidio and Ranci, 2014).
The potentialities of craft and DIY practices to mobilize community capabilities are directly connected to the second driver that we propose in this section: communities and social innovation.
Communities and social innovation
As we mentioned before, the circulation of knowledge and information has been favored by the rise of the Network Society (Castells, 1996). The availability of connections with other people has resulted in the so-called traces of communities (Bagnasco, 1999), which are continuously created and re-created and share the same values, knowledge and goals.
The business world is more and more attentive to these communities, which actually have the power and the willingness to answer social issues that are no longer fully covered by the welfare system. In order to overcome this "emptiness", several start-ups and even volunteer activities are focusing on the development of relationships and networks. These kinds of projects or ideas try to connect different generations or cultures, for example, trying to avoid the isolation that some people (such as seniors or foreigners) often experience. Therefore, people who participate in these kinds of (business) activities usually hold a double role: they are both consumers and suppliers of the service. This new way of participating in the consumption world favors and contributes to processes of social construction of meaning (Codeluppi, 1992). An economy of this shape has been defined as the Social Economy (Phills et al., 2008), and it is mainly based on the use of digital networks, on the blurred boundaries between production and consumption, on collaboration and on the crucial importance of values. Social Innovation is included in this definition. As Manzini states, it corresponds to the definition of participatory design (Manzini and Rizzo, 2011) because:
• both are constituted by very dynamic processes which include co-design activities oriented also to the construction of participants' approval;
• designers can participate in these activities as facilitators but also as conductors and project creators;
• co-design activities are very complex and need artifacts which were explicitly thought out and designed.
Arts and Cultural Heritage
Performing and visual arts can have an important role in the promotion of territories, the interpretation of the value of territorialization, the involvement of local communities, the development of sustainable forms of tourism, etc. The connection between arts and cultural heritage can be particularly fruitful for the mutual advantages that each creates for the other. Culture and creativity have been crucial pillars for the valorization of cities, regions and nations. Arts and culture, in both their tangible and intangible manifestations, are economic factors able to leave a great footprint on the economy of cities, regions and nations. In addition, arts and culture are probably the most effective elements able to give shape to, or take part in the definition of, the image (brand) of a place. Finally, arts and culture are means to develop good communication among different groups of citizens and therefore favor social integration (Vicari Haddock, 2010). Moreover, the chance of cross-fertilization which the arts sectors allow with other industries, such as ICT, is very interesting (Throsby, 2008). Looking at the role of participation and ICT, the Convention on the Value of Cultural Heritage for Society (Council of Europe, 2005) clearly stated that heritage has to be framed in a wider way, enlarging the definition of what cultural heritage is. It also stressed the role of people, participation and engagement, in order to move from a conventional preservationist view to a perspective oriented to the development of a future heritage. New technologies are clearly having a big role and impact in this change of view, trying, through social media, to encourage visitors to actively interact with heritage contents (Giaccardi, 2012). New technologies actually allow new opportunities to use arts and cultural heritage for the development of a sense of place, for the construction of a personal and collective identity, and for the success of the tourism sector. We are witnessing a further shift in the loss of "aura" of the artistic object described by Walter Benjamin in his book The Work of Art in the Age of Mechanical Reproduction (2008 [1936]).
Tools
We selected some specific tools for participatory design which we tested in several research projects carried out by the Design Department of the Politecnico di Milano. In our opinion, these tools are particularly effective for resilience strategies.
• Co-design workshops: engaging inhabitants to obtain expert knowledge that other experts do not have, through the collection of information on how to solve wicked problems and the exploration of fuzzy opportunities (Visser et al., 2005; Sanders and Stappers, 2008). Co-design can involve several levels of people's involvement; the workshop includes their direct and (pro)active engagement through applied techniques (visual, practical, etc.) in different steps. Workshops are led by a designer who activates the interaction among participants to generate solutions. The ones our research is based on usually last from a few hours to no more than a full day and include up to 50 participants and 5 facilitators.
• Prototyping events: a form of Participatory Action Research (PAR) in which ideas are tested immediately through a one-day event involving people as users, using design toolkits. A prototype can be viewed not only as a thing (an object) (Anders et al., 2011) but rather as socio-material relations where matters of concern can be dealt with (Björgvinsson et al., 2010). That is why the prototyping action is, for us, connected to an event where not only products/spaces/services are shown but where relations take place, helped by the use of toolkits. The toolkits are made to be used directly by the end users, empowering them to develop certain actions or to reach specific goals. These kinds of fast, small design experiments allow quick conclusions to be drawn before moving towards more stable and organized solutions (Meroni et al., 2013).
• Calls for projects: enlarging the range of solutions through the collection of several ideas focused on a specific topic. This tool allows the creation of a network of proposals and the comparison among them, and helps innovation to be used at a bigger scale. Calls for projects are usually open to a wide range of stakeholders, including professionals, who respond to a specific brief with outcomes to be assessed by a committee on the basis of the provided requirements.
• Social media strategy: disseminating results in order to scale up their use and to raise awareness of specific issues and solutions. The use of social media and social networking can spread information and dialogue on a full range of strategies toward long-term sustainability and well-being in the community (Lachapelle, 2011). It supports the diffusion of information related to design actions in a local context, allowing them to be used as best practices and to generate a high number of interactions.
Case Studies about participatory design and resilience
In this section, we propose a grid of analysis in order to offer a critical reading of some of the design research and actions developed at the Design Department of the Politecnico di Milano.
Engagement of people
Coltivando is located in the public university space of the Politecnico di Milano's Bovisa campus, helping people of the community to grow their own food and allowing the local community to discover a public place previously hidden to them. It is a change of use for a space that for a long time had been used as a building-site deposit and then as a green area. Change requires people, vision and commitment (Pincetl, 2012), and here it adds social and environmental value to the campus and a new connection with the local community.
A project like Coltivando, coupled with service design models, helps to address the gap between knowing the problem of unsustainability and finding solutions for individuals, sustainable design practitioners, communities and government through sustainable everyday design thinking and implementation. It is an experiment in collaboration between service and spatial design to bring together diverse members of the community who live in the same place, by engaging them in designing solutions for resilience for a place that suffered through changes in use classification. The Bovisa district was transformed in the second half of the twentieth century by the removal of almost all the big industries where most of the citizens living in the neighbourhood worked. New residential areas and the opening of a railway station connected to the city centre have brought new life to the neighbourhood. There is still a lack of public spaces such as green areas and squares where people can meet. In the late 1990s the Politecnico di Milano, hosting the School of Design, was established on the grounds of "Ceretti & Tanfani", a company that produced cable railways and made Bovisa a working-class district. Today it is a green space of about 2.5 acres hosting rooms for classes, a workshop, a library, places for seating and a cafe. The campus could be considered a 'hidden' public space (Fassi et al., 2016), since no one but the university community is using it. Most of the people who once knew it as a former factory do not even have the chance to see how it has transformed: not because they are not allowed to enter, but because they think it is for students and university staff only. The two types of 'users' (university community and citizens) have very few contact points in common, and the Coltivando project is attempting to change this situation.
Target
The local community is mainly composed of retired people and families with young kids, with a high percentage of immigrants coming from China and North Africa. The neighborhood hosts one of the highest numbers of associations in the city.
Communities and social innovation
Social innovations: solutions based on new social forms and economic models. These include all social changes towards sustainability that reduce environmental impact and regenerate common goods and the social fabric (Manzini, 2015).
Coltivando took over twelve months to develop, built by a group of people who, every Saturday, spent their time assembling the DIY garden beds made of prefab steel panels, digging channels for the 3,000 metres of tubes for the irrigation system, and putting 90 tons of organic soil into the garden beds.
Today Coltivando is a community garden made of 100 garden beds containing more than 50 different vegetables and fruits, managed by a team of 15-20 people from the neighbourhood who regularly meet on Saturdays to work and spend time together. The garden is now also recognized as a place in the neighbourhood where people meet and organize happenings and events. This is slowly changing the perception of this public space: it is now less hidden and more open to all.
Co-design workshops, Prototyping events
Coltivando has been developed by using service design thinking combined with a spatial design approach.
Two main tools were used: co-design workshops and prototyping events. The garden project was co-designed considering topics such as the service model, governance model, education and programming model, and spatial design. The service model of the garden is based on a collaborative model of sharing responsibilities amongst the group. The first indications of interest in a community project focused on a vegetable garden emerged in the autumn of 2011, when, during the "C'è spazio per tutti" prototyping event, a garden bed was designed and a toolkit for community interaction was proposed. That event was the result of a research activity which included 50 international students and a team of 5 researchers within the Design Department. The event proposed 10 different activities to suggest how to open up the campus to the neighbourhood by testing the design actions with the people on the campus. Among these, the garden bed was the most successful. Following this, a design research team was established. According to Sanders and Stappers (2012), "co-creation practised at the early front end of the design development process can have an impact with positive, long-range consequences". Three co-design workshop sessions were organised between May and June 2012. The first was an academic workshop involving people studying and working in the university; the others were community consultations open to local stakeholders and people from the Bovisa neighbourhood. A community-centred design approach (Meroni, 2008) was used to engage various stakeholders within the university community as well as those of the local Bovisa neighbourhood, and several tools were developed to enable many people to design their own garden.
In each workshop, designers proposed an in-progress concept of the convivial garden, according to the results of the previous session, and asked for feedback about possible spatial layouts and rules for managing the future community. We developed tools to collect data and information from the people, including questionnaires, space mock-ups, and games to help them create their garden both in terms of space usage and service rules. We split the people into groups of experts and beginners, to better understand the needs and motivations of both categories. They were asked to design in response to such issues as where to place the fruit trees, herb and vegetable plots; to create special plots for growing experiments, areas to relax and a playground for children; and to define the roles of members to run the service and ten basic rules for becoming a member. At the end of the three co-design workshops, we used feedback from approximately a hundred people (experts and beginners, academics and residents) to inspire, and adjust to what was possible, the very first design proposal for the space and for the service model of the garden. The design challenge at the end of this process was to match people's desires with what was feasible amid the constraints and available budget. After the co-design sessions, the final working project and the final budget were presented for the project start-up, to obtain funds from the university administration.
Critical issues
Coltivando is now four years old. In the last years, more than 1000 people have come across the space and the activities within it. The co-design and co-construction phases were the most attractive and interactive. People from different age ranges and social backgrounds took part in the activities for short terms (from 1 hour to half a day) or long terms (1 full day every week). It has been difficult to keep the interaction high until today, since one of the big issues is continuity of participation. The core group is now suffering from a lack of people to take care of the garden and less enthusiasm than in the first couple of years. The design research team is developing actions to solve this problem by enlarging the pool of potential users, trying to include the immigrant target group that was very difficult to get in contact with and involve in the activities. Furthermore, the garden itself, even if it is recognized by the neighborhood as one of the key places for socialization, gets robbed by anonymous people who take the vegetables without any permission. This affects the mood of the participants, who are trying to solve this issue by raising awareness about it in the neighborhood.

campUS

Description

"campUS" (2014) is a two-year research project developed by the Design Department with the Architecture and Management Engineering departments at the Politecnico di Milano. It was selected by the Polisocial Award 2014 to be funded as one of the best proposals presented.
"campUS" works for a positive relationship between the space and skills available on university campuses and the local context in which they occur.The relationship between the residential districts and universities passes through the structuring of spaces and activities which allow resilience and facilitate interaction, integration and social cohesion.The "campUS" project fits into this frame-work, acting as a possible model of flexible interaction with the surrounding physical and social space, and as an incubator of social practices scalable in the territory.
Engagement of people, Development of long-term economic strategies
There are four main work packages to be developed in close connection with the local neighbourhood: the convivial garden, the social TV, the itinerant pavilion and a business model.
"campUS" is divided into two areas of intervention, "campUS in", actions inside the campus and "campUS out", actions outside the Campus (in the surrounding district and beyond): • "campUS" in: through research-action, activation of the spaces of the campus as incubators of social practices where social actions (services, spaces, communication systems) are defined, tested and prototyped using the methods of co-design and participatory planning.The results aim at building a package of tools for the dissemination of good design practices, cohesion and social innovation for specific communities in defined areas of the city; aggregation of a number of figures to support the production of content and the development of a communication platform of the district as a system of narrative social practices, catalyst actions and partnerships; • "campUS" out: definition of a landscape of permanent actions in the neighbourhood that have the potential to lead to social enterprises, through an exchange of prototyped actions for virtuous activities ("campUS" in).The research gives design support, assists with adoption and diffusion of instruments of identity and communitybuilding in the neighbourhood, and aims to identify an innovative business model for the long-term management of these initiatives by directly involving the stakeholders who interact with them.
Target
campUS is mainly addressed to people over 65 and to NEETs (Not Engaged in Education, Employment or Training, 15-35 years old). These targets have been chosen by the research team since they are two key categories in the neighborhood. According to national statistics (ISTAT, 2014), NEETs account for more than 27% of young people in Italy, and the percentage is the same in the Bovisa area where the project is based. People over 65, again according to the national report, are more subject to depression and suicide. Furthermore, the connection between these two targets could ease the process of exchanging culture and memories, by letting them collaborate on some of the expected outputs (community gardens, social TV). More specifically, the social TV established a partnership with a local association dealing with rehabilitation for NEETs with minor mental disabilities or difficulties connecting in social contexts, while the community gardens took advantage of a partnership with another association where many retired local people were eager to start cultivating a piece of common land.
Craft and DIY, Communities and social innovation
This research combines a theoretical and metadesign dimension with an applied one to experience the dynamics of effective involvement, to test tools and to prototype models of innovative social practices. The Bovisa campus and the districts of Bovisa-Dergano represent the real case study where actions and interventions in the public space may actually involve citizens and other social actors, allowing them to explore original methods of relationship among stakeholders. Skills and expertise developed within the academic context are directly shared with the community to trigger DIY practices (community garden work package), including through the use of technology (social TV work package). The convivial garden takes its lead from the "Coltivando" project to develop an additional community garden by defining guidelines that address both the hardware components of the project (DIY kit for growing containers, spatial arrangement of artifacts, sizing, etc.) and the software ones (rules, operation, management), in line with, and in support of, existing actions promoted by the municipality. The neighbourhood social TV aims at the formation and aggregation of a series of professional or semi-professional figures to support its activities. This is to develop a narrative system of identified (and identifiable) social practices, with the goal of providing an opportunity for growth and awareness of the neighbourhood's expressive potential and role in society.
Tools
Prototyping events, Co-design workshops, Social media strategy

campUS takes advantage of the previous use of these tools in order to implement them more effectively. Prototyping events were used to test solutions within the community garden work package. "Il sabato della Bovisasca" was an event held in March 2015 to engage people in a new community garden located 2 km from the university campus. The connection with a local association allowed the research team to get in contact with a large group of citizens interested in the development of the garden. Five design solutions developed by students and instructors were presented at the event and refined through interaction with the people. Co-design workshops were used in every work package as a way to define design solutions, to exchange skills and knowledge, to create awareness about the subjects and to strengthen the team of people engaged in the activities. The use of social media (Facebook above all) gives users the possibility to interact not only with text-based information, but also with visual information, audio and video content (Zaglia, 2013). Through this kind of interaction we are able to obtain qualitative information about the engagement, along with quantitative data coming from the insights: in their comments, users highlight the most meaningful matters, giving feedback about the social experience of seeing themselves as the main characters of a common story and sharing it with their personal audiences on social media (Ciancia et al., 2015).
Critical issues
A few issues were critical during the two-year program. First, the creation of a common academic language among the involved researchers from three different departments and disciplines (design, architecture and management engineering). Second, the creation of a strong network of relationships in the neighborhood with local actors/stakeholders (associations, informal groups, the municipality, etc.) to guarantee the effectiveness of the results and a good impact on the area. Last, the participation of the people: most of the actions were co-designed and put in place with the help of the involved actors, but the participation of people during the events (connected with the itinerant pavilion work package) or the everyday activities (for the community gardens) was weak in terms of numbers.
Description
CCAlps (Creative Companies in Alpine Space) was a project financed within the Alpine Space Programme of the European Union, which lasted three years and concluded in December 2014. It was aimed mainly at developing the competitiveness and attractiveness of the Alpine Space area for the so-called Creative and Cultural Economy. CCAlps was based predominantly on the collaboration between institutional and governmental subjects, academia, and creative and cultural enterprises.
Engagement of people, Development of long-term economic strategies, Influence on policy agendas
The first goal was mainly addressed through the pilot action called Creative Camp and the organization of an international public event.Creative Camp was developed as an advanced workshop, which had an initial call for ideas and then, after the selection, a very intensive first phase of concept generation followed by a second, longer phase of idea development.Creative Camps included many activities to develop new products and services, enhancing the growth of the local productive system.The international event, named Cross Creativity, was dedicated to cultural and creative industries and brought together over 300 start-ups.
It is easy to understand how these engagement actions were also oriented to the goals of developing long-term economic strategies and influencing policy agendas. Indeed, CCAlps was devoted to regional planning, since it was a collaboration project between institutions from six European countries (Italy, France, Germany, Austria, Slovenia and Switzerland), such as state governments, development agencies and chambers of commerce.
This composition of the research team was due to the specific focus on practical activities explicitly oriented to their translation into policy actions. Indeed, one of the main objectives was that of delivering insights and recommendations for policy implementation concerning the Cultural and Creative Industries.
Target
CCAlps had as its main target people who wanted to develop their own business idea in the fields of design, fashion and media. Since we were operating in specific territories, one of the main constraints in the selection of the target was that the regions taking part in the project wanted to limit the selection to people already living in the region. We did not set an age limit; however, the participants of the Milanese Creative Camps were mainly young recent graduates in design fields. As is easy to understand, the working situation of the participants was not well defined: some of them were unemployed, some were doing an internship or similar, and others were freelancers or on precarious working contracts.
Craft and DIY, Communities and Social Innovation, Arts and Cultural Heritage
The call for ideas to participate in the Lombardy Creative Camp was focused on three main topics: multimedia, fashion and service design. The drivers mainly emerged from the proposals selected and from the work developed during the period of tutoring and training in which the participants took part. Three of the most successful ideas were built on the drivers identified and proposed in this paper.
• MakersHub Milano: mainly operates within the Craft and DIY driver. Indeed, it is a co-making and co-working space for makers, designers, DIY lovers and enterprises. It is a place for developing innovative products based on the interaction between craft and new technologies (http://www.makershub.it/).
• Craftventure: focused both on the Craft and DIY driver and on the Communities and Social Innovation driver. Indeed, it is a service that allows young people and tourists to experience artisans' work. At the same time, the artisans can preserve/renovate/transfer their knowledge thanks to this cultural "clash". This project won the contest during the Cross Creativity event (http://www.craftventure.com/en/).
• Case Sparse | Tra l'Etere e la Terra (Spread Houses | Between Ether and Earth): this project clearly operates in the "area" between Arts and Cultural Heritage. Indeed, Case Sparse wants to discover and valorize remarkable areas through the use of contemporary art. After periods of artistic residencies in the Malonno area (Brescia), the so-called traces left by artists contributed to the enrichment of an open-air museum open to the involvement and participation of locals and tourists (http://www.casesparse.org/).
Call for projects, Prototyping events
Over a period of six months in 2013, all the partners organized and held their own Creative Camp. A general framework was supplied to the partners, who however could plan and manage their camps around the topics most suitable for their regions and most in line with their competences.
Creative Camps were structured in different ways; however, all of them had to follow these steps: (i) a call for ideas; (ii) the selection of the best ideas and at least two days of intensive workshop for concept development; (iii) a phase of mentoring and coaching; and (iv) at least one final dissemination event. After being selected, the participants actively worked on their ideas supported by experts and mentors. At the end of the two intensive days of idea re-generation and concept development, the participants re-framed their initial ideas and presented them to the experts involved. After the final presentation, the experts evaluated the ideas again and indicated to the organization team which of them would be admitted to the next step of mentoring and coaching.
Critical issues
In order to evaluate the activities of the project, several analysis procedures were put in place:
• a customer satisfaction survey, given to the participants at the end of each Creative Camp in order to check their satisfaction with the organization and the contents of the event;
• an evaluation form filled in by each Creative Camp manager for the collection of qualitative and quantitative information about the pilot projects;
• in-depth interviews with each Creative Camp manager, designed to bring out the strengths and weaknesses of the pilot projects, as well as the possibility of replicating the action model used;
• a final online survey submitted to the leaders of collaborative projects originated by the Creative Camps and later accompanied with targeted services, to monitor and detect the results of the training, assistance and other forms of support offered.
We must say that satisfaction with the Creative Camp as an instrument to conceive or improve ideas was quite high: 70% of participants were satisfied or very satisfied and only 10% were disappointed by the experience.
The evaluation showed that the most critical points concerned the communication of information (pre- and post-event).
Indeed, a general difficulty in reaching the target audience was encountered, specifically regarding participation in the calls, both local and international, although the most evident problems were found in the latter. One of the weakest points was, evidently, related to communication with the public. Often the differences in "language" are partly related to the excessive bureaucratization of the processes that public institutions must follow and respect.
Conclusions
In this paper, we identified the main goals, drivers and tools on which a design strategy for resilience should be based. We then analyzed each of them, starting from the assumption that design for resilience has to be mainly focused on the participation of citizens and institutions in these processes.
Regarding the goals of a design strategy for resilience, we found the engagement of people, the development of long-term economic strategies and the influence on policy agendas to be the three most relevant. Drivers are identified as thematic frames particularly suitable for the organization and development of participatory design activities for resilience. These, which are also in a certain way new business drivers, are: Craft and DIY, Communities and Social Innovation, and Arts and Cultural Heritage. Specific design tools for a participatory design strategy for resilience are: co-design workshops, prototyping events, calls for projects, and social media strategies.
Given these elements for analyzing projects, or even for planning projects which aim to have a positive impact and to activate resilience processes, we used them to examine and describe three recent or ongoing projects carried out by the Design Department of the Politecnico di Milano: Coltivando, campUS and CCAlps.
According to Manzini (2015), we need to give "resilience" a positive meaning, moving from a mainly defensive understanding to one according to which human beings can be part of the solution. Resilience actually has more to do with social capital than with economic capital. For this reason, participatory methods are particularly suitable to facilitate the creation and accumulation of social capital at different scales and for different purposes.
Description
Coltivando is a design experiment conceived within the framework of two research programmes run by the POLIMI-DESIS Lab, a member of the DESIS Network, at the Politecnico di Milano Design Department. The first programme, 'Human Cities, reclaiming public spaces' (2010-2012), worked on the regeneration of public spaces for urban communities. The second, 'Feeding Milan, energies for change' (2010-ongoing), aims to shorten the food chain in the Milanese region. Coltivando is a vegetable community garden open to the neighbourhood and to university staff and students.
Figure 1. Photo by C. Sedini, Monnalisa-effect, an ongoing project on tourism, participation and new technologies.
"Environmental Science",
"Engineering",
"Sociology"
] |
Electronic control of soliton power transfer in silicon nanocrystal waveguides
We demonstrate numerically that the power transfer from one polarization component of a (1 + 1)D vector spatial soliton to the other in a birefringent nonlinear medium can be controlled via the electro-optic Kerr effect by varying the externally applied electric field. We show how several all-optical operations involving fundamental vector solitons can be electronically controlled. We also discover that the split-up of the higher-order vector solitons due to two-photon absorption (TPA) can be suppressed by adjusting the external electric field. The soliton trapping along the slow optical axis is realized by a planar waveguide filled with a silicon-nanocrystal material. The external electric field is applied along the fast optical axis of the waveguide. © 2008 Optical Society of America

OCIS codes: (190.6135) Spatial solitons; (230.0250) Optoelectronics; (130.4815) Optical switching devices.

References and links
1. G. I. Stegeman and E. M. Wright, "All-optical waveguide switching," Opt. Quantum Electron. 22, 95–122 (1990).
2. K. J. Blow, N. J. Doran, and D. Wood, "Polarization instabilities for solitons in birefringent fibers," Opt. Lett. 12, 202–204 (1987).
3. M. Delque, D. Fanjoux, and T. Sylvestre, "Polarization dynamics of the fundamental vector soliton of isotropic Kerr media," Phys. Rev. E 75, 016611 (2007).
4. U. Hempelmann, "Polarization coupling and transverse interaction of spatial optical solitons in a slab waveguide," J. Opt. Soc. Am. B 12, 77–86 (1995).
5. For an up-to-date review see Y. S. Kivshar and G. P. Agrawal, Optical Solitons: From Fibers to Photonic Crystals (Academic Press, Boston, 2003), Chapter 9.
6. J. S. Aitchison, J. U. Kang, and G. I. Stegeman, "Signal gain due to a polarization coupling in an AlGaAs channel waveguide," Appl. Phys. Lett. 67, 2456–2458 (1995).
7. L. Thylen, "Integrated optics in LiNbO3: recent developments in devices for telecommunications," J. Lightwave Technol. 6, 847–861 (1988).
8. G. I. Stegeman, E. M. Wright, N. Finlayson, R. Zanoni, and C. T. Seaton, "Third order nonlinear integrated optics," J. Lightwave Technol. 6, 953–970 (1988).
9. R. W. Boyd, Nonlinear Optics, 2nd ed. (Academic Press, Amsterdam, 2003).
10. M. Cada, M. Qasymeh, and J. Pistora, "Electrically and optically controlled cross-polarized wave conversion," Opt. Express 16, 3083–3100 (2008).
11. G. P. Agrawal, Nonlinear Fiber Optics, 4th ed. (Academic Press, San Diego, 2007).
12. Q. Lin, O. J. Painter, and G. P. Agrawal, "Nonlinear optical phenomena in silicon waveguides: modeling and applications," Opt. Express 15, 16604–16644 (2007).
13. J. M. Jarem and P. P. Banerjee, Computational Methods for Electromagnetic and Optical Systems (Marcel Dekker Inc., New York, 2000).
14. The Photonics Research Lab, University of Maryland, "SSPROP – Split-Step Fourier Propagation Software," http://www.photonics.umd.edu/software/ssprop/index.html.
15. G. V. Prakash, M. Cazzanelli, Z. Gaburro, L. Pavesi, F. Iacona, G. Franzò, and F. Priolo, "Nonlinear optical properties of silicon nanocrystals grown by plasma-enhanced chemical vapor deposition," J. Appl. Phys. 91, 4607–4610 (2002).
16. F. Riboli, D. Navarro-Urrios, A. Chiasera, N. Daldosso, L. Pavesi, C. J. Oton, J. Heitmann, L. X. Yi, R. Scholz, and M. Zacharias, "Birefringence in optical waveguides made by silicon nanocrystal superlattices," Appl. Phys. Lett. 85, 1268–1270 (2004).
17. S. M. Anthony, "Optical properties of nanostructured silicon-rich silicon dioxide," Ph.D. thesis, Massachusetts Institute of Technology, Dept. of Materials Science and Engineering (2006), Chapter 4.
18. J. S. Aitchison, A. M. Weiner, Y. Silberberg, D. E. Leaird, M. K. Oliver, J. L. Jackel, and P. W. E. Smith, "Experimental observation of spatial soliton interactions," Opt. Lett. 16, 15–17 (1991).
19. V. V. Afanasjev, J. S. Aitchison, and Y. S. Kivshar, "Splitting of high-order spatial solitons under the action of two-photon absorption," Opt. Commun. 116, 331–338 (1995).
20. V. Boucher, R. Barille, and G. Rivoire, "Polarization-switching control in a nonlinear liquid planar waveguide," J. Opt. Soc. Am. B 20, 1666–1674 (2003).
Introduction
The operation of various all-optical devices, ranging from all-optical waveguide switches to distributed feedback couplers, relies on the power exchange among linearly and/or nonlinearly coupled optical fields [1]. A good deal of research has so far been carried out on the power transfer between the two (orthogonal) polarization components of vector spatial or temporal solitons, supported by nonlinear waveguides or fibers, respectively [2–5]. Nonlinear coupling of two orthogonally polarized beams has also been studied in connection with the signal amplification in AlGaAs channel waveguides [6]. In this case, not only are the two polarization components coupled via cross-phase modulation, but they are also linearly coupled due to weak waveguide birefringence resulting from the waveguide design. Furthermore, as the directions of the fast and slow axes, assumed to correspond to the TE and TM polarizations, respectively, are fixed at the waveguide fabrication stage, the direction of the energy flow is also fixed. It is therefore only possible to amplify a weaker TE component at the expense of a stronger TM component. To reverse the energy flow direction, a π/2 phase difference between the two modes must be imposed.
To control the direction of the all-optical power transfer between the two orthogonally polarized modes of a waveguide or soliton polarization components, one could propose using the linear electro-optic effect by applying an electric field across the waveguide. Although the linear electro-optic effect in LiNbO₃ waveguides, a popular choice for integrated optics applications, has been thoroughly studied elsewhere [7, 8], it can only take place in nonlinear media with broken inversion symmetry [9]. The latter circumstance severely restricts the range of potential applications of the linear electro-optic effect to optical power exchange control.
In this paper, we focus on a different possibility. As most optical materials exhibit third-order nonlinearity, we propose to use the corresponding quadratic electro-optic effect to electronically control the power exchange between the vector spatial soliton components. Such an interesting emerging paradigm, controlling all-optical operations electronically [10], can prove quite attractive for future all-optical networks employing spatial solitons, as it is a relatively easy matter to adjust the properties of the electric fields.
In this work, we present the first proof-of-principle results and relegate the study of practical devices to the future. We will refer to the soliton component which acquires the power as the "signal" and to the other soliton component as the "pump". In this language, we can realize the amplification of either a TE signal by a TM pump or a TM signal by a TE pump, regardless of the relative amplitudes of the TE and TM soliton components at the input. We demonstrate that a conversion efficiency higher than 90% can be achieved for the fundamental vector solitons. Moreover, we show that the TPA-induced splitting of a high-order vector soliton into fundamental solitons can be suppressed by adjusting the magnitude of the external electric field.
Our analysis indicates that an "ideal" material for the realization of electronically controllable soliton power exchange must have a third-order optical susceptibility coefficient large enough that the induced birefringence is comparable with the intrinsic modal birefringence. In addition, the "ideal" material must possess very small linear and nonlinear losses. We show that most of the desired functionalities can be realized employing a silicon-nanocrystal-based material.
This work is organized as follows. In Section 2, we show how the set of coupled (1 + 1)D nonlinear wave equations, governing the propagation of vector solitons in nonlinear waveguides, can be modified by including the external-field-induced linear birefringence terms. In Section 3, we numerically investigate the electric-field control of the all-optical power transfer from one component of the vector soliton to the other. We then present our conclusions in Section 4.
Theory
Consider an optical beam propagating in an isotropic nonlinear medium in the planar waveguide geometry indicated in Fig. 1. The beam propagation is mediated by an electric field E_ext, applied along the y-axis. The electric field of the optical beam can then be represented as

$$\mathbf{E}(\mathbf{r},t) = \tfrac{1}{2}\left[\hat{\mathbf{x}}\,E_x(x,y,z) + \hat{\mathbf{y}}\,E_y(x,y,z)\right]e^{-i\omega_0 t} + \mathrm{c.c.} \qquad (1)$$

Fig. 1. Slab waveguide geometry (after Ref. [4], Fig. 1).

The dielectric response of the medium is assumed to be isotropic and of electronic origin. The third-order dielectric susceptibility tensor of any such medium is given by the expression [11]

$$\chi^{(3)}_{ijkl} = \frac{\chi^{(3)}_{xxxx}}{3}\left(\delta_{ij}\delta_{kl} + \delta_{ik}\delta_{jl} + \delta_{il}\delta_{jk}\right). \qquad (2)$$

The nonlinear polarization field generated by any Kerr-type nonlinear medium can be written as

$$\left(\mathbf{P}_{\mathrm{NL}}\right)_i = \varepsilon_0 \sum_{jkl} \chi^{(3)}_{ijkl}\, E_j E_k E_l. \qquad (3)$$

It follows from Eqs. (1)–(3) that the components of the polarization vector at the optical frequency ω₀ take the form of Eqs. (4)–(6), combining self- and cross-phase-modulation terms with terms quadratic in the applied field E_ext. In the derivation of Eqs. (5) and (6), we assumed the nonlinear dispersion to be small enough that the third-order optical susceptibility does not significantly depend on frequency in the spectral range of interest, implying the frequency-independence condition expressed by Eq. (7). Throughout the rest of the paper, we also assume that the imaginary part of χ⁽³⁾_xxxx(−ω₀, ω₀, −ω₀, ω₀) is not negligible. The latter assumption holds true for many semiconductor materials, for which the two-photon absorption processes responsible for nonlinear losses can play an important role in the frequency range E_g < 2ℏω₀ < 2E_g, where E_g is the band-gap energy [12].
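For orientation, the quadratic electro-optic (dc Kerr) effect invoked here is commonly summarized by the textbook relation below (cf. Boyd [9]); the Kerr-constant notation K_Kerr is a generic illustration and not a symbol taken from this paper:

$$\Delta n_{\mathrm{ind}} = \lambda_0\, K_{\mathrm{Kerr}}\, E_{\mathrm{ext}}^{2},$$

so the field-induced birefringence grows quadratically with the applied field, which is what makes an external voltage a continuously tunable knob for the linear coupling between the polarization components.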
The nonlinear wave equation governing beam propagation can be written in the form

$$\nabla^2 \mathbf{E} - \mu_0 \frac{\partial^2 \mathbf{D}}{\partial t^2} = \mu_0 \frac{\partial^2 \mathbf{P}_{\mathrm{NL}}}{\partial t^2}. \qquad (8)$$

Here D = ε₀E + P_L, where P_L = ε₀χ_L E is the linear polarization field. In the usual quasi-monochromatic and slowly varying envelope approximation, the solution to (8) in the waveguide geometry can be sought in the form

$$E_j(x,y,z) = F(y)\, u_j(x,z)\, e^{i\beta_j z}, \qquad j = x, y, \qquad (9)$$

where F(y) is the spatial mode profile of the single-mode waveguide and u_j(x, z) are the slowly varying amplitudes of the components j = x, y. It is convenient to numerically analyze the optical power exchange in the circular polarization basis; the circular polarization components are related to the linear ones via the transformation

$$u_{1,2} = \frac{u_x \pm i\, u_y}{\sqrt{2}}. \qquad (10)$$

Substituting from Eq. (9) into (8), averaging over the waveguide mode profile in a standard way [11], and making use of the transformation (10), we obtain a set of coupled nonlinear wave equations (11) for the circular polarization components in dimensionless (soliton) units. Here j = 1, 2 pertains to the right (1) and left (2) circular polarizations, respectively. The dimensionless variables are X = x/w₀ and Z = z/L_D, with k₀ = ω₀/c. We have also introduced the following notations: L_D = β w₀² is the diffraction length, w₀ being a typical transverse beam size in the x-direction; β = (β_x + β_y)/2; n₂ = 3Re{χ⁽³⁾_xxxx}/(8 n_L) is the nonlinear refractive index; and ᾱ = αL_D/2 is a dimensionless linear loss coefficient.
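A representative form of the coupled equations (11), written here under the assumption that the nonlinear terms follow the standard isotropic-Kerr structure in the circular basis (cf. [11]) and that the linear birefringent coupling κ and the TPA strength K enter as the surrounding text describes, is:

$$ i\frac{\partial u_j}{\partial Z} + \frac{1}{2}\frac{\partial^2 u_j}{\partial X^2} + \frac{2}{3}\left(|u_j|^2 + 2\,|u_{3-j}|^2\right)\left(1 + iK\right)u_j + \kappa\, u_{3-j} + i\bar{\alpha}\, u_j = 0, \qquad j = 1, 2; $$

the exact coefficients of the paper's Eq. (11) may differ. In this form the linear birefringence (intrinsic plus field-induced) appears as the coherent coupling term κ u_{3−j} between the circular components, which is precisely the term an applied dc field tunes.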
The three key dimensionless parameters defining the soliton dynamics are the field-induced birefringence coefficient κ and a dimensionless TPA strength K, specified in Eqs. (13) and (14). In Eqs. (13) and (14), p is a beam power (per unit length), and the two nonlinear refractive indices n_2 and n_NL are measured in different units and are related by n_2 = n_NL ε_0 c n_L/2 [11]. We numerically solved Eq. (11) using a standard split-step Fourier method, which is commonly applied to the analysis of beam propagation problems [13]. In particular, we modified the open source code SSPROP [14] by including the dc field-induced linear birefringence as well as the nonlinear loss terms. In our simulations, we chose silicon nanocrystals (Si-nc) as our nonlinear material due to their high nonlinear refractive index, n_NL = 5.18 × 10^−15 m²/W, and weak two-photon absorption, β_TPA = 0.2 m/GW, both given at λ_0 = 813 nm [15]. Further, as the size of Si nanocrystals is much smaller than the wavelength of light, a Si-nc-rich material usually has homogeneous optical properties and isotropic optical constants in the visible spectral range [16]. The magnitudes of the refractive index and the modal birefringence constant are influenced by many factors, such as Si content, the process of waveguide fabrication, etc. [17]. In this work, we used the following values of the parameters: λ_0 = 813 nm, n_0 = 1.7, δn = n_TE − n_TM = 0.0002. For these values, the dimensionless TPA parameter is K = 0.0025.
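Since SSPROP itself is a MATLAB package, a minimal NumPy analogue of the modified split-step scheme may help orientation. This is a sketch under the schematic equations given above, with illustrative values for κ, α̃ and K; it is not a reproduction of the authors' code:

```python
import numpy as np

def split_step(u1, u2, X, dZ, n_steps, kappa=0.0, alpha=0.0, K=0.0):
    """Propagate the circular-polarization envelopes u1, u2 over n_steps
    steps of size dZ with a first-order split-step Fourier scheme.

    Model (the schematic form of Eq. (11) sketched in the lead-in):
      i u_Z + (1/2) u_XX + (|u_j|^2 + 2|u_{3-j}|^2) u_j + kappa u_{3-j}
        = -i alpha u_j - i K (|u_j|^2 + 2|u_{3-j}|^2) u_j
    """
    dx = X[1] - X[0]
    k = 2 * np.pi * np.fft.fftfreq(X.size, d=dx)   # transverse wavenumbers
    diffraction = np.exp(-0.5j * k**2 * dZ)        # exact linear (diffraction) step
    c, s = np.cos(kappa * dZ), np.sin(kappa * dZ)  # exact birefringent coupling step
    for _ in range(n_steps):
        # (1) diffraction, applied in Fourier space
        u1 = np.fft.ifft(diffraction * np.fft.fft(u1))
        u2 = np.fft.ifft(diffraction * np.fft.fft(u2))
        # (2) field-induced linear coupling between the circular components
        u1, u2 = c * u1 + 1j * s * u2, c * u2 + 1j * s * u1
        # (3) Kerr SPM/XPM phase, linear loss and TPA; intensities frozen over dZ
        I1, I2 = np.abs(u1)**2, np.abs(u2)**2
        u1 = u1 * np.exp(((1j - K) * (I1 + 2 * I2) - alpha) * dZ)
        u2 = u2 * np.exp(((1j - K) * (I2 + 2 * I1) - alpha) * dZ)
    return u1, u2

# Example: TM-like sech input (N = 1) at angle theta to the slow axis.
X = np.linspace(-20, 20, 1024)
theta = np.pi / 2                      # TM input (y-polarized)
ux = np.cos(theta) / np.cosh(X)        # linear-basis components ...
uy = np.sin(theta) / np.cosh(X)
u1 = (ux + 1j * uy) / np.sqrt(2)       # ... mapped to the circular basis
u2 = (ux - 1j * uy) / np.sqrt(2)
u1, u2 = split_step(u1, u2, X, dZ=0.01, n_steps=10000,
                    kappa=0.5, alpha=0.0, K=0.0025)
```

The birefringent coupling step is applied exactly as a rotation in (u1, u2) space, so the scheme conserves total power when the loss terms are switched off.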
Numerical simulations
The input optical field had a hyperbolic secant profile (spatial soliton) with the beam waist w_0 = 3 μm. The two orthogonally polarized components have zero phase difference (linear polarization), and the ratio of their amplitudes is specified by the angle θ that the soliton electric field makes with the slow axis of the waveguide. The external electric field was varied in the range from 0 V/μm to 10 V/μm. The length of the waveguide was 100 L_D, which is about 1.11 cm. Whenever a single soliton-like beam, polarized along the y-axis (TM mode), is launched as shown in Fig. 2(a), one can set the external electric field to a certain value, equal to 2.38 V/μm in our parameter regime, such that at the output almost all the power (96%) is switched to the x-component (TE mode), as is illustrated in Figs. 2(a) and 3(a). The total input power is chosen such that N = 1. Notice that since the power of the TM mode gradually decreases due to two-photon absorption, slightly less than 96% of the incident power is actually transferred to the TE mode. Similarly, if a TE signal is initially launched as shown in Fig. 2(b), a TM-polarized output can be amplified by simply setting the magnitude of the control field to 2.675 V/μm. This situation is illustrated in Figs. 2(b) and 3(b). Thus our numerical simulations demonstrate the possibility of designing a reconfigurable TE ⇔ TM soliton mode converter.
If, on the other hand, a vector soliton is launched as indicated in Figs. 2(c) and 2(d), such that N = 1 but one of the soliton components carries much more power than the other, the magnitude of the controlling electric field can be set to obtain either a weak-signal amplification or a power limiting of the desired soliton component to a certain value. The former possibility is shown in Fig. 3(c), while the latter is illustrated in Fig. 3(d). It can be readily inferred by analyzing Figs. 2(d) and 3(d) that, despite the nonlinear losses, the power-carrying polarization component loses only a negligible fraction of its initial power over the propagation distance of about 100 diffraction lengths, whereas the other component suffers a dramatic power loss over the same distance.
Finally, we consider a second-order soliton input, N = 2, where the TM component initially carries much more power than the TE one. The two-photon absorption will cause the amplitude of the second-order soliton to decrease and its oscillation period to increase as it propagates down the waveguide, until the soliton splits into a pair of fundamental solitons with equal amplitudes and different propagation angles. We display such an evolution scenario in Fig. 4(a) for the second-order soliton propagating without energy exchange, i.e., assuming no external control field and zero internal birefringence. We note that in the absence of energy exchange between the higher-order vector soliton components, the TPA-induced soliton splitting in Kerr-like nonlinear media is qualitatively similar to the behavior of their scalar counterparts, which was studied experimentally in [18]. A comprehensive theoretical analysis of the TPA-induced splitting of higher-order scalar solitons can be found in [19]. Such a splitting was characterized as a bifurcation-like phenomenon caused by non-adiabatic energy absorption at the point of the highest soliton field intensity, i.e., at the position of the strong soliton overlap.
The higher-order, N = 2, soliton evolution scenario qualitatively changes in the presence of the birefringence-induced energy transfer between the polarization components of the vector soliton. In this case, as is evidenced in Fig. 4(b), there is power transfer, accompanied by slow (adiabatic) energy loss due to TPA, with the weaker polarization component transferring its power to the stronger one as well as to the medium. However, no soliton splitting occurs over the entire length of the waveguide, provided the magnitude of the electric field is set to a certain value. We conjecture that the suppression of the higher-order vector soliton splitting is a result of a subtle interplay between the periodic power exchange between the soliton components, whose period is controlled by the external dc field, and the action of the TPA. Indeed, as the peak intensity of the stronger soliton component is drastically reduced due to intensity-dependent two-photon absorption, the other component will transfer enough power to it to guarantee its structural integrity. Eventually, most of the power will reside in the slow-axis component, which will then slowly (adiabatically) decay as a result of the TPA.
It should be noted that although we chose a specific input wavelength of 813 nm, all our numerical results remain qualitatively the same for any input wavelength, as long as the nonlinearity is strong enough and the effect of TPA cannot be neglected.
Conclusion
We have shown that an external electric field can be used to control, via the quadratic electro-optic effect, the power transfer between the components of a spatial vector soliton propagating in a planar waveguide filled with an isotropic Kerr-like nonlinear medium, i.e., a Si-nc-based material. In particular, by adjusting both the external field applied along the fast axis of the waveguide and the input polarization state of the soliton, one can switch the fast and slow axes of the waveguide and attain the maximum power transfer from the fast soliton component to the slow one. Thus, we have demonstrated that the application of the electric field can significantly affect the polarization dynamics of the vector soliton. Our simulation results can find diverse applications in such all-optical operations as the fanout implementation with a switching device [6], all-optical switching [1], power limiting [20], and optical routing, to mention just a few possibilities. It should be stressed here that we are reporting the first proof-of-principle results. Further research is needed to make the proposed functionalities attractive for the design of future electronically controllable all-optical switching/reconfigurable and/or multistable/storing devices.
Fig. 2. Evolution of the intensity profiles of the fundamental vector soliton, N = 1, for different values of E_ext and θ. The dimensionless TPA strength is K = 0.0025.
Fig. 3. Evolution of the powers of the fundamental vector soliton components, N = 1, for different values of E_ext and θ. The dimensionless TPA strength is K = 0.0025.
Fig. 4. Evolution of the intensity profiles of the second-order, N = 2, vector soliton, K = 0.0025: (a) in the absence of birefringence, i.e., Δβ = 0; (b) with internal birefringence and the external field present.
"Physics"
] |
Identification of the laccase-like multicopper oxidase gene family of sweet cherry (Prunus avium L.) and expression analysis in six ancient Tuscan varieties
Laccase-like multicopper oxidases (LMCOs) are versatile enzymes used as biocatalysts performing the oxidation of different substrates of industrial relevance, with or without the intervention of a mediator. They have attracted a lot of interest for biotechnological applications in light of their eco-friendliness: they indeed oxidize the substrate(s) by coupling the four-electron reduction of the final acceptor, molecular oxygen (O2), to water. Plant LMCOs represent a still poorly studied, important class of oxidoreductases controlling e.g. the post-harvest quality of fruits and enabling the tailoring of designer energy crops. We here sought to identify the LMCOs in Prunus avium L., whose fruits are rich in bioactive molecules, but are also highly perishable. The goal was to analyze them using bioinformatics (phylogenetic and in silico structural analyses) and to perform a targeted expression study on a subset of genes in six ancient varieties from Tuscany, all threatened by genetic erosion. These sweet cherry varieties contain higher amounts of bioactive molecules, as compared to commercial counterparts. The results demonstrate strikingly different gene expression patterns in the six ancient varieties ('Benedetta', 'Carlotta', 'Crognola', 'Maggiola', 'Morellona', 'Moscatella') belonging to the Tuscan Regional Bank of Germplasm, as compared to a widely used commercial one ('Durone'). The motivation of this study is the economic importance of P. avium and the involvement of LMCOs in post-harvest fruit parameters, like color. The results presented pave the way to follow-up research on LMCOs of sweet cherry exploring post-harvest fruit parameters (e.g. anthocyanin stability, responsible for pericarp browning and the preservation of the appealing red color), as well as developmental processes, like stony pit formation.
The accuracy of secretome prediction is indeed higher when combining different tools; for plants, the combination of SignalP/TMHMM/Phobius/TargetP provides specificity of 96.5% and sensitivity of 90.6% 15 .
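A minimal way to combine such per-tool predictions is a majority vote over their boolean calls. The sketch below assumes the four tools' outputs have already been parsed into one table; the column values are invented placeholders, not results from the study:

```python
import pandas as pd

# Hypothetical parsed outputs: one boolean "secreted" call per protein per tool.
calls = pd.DataFrame({
    "protein": ["XP_021833316.1", "XP_021824778.1", "XP_021809222.1"],
    "signalp": [True, True, False],
    "tmhmm":   [True, True, True],   # here: True = no TM helix besides the SP
    "phobius": [True, False, False],
    "targetp": [True, True, False],
})

tools = ["signalp", "tmhmm", "phobius", "targetp"]
# Consensus: call a protein secreted only if a majority of tools agree.
calls["votes"] = calls[tools].sum(axis=1)
calls["secreted_consensus"] = calls["votes"] >= 3
print(calls[["protein", "votes", "secreted_consensus"]])
```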
Three LMCOs (XP_021829870.1, XP_021834477.1, XP_020426263.1) were predicted to be either mitochondrial or to be localized elsewhere; however, a parallel analysis with CELLO 16 revealed the likelihood of either lysosomal or extracellular localization, although no SP was detected with SignalP 4.1 17 (Table 1).
With the exception of two sequences, XP_008246156.1 and XP_021809566.1, which were shorter in length, all the other putative P. avium LMCOs were found to possess the conserved MCO motifs (Table 1). Additionally, the alignment highlighted conservation of the L1 (H-W-H-G-x(9)-D-G-x(5)-Q-C-P-I) and L3 (H-P-x-H-L-H-G-H) regions containing the histidine residues involved in the binding of the T1, T2 and T3 Cu (Fig. S1).
The phylogenetic analysis with LMCOs from Fragaria vesca and thale cress revealed the presence of the six groups (groups 1-6) previously reported in Arabidopsis thaliana 1: 4 sweet cherry LMCOs belong to group 1, 11 to group 2, 1 to group 3, 14 to group 4, 2 to group 5 and 1 to group 6 (Table 1 and Fig. 1). We included in the phylogenetic analysis the laccase from Litchi chinensis previously reported to be responsible for anthocyanin degradation 4; it clustered in group 4, where the sweet cherry LMCO giving the best correspondence after BLAST analysis, i.e. XP_021827255.1, is also found (Fig. 1).
Fruit LMCOs showed highest identity to enzymes from plants and lowest identity to fungal ones. Both cherry and litchi showed intermediate identity to the plant ascorbate oxidase (Ao, Fig. S2a). It is interesting that the identity between fruit (litchi vs cherry) LMCOs is 61% and between ascomycete (M. albomyces, Ma) and basidiomycete (T. trogii, Tt) is only 31%, indicating that LMCOs do not show very high sequence similarity even between closely related members (Fig. S2a).
Multiple alignment between fruit, plant and fungal LMCOs and plant Ao is shown in Fig. S2b. The results show that all copper-interacting His residues and a Cys (red) are fully conserved across all groups. However, variation is found in the sequences among the L1-L3 regions (unboxed, green and cyan highlighted) and the substrate-interacting loops (boxed). Further variation is found in the M2-M4 region (purple highlight and underlined) in the residues that may affect the redox potential of the enzyme (orange amino acids) and may interact with the substrate within the active site (white amino acids on purple background). The active-site catalytic acidic residues in fungal laccases have been substituted by Asn residues in plant LMCOs (# blue amino acids; Fig. S2b).
In the absence of a plant laccase X-ray structure, I-TASSER identified ascorbate oxidase (Ao) from Cucurbita pepo (Cp; PDB: 1ASP, 1AOZ) as the most suitable template to generate homology models of fruit LMCOs (Fig. 2). For both fruit models, the normalized Z-score for the 1AOZ template was in the range 2-7, with >90% coverage. The C-score and TM-score were 0.55 and 0.79 ± 0.09, respectively, for the litchi LMCO, and 0.34 and 0.76 ± 0.09, respectively, for the cherry LMCO. A C-score typically lies in the range [−5, 2], where a higher value denotes a model with high confidence, whereas a TM-score > 0.5 indicates a model of correct topology. The quality of both fruit models based on the Ramachandran plots (results not shown) showed that, for the litchi and cherry models, 94.2% and 93.5% of the residues, respectively, were found in favored and allowed regions. The fruit models showed good superposition of the overall structure, copper atoms and electron-abstracting His with each other, as well as with Ao (Fig. 2). Figure 3a shows that the copper-interacting His residues and a Cys residue in fruit LMCOs superimpose nicely on those from Ao. The residues that interact with catechin (yellow) within the active site surrounded by substrate-binding loops are shown in Fig. 3b. The most critical residues are conserved, except for a residue affecting the redox potential (cherry, L532), which is substituted by Tyr and Ile in litchi LMCO and Ao, respectively. Acidic residues (D/E) in fungal laccases are implicated in catalysis, but are replaced by Asn in plant LMCOs (N233 in cherry) and by Leu in Ao. Interestingly, a critical substrate-interacting residue that protrudes into the active site, which is Arg (R534 in cherry) in all plant LMCOs, is substituted by aromatic (W, F) residues in fungi and by Pro in Ao (Fig. 3b).
Various substrates ranging in size from 154 to 514 Da were docked in plant and fungal LMCOs and plant Ao (Table 2).
The results show that both fruit LMCOs have the largest affinity for catechin and quercetin as substrates, although the litchi LMCO shows higher affinity than the cherry LMCO for both of these substrates. Whereas Ao and cherry LMCO show the lowest affinity for the smallest substrate (2,6-dimethoxyphenol), litchi LMCO showed the lowest affinity for the largest substrate (ABTS). This is probably because litchi LMCO is unable to fully accommodate the bent ABTS in the active site (Table 2, in pink), whereas the lacquer laccase shows high affinity (ΔG, −7), as the bent ABTS is fully accommodated in the active site (Table 2, in yellow). Generally, cherry LMCO shows the lowest affinity, whereas the laccases from the lacquer tree and the ascomycete (M. albomyces) show the highest affinity for all substrates tested. The variation in binding affinity may be due to the presence of non-conserved residues (such as small vs large hydrophobic residues) in and around the substrate-binding loops that may change the size of the active-site pocket, as well as the interactions with the substrates (Figs 2 and 3b; Table 2). The poses of the substrates are shown in Table 3; the analysis shows the variation in the conformations (extended vs bent) of the substrates, as well as variation in the contours (shallow vs deep) of the active sites of the enzymes.
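Docking scores of this kind are binding free energies; assuming they are expressed in kcal/mol (the convention of most docking engines; the paper does not state the unit), they map onto dissociation constants via ΔG = RT ln Kd. A minimal sketch:

```python
import math

def kd_from_dg(dg_kcal_per_mol, temp_k=298.15):
    """Dissociation constant (M) from a binding free energy, Delta G = RT ln Kd."""
    R = 1.98720425864083e-3  # gas constant in kcal / (mol K)
    return math.exp(dg_kcal_per_mol / (R * temp_k))

# Example: the Delta G of -7 quoted for ABTS in the lacquer laccase,
# read as -7 kcal/mol (an assumption about units).
print(f"Kd ≈ {kd_from_dg(-7.0):.2e} M")  # ≈ 7e-6 M, i.e. low-micromolar
```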
Gene expression analysis in the ancient and commercial varieties. To select the LMCO genes for
RT-qPCR analysis, we decided to focus our attention on the transcriptomic study previously published on sweet cherry fruit development 5,6 and designed primers on the contigs encoding putative laccases present in the dataset. We reasoned that those genes would represent the LMCO members expressed (and hence detectable via RT-qPCR) in the fruit tissues of our sweet cherry varieties. Some of the putative LMCOs were, however, expressed at very low levels in the ancient sweet cherry fruits and were therefore discarded from the subsequent analyses. Indeed, the corresponding primer pairs did not amplify within the range of efficiency accepted for robust expression analysis. A total of nine primer pairs passed the quality control for the amplification efficiencies. Of these nine targets, the BLAST analyses revealed that three belonged to the same transcripts, according to the latest sweet cherry genome assembly 5,6. Therefore, in the end, six LMCO genes were targeted for gene expression studies, i.e. the genes encoding XP_021833316.1, XP_021824778.1, XP_021809222.1, XP_021834477.1, XP_021814580.1 and XP_021820658.1. The Principal Component Analysis (PCA) of the gene expression data shows co-clustering of the 4 independent biological replicates for each variety studied, as well as a good separation of the varieties (Fig. 4). The first 2 components of the PCA explain 86.4% of the total variance. More specifically, PC1 represents 65.1% of the total variance, while PC2 represents 21%. It is noteworthy that the commercial variety 'Durone' forms a distinct group that is well separated from the ancient varieties here investigated.

Figure 2. Models of fruit LMCOs superimposed on the X-ray structures of ascorbate oxidase (Ao) from Cucurbita pepo and laccase from Trametes trogii (Tt), shown from two different perspectives (a,b). Red, cherry LMCO; pink, litchi LMCO; green, Ao; blue, Tt. Copper atoms from Ao (white) and Tt (black) are also superimposed. The substrate-binding pocket is depicted with a brown space-filled substrate (2,6-dimethoxyphenol), and the copper atom nearest to it is T1. Blue/red space-filled atoms show the His residue involved in abstracting electrons from the substrate and relaying them to the T1 Cu atom.

Table 2. The models of LMCOs from fruits compared to the X-ray structures of ascorbate oxidase from Cucurbita pepo and LMCO from fungi. Affinity, free energy of enzyme-substrate binding (a more negative value depicts better binding); Ao, ascorbate oxidase (green); lacquer tree, Toxicodendron vernicifluum (yellow); cherry LMCO (red); litchi LMCO (pink); M. albomyces, Melanocarpus albomyces (ascomycete, turquoise); T. trogii, Trametes trogii (basidiomycete, blue); respective colored space-filled amino acids, surface-exposed His residue involved in substrate binding that acts as a primary electron acceptor involved in shuttling electrons from the substrates (shown in various colors) to the T2 and T3 copper (white spheres) via the T1 Cu (black sphere); conserved Cys and His residues (black).
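The PCA step described in the paragraph above can be reproduced with a few lines of scikit-learn; the sketch below uses a random placeholder matrix where the study would use its log-transformed NRQs (variable names are illustrative):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Placeholder expression matrix: 28 replicates (7 varieties x 4 replicates)
# by 6 LMCO genes; in the study this would hold the log2-transformed NRQs.
log_nrq = rng.normal(size=(28, 6))

pca = PCA(n_components=2)
scores = pca.fit_transform(StandardScaler().fit_transform(log_nrq))
pc1, pc2 = pca.explained_variance_ratio_ * 100
# The paper reports PC1 = 65.1% and PC2 = 21% (86.4% in total) on the real data.
print(f"PC1 {pc1:.1f}%  PC2 {pc2:.1f}%  total {pc1 + pc2:.1f}%")
```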
The targeted gene expression analysis shows that the six LMCOs analyzed have statistically significant higher expression in almost all the ancient varieties with respect to the commercial one investigated in the study (Figs 5 and S3).
In particular, four major expression patterns can be identified via the hierarchical clustering of the heat maps (absolute Pearson correlation, average linkage): the genes corresponding to XP_021833316.1 and XP_021824778.1 cluster in two branches, while those coding for XP_021809222.1 and XP_021834477.1, together with XP_021814580.1 and XP_021820658.1 form a third and a fourth expression pattern (Fig. 5). In particular, the genes corresponding to XP_021809222.1, XP_021834477.1, XP_021814580.1 and XP_021820658.1 are characterized by higher expressions in the varieties 'Benedetta' , 'Carlotta' and 'Moscatella' , with however a more marked pattern for the cluster formed by XP_021814580.1 and XP_021820658.1 (Fig. 5). The only LMCO gene showing much lower expression in 'Benedetta' with respect to the other ancient varieties is the one corresponding to XP_021824778.1. Notably, this gene is instead expressed at higher levels in 'Moscatella' , a variety displaying lower expressions of the other LMCOs investigated (Figs 5 and S3).
Discussion
Sweet cherry is an economically important tree whose fruits are appreciated for their taste, high content in polyphenols and hence their nutraceutical value 19,20 . The availability of the genome and transcriptome of P. avium 5,6 is a great asset for breeding strategies, as well as for more basic functional studies on genes controlling important fruit parameters, like color, content of bioactives, size, post-harvest stability.
We have here focused our attention on a class of enzymes, the LMCOs, that is still poorly explored in plants, despite their enormous physiological importance. It is for example known that LMCOs control crucial plant physiological processes like lignification 21, response to exogenous stresses 22 and post-harvest stability of fruits 4,8,9. Our study, based on a comparative analysis with thale cress and F. vesca, another member of the family Rosaceae, has identified (at least) 33 LMCOs in sweet cherry, the majority of which correspond to secreted enzymes possessing the reported motifs L1, M2, L3 and M4 of laccases and MCOs 18 (Table 1).
The 33 P. avium LMCOs cluster in the six phylogenetic groups previously identified in A. thaliana 1 (Fig. 1). Attempts to retrace the physiological role of LMCOs on the basis of characteristics such as pI proved not to be appropriate 1. In the absence of functional studies, it was proposed that any prediction of function should be based on sequence analysis and phylogenetic clustering 1. To this end, we included both thale cress and strawberry LMCOs in our analysis, since, for both species, electronic fluorescent pictograms (eFP) are available for different tissues and conditions 23,24. It is for example possible to notice that the cluster of sweet cherry LMCOs belonging to group 4 and comprising those proteins clustering in a sister group with respect to ATLAC15, i.e. XP_021820472.1, XP_007200191.1 and XP_020426263.1, may be involved in the metabolism of proanthocyanidins (PAs), as well as lignification of the seeds and root elongation, in a manner analogous to what was shown in A. thaliana 25,26. It will therefore be interesting to study the expression of the corresponding P. avium LMCOs in the seed.
As previously mentioned, XP_021827255.1 is the best P. avium BLAST match of the litchi LMCO involved in pericarp browning: it is interesting to note that the strawberry LMCO encoded by gene27526 (giving the best match after BLAST search) is expressed at the highest levels in the fruit cortex and pith at stages 1 and 2 (http://mb3.towson.edu/efp/cgi-bin/efpWeb.cgi). Hence, BLAST analysis, coupled to phylogeny and eFP database search, can provide an indication of the tissues where the gene is expressed at the highest levels and, consequently, of the potential role in sweet cherry. More specific studies relying on gene expression at different developmental stages and post-harvest conditions will provide more solid indications of the LMCO role in cherry fruits.
The sweet cherry LMCOs XP_021814580.1, XP_021809160.1 and XP_021809222.1, clustering with ATLAC4 and ATLAC17 (Fig. 1), may be involved in pit lignification, since the corresponding thale cress genes are known to regulate lignification 27. The corresponding strawberry LMCO, encoded by gene18812, is expressed at higher levels in the carpel wall in the strawberry eFP database, a finding corroborating the potential involvement of the related P. avium LMCOs in lignification.
In the absence of a plant laccase X-ray structure, both fruit LMCOs (XP_021827255.1, ADB97327.1) were modelled on the Ao template 28,29. The functional role of Ao (EC 1.10.3.3) has been a mystery due to its requirement for ascorbate as a sole substrate, with the obvious disadvantage of lowering plant resistance against stress as a result of ascorbate depletion 30. Our results indicate that the sequence alignment (Fig. S2b), overall protein fold (Fig. 2), copper-binding residues and active-site morphology (Fig. 3, Tables 2 and 3) of Ao are quite similar to the LMCOs from cherry and litchi. Additionally, the binding affinities of numerous substrates, including the fruit-specific
catechin, to Ao are either comparable to or better than those of cherry and litchi LMCOs (Table 2), suggesting that Ao likely acts on other substrates as well. Only a few substrates other than ascorbate have been tested with Ao. It has been experimentally found that Ao can act on other substrates (chlorohydroquinone derivatives), albeit with lower (6-8%) activity but higher affinity for the smaller chlorohydroquinone and lower affinity for the slightly larger 2,6-dichlorohydroquinone, compared to that with ascorbate 31,32. In the future, it will be interesting to experimentally verify the substrate specificities (V_max and K_m) of fruit LMCOs and compare them to Ao (Table 2).
Substrate-binding loop I has a catalytic residue (#, Figs S2b and 3b). The hydrophilic Glu/Asp residues in fungal laccases are believed to be involved in the reaction mechanism by making an H-bond with the substrate, and to modulate substrate specificity and the pH profile 33. It is interesting that in cherry LMCOs this residue can be hydrophilic Asp, Asn or Gln (Fig. S1), whereas in Ao it is hydrophobic Leu (#, Figs S2b and 3b). This hydrophilic vs hydrophobic substitution is intriguing in view of the results showing good affinity of substrates with Ao (Table 2) 31,32, but low catalytic activities 31,32. The implication of the hydrophobic Leu in the low activity of Ao against common laccase substrates can be experimentally verified by replacing it with the hydrophilic amino acids found in fruit laccases. Substrate-binding loop IV has a surface-exposed His (#, Figs S2b, 2 and 3) involved in substrate interaction via an H-bond, which likely acts by accepting an electron from the substrate and relaying it to the T1 Cu, which in turn passes it to the T2/T3 copper center via a Cys residue 27 (Fig. 3a). While Arg in plants (Fig. S2b; R534 in cherry; Fig. 3b) and Pro in Ao protrude into the active site and likely stabilize the deprotonated hydroxyl groups and the hydrophobic rings of substrates, respectively, Trp507/Phe454 in fungal LMCOs are shown to project away from the active site (Figs S2b and 3b).
The redox potential of LMCOs determines which substrates can be oxidized [34-37]. Two axial residues found in substrate-binding loop IV (Fig. S2b, first orange amino acids within the blue box; Fig. 3b) have been implicated in controlling the redox potential. The first axial residue is Ile in fungi 33.

Table 3. Poses of various substrates within the binding site of the LMCOs from litchi and sweet cherry, the laccase from lacquer tree and ascorbate oxidase (Ao) from plants. All laccase and LMCO models were generated using the X-ray structure of Ao from Cucurbita pepo. Refer to Table 1 for the properties of all substrates. Red surface, polar; white surface, non-polar.
In sweet cherry LMCOs, this residue can be I, L or F (Fig. S1). In litchi and lacquer tree LMCOs it is an aromatic amino acid such as Tyr or Phe (Figs S2b and 3b). The higher hydrophobicity at this position can raise the redox potential of the enzyme 36. The second axial residue is also known to influence the redox potential of laccase (Fig. S2b, second orange amino acids within the grey box; Fig. 3b). In plants, including cherry LMCOs, it is Leu or Met (Fig. S1). In fungal laccases it is Leu or Phe. Met causes a decreased redox potential; Phe gives the highest redox potential, followed closely by Leu and Met 34,35. In addition, the presence of hydrophobic residues near the T1 Cu and the distance between the T1 Cu and the nitrogen atom of the electron-abstracting His (S2, red #) can also raise the redox potential 30, besides other factors 31. A large distance renders the T1 Cu more electron-deficient, thus increasing its redox potential 31. The distances between the T1 Cu and the His nitrogen were determined to be 1.93, 2.03 and 2.1 Å in the Ma, Tt and Ao X-ray structures, respectively. Uncertainties in the homology models preclude the determination of T1 Cu-His distances in fruit laccases.
The gene expression analysis highlighted an overall higher expression of the six LMCOs in the local varieties from Tuscany, as compared to the commercial one (Figs 5 and S3). This finding is interesting if one considers that the polyphenol contents in the local varieties are higher than those found in the commercial one 11 . Particularly interesting in this respect is the expression profile in the variety 'Benedetta' , which was previously found to possess high contents of catechins with respect to the other ancient varieties here investigated 11 .
Catechins are the monomeric units of proanthocyanidins (PAs, also referred to as condensed tannins), and this class of oligomeric/polymeric flavonoids shows great variety depending on stereochemistry and hydroxylation pattern 38. Cherries are rich in procyanidin B2 (a B-type PA) 38: the Phenol Explorer database (http://phenol-explorer.eu/contents/food/46) specifies that they are reported to contain 2.10 mg/100 g FW 39. It would therefore be interesting to measure the content of PAs in the ancient Tuscan varieties to confirm whether 'Benedetta' has higher contents reflecting the increased quantities of catechins previously quantified 11.
Likewise, it will be interesting to explore whether the laccases displaying the highest expression in 'Benedetta' (i.e. the genes coding for XP_021814580.1 and XP_021820658.1) are involved in the oxidative polymerization of catechins in sweet cherry. In this respect it should be noted that polyphenol oxidases (PPOs), also proposed to be responsible for the oxidative polymerization of flavonols, do not usually give PAs with structures found in nature 40 . Hence, LMCOs may partake in vivo in the polymerization of PAs, a process that is still not fully characterized nor elucidated 41 . In support of a role of LMCOs in PA polymerization is the TRANSPARENT TESTA 10 (TT10) mutation in thale cress (TT10 encodes a LMCO-type flavonoid oxidase): the tt10 mutant shows a pale-brown seed coat phenotype (due to alterations in seed coat pigmentation), as well as accumulation of soluble PAs and more epicatechin monomers 25 .
LMCOs from plants are still undercharacterized: no 3D structures have yet been resolved, and only limited knowledge of their functions, even in model systems such as thale cress, is available 42. Their gene redundancy and substrate promiscuity make their study in plants difficult and complex. However, increased knowledge of LMCO function in higher plants would give impetus to biotechnology: since these enzymes are mostly secreted, the cultivation in bioreactors of bacterial/yeast cells expressing the target plant LMCOs could provide a valuable tool for the production of novel oxidative enzymes to be used as catalysts in green chemistry. Alternatively, undifferentiated plant cells could be grown in bioreactors and their LMCOs purified from the culture medium. Since these are enzymes that mediate the response to exogenous stresses, elicitation can be envisaged to boost their production in plant cell cultures. However, protein yield is a limiting factor, making the use of engineered bacterial/yeast cells more favorable from a practical point of view.
Conclusions
We have here characterized the sweet cherry LMCOs using in silico and gene expression studies. The phylogenetic analysis has provided a classification in the 6 major groups previously identified in thale cress. The modeling and substrate docking analyses of the sweet cherry LMCO displaying the highest sequence similarity with the litchi enzyme involved in anthocyanin degradation showed affinity for catechin and quercetin. The gene expression analysis highlighted a higher expression of a set of LMCO genes in the ancient non-commercial varieties from Tuscany. Ours is the only work addressing the study of LMCOs from ancient sweet cherry varieties of Tuscany. The rationale behind this study is the willingness to valorize local varieties as alternative (and better) sources of bioactive molecules, as compared to commercial ones. Since LMCOs impact fruit parameters during post-harvest storage, we deemed it interesting to study this class of oxidoreductases in sweet cherry and to measure their expression in ancient Tuscan varieties. The data presented have a dual significance. On one hand they will promote conservation and further studies of the ancient fruit trees representing the agrobiodiversity heritage of Tuscany. Additionally, the data here presented will pave the way to further research addressing the molecular analysis of LMCOs potentially intervening in important physiological processes in sweet cherry, e.g. flavonoid polymerization, stone formation, or impacting industrially-relevant aspects, like anthocyanin degradation during fruit post-harvest storage.
Materials and Methods
Sample collection. Sampling was carried out in 2017 on 18-year-old cherry trees (ancient local varieties of P. avium on P-HL-B rootstocks) grown under standard horticultural conditions at the experimental field of the CNR IVALSA in Follonica (GR, Italy). The variety 'Benedetta' was sampled in 2016, since no fruits were obtained in 2017. The experimental field is located at the following coordinates: 42°55′59″N, 10°45′57″E. For each variety, the corresponding trees were present in different numbers, depending on how many could be recovered across Tuscany. The total number of trees for each variety is: 'Benedetta' (2), 'Maggiola' (8), 'Morellona' (5), 'Crognola' (3), 'Carlotta' (4), 'Moscatella' (5). A total of 20 cherry fruits were taken from each tree to have enough biological replicates (4 in this study), each consisting of a pool of at least 5 fruits.
Samples were collected on May 16th, 2016 and May 19th, 2017 from trees grown under field conditions and exposed to natural variations of temperature and solar radiation. The sample collection took place by harvesting fruits located at different places on the tree, in order to minimize the bias due to variations in solar exposure. Each fruit was harvested from the tree by detaching it with the stem, which was subsequently rapidly removed. After removal of the stem, the fruits were immediately plunged into liquid nitrogen, brought to the laboratory and stored at −80 °C in Ziplock bags until RNA extraction. The commercial variety 'Durone' was purchased at a local grocery shop in Siena. The phenotypic characteristics of each variety are shown in Fig. 6. All the varieties studied here are preserved in the Regional Bank of Germplasm of Tuscany (http://germoplasma.regione.toscana.it/index.php?option=com_content&view=article&id=4&Itemid=109) 12.
Bioinformatic analyses. Sweet cherry LMCOs were obtained by BLAST searches with the thale cress protein sequences in NCBI, as well as by querying the Genome Database for Rosaceae (GDR; available at https://www.rosaceae.org/) and the DBcherry database (available at http://cherry.kazusa.or.jp) 6. The phylogenetic analysis was carried out by using the full-length LMCOs of thale cress 1, strawberry (downloaded from the Phytozome portal, at https://phytozome.jgi.doe.gov/pz/portal.html), the litchi enzyme reported to be involved in pericarp browning 4 and a laccase from the polypore mushroom Trametes versicolor to root the tree. The FASTA sequences are given in Supplementary Information. The pair-wise multiple alignment of LMCOs for identifying conserved residues and motifs was determined by using CLUSTAL-Ω (http://www.ebi.ac.uk/Tools/msa/clustalo/) 43, and the alignment was then used to build the maximum likelihood phylogenetic tree using the online program PhyML (bootstraps: 100), available at http://www.phylogeny.fr/one_task.cgi?task_type=phyml 44. The tree was visualized with iTOL (available at https://itol.embl.de/).
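The same alignment-then-tree pipeline can be run locally with the command-line versions of the tools; a minimal sketch (file names are placeholders, and locally installed clustalo and phyml binaries are assumed in place of the web services used in the paper):

```python
import subprocess

# (1) Multiple alignment with Clustal Omega, written in PHYLIP format.
subprocess.run(["clustalo", "-i", "lmco_proteins.fasta",
                "-o", "lmco_aligned.phy", "--outfmt=phylip", "--force"],
               check=True)

# (2) Maximum-likelihood tree with PhyML, amino-acid data, 100 bootstraps.
subprocess.run(["phyml", "--input", "lmco_aligned.phy",
                "--datatype", "aa", "--bootstrap", "100"],
               check=True)
```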
Copper-interacting residues were identified in the X-ray structures of LMCOs and ascorbate oxidase using PDBsum, available at http://www.ebi.ac.uk/thornton-srv/databases/cgi-bin/pdbsum/GetPage.pl?pdbcode=index.html 51. Substrate-binding loops and copper-interacting residues were identified as described previously 37. The 3D homology models were generated with the I-TASSER Suite (http://zhanglab.ccmb.med.umich.edu/I-TASSER/) 52, utilizing LOMETS, SPICKER and TM-align. The models were then refined using REMO, by optimizing the backbone hydrogen-bonding networks, and FG-MD, by removing the steric clashes and improving the torsion angles. The quality of the models was checked by RAMPAGE 53.
Molecular docking of substrates of various sizes with LMCOs was carried out with the online Mcule tool, using the reference His 514 N4080 atom of the ascorbate oxidase structure and the equivalent atoms in all other structures 54.

RNA extraction, cDNA synthesis, primer design and real-time PCR. Sweet cherry fruits were taken from the −80 °C freezer, kept frozen in liquid nitrogen and quickly cut into pieces (comprising the exocarp and ca. 5 mm of the mesocarp tissue) using a sterile, liquid nitrogen-cooled scalpel. The tissue pieces were collected in a pre-sterilized frozen mortar filled with liquid nitrogen and immediately ground to a fine powder using a pestle. This procedure was necessary to ensure extra care, as previous tests showed that the RNA of sweet cherry fruits is extremely sensitive to degradation caused by even minimal tissue thawing. RNA was extracted using the modified CTAB procedure previously reported by us for textile hemp 57,58. In case of low A260/230 nm ratios, a further precipitation/wash step with ammonium acetate/ethanol was performed 57. The RNA integrity numbers (RINs) were determined using a 2100 Bioanalyzer (Agilent, Santa Clara, CA, USA); for all the samples the RINs were >7.5. The extracted RNAs were converted to cDNA with the ProtoScript II reverse transcriptase (New England Biolabs, Leiden, The Netherlands) and random primers, according to the manufacturer's instructions. The cDNA was diluted to 2 ng/μL and used for the RT-qPCR analysis in 384-well plates, which were prepared with an automated liquid handling robot (epMotion 5073, Eppendorf, Hamburg, Germany). Primers were designed with the online tool Primer3Plus (http://www.bioinformatics.nl/cgi-bin/primer3plus/primer3plus.cgi) and subsequently cross-checked using the OligoAnalyzer 3.1 tool from Integrated DNA Technologies (http://eu.idtdna.com/calc/analyzer). The FASTA nucleotide sequences of the sweet cherry reference genes and LMCOs are given in Supplementary Information. The RT-qPCR reactions were set up and run as described previously 55. A melt curve analysis was performed at the end of the amplification cycles, in order to assess the specificity of the primers. The primer efficiencies were determined using a 5-fold dilution series of 6 points (10-2-0.4-0.08-0.016-0.0032 ng/μL) and are reported in Table 4.

Table 4. List of primer names, sequences, amplicon sizes and amplification efficiencies used in this study for RT-qPCR.

The expression of the LMCOs was calculated with qBase PLUS (version 2.5, Biogazelle, Ghent, Belgium) by using the reference genes indicated by the geNorm PLUS analysis (6 reference genes were tested for stability, and PavAP4 and PavTIP41 were identified as sufficient for data normalization among PavPP2A, PavPolyUbq, PavSerThr, PavAct7). A one-way ANOVA with a Tukey's post-hoc test was performed on log2-transformed NRQs (Normalized Relative Quantities) using IBM SPSS Statistics v19.
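For orientation, the two quantitative steps described above (estimating amplification efficiency from a dilution series, and normalizing a target gene against the geometric mean of reference genes) can be sketched as follows; the Cq values are invented placeholders, and the study itself used qBase PLUS for these computations:

```python
import numpy as np

# (1) Amplification efficiency from a 5-fold dilution series:
# slope of Cq vs log10(input); E = 10**(-1/slope), with 2.0 = 100% efficiency.
dilutions = np.array([10, 2, 0.4, 0.08, 0.016, 0.0032])  # ng/uL, as in the text
cq = np.array([18.1, 20.5, 22.8, 25.2, 27.5, 29.9])      # placeholder Cq values
slope = np.polyfit(np.log10(dilutions), cq, 1)[0]
efficiency = 10 ** (-1 / slope)
print(f"slope {slope:.2f} -> E = {efficiency:.2f}")

# (2) Normalized Relative Quantity of a target in one sample:
# RQ = E**(Cq_calibrator - Cq_sample), then divide by the geometric mean of
# the reference-gene RQs (here PavAP4 and PavTIP41, per geNorm PLUS).
def rq(e, cq_calibrator, cq_sample):
    return e ** (cq_calibrator - cq_sample)

rq_target = rq(1.95, 24.0, 22.0)                    # placeholder values
rq_refs = [rq(2.0, 20.0, 19.5), rq(1.9, 21.0, 20.8)]
nrq = rq_target / np.exp(np.mean(np.log(rq_refs)))  # geometric-mean normalization
print(f"NRQ = {nrq:.2f}")
```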
"Environmental Science",
"Biology"
] |
INTEGRATED APPROACH TO MORAL EDUCATION
This article considers the process of moral education and its importance in the modern world. It analyzes questions of deepening and enriching the content of moral education and of forming new approaches to moral education on the basis of the integration of knowledge from such spheres as the biosphere and the noosphere, and it treats moral education as both a social and a personal need.
Introduction
Moral education is the basis of spirituality, and spirituality is the basis of any state. The basis has to be strong; otherwise, a crisis situation can arise. The evolutionary development of society demands the constant growth of moral culture. Besides, the intensive development of science and the radical changes in society today demand from the person a new outlook, new thinking, new approaches and relations, and high morality. First, the evolutionary development of society demands the constant perfection of moral and spiritual culture. Secondly, the modern intensive development of science and technology also demands from mankind a new outlook, new thinking, new relations and high morality. Thirdly, moral education, which is the basis of spirituality, is needed to lift the basic changes happening in society to a higher level. In the pedagogical encyclopedia, moral education is defined as the purposeful formation of moral consciousness, the development of moral feelings, and the development of skills and habits of moral behavior. The definition shows that morality as a personal characteristic is a very complex phenomenon that unites such personal structures as reason, feelings and will. Therefore, moral education is defined as a single process of educating: moral feelings (conscience, duty, faith, responsibility, citizenship, patriotism); moral character (patience, mercy); moral position (the ability to distinguish between good and evil, the manifestation of selfless love, readiness to overcome life's trials); and moral behavior (readiness to serve the people and the Fatherland, manifesting spiritual prudence and goodwill). From the definition of the Pedagogical Encyclopedia given above, it becomes clear that morality can only be considered as a complex, multi-level system combining such qualities as reason, will and feelings. The creation of a stable system of moral beliefs, thanks to which a person can independently understand the border between the moral and the immoral, is determined by the unity and harmony of moral consciousness, expressed in stable moral habits. This belief system tells us about a person's moral maturity. It is an important sign of the correspondence between the process of education and the development of morality in the educated.
We have studied the process of moral education in depth, carried out a careful analysis of the current situation, and established the existing shortcomings and their reasons. At present, pedagogical science considers questions of morality as a social requirement, and this concept forms the basis for the organization of moral education. During the process of moral education, the main attention is paid to the external elements of education, i.e., the education of moral behavior and the culture of communication. However, we approach the education of moral consciousness and the formation of moral feelings and moral beliefs superficially. Moral education which is not supported by consciousness and feelings cannot be complete and does not meet modern requirements. At present, in the course of moral education we use the following methods: schooling, explanation, personal example, training, encouragement, and punishment. These educational methods are effective in the education of the culture of communication and of moral behavior with the participation of the teacher-tutor. However, the methods specified above are insufficiently effective for the formation of moral consciousness and moral beliefs, especially in the course of self-education.
Materials and methods
The education of moral consciousness is based first of all on the thinking of the personality, but in the course of moral education we almost never use the method of scientific analysis, and we do not study such a question as morality in thinking. Meanwhile, it is necessary to emphasize that the education of moral consciousness has to be based on moral thinking. We do not think and do not reflect on the fact that disrespect for the standards of morality can eventually be accompanied by negative physiological changes. We treat questions of morality only as a social requirement and study this question from a narrow perspective. This leads to a one-sided and narrow approach in moral education. As a result, the possibility of achieving the planned purposes in the process of moral education is lost.
Today mankind enters a new sphere, which is designated as the noosphere. The noosphere declares itself as the transformed biosphere, its new qualitative state. V.I. Vernadsky left his reflections about this new qualitative state in the incomplete work "A Scientific Thought as the Planetary Phenomenon". The main idea of this work is the justification of the thesis that the noosphere is not a utopia but a strategy for the real survival of mankind and the resolution of global problems, in conditions when mankind becomes a "universal" category. The subsequent development of society has shown that the noosphere is not only a means of solving problems, but also a problem itself. The power of reason can be transformed for the good, or it can turn into evil. The person is a special component of the biosphere, its "thinking reed". One of the weakest creations of nature, the human being becomes a "miracle" capable, by force of thought, of embracing the whole Universe. The reason of the person changes the planet, and the conclusion arises that the geological force is not Homo sapiens, but his mind, the scientific thought of all mankind. From here it is possible to draw the conclusion that today mankind has to learn to think according to moral canons, respecting ethical standards.
Today, as a result of the rapid development of the human mind, the versatile influence of the noosphere, especially its negative influence on other spheres, has sharply increased, resulting in global problems. In due time V.I. Vernadsky noted: "In its evolutionary development mankind will rise to such a step that the noosphere will gain huge force, and this will lead to the emergence of global problems for mankind. Only the integration of the separately developing sciences can give the opportunity to solve the arisen global problems." Integration expands the circle of research and makes an integrated approach possible. Proceeding from these logical reasons, it is expedient to organize scientific research work through the integration of sciences. Synthesizing and creating uniform conclusions from the discoveries of related sciences is a source of solutions to new problems or of the emergence of new directions. For these reasons, in our scientific research on the problems of moral education we have turned to the integration of sciences and to the collection of information for synthesis and the development of uniform conclusions in the fields of biology, biophysics, bioenergetics, the biosphere, medicine and the noosphere connected with morality.
Results and discussion
Today we have to pay special attention to the deepening of the processes of moral education, the enrichment of its content, the development of moral and spiritual consciousness and moral feelings, and the formation of new approaches to moral education on the basis of the integration of knowledge from such spheres as the biosphere, the noosphere, biology, physiology, psychology and bioenergetics. Such an approach to the solution of questions of moral education is considered expedient: it makes it possible to enrich the content of moral education, to understand the need for and importance of moral education, and to realize that morality now comes to the arena not only as a social but also as a personal need in the course of the survival of society and of each individual person. Such a complete approach will increase the efficiency of moral education and will serve as a motive for the independent work of an individual on himself. By disclosing the influence of morality on the biosphere and on the biological essence of the individual person, we will be able to awaken new motives and personal interests in students. Such an approach will open new opportunities in the course of moral education, expand the circle of outlook, enrich the content of moral education, form new motives and increase the efficiency of this process. A doctrine which personifies not only social but also personal interests can be full and perfect, and can also have an enormous educational influence.
Conclusion
By revealing the influence of morality on the biological essence and on the health of an individual, we will be able to awaken new motives and personal interests in future specialists. The study of moral canons based on physiology, bioenergetics and human psychology will open up new opportunities in the process of moral education, expand the range of worldview, enrich the content of moral education and increase the effectiveness of this process. Having introduced the specifics of the question, we studied the process and the significance of moral education from the angle of biological and physiological laws. We studied how moral feelings such as respect, love, joy, gratitude and honor affect human health, and compared the influence of negative feelings on human health. For example, any virus is a low-vibration organism with a closed structure of an electromagnetic circuit with a resonance frequency of approximately 5.5 and 14.5 hertz; starting from 25.5 hertz, the virus dies. We know that emotions also appear as vibrations, measured in hertz. Negative emotions manifest as vibrations:
• grief: from 0.1 to 2 Hz;
• fear: from 0.2 to 2.2 Hz;
• resentment: from 0.6 to 3.3 Hz;
• irritation: from 0.9 to 3.8 Hz;
• disturbance: from 0.6 to 1.9 Hz.
This means the above feelings create favorable fluctuations for the development of the virus. High vibrations are manifested with the following feelings:
• generosity: 95 Hz;
• gratitude: 45 Hz;
• heartfelt gratitude: from 140 Hz and above;
• love and friendliness: 154 Hz and higher;
• complicity and compassion: from 150 Hz and above;
• unconditional love: from 205 Hz and above.
Such examples show how important it is to cultivate moral feelings in oneself. The study of the importance of moral upbringing using the latest discoveries in the fields of bioenergy and biophysics reveals great opportunities in the formation of personal motives that push students toward moral improvement. This approach has yielded results: if we compare the traditional approach, which is mainly based on social requirements, with the integrative approach, which is aimed at awakening personal interests, we can see that academic performance increased from 62 percent to 72 percent. With the integrated approach, the perception and understanding of the essence of moral education expanded the horizons of knowledge; this in turn strengthened the moral consciousness and moral convictions of future medical workers and served as the basis for the formation of their moral principles. A teaching which embodies not only social but also personal interests through the integration of knowledge has a colossal educational impact and high results.
"Education",
"Philosophy"
] |
Influence of Stress on the Chiral Polarization and Elastocaloric Effect in BaTiO3 with 180° Domain Structure
: The polarization and elastocaloric effect of chiral barium titanate (BaTiO3) with an Ising-Bloch-type domain wall under stress were investigated using the Landau-Ginzburg-Devonshire (LGD) theory. It has been shown that tensile stresses increase the magnitude of the Ising polarization component in barium titanate, together with a decrease in the domain wall width. Compressive stresses cause a reduction in the Ising polarization component and an increase in the domain wall width. Under compressive stress, barium titanate exhibits a negative elastocaloric effect, and the temperature change grows with increasing stress, while BaTiO3 exhibits a positive elastocaloric effect under tensile stress. The Bloch polarization component shows an angle-dependent response under an external force, but the temperature change from the elastocaloric effect is smaller than that of the Ising polarization under stress. This work contributes to the understanding of polarization evolution under stress in ferroelectrics with a chiral structure.
Introduction
Ferroelectric materials are generating significant interest for their potential use in devices that rely on a spontaneous polarization that can be altered by an external electric field or stress [1-4]. Ferroelectric devices consist of ferroelectrics with a domain structure characterized by varying polarization orientations [5,6]. The domain walls in ferroelectrics, which are an important part of the domain and have a large influence on the properties of ferroelectric materials [7,8], have been investigated extensively by various methods, including atomic force microscopy [9], X-ray diffraction methods [10,11], and X-ray diffractometry [12,13]. A single domain wall can be considered a viable unit of information in nano-electronic devices, aligning with the ongoing trend of shrinking electronic devices [14-16]. Conventionally, the 180° domain walls in ferroelectrics are regarded as Ising-type, wherein the magnitude of the polarization component alters but does not rotate along the domain walls [17-19]. In the field of early thermodynamics, Lajzerwicz and Niez made a prediction that the order parameter present in the domain wall is specifically chirality [20]. The presence of Néel- or Bloch-type domain walls in ferroelectrics has been demonstrated using density functional theory calculations, Landau-Ginzburg-Devonshire (LGD) calculations and phase-field simulations [21-27]. In experiments, the Néel-type domain wall was observed in ferroelectric crystals of Pb(Zr1−xTix)O3 using transmission electron microscopy [28]. Using nonlinear optical microscopy, a dominant Bloch-like form in the trigonal LiTaO3 bulk crystal was reported [29]. The chiral textures of domain walls in the ferroelectric antiferromagnet BiFeO3 were observed using reciprocal- and real-space characterization techniques [30]. Epitaxial lead titanate thin films were used to illustrate the presence of 180° domain walls with non-Ising polarization at ambient temperatures [31]. To utilize chiral polarization in electrical devices, it is necessary to investigate its behavior under various external fields. It has been demonstrated that chirality in ferroelectric materials can be tuned using a variety of methods. The electric field can be used to regulate and switch the ferroelectric thin films and nanodots of PbTiO3, which exhibit the chirality of skyrmions [32,33]. The achiral domain wall, a characteristic feature of the Bloch-type domain wall, has garnered significant attention [34]. The polarization component within the achiral structure exhibits opposite rotational orientations on the two sides of the domain walls, with its center crossing the zero point. In contrast to the classical Bloch domain wall, the achiral structure maintains the symmetry of the wall, reduces the domain energy and enhances the stability of its features [35,36]. The presence of an achiral domain wall has been discovered to significantly affect the mobility of the domain wall in a PbTiO3 thin film [37], as well as the electrocaloric effect in BaTiO3 [38]. Extensive research has been conducted on caloric effects in ferroelectric materials, specifically focusing on the electrocaloric and elastocaloric effects. Those works are motivated by the possible use of these phenomena in solid-state refrigeration [39-42]. However, the detection of the Bloch polarization component in chiral ferroelectric materials presents a difficult task in experiments, leading to a
restricted investigation of chiral polarization in the caloric effects of ferroelectric materials. Hence, it is crucial to examine the influence of chiral polarization on the caloric effect from a theoretical standpoint. Nevertheless, the impact of the achiral structure on the elastocaloric effect in ferroelectrics remains unexplored.
In this work, a theory was established to investigate the influence of stress on the polarization and the elastocaloric effect based on the Landau-Ginzburg-Devonshire (LGD) theory. With the Bloch components, flexoelectricity is introduced into the LGD theory. The magnitude of the Ising and Bloch polarization components of BaTiO3 (BTO) with a 180° domain wall was investigated in detail. The elastocaloric adiabatic temperature change under stress with different polarization components was calculated.
Materials and Methods
Tetragonal BTO, a typical ferroelectric material, was chosen because there is evidence for the existence of Ising-Bloch-Néel-type components [26]. According to Figure 1a, the Ising-type component (P1) is defined as the component parallel to the spontaneous polarization ±Ps. The Bloch-type component (P2) is perpendicular to the Ising-type component but parallel to the plane of the domain wall. The Néel-type component (P3) is smaller than the Bloch-type component and is not considered in this work. Flexoelectricity is expected to produce Bloch- and Néel-type components in tetragonal BTO. The Gibbs free energy G of BTO with a 180° domain wall is described as

$$G = \int \Big[ A_{ij}P_iP_j + B_{ijkl}P_iP_jP_kP_l + C_{ijklmn}P_iP_jP_kP_lP_mP_n + \frac{D_{ijkl}}{2}\,\frac{\partial P_i}{\partial x_j}\frac{\partial P_k}{\partial x_l} - Q_{ijkl}\sigma_{ij}P_kP_l - \frac{s_{ijkl}}{2}\sigma_{ij}\sigma_{kl} + \frac{F_{ijkl}}{2}\Big(\sigma_{ij}\frac{\partial P_k}{\partial x_l} - P_k\frac{\partial \sigma_{ij}}{\partial x_l}\Big) - \Big(E_i + \frac{E_i^d}{2}\Big)P_i \Big]\,dV,$$

where $E_i = -\partial\varphi/\partial x_i$ is the electric field; $A_{ij}$, $B_{ijkl}$ and $C_{ijklmn}$ are the dielectric stiffness coefficients; $D_{ijkl}$ are the gradient energy coefficients; $Q_{ijkl}$ is the electrostriction tensor; $s_{ijkl}$ are the elastic compliances; $\sigma_{ij}$ are the stress tensor components; and $F_{ijkl}$ are the flexoelectric coupling coefficients. The depolarizing field $E_i^d = -P_3/(\varepsilon_0\varepsilon_b)$ is taken into account, where $\varepsilon_0$ denotes the vacuum permittivity and $\varepsilon_b$ the background dielectric constant. As seen in Figure 1, θ is the angle between the wall normal and the cubic crystallographic direction; a cubic crystallographic reference frame (x_c1, x_c2, x_c3) is used to characterize the domain-wall tilt. For simplicity, we assume that P1 and P2 represent the polarization of Ising walls and of Bloch walls with a bichiral structure, respectively; in the new reference frame, the polarization components are obtained by rotating (P1, P2, P3) through the angle θ. Away from the walls, the boundary conditions for the polarization and the potential are $P_1(x\to\pm\infty)=\pm P_s$, $P_2(x\to\pm\infty)=0$ and $\varphi(x\to\pm\infty)=0$. Because of the applied stress, the relevant coefficients have to be renormalized; the designations, as well as the expressions for the elastic stresses, follow previous work [16,43]. After the polarization is obtained, the adiabatic temperature change ΔT of the BTO under stress is defined as

$$\Delta T = -\frac{T}{C}\int_0^{\sigma}\left(\frac{\partial u}{\partial T}\right)_{\sigma'}\,d\sigma',$$

where C is the heat capacity, u is the strain, and σ stands for the applied stress. Equivalently, the adiabatic temperature change can be derived from the entropy change under the applied stress. For ferroelectrics, the entropy change is

$$\Delta S = -\frac{P_\sigma^2(T) - P_0^2(T)}{2\varepsilon_0 C_{CW}},$$

where $P_0$ and $P_\sigma$ are the polarizations before and after the stress is applied and $C_{CW}$ is the Curie constant, so that the adiabatic temperature change can be determined as an expression of the polarization change, $\Delta T = -T\Delta S/C$. The parameters used in this work can be found in the literature [17,39,40].
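To make the elastocaloric step above concrete, the following minimal Python sketch evaluates the entropy and adiabatic temperature changes from the polarizations before and after loading, assuming the Landau form of the excess entropy reconstructed above. The Curie constant and volumetric heat capacity are assumed literature-order values for BaTiO3 (they are not stated explicitly in the text), so the output is an order-of-magnitude check rather than a reproduction of the calculation.

```python
# Assumed material constants for BaTiO3 (literature-order values, not the
# parameter set used in this work).
EPS0 = 8.854e-12   # vacuum permittivity, F/m
C_CW = 1.7e5       # Curie-Weiss constant, K (assumed)
C_V = 3.0e6        # volumetric heat capacity, J/(m^3 K) (assumed)

def adiabatic_delta_T(p_stressed, p_unstressed, T=300.0):
    """Adiabatic temperature change from the polarization change under stress:
    dS = (P0^2 - Ps^2) / (2*eps0*C_CW)  (Landau excess entropy change)
    dT = -T * dS / c_v."""
    dS = (p_unstressed**2 - p_stressed**2) / (2.0 * EPS0 * C_CW)
    return -T * dS / C_V

# Polarizations reported in the Results section: 0.26 C/m^2 at zero stress,
# 0.15 C/m^2 at -1 GPa (compression), 0.31 C/m^2 at +1 GPa (tension).
print(adiabatic_delta_T(0.15, 0.26))  # ~ -1.5 K: cooling under compression
print(adiabatic_delta_T(0.31, 0.26))  # ~ +0.9 K: heating under tension
```

With these assumed constants the sketch reproduces the sign and order of magnitude of the adiabatic temperature changes reported below (about -2.3 K and +1.45 K at ±1 GPa).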
Results and Discussion
We first obtained the 180° domain structure of barium titanate without applied stress, shown in Figure 1b. This 2D diagram was reconstructed from three 1D polarization profiles determined by calculation, with the individual 1D polarizations shown in the red dashed box in the figure. The domain-wall width of this 180° domain is about 2 nm and the magnitude of the polarization is 0.26 C/m², consistent with experimental observations [44]. Stress applied along the thickness of the barium titanate was then investigated. The evolution of the Ising polarization component of barium titanate under tensile and compressive stress is shown in Figure 1c. At a tensile stress of 1 GPa, the Ising polarization component (0.31 C/m²) is larger than that without stress (0.26 C/m²), while the polarization component decreases to 0.15 C/m² at −1 GPa. This is because tensile stress increases the lattice constant along the c-axis of barium titanate, which leads to an increase in the potential shift and thus in the polarization, whereas compressive stress decreases the lattice constant and leads to a decrease in polarization [45]. Figure 1c also shows the variation of the domain-wall width under different stresses: compressive stress increases the domain-wall width, while tensile stress decreases it.
Then, the elastocaloric temperature change (ΔT) caused by the change in the Ising polarization of barium titanate under external stress was studied. For the simulation, the applied external stress is gradually increased from 0.1 GPa to 1 GPa. Figure 2a shows the temperature change in barium titanate under compressive stress, where barium titanate exhibits a negative elastocaloric effect. When the compressive stress increases from −0.1 to −1 GPa, the minimum adiabatic temperature change decreases from −0.14 K to −2.3 K. According to Equation (5), the magnitude and sign of the adiabatic temperature change are related to the magnitude of the polarization before and after the applied stress: compressive stress leads to a decrease in polarization, which results in a negative adiabatic temperature change. In contrast, tensile stress leads to an increase in polarization, so that a positive elastocaloric effect occurs under tensile stress, see Figure 2b. Both Figure 2a,b show that the temperature change near the domain wall is larger than that within the ferroelectric domains, because the polarization change caused by the external stress is larger there. Notably, at the very center of the wall the polarization remains zero and does not change, so no temperature change occurs at that point. The comparison between the mean (ΔTmean) and the maximum (ΔTmax) or minimum (ΔTmin) adiabatic temperature change in BTO under tension and compression, as a function of stress, is shown in Figure 2c,d. When the compressive stress increases in magnitude from −0.1 to −1 GPa, ΔTmin decreases from −0.17 to −2.34 K, as shown in Figure 2c. Figure 2d shows that ΔTmax of BTO increases from 0.13 to 1.45 K when the tension increases from 0.1 to 1 GPa. Therefore, the adiabatic temperature change in the ferroelectric material can be enhanced by moving the domain wall, since the temperature change caused by the domain wall is much larger than that of the domain.
The polarization component of the domain wall varies with the rotation angle, owing to the dependence of the flexoelectric coefficients on the wall orientation. The Bloch component of BTO was therefore investigated as a function of the angle under different stress conditions, as shown in Figure 3a. The Bloch polarization component (0.02 C/m²) is much smaller than the Ising polarization component (0.26 C/m²). Without tension or compression, the magnitude of the P2 component vanishes at θ = nπ/4, where n is an integer; as the angle rises to π/12, the magnitude of P2 reaches its greatest value and subsequently declines to zero as the angle increases to π/4. Under the application of compressive stress, the magnitude of the P2 polarization rises. Figure 3b shows the Bloch polarization of BTO along the green dashed line of Figure 3a: P2 increases from 0.02 C/m² at −0.1 GPa to 0.05 C/m² at −1 GPa, whereas under tensile stress P2 decreases from 0.024 to 0.019 C/m². Thus, tensile stress reduces the Bloch polarization and compressive stress increases it. This can be explained by the change in the lattice constant along the x-axis: tensile stress applied along the thickness reduces it, and with it the Bloch polarization, while compressive stress increases it.
The presence of the Bloch component P2 leads to an additional polarization near the domain wall that can affect the ΔT of BTO. The contribution of the Bloch-type polarization component to the ΔT of BTO is illustrated in Figure 4a,b. Under tensile stress, the Bloch polarization of barium titanate exhibits a negative elastocaloric effect, and the absolute value of the temperature change increases with increasing tensile stress; this follows from the decrease in the Bloch polarization under tension. According to Equation (5), the Bloch polarization shows a positive elastocaloric effect under compressive stress: ΔT increases with increasing compressive stress and reaches a maximum value of 0.015 K at a compressive stress of 1 GPa, whereas under a tensile stress of 1 GPa, ΔT reaches −0.001 K. These temperature changes are, however, much smaller than those caused by the Ising polarization, which shows that the main contribution to the elastocaloric effect in ferroelectric materials comes from the Ising polarization component. On cooling, BaTiO3 transforms from a cubic phase without polarization to a tetragonal phase with polarization along the <100> direction at 373 K, to an orthorhombic phase with polarization along the <110> direction at 278 K, and to a rhombohedral phase with polarization along the <111> direction at 183 K. This means that the chirality associated with the polarization does not exist in the cubic phase. It is believed that the present method can also be used to study the influence of external stress and electric fields on the chirality of ferroelectric PbTiO3, since it has the same tetragonal structure as BaTiO3 at room temperature. For the orthorhombic and rhombohedral phases, the polarization component increases with decreasing temperature; this has been studied in many works and is therefore not shown here. The 180° domains can be considered, simplistically, as consisting of two single domains with opposite polarization directions. The elastocaloric adiabatic temperature change of BaTiO3 with a single-domain structure is plotted as a function of temperature in Figure 5. The adiabatic temperature change increases with temperature, reaching a maximum near the Curie temperature and then decreasing rapidly as the temperature
continues to rise. Under compressive stress, the magnitude of the negative adiabatic temperature change increases with decreasing temperature. Accordingly, it can be assumed that the temperature dependence of the Ising polarization under external stress is similar to that of single domains. However, the temperature change near the domain wall may differ from the single-domain case and should be investigated using an appropriate theory. The Bloch-type polarization component may also behave differently owing to the change in symmetry and will be investigated in the future.
Barium titanate has both piezoelectric and pyroelectric properties due to symmetry breaking, which means that external forces act on both the piezoelectric and pyroelectric properties of barium titanate with a chiral structure. The piezoelectric coefficient of the ferroelectric has the form

$$d_{33} = 2\varepsilon_{33}\varepsilon_0\left(Q_{11} - \frac{2s_{11}Q_{12}}{s_{11}+s_{12}}\right)P,$$

where Q11 and Q12 are the electrostrictive coefficients, s11 and s12 are the elastic compliances, and ε33 and ε0 are the dielectric constant and the vacuum permittivity, respectively [46]. Therefore, the chiral Bloch structure contributes to the piezoelectricity of the ferroelectric. Moreover, the pyroelectric and electrocaloric effects are converse effects, so the trends of the pyroelectric and electrocaloric properties under external forces are similar. Our previous study on the influence of the chirality of barium titanate on the electrocaloric effect showed that the Bloch polarization makes only a small contribution to the electrocaloric properties, which implies that the Bloch polarization likewise contributes to the pyroelectric effect under an external field. Furthermore, the accuracy of this simulation should be verified experimentally; however, the focus of this work is the influence of stress on the chirality and the elastocaloric effect in BaTiO3, and the corresponding experimental data will be investigated in the future.
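As a rough consistency check of the piezoelectric expression above, the sketch below evaluates d33 with textbook-order constants for tetragonal BaTiO3; all numerical values are assumptions for illustration, not the parameter set of this work.

```python
# Order-of-magnitude check of d33 = 2*eps33*eps0*(Q11 - 2*s11*Q12/(s11+s12))*P.
# All constants below are assumed textbook-order values for tetragonal BaTiO3.
EPS0 = 8.854e-12              # vacuum permittivity, F/m
EPS33 = 100.0                 # relative permittivity along c (assumed)
Q11, Q12 = 0.11, -0.045       # electrostrictive coefficients, m^4/C^2 (assumed)
S11, S12 = 8.0e-12, -2.3e-12  # elastic compliances, 1/Pa (assumed)
P = 0.26                      # spontaneous polarization, C/m^2 (from Results)

q_eff = Q11 - 2.0 * S11 * Q12 / (S11 + S12)   # effective electrostriction
d33 = 2.0 * EPS33 * EPS0 * q_eff * P
print(f"d33 = {d33 * 1e12:.0f} pC/N")         # ~1e2 pC/N, the right order for BTO
```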
Conclusions
The Landau-Ginzburg-Devonshire (LGD) theory was used to study the polarization and the elastocaloric effect of chiral BTO with an Ising-Bloch-type domain wall under stress. The investigations have shown that the application of compressive stress reduces the Ising polarization component in barium titanate and increases the domain-wall width, while tensile stress has the opposite effects on the polarization and the domain-wall width. BTO shows a negative elastocaloric effect (a decrease in temperature) under compressive stress and a positive elastocaloric effect under tensile stress. The Bloch polarization varies with the wall angle in response to an external stress, but the temperature change it causes through the elastocaloric effect is much less significant than that caused by the Ising polarization under stress. This study improves our understanding of the polarization and the elastocaloric effect occurring in a ferroelectric material with a chiral structure under stress.
Figure 1. (a) Schematic representation of the Ising- and Bloch-type polarization components; (b) Ising-type polarization component as a function of position in BaTiO3 at room temperature; (c) relationship between the Ising polarization component and tensile and compressive stresses at room temperature.
Figure 2. Temperature variation of the Ising polarization component in BaTiO3 under (a) compressive and (b) tensile stresses. The average (ΔTmean), maximum (ΔTmax) and minimum (ΔTmin) values for BaTiO3 as a function of (c) compressive and (d) tensile stresses.
Figure 3. (a) The polarization component P2 as a function of the wall rotation angle in BaTiO3 under compression. (b) The magnitude of P2 at θ = π/24 as a function of stress. (c) The polarization component P2 as a function of the wall rotation angle in BaTiO3 under tensile stress. (d) The magnitude of P2 at θ = π/12 as a function of stress.
Figure 4. Adiabatic temperature change induced by the P2 polarization component in BaTiO3 with the 180° domain wall, as a function of angle under (a) compressive stress and (b) tensile stress.
Figure 5. Elastocaloric adiabatic temperature change of BaTiO3 with a single-domain structure as a function of temperature under (a) compressive stress and (b) tensile stress.
"Physics",
"Materials Science"
] |
An unformed chip thickness approach to study the influence of process vibration on machining performance in milling
The vibration in the milling process plays a key role in machining and can significantly affect the machining quality of the workpiece. Some vibrations have a negative influence on the workpiece surface, while others can improve machining stability; it is therefore critical to distinguish the influence of different types of vibration on machining quality. A simulation method of undeformed chip thickness considering process vibration is presented in this article, in which a finite element model is established to analyze the dynamic milling process of 7075-T651 aluminum alloy in terms of cutting force and temperature. A series of experiments is carried out to verify the effectiveness of the simulation model, and the results show that the proposed model is accurate in predicting both milling force and temperature. Furthermore, the effect of milling vibration on machining performance is studied with the proposed method, and the relationship between the amplitude-frequency characteristics of the vibration and the fluctuation of milling force and temperature is revealed. The results show that the proposed method can determine the influence of milling vibration and provides a basis for distinguishing vibration parameters that are favorable or unfavorable for machining quality in milling.
Introduction
Precision milling processes have been widely applied in manufacturing parts for the automotive, aerospace, and precision machinery industries. To improve the machining quality of parts, the finite element method, numerical analysis methods, and experimental methods have been applied to predict and evaluate machining quality [1-3]. Process vibration has an important effect on machining quality; as such, it is very important to distinguish which vibration parameters are favorable and which are unfavorable. Favorable vibration parameters refer to conditions that reduce the cutting force and temperature and thus improve machining quality. Unfavorable vibration parameters refer to conditions that lead to poor surface quality, accelerate cutting tool wear, and shorten tool life in cutting.
In grinding and milling technology, vibration generated during processing has a great impact on machining accuracy and quality [4-8]: it leads to poor surface quality and affects machine life. Shtehin et al. [9] carried out experimental research on low-frequency vibration when machining bevels with a ball-end (spherical) milling cutter; their results show that the effect of low-frequency vibration on the machined surface is more significant than that of ordinary vibration. Taking regenerative vibration and frictional vibration into consideration, Kecik et al. [10] studied the problem of vibration during high-speed milling. Afazov and Uzunov [11] made a comparative study of a mathematical model of cutting force and two chatter prediction models based on directly measured cutting forces, finding that the chatter model established from the directly measured cutting force was in good agreement with fast Fourier transform analysis. These studies address process reliability, so the effect of vibration on process performance requires further study.
In contrast, vibration in the process can also have a favorable effect on machining quality. In this area, ultrasonic-assisted vibration processing plays a crucial role in improving the processing quality of parts [12,13], and the influence of the vibration parameters on milling force and heat is very important in vibration-assisted milling. Verma and Pandey [14] experimentally evaluated the effect of process parameters on milling force; the results showed that the most effective parameter for the milling force is the feed, and that axial vibration assistance also reduces the average milling force. Elhami et al. [15] studied the effect of mixed machining parameters on the average milling force, and the results showed that the milling force in ultrasonic-assisted milling could be reduced by about 27% compared to conventional milling. Through simulation and experimental studies, Shu and Sugita [16] found that in elliptical vibration cutting of bone the cutting force decreased with increasing vibration frequency or amplitude. Verma et al. [17] established a cutting force calculation model for an axial ultrasonic-assisted milling process based on process physics and carried out experimental research; it was found that superposing axial ultrasonic vibration on the milling operation could reduce the cutting force and improve the surface finish. Furthermore, it was determined that ultrasonic vibration-assisted milling can produce a periodic separation between the tool tip and the workpiece in the cutting process, reducing the milling force and producing a pulsed cutting effect.
In addition, ultrasonic-assisted vibration not only reduces the milling force but also plays an important role in reducing the milling heat. Feng et al. [18] proposed a model to analyze the ultrasonic vibration-assisted milling temperature and studied the effect of milling parameters and vibration parameters on temperature. Lu et al. [19] used finite element analysis techniques to study the effect of frequency and amplitude on milling temperature, finding that the milling temperature increased with increasing amplitude and decreased with increasing frequency. Luo et al. [20] simulated and tested the ultrasonic vibration-assisted milling of aluminum alloy 7075-T651, finding that the milling temperature decreased with increasing amplitude and frequency. Verma et al. [21] developed a process-physics-based equation to predict the temperature rise in vibration-assisted milling.
Researchers have generally studied either the conventional milling process or the ultrasonic vibration-assisted milling process, and most of the above studies focused on specific frequency or amplitude ranges. It is important, however, to conduct comprehensive research on the vibration parameters (a wide vibration frequency range and multiple amplitude characteristics). A model meeting these requirements needs to be developed to study the influence of process vibration on machining performance in milling.
In this paper, the effects of vibration frequency and amplitude on milling performance are systematically studied. Firstly, a simulation method of undeformed chip thickness considering process vibration is presented, in which a finite element model is established. Taking 7075-T651 aluminum alloy as the object, the dynamic milling performance in terms of cutting force, temperature, and surface roughness is analyzed, and the accuracy of the model in predicting milling force and milling temperature is verified by experiment. Finally, the effect of milling vibration on machining performance is studied with the proposed method, and the relationship between the amplitude-frequency characteristics of the vibration and the fluctuation of milling force and temperature is revealed, providing a basis for distinguishing vibration parameters that are favorable or unfavorable for machining quality.
Tool path in vibration condition
As shown in Fig. 1, process vibration has a significant effect on the trajectory of the tool. In milling, the workpiece feeds toward the tool at a constant milling speed, while the tool performs periodic reciprocating motion in the feed direction and in the vertical feed direction. In Fig. 1a, when the milling tool vibrates in the vertical feed direction, it begins to move at point A, the midpoint of the vibration cycle, and its trajectory relative to the workpiece is

$$x(t) = vt, \qquad y(t) = b\sin(2\pi f_y t + \varphi_y).$$

The speed of the tool relative to the workpiece can be expressed as the time derivative of the tool position:

$$\dot{x}(t) = v, \qquad \dot{y}(t) = 2\pi f_y b\cos(2\pi f_y t + \varphi_y).$$

In Fig. 1b, when the milling tool vibrates in the feed direction, it starts to move at point C and reaches the end of a cycle at point D. The vibration trajectory of the milling tool relative to the workpiece is

$$x(t) = vt + a\sin(2\pi f_x t + \varphi_x), \qquad y(t) = 0,$$

where a and b are the amplitudes, f_x and f_y are the vibration frequencies in the x- and y-directions, t is the time parameter, φ_x and φ_y are the initial phase angles, and v is the feed rate. Figure 1 clearly demonstrates the meaning of the initial phase in these equations. The speed of the tool relative to the workpiece is again the time derivative:

$$\dot{x}(t) = v + 2\pi f_x a\cos(2\pi f_x t + \varphi_x).$$

When the tool vibrates in the feed direction, let the variable k be the ratio of the maximum vibration speed of the tool to the milling speed v:

$$k = \frac{2\pi f_x a}{v}.$$

When k > 1, the tool periodically separates from the chip and the workpiece; this separation can effectively reduce both the milling temperature and the milling force.
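A minimal Python sketch of the kinematics just described, using the trajectory reconstructed above for feed-direction vibration; the separation criterion k compares the peak vibration speed 2πf_x·a with the feed speed v. The numerical values are the vibration and feed parameters used later in the experiments.

```python
import numpy as np

def tool_path(v, a, fx, phi_x, t):
    """Tool position and velocity relative to the workpiece with
    feed-direction vibration: x(t) = v*t + a*sin(2*pi*fx*t + phi_x)."""
    x = v * t + a * np.sin(2 * np.pi * fx * t + phi_x)
    vx = v + 2 * np.pi * fx * a * np.cos(2 * np.pi * fx * t + phi_x)
    return x, vx

def separation_ratio(v, a, fx):
    """k = peak vibration speed / feed speed; k > 1 allows tool-chip separation."""
    return 2 * np.pi * fx * a / v

# Example: feed 0.8 m/min (~13.3 mm/s), amplitude 10 um, frequency 2 kHz.
v = 0.8 / 60             # m/s
a, fx = 10e-6, 2000.0    # m, Hz
t = np.linspace(0, 2e-3, 1000)
x, vx = tool_path(v, a, fx, 0.0, t)
# vx dips below zero within each cycle, confirming intermittent contact.
print(separation_ratio(v, a, fx), vx.min() < 0)   # ~9.4, True
```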
Unformed chip thickness considering vibration
The undeformed chip thickness has an important influence on the cutting force and temperature in machining. According to the study of Li et al. [22], the relationship between the cutting force and temperature in the tool feed and vertical feed directions and the undeformed chip thickness cannot be established directly. Therefore, based on the cutting motion direction of the tool and the direction in which the undeformed chip thickness is measured, and exploiting the characteristics of slot milling, a semicircular model was established by considering the machining paths of two adjacent cutter teeth along the feed direction, as shown in Fig. 2. From the undeformed chip thickness model of Fig. 2, the mathematical model of Eq. (6) is established, and the undeformed chip thickness is obtained by solving Eq. (7).
where fz is the tool feed per tooth, R is the tool radius, α is the milling arc angle, and UCT is the undeformed chip thickness [22]. It can be seen from the milling model and the formula that the parameters affecting the undeformed chip thickness are the tool diameter and the feed per tooth. Furthermore, the vibration in the process also impacts the undeformed chip thickness; accordingly, an undeformed chip thickness model considering vibration is established. Figure 3 shows the variation of the undeformed chip thickness with vibration.
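Since Eqs. (6) and (7) are not reproduced here, the sketch below uses the classical slot-milling approximation UCT(α) ≈ fz·sin(α), perturbed by the difference of the tool's vibration displacement between two successive tooth passes, to illustrate how process vibration modulates the undeformed chip thickness. It is an illustrative stand-in under stated assumptions, not the paper's semicircular model, and the feed per tooth is an assumed value.

```python
import numpy as np

def uct_with_vibration(fz, alpha, a, f_vib, n_rpm, z_teeth, phi=0.0):
    """Undeformed chip thickness over the milling arc angle alpha (rad):
    classical term fz*sin(alpha) plus the difference of the vibration
    displacement between two successive tooth passes (tooth period T_z)."""
    omega = 2 * np.pi * n_rpm / 60        # spindle angular speed, rad/s
    t = alpha / omega                     # time at angle alpha, s
    T_z = 60.0 / (n_rpm * z_teeth)        # tooth passing period, s
    vib_now = a * np.sin(2 * np.pi * f_vib * t + phi)
    vib_prev = a * np.sin(2 * np.pi * f_vib * (t - T_z) + phi)
    return fz * np.sin(alpha) + (vib_now - vib_prev)

alpha = np.linspace(0, np.pi, 500)        # slot milling: 0..180 degrees
h = uct_with_vibration(fz=0.05e-3, alpha=alpha, a=10e-6,
                       f_vib=2000.0, n_rpm=4000, z_teeth=2)
print(h.max())  # parabolic-like base profile with a vibration ripple on top
```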
Simulation parameters
The aluminum alloy 7075-T651 has good fatigue resistance; its chemical composition is shown in Table 1 [20]. Carbide tools have the advantages of high hardness, good hot hardness, and good wear resistance. The parameters of the tool are recorded in Table 2, and the material property parameters of the workpiece and tool are recorded in Table 3. The Johnson-Cook (JC) model reflects well the deformation of metals at high strain, high strain rate, and high temperature [23]. For the finite element simulation of material deformation processes, such as machining and plastic forming, the constitutive equation is

$$\sigma = \left(A + B\varepsilon^{n}\right)\left(1 + C\ln\frac{\dot{\varepsilon}}{\dot{\varepsilon}_0}\right)\left[1 - \left(\frac{T - T_r}{T_m - T_r}\right)^{m}\right],$$

where σ is the flow stress; ε is the effective plastic strain; $\dot{\varepsilon}$ is the effective plastic strain rate; $\dot{\varepsilon}_0$ is the reference plastic strain rate; T is the current temperature, T_r the ambient (room) temperature, and T_m the melting temperature of the material; A is the yield stress of the material; B is the work-hardening parameter of the material; C is the strain-rate strengthening index; m is the thermal softening index; and n is the strain-hardening index. The JC model parameters for the 7075-T651 aluminum alloy are shown in Table 4 [24].
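A direct transcription of the reconstructed JC flow-stress equation; the default parameter values for 7075-T651 below are placeholders of the kind listed in Table 4 (which is not reproduced here), so treat them as assumptions.

```python
import math

def jc_flow_stress(eps, eps_rate, T, A=520e6, B=477e6, n=0.52, C=0.001,
                   m=1.0, eps_rate0=1.0, T_room=293.0, T_melt=893.0):
    """Johnson-Cook flow stress (Pa):
    sigma = (A + B*eps^n) * (1 + C*ln(rate/rate0)) * (1 - T_star^m),
    with homologous temperature T_star = (T - T_room)/(T_melt - T_room).
    Parameter values are assumed placeholders, not the paper's Table 4."""
    T_star = (T - T_room) / (T_melt - T_room)
    return ((A + B * eps**n)
            * (1.0 + C * math.log(eps_rate / eps_rate0))
            * (1.0 - T_star**m))

print(jc_flow_stress(eps=0.2, eps_rate=1e4, T=400.0) / 1e6)  # ~600 MPa
```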
In the finite element model of the milling process, the critical value reached by accumulated plastic strain is often used as the criterion for chip damage, and the JC fracture criterion is used as the failure criterion in this study. The failure criterion provides a calculation method for the equivalent plastic strain at which the material reaches the failure point, and the fracture damage parameter D is applied to determine the removal of the material:

$$D = \sum \frac{\Delta\varepsilon}{\varepsilon_f},$$

where ε_f is the failure strain, while Δε indicates the increment of effective plastic strain in a unit load step. According to the JC failure criterion, the failure strain of the material is calculated as follows [25]:

$$\varepsilon_f = \left[d_1 + d_2\exp\left(d_3\frac{\sigma_m}{\bar{\sigma}}\right)\right]\left(1 + d_4\ln\frac{\dot{\varepsilon}}{\dot{\varepsilon}_0}\right)\left[1 + d_5\frac{T - T_r}{T_m - T_r}\right],$$

in which σ_m represents the mean (hydrostatic) stress, σ̄ is the effective stress, and d1-d5 are the material failure parameters. The JC damage model parameters for the 7075-T651 aluminum alloy are shown in Table 5 [24].
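A sketch of the damage bookkeeping implied by the two equations above: the failure strain from the d1-d5 expression, and D accumulated as the sum of plastic-strain increments over the failure strain. The d1-d5 values are assumed placeholders (Table 5 is not reproduced here).

```python
import math

def jc_failure_strain(triaxiality, eps_rate, T,
                      d=(-0.068, 0.451, -0.952, 0.036, 0.697),
                      eps_rate0=1.0, T_room=293.0, T_melt=893.0):
    """eps_f = [d1 + d2*exp(d3*sigma_star)]*(1 + d4*ln(rate/rate0))*(1 + d5*T_star),
    where sigma_star = sigma_m / sigma_eff is the stress triaxiality.
    The d-values are assumed placeholders, not the paper's Table 5."""
    d1, d2, d3, d4, d5 = d
    T_star = (T - T_room) / (T_melt - T_room)
    return ((d1 + d2 * math.exp(d3 * triaxiality))
            * (1.0 + d4 * math.log(eps_rate / eps_rate0))
            * (1.0 + d5 * T_star))

def damage(strain_increments, triaxiality, eps_rate, T):
    """D = sum(delta_eps / eps_f); the element is deleted once D >= 1."""
    eps_f = jc_failure_strain(triaxiality, eps_rate, T)
    return sum(de / eps_f for de in strain_increments)

D = damage([0.01] * 30, triaxiality=0.33, eps_rate=1e3, T=350.0)
print(D, D >= 1.0)   # ~0.86, False: this element survives the load history
```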
The software sets the contact relationship between the tool and the workpiece (rigid against elastic-plastic) to ensure that the tool mesh does not distort during the simulation iterations. According to the Coulomb friction law, the friction factor between the tool and the workpiece plays a decisive role in the final simulation results. Based on the research of Bil et al. [26], the friction coefficient is taken as 0.5 in the present study. The friction relation used by the software is the Coulomb formula

$$F_f = \mu F,$$

where F is the normal force between the tool and the workpiece surface, μ is the friction factor, and F_f is the resulting friction force.
The main simulation parameters are shown in Table 6.
Simulation results and analysis
As shown in Fig. 5, the simulation results of the milling force in both non-vibration and vibration conditions are presented. Both milling forces show a noticeable parabolic trend, in line with the variation of the undeformed chip thickness in slot milling. In milling without vibration, the milling force has only a small range of fluctuation, which is an inherent characteristic of milling; in milling with vibration, the milling force exhibits periodic large-scale fluctuations, mainly due to the periodic motion of the tool. To obtain the force values, the force data are post-processed, as shown in Fig. 5c: band-pass filtering is applied to extract the periodic fluctuation curve, and the force fluctuations after filtering are consistent with the applied vibration signal. The average milling force is evident in the figures, and the milling force fluctuation is the amplitude of the filtered curve. The average milling force directly affects the machining quality, while the fluctuation value of the milling force affects the stability of the machining system. As shown in Fig. 6, the results of the milling temperature simulation in non-vibration and vibration conditions are presented. The highest temperature in the milling area occurs at the tip of the tool, and the maximum temperature in milling with vibration is higher than that without vibration. Furthermore, a partial magnification of the machined workpiece surface shows that the surface machined without vibration is relatively flat, while the surface machined with vibration shows an undulating wave, which matches the relative motion between the tool and the workpiece. In the milling process, the milling temperature is concentrated mainly in the first and second deformation zones: in the first zone, the heat is mainly caused by the plastic deformation of the metal, while in the second zone it is mainly generated by the friction of the flank face. A gradient from high to low temperature is formed inside the workpiece, which has an important effect on the surface temperature of the machined workpiece.
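The post-processing step described above (separating the mean force from its periodic fluctuation) can be sketched as follows with a standard band-pass filter around the applied vibration frequency; the synthetic force signal and its parameters are purely illustrative.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 100_000.0                        # sampling rate of the simulated force, Hz
t = np.arange(0, 0.02, 1 / fs)
# Illustrative force signal: slowly varying parabolic mean plus a 2 kHz
# ripple (the applied vibration) and a little noise.
force = (80 * np.sin(np.pi * t / 0.02)
         + 6 * np.sin(2 * np.pi * 2000 * t)
         + np.random.normal(0, 0.5, t.size))

def bandpass(x, lo, hi, fs, order=4):
    """Zero-phase band-pass filter between lo and hi (Hz)."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

ripple = bandpass(force, 1500, 2500, fs)    # periodic fluctuation component
mean_force = force.mean()                   # average milling force
fluctuation = ripple.max() - ripple.min()   # milling force fluctuation value
print(mean_force, fluctuation)
```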
Experimental setup
This verification experiment was carried out on a Carver S600A vertical milling machine, as exhibited in Fig. 7. Specially designed workpieces were fixed on a piezoelectric ceramic driver platform (model PT1500707301), and the piezoelectric ceramic driver was fixed on the dynamometer to produce a given vibration frequency and amplitude. The milling temperature was measured by a K-type thermocouple using an NI 9213 acquisition card. The milling forces were measured by a Kistler force dynamometer (Type 9139AA) mounted on the machine bed, with the sampling rate set to 2500 Hz. In the test, the tool and workpiece materials were chosen as in Tables 1 and 2, the same as in the simulation.
As shown in Fig. 8, the milling workpiece is divided into two parts. The end and bottom of workpiece 1 are provided with grooves, and the thermocouple is arranged in the grooves between workpiece 1 and workpiece 2, which are connected by bolts.
During the temperature measurement in milling, when the tool cuts the aluminum alloy material in the thermocouple test position, the temperature increases sharply and then falls, which produces a peak temperature, called the maximum temperature. The thermocouple measurement area is on the milling workpiece surface.
Experimental verification results
The processing parameters of the validation experiment are shown in Table 7. The milling parameters are as follows: spindle speed 4000 r/min, milling depth 0.6 mm, and feed speed 0.8 m/min; the vibration parameters are an amplitude of 10 µm and a frequency of 2 kHz. The finite element simulation model is verified experimentally in terms of the milling force and milling temperature, and the results are shown in Fig. 9. The maximum error in force between experiment and simulation is 12%, while the maximum error in temperature is 15.7%. The simulation results present good agreement with the experimental observations, which proves the accuracy of the simulation model and enables the simulation-based prediction of the milling force and temperature.
Effect of vibration characterization parameters on machining performance
In the milling of the 7075-T651 aluminum alloy, the vibration characterization parameters (amplitude and frequency) have an impact on the milling force and temperature that cannot be ignored in the precision manufacturing process. Since it is very difficult to realize arbitrary vibration characterization parameters in experiments, the simulation method is applied to analyze their effect on the processing results.
Effect of vibration frequency on milling force and temperature
A single-factor simulation test of frequency was conducted, as presented in Table 8. A vibration with an amplitude of 10 µm was applied in the feed direction and in the vertical feed direction; the simulation results are summarized in Table 8, and the relationships between vibration frequency, milling force, and milling temperature are analyzed in Fig. 10. As shown in Fig. 10a, b, as the vibration frequency in the feed direction increases, the average milling forces in the x- and y-directions vary little below 20 kHz and decrease gradually when the frequency is greater than 20 kHz. The average surface temperature of the workpiece first increases and then gradually decreases with frequency. Furthermore, the milling force fluctuation value in the x-direction increases gradually once the frequency exceeds 20 kHz. As shown in Fig. 10c, d, as the vibration frequency in the vertical feed direction increases, the average milling forces in the x- and y-directions likewise vary little below 20 kHz and decrease gradually above it. The average surface temperature of the workpiece again first increases and then gradually decreases with frequency, and the milling force fluctuation value in the y-direction increases gradually when the frequency exceeds 20 kHz.
Effect of amplitude on milling force and temperature at low-frequency vibration
A single-factor simulation test of amplitude was conducted with the processing parameters in Table 9, with a vibration frequency of 2 kHz applied in the feed direction and in the vertical feed direction. The simulation results are shown in Table 9.
The relationship between the amplitude, milling force, and milling temperature was analyzed, as shown in Fig. 11. As presented in Fig. 11a, b, with the increase of the amplitude in the feed direction, the average milling forces of vibration in the x-and y-directions remain basically unchanged, while the average temperature of the surface of the workpiece decreases gradually. As shown in Fig. 11c, d, with the increase of the amplitude in the vertical feed direction, the average milling force of vibration in the x-and y-directions and the average surface temperature of the workpiece show an increasing trend. Additionally, the milling force fluctuation value of vibration in the x-and y-directions gradually increases.
In summary, for low-frequency vibration in the vertical feed direction, increasing the vibration amplitude increases the average milling force, temperature, and milling force fluctuations, resulting in poorer machining quality and reduced machining system stability.
Effect of amplitude on milling force and temperature at ultrasonic vibration
A single-factor simulation test of amplitude was conducted with the processing parameters in Table 10, with a vibration frequency of 20 kHz applied in the feed direction and in the vertical feed direction. The simulation results are shown in Table 10. The relationships between amplitude, milling force, and milling temperature are analyzed in Fig. 12. As shown in Fig. 12a, b, with increasing amplitude in the feed direction, the average milling forces in the x- and y-directions and the average surface temperature of the workpiece decrease gradually.
This is because the ultrasonic vibration causes intermittent cutting by the tool, decreasing both the average milling force and the milling temperature. The milling force fluctuation value gradually increases with amplitude; therefore, the selection of amplitude is very important for the stability of the system in ultrasonic-assisted milling. As can be seen from Fig. 12c, d, with increasing amplitude in the vertical feed direction, the average milling force in the x-direction decreases gradually, while that in the y-direction increases gradually. The average milling temperature of the workpiece surface first decreases and then gradually increases; this temperature change is due to the changing balance between intermittent cutter-workpiece contact and the increased milling area. The milling force fluctuation values in the x- and y-directions gradually increase, with the increase in the y-direction being more pronounced.
Conclusion
In this paper, a simulation method for milling considering process vibration is presented on the basis of the undeformed chip thickness, and the effectiveness of the simulation method and model is verified by milling experiments. The influence of vibration parameters on milling performance is then studied using the simulation model. From the analysis results, the following conclusions can be drawn:

1. Based on the theory of undeformed chip thickness in the milling process, a simulation method of undeformed chip thickness considering vibration was proposed. The machining performance under different vibration parameters can be studied with the proposed model.

2. Experimental tests were performed to verify the simulation method and model, and the results demonstrate the effectiveness of the finite element method (FEM) model in predicting the milling force and milling temperature. The simulation model can therefore define the influence of milling vibration on machining quality and can be applied to distinguish favorable and unfavorable vibration parameters.

3. The influence of vibration frequency on the milling force and temperature was studied with the simulation model. The results indicate that tool vibration can effectively decrease the average milling force and temperature at ultrasonic frequencies, though it simultaneously increases the fluctuation of the milling force; it is therefore important to choose the vibration frequency carefully in the vibration-assisted process.

4. When low-frequency vibration is applied in the vertical feed direction, increasing the vibration amplitude increases the average milling force, temperature, and milling force fluctuation values, which adversely affects the machining quality. When ultrasonic vibration is applied in the feed direction, increasing the amplitude reduces the average milling force and temperature but increases the milling force fluctuation value.
"Materials Science"
] |
“PATHETIC” LITERARY CRITICISM IN THE ESSAYS BY JOSEPH WARTON: A COMPROMISE BETWEEN AUGUSTANISM AND PRE-ROMANTICISM
K e y w o r d s : neo-classicism; pre-romanticism; the sublime; the pathetic; the ancient-modern controversy; the category of imagination; Joseph Warton; The Adventurer

Polyakov O. Y., Vyatka State University (Kirov, Russia). ORCID ID: https://orcid.org/0000-0002-9362-7720

A b s t r a c t : The article analyzes the literary-critical essays of Joseph Warton, a prominent 18th-century English writer and essayist, who is, as a rule, presented merely as the initiator of the reassessment of the artistic heritage of A. Pope and one of the founders of the pre-romantic movement in England. At the same time, the corpus of Warton's publications in the Adventurer makes it possible to study more fully and objectively the sources and fundamental tenets of his aesthetic theory, which was heterogeneous and reflected the transitional state of English literature and literary criticism of the mid-18th century. The aim of this article is to examine J. Warton's essays comprehensively in order to specify the features of his literary-critical method and to determine his role in the formation of English Pre-Romanticism. The methodology of the study rests on an updated conception of the 18th-century English literary process, which presents it not so much in terms of successive stages as a complex phenomenon distinguished by the compromising character of the writers' and critics' ideological and artistic searches and the interpenetration of the leading literary methods. The article analyzes, in the broad context of ancient and English aesthetic thought, the genesis of Warton's aesthetics and the specificity of his views on the mimetic nature of art; considers the main tenets of his conceptions of imagination, the sublime and the pathetic as manifested in the literary-critical texts of the Adventurer; and reveals Warton's position in the ancient-modern controversy and his ideas on genre poetics. The study shows that the category of the sublime, shaped under the influence of Pseudo-Longinus, Quintilian and Addison, is of decisive importance in Warton's aesthetics and determines his reception of the biblical texts and of the works of Homer and Shakespeare. At the same time, the literary theory of the English thinker goes back equally to the Pseudo-Longinian and the Horatian principles, which actualize, respectively, the original and the universal in artistic representation; this accounts for the interaction of classicist (Augustan) and pre-romantic approaches in his literary criticism.

F o r c i t a t i o n : Polyakov, O. Y. (2021). "Pathetic" Literary Criticism in the Essays by Joseph Warton: a Compromise between Augustanism and Pre-Romanticism. In Philological Class. Vol. 26. No. 4, pp. 273–283. DOI: 10.51762/1FK-2021-26-04-24.
Introduction

Joseph Warton (1722–1800) was an outstanding man of letters who, after studying at Oxford, pursued a clerical, pedagogical and literary career as a church rector, a schoolmaster and a poet. His most prominent achievement was made in the field of literary criticism: he is considered one of the precursors of Romanticism, a defender of genius, enthusiasm and poetic "fire". As a periodical critic of the 18th century, Joseph Warton deserves to be honoured together with J. Addison and R. Steele, S. Johnson and O. Goldsmith, although he is mostly known and merited as the author of the fundamental Essay on the Genius and Writings of Pope (1756–1782). His essays, published in the Adventurer (1752–1754), are outstandingly representative of the aesthetic search of his epoch, concerned with the problems of original art and the sources of literary imitation, and with the categories of genius, the sublime, and the pathetic. Joseph Warton tended to analyse particular literary texts, relying mainly on his own emotive response and general psychological attitudes rather than on normative criticism, and his critical method, descriptive and based on an inductive empirical approach, anticipated significant changes in reviewing associated with overcoming the neo-classical taste. A complex study of J. Warton's essays is a topical matter, as it makes it possible not only to present his criticism in a wider context of the aesthetic ideas of his time, but also to reveal borderline elements in his literary theory, representative of the compromising character of English Neo-Classicism as a whole, which is seen in the interrelation of neo-classical, sentimental and pre-romantic poetics within it. Considering J. Warton's periodical criticism an insufficiently studied issue, we aim to outline the aesthetic foundations of his reviewing by analysing the Adventurer essays in the context of the opinions of the critic's predecessors and contemporaries. We will consider the specificity of J. Warton's views on the mimetic nature of art, highlight his conceptions of imagination, the pathetic, and the sublime, reveal his attitude to the ancient-modern controversy and the genre theories of his time, and characterize his contribution to the advancement of the psychological and historical methods of criticism. Finally, relying on the provided data, we will focus on evaluating J. Warton's role in forming the theoretical basis of Pre-Romanticism in English literature.

Methodological framework of the study

The methodology of the study is based on previous research into Joseph Warton's critical heritage and, more essentially, on the reconsidered conceptions of eighteenth-century English literary evolution, which is seen nowadays not as a straightforward movement from Neo-Classicism to Pre-Romanticism, but as a complex phenomenon distinguished by the heterogeneity of its aesthetic basis and the interpenetration of the leading literary trends. This approach was anticipated by the works of N. Frye, B. Bronson, and R. Wellek [Frye 1956: 144–152; Bronson 1968: 3–4; Wellek 1981, 1st ed. 1955: 30], and in Russian literary studies it was developed by O. Y. Polyakov [Polyakov 2003: 7–10]. The number of works devoted to the Adventurer essays on literature is rather scarce. In the late 19th and early 20th c., G. Saintsbury [Saintsbury 1904] and H. Beers [Beers 1926] raised the problem of "romantic" tendencies in J. Warton's literary criticism. H.
Trowbridge studied the genesis of Warton's aesthetic theory and his conception of imagination on the material of his essays more profoundly [Trowbridge 1937]; nevertheless, he did not consider the category of the pathetic in the critic's works published in the Adventurer. In the 1930–1950s, several generalizing works on the history of criticism appeared, in which Warton's essays on Shakespeare were assessed. R. Wellek, in particular, marked their importance as one of the first specimens of a new kind of criticism, "probably, psychological" [Wellek 1981: 117]. A. Bosker, who appreciated the critic as a "defender of taste", relied on the Essay on Pope, leaving Warton's periodical essays without attention [Bosker 1953]. J. Atkins, who represented the development of 18th-c. English literary criticism as a steady movement towards romantic ideas, declared Warton one of the first apologists of original art, ignoring the neo-classical elements of his aesthetics [Atkins 1951]. Then followed a break in the study of J. Warton's critical heritage, which ended in the 1970s, when J. Pittock's book The Ascendancy of Taste was published; this work considers mainly the aesthetic context of Warton's criticism [Pittock 1973]. Later, J. Vance gave a brief survey of the Adventurer essays on literature and emphasized that, in spite of undervaluing Homer's Iliad and English Restoration comedy, Warton judged literary pieces objectively and contributed much to eighteenth-century Shakespearean and Miltonian criticism [Vance 1983]. In the 1990–2010s, J. Warton's works attracted the attention of scholars only occasionally: mostly, his Essay on the Genius and Writings of Pope was referred to in surveys of the history of English neo-classical criticism [Nisbet, Rawson 2005] or in studies of particular aspects of the 18th-c. English literary process, such as the classical reception in the national literature of the period [Hopkins, Martindale 2012] and the formation of the national literary canon [Kramnik 1997]. In Russia, J. Warton's criticism is predominantly viewed as an aesthetic source of Pre-Romanticism [Solov'eva 2005: 34–35; Lukov 2006: 160–161]; his periodical essays were once considered in the context of the transformations of genre criticism in mid-18th-c. England [Polyakov 2003: 129–155]. Undoubtedly, putting forward the issue of the sources of pre-romantic aesthetics in J. Warton's literary criticism may sound disputable, as it tends to ignore a long and fruitful tradition in English literary theory which helped to promote new aesthetic values (original imagination, the sublime, etc.). Warton's conceptions were anticipated by T. Hobbes, J. Locke, J. Addison, D. Hume and M. Akenside, whose works were to become true sources of pre-romantic theory. Nevertheless, the most active shaping of new critical approaches occurred in the mid-eighteenth century, and from this point of view J. Warton's works, especially his periodical publications, are of considerable interest.

Results and discussion

Joseph Warton was the author of the greater part of the critical essays published in the Adventurer, to which he started to contribute his papers after joining the famous Samuel Johnson's Club. The journal identified itself as a moral periodical, so Warton's essays are predominantly didactic, although
276 though their ethical bias often gives way to aesthetic functions of literature, its subjective reception by the readers and psychological mechanisms of didactic effects. Like S. Johnson, J. Warton was conscious of the succession of his periodical to J. Addison’s Spectator, the archetypal model of didactic journalism, which led him to comparing his aesthetic views with those of the prominent Augustan. The Spectator’s critical essays encouraged J. Warton to reflect on artistic strengths of J. Milton’s Paradise Lost, emotional aspects of tragedy, the ancientmodern controversy and the functions of criticism. Warton’s reception of Addison’s criticism is often polemical. Summing up his publicist activities, he wrote in Adventurer 139 (1754) that criticism should perform social functions by correcting tastes of those who prefer “the tinsel of a Burletta” to “the gold of Shakespeare” [The British Essayists 25: 303].To achieve it, it must regain its high academic status which was lost when Addison declared his aim to bring “philosophy out of closets and libraries, schools and colleges, to dwell in clubs and assemblies, at tea-tables, and in coffeehouses” [The Spectator: 46]. Contemporary criticism, hasty and superficial, needs sophistication, so Warton demands that “literary subjects should be again introduced among the polite and gay”, who would articulate their ideas “without laboring too much to disguise them like common prattle”; criticism “should be weeded of folly and impertinence, of commonplace rhetoric, jingling phrases” [The British Essayists 25: 303]. This urge for rationalization, sophistication of critical discourse was not new (it was one of the aims of S. Johnson’s periodical activity) and it was an important aspect of the self-reflection of criticism which recognized its significant sociocultural mission. Men of letters were conscious of the fact that the massification of the basic categories of criticism, which was a result of its cooperation with periodicals, resulted in the bloom of pedantry mocked in the collective images of pseudocritics, such as Dick Minim and Timothy Tittle. It is quite understandable, then, that J. Warton turned to the most complicated aesthetic problems and issues of critical methodology. In particular, in Adventurer 49 (1753) he considered the works of Rapin, Le Bossu, Brumoy and Fenelon that had come to fashion among his contemporaries. Their treatises “administer great consolation to the indolent and incurious, to those who can tamely rest satisfied with second-hand language” and are ready to speak about the virtues of Greek and Roman classical works without reading the originals [The Adventurer 2: 107]. He demands that critics should scrupulously analyse texts, comprehend their “spirit and scale” and reveal authors’ individual manners. Thus, it is obvious that he tends to a break with neo-classical critical techniques by making a shift from the general to the particular, from poetics and authoritative interpretations to the text per se and the personality of its creator. Besides, criticism of NeoClassicism from the positions of the classics was a major liberating factor of the development of mid-eighteenthcentury English literary theory. Turning to ancient literary heritage, not mediated by French interpretations, was characteristic of English criticism in 18 c. (Ch. Gildon, J. Addison, S. Johnson), thus confirming a comparatively autonomous development of the national literary thought. J. 
Warton’s concern with classical literature influenced his position in the ancientmodern controversy. He was convinced that ancient writers had surpassed new authors in epic poetry, yet he praised J. Milton as the author of Paradise Lost for “the sublime conceptions he has copied from the Book of God” and revealed convincingly the personages’ psychology [The British Essayists 25: 226]. Warton regards that it is not the static scenes of Eden or episodes portraying celestial battles that should be praised most, but the depiction of Adam’s and Eve’s lamentations on being expelled from Eden, or Satan’s speech at the beginning of Book IX, in which “his inextinguishable pride and fierce indignation against God, and his envy towards man are so blended with an involuntary approbation of goodness, and disdain of the meanness and baseness of his present undertaking” that one can consider it “the most natural, most spirited, and truly dramatic speech, that is, perhaps, to be found in any writer whether ancient or modern” [The Adventurer 3: 266]. This remark is evident of Warton’s subtle critical vision and his ability to perceive the complexity of the epic characters. Like S. Johnson, he gives priority to the subjective response of critics who must “judge from their own sensations” and not to be “content to echo the decision of others” [The Adventurer 3: 265]. In the genre of tragedy the critic merits Shakespeare, Racine and Corneille who can compete Polyakov O. Y. “Pathetic” Literary Criticism in the Essays by Joseph Warton: a Compromise... 277 with Aeschylus, Sophocles and Euripides, and in the field of comedy he declares the superiority of Moliere over all ancient masters. The French playwright did not limit himself by portraying ordinary personages, he studied “the numberless varieties of human nature” [The British Essayists 25: 262], noticed their subtle distinctions and depicted them with an outstanding artistic talent, in particular, in the characters of Tartuffe, Alcestis and Garpagone. The critic states that Moliere’s plays represent the true nature of the genre which he limits by the comedy of character, noting that its main traits are originality and individuality of the character type. In this sense, plays written by Restoration comedians, especially those of W. Congreve, in which the protagonists go back to the trivial type of a libertine, are inferior to Moliere’s comedies. Besides, their dramatic works are permeated with “false satire, ribaldry, obscenity, and blasphemy”; murderers, gamesters, knaves and spendthrifts are depicted in them with sympathy, “but a faithful husband is a dupe and cuckold, and a plain country gentleman a novice and a fool” [The Adventurer 3: 84]. Moral tendencies in J. Warton’s criticism, his support of decorum and sophisticated style that witness his reception of neo-classical standards, are also evident in his remarks about satirical genres. He thinks that Boileau’s and Pope’s satires surpass those of ancient authors, Horace and Juvenal, as their poems are more exquisite and their ridicule is less straightforward. Warton claims that one of the achievements of the “new” masters of satire, not known in the ancient times, was the creation and development of heroic comical poem. N. Boileau, A. Pope and S. Garth, having travestied the high epic kind, provided their works with “dignity and gracefulness” [The British Essayists 25: 264]. 
The superiority of the new in the satirical and comical genres is explained in the Adventurer by sociopolitical reasons, by the fact that European monarchies used to cultivate secular communication which made private and public vices more evident to become an object of ridicule. It is important that Warton drew literary analysis beyond the limits of poetics by focusing on social determination of literary facts. Later, in his Essay on the Genius and Writings of Pope, he declared authoritatively that it is impossible to judge correctly about literature of the past without taking into consideration the “climate, country and age” that begot it. Warton’s historical thinking led him to the conclusion that ancient culture could not be restored and a blind imitation of the masterpieces of Antiquity would be fruitless. This motivated him to join the discussion of original and imitative art in which such prominent men of letters as S. Johnson and R. Hurd took part. Proper imitation, according to him, presupposes not borrowing the style of the ancient, not using their epithets or expressions, but “catching a portion of their spirit, and adapting their images and ways of thinking to new subjects” [The British Essayists, vol. 24: 300]. Specimens of such ideal imitations can be found in Racine’s (Phaedra, Iphigenia) and Milton’s (Paradise Lost) works. Warton’s interest in Racine is quite remarkable, for he considered the ability to portray characters, appealing to the spectators’ sympathy, a major virtue of an author. Sensibility and the pathetic are the notions so often referred to in the Adventurer essays that one can conclude about the influence of sentimentalism on J. Warton. The critic considered the pathetic in a close connection with the sublime, the latter being a matter of concern of many thinkers who turned to PseudoLonginus. S. Monk notes that Warton’s aesthetic views, as well as those of E. Young and R. Hurd, took shape in the process of revision of NeoClassicism from the point of view of originality and imagination, the categories praised by the ancient critic [Monk 1960: 63]. Their immediate predecessors were D. Hume, M. Akenside, J. Bailey and R. Lowth. R. Hume in A Treatise of Human Nature (1739) considered the sublime from the point of view of its emotional impact and reflected on the functions of spacious properties of the objects influencing imagination. M. Akenside (The Pleasures of Imagination, 1744), following J. Addison, emphasized the significance of largescale natural phenomena for evoking sublime feelings. J. Bailey (An Essay on the Sublime, 1747) deepened the tendency for liberating the sublime from rhetorical interpretations and separated this aesthetic category from the pathetic. Like T. Burnett, J. Dennis and J. Addison, he thought that observations of the impressive natural events lead one to the idea of the Creator’s greatness. Growth of the interest to the sublime (encouraged partly by the critical revision of Milton’s heПоляков О. Ю. «Патетическая» критика в эссеистике Джозефа Уортона: между августианством... 278 ritage) was connected with repeated attempts to comprehend the Bible from the point of view of PseudoLonginus’s theory. The Holy Scripture was considered a specimen of high eloquence since the Middle Ages (St. Augustine). In the eighteenth century, J. Dennis (The Grounds of Criticism in Poetry, 1704) and J. Addison (Spectator essays on Paradise Lost, 1712) highlighted the role of the biblical imagery as a source of the sublime in Milton’s poem. 
Ideas of Christianity, according to Dennis, have all the properties of PseudoLonginus’s sublime (“tender response of the soul, power and duration of impression”) [Dennis 1704: 73–89]. T. Blackwell in his Sacred Classics (1725) approached the Bible from the positions of PseudoLonginus’s sublime in N. Boileau’s interpretation. He viewed it as a just, majestic and marvelous idea that does not need ornamentation: the Christian ideas as such are able to cause admiration [Monk 1960: 78]. Warton was directly influenced by R. Lowth’s views expressed in the book The Sacred Poetry of the Hebrews (1753). Like Bailey, Lowth distinguished the sublime from the pathetic, but he also saw their immediate connection and shifted attention from the object of perception to the aesthetic subject. The author of The Sacred Poetry of the Hebrews found the examples of the sublime in the Bible which he approached historically. He insisted that critics should consider literature of the past, taking into consideration social and natural circumstances of its development and individual manners of authors. In particular, Lowth explained the great expressiveness of biblical metaphors and similes by their organic connection with Palestinian scenery and the folk ways of life. As we have already seen, J. Warton also recognized the influence of extraliterary factors on writers’ works, but in his publicist practice he employed the idea of determinism not often. Like Lowth, he called the Bible one of the most sublime masterpieces which surpasses the most prominent works of ancient Greek literature, and emphasized, first of all, the perfection of its language. He devoted to it two Adventurer essays (Nos. 51 and 57, 1753) presented as a PseudoLonginus’s manuscript found in the library of Benedictine monks at Lyons. This mystification was motivated by the fact that PseudoLonginus quoted Five Books of Moses as a specimen of elevated ideas. In the first essay J. Warton focuses on the pathetic which he equals with the moving and whose examples he finds in the Books of Moses. In particular, he notes that the story of Joseph and his brothers is written “with so many little strokes of nature and passion, with such penetrating knowledge of human heart, with such various and unexpected changes of fortune [...], as cannot be read without astonishment and tears”, Aristotle himself would have preferred it to the story of Oedipus [The Adventurer 2: 126]. Drawing parallels between biblical materials and dramatic experience and poetics, Warton, probably, attempted to confirm the dignity of the sacred texts as facts of literature and, besides, like R. Lowth, he revealed his addiction to conventions of critical analysis (in The Sacred Poetry of the Hebrews, Lowth tried to distribute biblical texts between the departments of the traditional genre system). On the other hand, he made an accent on psychologism, on the dramatic devices that his contemporaries could borrow from the evangelists and ancient tragedians. In particular, he singled out portraying silence which can be “more affecting, and more strongly expressive of passion, than the most artful speeches” [The Adventurer 2: 127] (we see here the influence of PseudoLonginus’ idea that a great utterance is an echo of the soul’s greatness and not a result of linguistic sophistication). Warton noted that the silences of Aeschylus’s Niobe, Sophocles’ Deianira and Job’s friends are the most expressive. J. 
Warton disproved of those French neo-classical tragedies and English heroic plays in which the depiction of genuine, sincere feelings was substituted for by rhetorical devices. He saw the sources of the pathetic / the moving not in the abstract, but in the concrete, that which involves emotionally loaded and picturesque details appealing to the audience. Warton’s thesis about the rhetorical efficiency of description has its origins in Quintilian’s Institutes of Oratory in which he emphasized the concrete and detailed character of the utterance as a condition of the orator’s expressiveness. In 18 c., as R. Wellek justly noted, Quintilian’s theory was actualized due to the achievements of empirical philosophy with its special accent on sensual perception [Wellek 1981: 113]. Besides, in English literary criticism there existed a long tradition of appealing to PseudoLonginus who wrote in his treatise On the Sublime that a poet, creating visible images, evokes the “illusion of presence” in the readers [O vozvyshennom: 20]. J. Dryden, J. Dennis, L. Welsted, J. Addison, J. Hughes, A. Pope Polyakov O. Y. “Pathetic” Literary Criticism in the Essays by Joseph Warton: a Compromise... 279 and R. Hurd regarded this ability as a mark of a genius. Analyzing the Old Testament from the point of view of detalization of style, Warton relied on the thesis of Quintilian’s Institutes of Oratory. This is evident from the choice of quotations, a considerable part of which is concerned with the destruction of biblical cities. The critic admires “tender and affecting strokes”, describing the devastation of Babylon and Tyre, desolation and famine. Evangelists selected “such adjuncts and circumstances upon each subject, as are best calculated to strike the imagination and embellish their descriptions” [The Adventurer 2: 128–129]. Warton also makes an accent on the characters’ visions as a source of the sublime. He pays special attention to personification and simile as efficient devices of creating vivid images (Adventurer 57, 1753). The critic’s concern with using tropes as a vehicle of the sublime is suggestive of his following the traditional rhetorical understanding of this aesthetic category. At the same time, Warton states: “It is the peculiar privilege of poetry, not only to place material objects in the most amiable attitudes, and to clothe them in the most graceful dress, but also to give life and motion to immaterial beings; and form, and color, and action, even to abstract ideas, to embody the virtues, the vices, and the passions; and to bring before our eyes, or on a stage, every faculty of the human mind” [The Adventurer 2: 173–174]. The dynamic character of this definition of the functions of poetry reveals the author’s dissatisfaction with neo-classical statics of aestheticized descriptions. J. Pittock justly supposes that Warton’s words contain a key to overcoming the typical in poetic representation: a writer’s pretenсe of originality would be groundless if he does not depict the changeability of human emotional states, the complexity of man’s nature, by using nontrivial metaphors and comparisons among other devices [Pittock 1973: 138]. M. Abrams, commenting on Warton’s definition, concludes: “Thus by the mid-century, what had been a purely rhetorical figure had become an act of creation [..] having its analogue in God’s peopling of this world of which, naturally, the effect on the reader is a sublime astonishment and enlargement of soul. 
As a result, poetic personification, together with that fairy way of writing, was elevated to the highest achievement of poetic imagination” [Abrams 1981: 289]. Alas, the scholar does not take into consideration the contexts of Adventurer’s criticism which make it evident that Warton did not break with the mimetic doctrine of art and with the traditional neo-classical conceptions of imagination as a faculty of visualization of images. This is convincingly confirmed by Adventurer essay No. 63 (1753), devoted to borrowings in A. Pope’s works. The beginning of the essay seems to paraphrase Rambler 121: following S. Johnson, Warton complains that the number of original authors is rather small and the majority prefer to “creep tamely and cautiously in the track of their predecessors” [The Adventurer 2: 227]. On the other hand, he shares R. Hurd’s thesis, articulated in his Discourse on Poetical Imitation (1751), that nature as an object of imitation is always uniform and unchangeable, so there will always be certain similarity in writers’ works (this idea was also supported by S. Johnson in Rambler 125 and 136). Warton writes: “The objects material or animate, extraneous or internal, which they [writers – O. P.] all imitate, lie equally open to the observation of all, and are perfectly similar, [so] the first copier must be, perhaps, entitled to the praise of priority; but a succeeding one ought not certainly to be condemned for plagiarism” [The Adventurer 2: 228]. H. Trowbridge emphasizes that though Warton, like Hurd, reduces imitation to description, he provides its broadened interpretation which includes reflection, contemplation, comprehension of “internal essences”, the world of human feelings [Trowbridge 1937: 77]. Generally, Warton follows neo-classical conceptions of mimesis, understanding it as imitation of the eternal and unchangeable in nature: in spite of numerous achievements in the field of science and art, evolution of material and spiritual conditions of human existence, a contemporary epic or dramatic writer “would find it difficult or impossible to be totally original, and essentially different from Homer and Sophocles. The causes that excite and the operations that exemplify the greater passions, will always have an exact coincidence, though perhaps a little diversified by climate or customs: every exasperated hero must rage like Achilles, and every afflicted widow mourn like Andromache; an abandoned Armida will make use of Dido’s execrations; and a Jew will nearly resemble a Grecian, when almost placed in the same situation; i. e. the Ioas of Racine in his incompaПоляков О. Ю. «Патетическая» критика в эссеистике Джозефа Уортона: между августианством... 280 rable “Athalia”, will be very like Ion of Euripides” [The Adventurer 2: 228–229]. To prove this thesis, Warton appeals to the authority of N. Boileau and A. Pope, who expressed similar opinions in The Art of Poetry and An Essay on Criticism, and thus he leaves no doubts about his commitment to NeoClassicism. On the other hand, when the critic refers to Boileau [The Adventurer 2: 230], who stated that the freshest and the most unusual ideas are not those which were never uttered, but those which come to anyone’s mind in similar situations, probably, he means not only the universal, but also the concrete, thus showing his involvement in the search in the 1740–1750s English poetry which, as N. A. Solovyova notes, was aimed at making the usual, common poetically significant and original [Solov’eva 1988:30]. 
Introduction
Joseph Warton (1722–1800) was an outstanding man of letters who, after studying at Oxford, took up a spiritual, pedagogical and literary career as a church rector, a schoolmaster and a poet. His most prominent achievement, however, was in the field of literary criticism: he is considered one of the precursors of Romanticism, a defender of genius, enthusiasm and poetic "fire". As a periodical critic of the 18th century, Joseph Warton deserves to be honoured together with J. Addison and R. Steele, S. Johnson and O. Goldsmith, although he is mostly known and celebrated as the author of the fundamental Essay on the Genius and Writings of Pope (1756–1782). His essays, published in the Adventurer (1752–1754), are outstandingly representative of the aesthetic search of his epoch, concerned with the problems of original art and the sources of literary imitation, the categories of genius, the sublime, and the pathetic.
Joseph Warton tended to analyse particular literary texts, relying mainly on his own emotive response and general psychological attitudes rather than on normative criticism, and his critical method, descriptive and based on an inductive empirical approach, anticipated significant changes in reviewing associated with overcoming the neo-classical taste.
A comprehensive study of J. Warton's essays remains a pressing task, as it makes it possible not only to present his criticism in the wider context of the aesthetic ideas of his time, but also to reveal the borderline elements in his literary theory that are representative of the compromise character of English Neo-Classicism as a whole, seen in the interrelation of neo-classical, sentimental and pre-romantic poetics within it.
Considering J. Warton's periodical criticism an insufficiently studied issue, we aim to outline the aesthetic foundations of his reviewing by analysing the Adventurer essays in the context of the opinions of the critic's predecessors and contemporaries. We will consider the specificity of J. Warton's views on the mimetic nature of art, highlight his conceptions of imagination, the pathetic and the sublime, reveal his attitude to the ancient-modern controversy and to the genre theories of his time, and characterize his contribution to the advancement of the psychological and historical methods of criticism. Finally, relying on the data provided, we will evaluate J. Warton's role in forming the theoretical basis of Pre-Romanticism in English literature.
Methodological framework of the study
The methodology of the study is based on previous research into Joseph Warton's critical heritage and, more essentially, on the reconsidered conceptions of eighteenth-century English literary evolution, which is seen nowadays not as a straightforward movement from Neo-Classicism to Pre-Romanticism, but as a complex phenomenon distinguished by the heterogeneity of its aesthetic basis and the interpenetration of the leading literary trends. This approach was anticipated by the works of N. Frye, B. Bronson, and R. Wellek [Frye 1956: 144–152; Bronson 1968: 3–4; Wellek 1981, 1st ed. 1955], and in Russian literary studies it was developed by O. Y. Polyakov [Polyakov 2003: 7–10].
The number of works devoted to the Adventurer essays on literature is rather small. In the late 19th and early 20th c., G. Saintsbury [Saintsbury 1904] and H. Beers [Beers 1926] raised the problem of "romantic" tendencies in J. Warton's literary criticism. H. Trowbridge studied the genesis of Warton's aesthetic theory and his conception of imagination on the material of his essays more profoundly [Trowbridge 1937]. Nevertheless, he did not consider the category of the pathetic in the critic's works published in the Adventurer.
In the 1930–1950s, several general works on the history of criticism appeared in which Warton's essays on Shakespeare were assessed. R. Wellek, in particular, noted their importance as one of the first specimens of a new kind of criticism, "probably, psychological" [Wellek 1981: 117]. A. Bosker, who appreciated the critic as a "defender of taste", relied on the Essay on Pope, leaving Warton's periodical essays without attention [Bosker 1953]. J. Atkins, who represented the development of 18 c. English literary criticism as a steady movement towards romantic ideas, declared Warton one of the first apologists of original art, ignoring the neo-classical elements of his aesthetics [Atkins 1951].
Then followed a break in the study of J. Warton's critical heritage, which ended in the 1970s, when J. Pittock's book The Ascendancy of Taste was published. This work considers mainly the aesthetic context of Warton's criticism [Pittock 1973]. Later, J. Vance gave a brief survey of the Adventurer essays on literature and emphasized that, in spite of undervaluing Homer's Iliad and English Restoration comedy, Warton judged literary pieces objectively and contributed much to eighteenth-century Shakespearean and Miltonian criticism [Vance 1983].
In the 1990–2010s, J. Warton's works attracted the attention of scholars only occasionally: mostly, his Essay on the Genius and Writings of Pope was referred to in surveys of the history of English neo-classical criticism [Nisbet, Rawson 2005] or in studies of particular aspects of the 18 c. English literary process, such as the classical reception in the national literature of the period [Hopkins, Martindale 2012] and the formation of the national literary canon [Kramnick 1997]. In Russia, J. Warton's criticism is predominantly viewed as an aesthetic source of Pre-Romanticism [Solov'eva 2005: 34–35; Lukov 2006: 160–161]. His periodical essays were once considered in the context of the transformations of genre criticism in mid-18 c. England [Polyakov 2003: 129–155].
Admittedly, tracing the sources of pre-romantic aesthetics to J. Warton's literary criticism may sound disputable, as it tends to ignore a long and fruitful tradition in English literary theory which helped to promote the new aesthetic values (original imagination, the sublime, etc.). Warton's conceptions were anticipated by T. Hobbes, J. Locke, J. Addison, D. Hume and M. Akenside, whose works were to become the true sources of pre-romantic theory. Nevertheless, the most active shaping of the new critical approaches occurred in the mid-eighteenth century, and from this point of view J. Warton's works, especially his periodical publications, are of considerable interest.
Results and discussion
Joseph Warton was the author of the greater part of the critical essays published in the Adventurer, to which he started to contribute after joining Samuel Johnson's famous club. The journal identified itself as a moral periodical, so Warton's essays are predominantly didactic, although their ethical bias often gives way to the aesthetic functions of literature, its subjective reception by readers and the psychological mechanisms of didactic effects.
Like S. Johnson, J. Warton was conscious that his periodical was the successor of J. Addison's Spectator, the archetypal model of didactic journalism, which led him to compare his aesthetic views with those of the prominent Augustan. The Spectator's critical essays encouraged J. Warton to reflect on the artistic strengths of J. Milton's Paradise Lost, the emotional aspects of tragedy, the ancient-modern controversy and the functions of criticism.
Warton's reception of Addison's criticism is often polemical. Summing up his publicist activities, he wrote in Adventurer 139 (1754) that criticism should perform social functions by correcting the tastes of those who prefer "the tinsel of a Burletta" to "the gold of Shakespeare" [The British Essayists 25: 303]. To achieve this, it must regain the high academic status which was lost when Addison declared his aim to bring "philosophy out of closets and libraries, schools and colleges, to dwell in clubs and assemblies, at tea-tables, and in coffeehouses" [The Spectator: 46]. Contemporary criticism, hasty and superficial, needs sophistication, so Warton demands that "literary subjects should be again introduced among the polite and gay", who would articulate their ideas "without laboring too much to disguise them like common prattle"; criticism "should be weeded of folly and impertinence, of common-place rhetoric, jingling phrases" [The British Essayists 25: 303]. This urge for the rationalization and sophistication of critical discourse was not new (it was one of the aims of S. Johnson's periodical activity), and it was an important aspect of the self-reflection of a criticism which recognized its significant socio-cultural mission. Men of letters were conscious that the massification of the basic categories of criticism, a result of its cooperation with periodicals, led to a bloom of pedantry mocked in the collective images of pseudo-critics such as Dick Minim and Timothy Tittle.
It is quite understandable, then, that J. Warton turned to the most complicated aesthetic problems and issues of critical methodology. In particular, in Adventurer 49 (1753) he considered the works of Rapin, Le Bossu, Brumoy and Fenelon that had come into fashion among his contemporaries. Their treatises "administer great consolation to the indolent and incurious, to those who can tamely rest satisfied with second-hand language" and are ready to speak about the virtues of Greek and Roman classical works without reading the originals [The Adventurer 2: 107]. He demands that critics scrupulously analyse texts, comprehend their "spirit and scale" and reveal authors' individual manners. Thus, it is obvious that he tends towards a break with neo-classical critical techniques by making a shift from the general to the particular, from poetics and authoritative interpretations to the text per se and the personality of its creator. Besides, the criticism of Neo-Classicism from the position of the classics themselves was a major liberating factor in the development of mid-eighteenth-century English literary theory. Turning to the ancient literary heritage unmediated by French interpretations was characteristic of English criticism in the 18 c. (Ch. Gildon, J. Addison, S. Johnson), thus confirming a comparatively autonomous development of the national literary thought.
J. Warton's concern with classical literature influenced his position in the ancient-modern controversy. He was convinced that ancient writers had surpassed modern authors in epic poetry, yet he praised J. Milton as the author of Paradise Lost for "the sublime conceptions he has copied from the Book of God" and for convincingly revealing his personages' psychology [The British Essayists 25: 226].
Warton holds that it is not the static scenes of Eden or the episodes portraying celestial battles that should be praised most, but the depiction of Adam's and Eve's lamentations on being expelled from Eden, or Satan's speech at the beginning of Book IX, in which "his inextinguishable pride and fierce indignation against God, and his envy towards man are so blended with an involuntary approbation of goodness, and disdain of the meanness and baseness of his present undertaking" that one can consider it "the most natural, most spirited, and truly dramatic speech, that is, perhaps, to be found in any writer whether ancient or modern" [The Adventurer 3: 266]. This remark is evidence of Warton's subtle critical vision and his ability to perceive the complexity of the epic characters. Like S. Johnson, he gives priority to the subjective response of critics, who must "judge from their own sensations" and not be "content to echo the decision of others" [The Adventurer 3: 265].
In the genre of tragedy the critic singles out Shakespeare, Racine and Corneille, who can compete with Aeschylus, Sophocles and Euripides, and in the field of comedy he declares the superiority of Moliere over all the ancient masters. The French playwright did not limit himself to portraying ordinary personages: he studied "the numberless varieties of human nature" [The British Essayists 25: 262], noticed their subtle distinctions and depicted them with outstanding artistic talent, in particular in the characters of Tartuffe, Alceste and Harpagon. The critic states that Moliere's plays represent the true nature of the genre, which he restricts to the comedy of character, noting that its main traits are the originality and individuality of the character type.
In this sense, the plays of the Restoration comedians, especially those of W. Congreve, in which the protagonists fall back on the trivial type of the libertine, are inferior to Moliere's comedies. Besides, their dramatic works are permeated with "false satire, ribaldry, obscenity, and blasphemy"; murderers, gamesters, knaves and spendthrifts are depicted in them with sympathy, "but a faithful husband is a dupe and cuckold, and a plain country gentleman a novice and a fool" [The Adventurer 3: 84].
The moral tendencies in J. Warton's criticism, his support of decorum and of a sophisticated style, which witness his reception of neo-classical standards, are also evident in his remarks about the satirical genres. He thinks that Boileau's and Pope's satires surpass those of the ancient authors Horace and Juvenal, as their poems are more exquisite and their ridicule less straightforward. Warton claims that one of the achievements of the "new" masters of satire, unknown in ancient times, was the creation and development of the mock-heroic poem. N. Boileau, A. Pope and S. Garth, having travestied the high epic kind, provided their works with "dignity and gracefulness" [The British Essayists 25: 264].
The superiority of the moderns in the satirical and comical genres is explained in the Adventurer by socio-political reasons: the European monarchies cultivated polite sociability, which made private and public vices more visible and thus an easier object of ridicule. It is important that Warton took literary analysis beyond the limits of poetics by focusing on the social determination of literary facts. Later, in his Essay on the Genius and Writings of Pope, he declared authoritatively that it is impossible to judge correctly the literature of the past without taking into consideration the "climate, country and age" that begot it.
Warton's historical thinking led him to the conclusion that ancient culture could not be restored and that a blind imitation of the masterpieces of Antiquity would be fruitless. This motivated him to join the discussion of original and imitative art in which such prominent men of letters as S. Johnson and R. Hurd took part. Proper imitation, according to him, presupposes not borrowing the style of the ancients, nor using their epithets or expressions, but "catching a portion of their spirit, and adapting their images and ways of thinking to new subjects" [The British Essayists, vol. 24: 300]. Specimens of such ideal imitation can be found in Racine's (Phaedra, Iphigenia) and Milton's (Paradise Lost) works.
Warton's interest in Racine is quite remarkable, for he considered the ability to portray characters appealing to the spectators' sympathy a major virtue of an author. Sensibility and the pathetic are notions so often referred to in the Adventurer essays that one may infer the influence of sentimentalism on J. Warton. The critic considered the pathetic in close connection with the sublime, the latter being a matter of concern for many thinkers who turned to Pseudo-Longinus. S. Monk notes that Warton's aesthetic views, as well as those of E. Young and R. Hurd, took shape in the process of the revision of Neo-Classicism from the point of view of originality and imagination, the categories praised by the ancient critic [Monk 1960: 63]. Their immediate predecessors were D. Hume, M. Akenside, J. Baillie and R. Lowth. D. Hume in A Treatise of Human Nature (1739) considered the sublime from the point of view of its emotional impact and reflected on the role of the spatial properties of objects in influencing the imagination. M. Akenside (The Pleasures of Imagination, 1744), following J. Addison, emphasized the significance of large-scale natural phenomena in evoking sublime feelings. J. Baillie (An Essay on the Sublime, 1747) deepened the tendency to liberate the sublime from rhetorical interpretations and separated this aesthetic category from the pathetic. Like T. Burnett, J. Dennis and J. Addison, he thought that the observation of impressive natural events leads one to the idea of the Creator's greatness. The growth of interest in the sublime (encouraged partly by the critical revision of Milton's heritage) was connected with repeated attempts to comprehend the Bible from the point of view of Pseudo-Longinus's theory. The Holy Scripture had been considered a specimen of high eloquence since the Middle Ages (St. Augustine). In the eighteenth century, J. Dennis (The Grounds of Criticism in Poetry, 1704) and J. Addison (the Spectator essays on Paradise Lost, 1712) highlighted the role of biblical imagery as a source of the sublime in Milton's poem. The ideas of Christianity, according to Dennis, have all the properties of Pseudo-Longinus's sublime ("tender response of the soul, power and duration of impression") [Dennis 1704: 73–89]. T. Blackwell in his Sacred Classics (1725) approached the Bible from the positions of Pseudo-Longinus's sublime in N. Boileau's interpretation. He viewed the sublime as a just, majestic and marvelous idea that needs no ornamentation: Christian ideas as such are able to cause admiration [Monk 1960: 78].
Warton was directly influenced by R. Lowth's views, expressed in The Sacred Poetry of the Hebrews (1753). Like Baillie, Lowth distinguished the sublime from the pathetic, but he also saw their immediate connection and shifted attention from the object of perception to the aesthetic subject. The author of The Sacred Poetry of the Hebrews found examples of the sublime in the Bible, which he approached historically. He insisted that critics should consider the literature of the past taking into account the social and natural circumstances of its development and the individual manners of its authors. In particular, Lowth explained the great expressiveness of biblical metaphors and similes by their organic connection with the Palestinian scenery and folk ways of life.
As we have already seen, J. Warton also recognized the influence of extra-literary factors on writers' works, but in his publicist practice he employed the idea of determinism only occasionally. Like Lowth, he called the Bible one of the most sublime masterpieces, surpassing the most prominent works of ancient Greek literature, and emphasized, first of all, the perfection of its language. He devoted to it two Adventurer essays (Nos. 51 and 57, 1753), presented as a Pseudo-Longinus manuscript found in the library of the Benedictine monks at Lyons. This mystification was motivated by the fact that Pseudo-Longinus had quoted the Books of Moses as a specimen of elevated ideas.
In the first essay J. Warton focuses on the pathetic, which he equates with the moving and whose examples he finds in the Books of Moses. In particular, he notes that the story of Joseph and his brothers is written "with so many little strokes of nature and passion, with such penetrating knowledge of human heart, with such various and unexpected changes of fortune […], as cannot be read without astonishment and tears", and that Aristotle himself would have preferred it to the story of Oedipus [The Adventurer 2: 126]. Drawing parallels between the biblical material and dramatic practice and poetics, Warton probably attempted to confirm the dignity of the sacred texts as facts of literature; besides, like R. Lowth, he revealed his adherence to the conventions of critical analysis (in The Sacred Poetry of the Hebrews, Lowth tried to distribute the biblical texts among the departments of the traditional genre system). On the other hand, he placed an accent on psychologism, on the dramatic devices that his contemporaries could borrow from the evangelists and the ancient tragedians. In particular, he singled out the portrayal of silence, which can be "more affecting, and more strongly expressive of passion, than the most artful speeches" [The Adventurer 2: 127] (we see here the influence of Pseudo-Longinus' idea that a great utterance is an echo of the soul's greatness and not a result of linguistic sophistication). Warton noted that the silences of Aeschylus's Niobe, Sophocles' Deianira and Job's friends are the most expressive.
J. Warton disapproved of those French neo-classical tragedies and English heroic plays in which the depiction of genuine, sincere feelings was replaced by rhetorical devices. He saw the sources of the pathetic / the moving not in the abstract but in the concrete, in that which involves emotionally loaded and picturesque details appealing to the audience. Warton's thesis about the rhetorical efficiency of description has its origins in Quintilian's Institutes of Oratory, in which Quintilian emphasized the concrete and detailed character of the utterance as a condition of the orator's expressiveness. In the 18 c., as R. Wellek justly noted, Quintilian's theory was revitalized by the achievements of empirical philosophy, with its special accent on sense perception [Wellek 1981: 113]. Besides, in English literary criticism there existed a long tradition of appealing to Pseudo-Longinus, who wrote in his treatise On the Sublime that a poet, by creating visible images, evokes the "illusion of presence" in the readers [O vozvyshennom: 20]. J. Dryden, J. Dennis, L. Welsted, J. Addison, J. Hughes, A. Pope and R. Hurd regarded this ability as a mark of genius.
Analyzing the Old Testament from the point of view of the detailing of style, Warton relied on this thesis of Quintilian's Institutes of Oratory. This is evident from his choice of quotations, a considerable part of which is concerned with the destruction of the biblical cities. The critic admires the "tender and affecting strokes" describing the devastation of Babylon and Tyre, desolation and famine. The Evangelists selected "such adjuncts and circumstances upon each subject, as are best calculated to strike the imagination and embellish their descriptions" [The Adventurer 2: 128–129].
Warton also places an accent on the characters' visions as a source of the sublime. He pays special attention to personification and simile as efficient devices for creating vivid images (Adventurer 57, 1753).
The critic's concern with tropes as a vehicle of the sublime suggests that he followed the traditional rhetorical understanding of this aesthetic category. At the same time, Warton states: "It is the peculiar privilege of poetry, not only to place material objects in the most amiable attitudes, and to clothe them in the most graceful dress, but also to give life and motion to immaterial beings; and form, and color, and action, even to abstract ideas, to embody the virtues, the vices, and the passions; and to bring before our eyes, or on a stage, every faculty of the human mind" [The Adventurer 2: 173–174]. The dynamic character of this definition of the functions of poetry reveals the author's dissatisfaction with the neo-classical statics of aestheticized description. J. Pittock justly supposes that Warton's words contain a key to overcoming the typical in poetic representation: a writer's pretence to originality is groundless if he does not depict the changeability of human emotional states and the complexity of man's nature, using non-trivial metaphors and comparisons among other devices [Pittock 1973: 138].
M. Abrams, commenting on Warton's definition, concludes: "Thus by the mid-century, what had been a purely rhetorical figure had become an act of creation [...] having its analogue in God's peopling of this world of which, naturally, the effect on the reader is a sublime astonishment and enlargement of soul. As a result, poetic personification, together with that fairy way of writing, was elevated to the highest achievement of poetic imagination" [Abrams 1981: 289]. However, the scholar does not take into consideration the contexts of the Adventurer's criticism, which make it evident that Warton did not break with the mimetic doctrine of art or with the traditional neo-classical conception of imagination as a faculty of visualization of images. This is convincingly confirmed by Adventurer essay No. 63 (1753), devoted to borrowings in A. Pope's works. The beginning of the essay seems to paraphrase Rambler 121: following S. Johnson, Warton complains that the number of original authors is rather small and that the majority prefer to "creep tamely and cautiously in the track of their predecessors" [The Adventurer 2: 227]. On the other hand, he shares R. Hurd's thesis, articulated in his Discourse on Poetical Imitation (1751), that nature as an object of imitation is always uniform and unchangeable, so there will always be a certain similarity in writers' works (this idea was also supported by S. Johnson in Rambler 125 and 136). Warton writes: "The objects material or animate, extraneous or internal, which they [writers -O. P.] all imitate, lie equally open to the observation of all, and are perfectly similar, [so] the first copier must be, perhaps, entitled to the praise of priority; but a succeeding one ought not certainly to be condemned for plagiarism" [The Adventurer 2: 228]. H. Trowbridge emphasizes that though Warton, like Hurd, reduces imitation to description, he provides a broadened interpretation of it which includes reflection, contemplation and the comprehension of "internal essences", the world of human feelings [Trowbridge 1937: 77].
Generally, Warton follows neo-classical conceptions of mimesis, understanding it as imitation of the eternal and unchangeable in nature: in spite of numerous achievements in the field of science and art, and the evolution of the material and spiritual conditions of human existence, a contemporary epic or dramatic writer "would find it difficult or impossible to be totally original, and essentially different from Homer and Sophocles. The causes that excite and the operations that exemplify the greater passions, will always have an exact coincidence, though perhaps a little diversified by climate or customs: every exasperated hero must rage like Achilles, and every afflicted widow mourn like Andromache; an abandoned Armida will make use of Dido's execrations; and a Jew will nearly resemble a Grecian, when almost placed in the same situation; i.e. the Ioas of Racine in his incomparable "Athalia", will be very like Ion of Euripides" [The Adventurer 2: 228-229]. To prove this thesis, Warton appeals to the authority of N. Boileau and A. Pope, who expressed similar opinions in The Art of Poetry and An Essay on Criticism, and thus he leaves no doubt about his commitment to Neo-Classicism.
On the other hand, when the critic refers to Boileau [The Adventurer 2: 230], who stated that the freshest and most unusual ideas are not those which were never uttered, but those which come to anyone's mind in similar situations, he probably means not only the universal but also the concrete, thus showing his involvement in the search in English poetry of the 1740s-1750s which, as N. A. Solovyova notes, was aimed at making the usual and common poetically significant and original [Solov'eva 1988: 30].
To reduce Warton's aesthetic creed to neo-classical orthodoxy would be an unacceptable oversimplification, as his aesthetic theory is heterogeneous: its sources include the ideas not only of N. Boileau and A. Pope, R. Hurd and S. Johnson, but also those of J. Addison, which are of a quite contradictory nature (especially his theory of imagination, which greatly influenced the formation of romantic views in England). In Adventurer 80 (1753), the great, the unusual and the beautiful, declared by the Spectator to be sources of the pleasures of imagination, are presented as acknowledged and just criteria for judging works of art. Their currency was promoted by the actualization of Addison's ideas in M. Akenside's poem The Pleasures of Imagination, which was very popular in England and which can be seen as a poetic periphrasis of the Spectator's essays. This poem might have spurred Warton's interest in Addison's views.
Applying Addison's categories to Homer's works, J. Warton regards the Iliad as a sublime poem and the Odyssey as a beautiful and "unusual" one; the former "resembles the river Nile, when it descends in a cataract that deafens and astonishes" an observer, and the latter is like the Nile, too, "when its genial inundations gently diffuse fertility and fatness over the peaceful plains of Egypt" [The Adventurer 3: 89, 96].
Warton admires Homer's "boundless exuberance of imagination", his "unwearied spirit and fire" [The Adventurer 3: 90], and emphasizes the variety of events in his poems, the concreteness and detail of his descriptions, the vivid pictures of customs and ways of ancient life, the individualization of characters, dynamic plots and unexpected events. Alongside this, he opposes to the majestic and tremendous in art the pathetic, understood as the moving, which is "as strong an evidence of true genius as the sublime" [The Adventurer 3: 94]. He notes that Pseudo-Longinus in his treatise On the Sublime provided examples of the expression of this aesthetic category in descriptions of battles, elements, fantastic creatures and heroes' traits, whereas no less genius is needed to portray such simple and moving pictures as the parting of Andromache with Hector, "the tender circumstance of the child Astyanax starting back from his father's helmet and clinging to the bosom of his nurse", the description of an old man tenderly waiting for his son's return, not knowing that he was dead, the depiction of widows' suffering, etc.
Thus, we can single out several elements in the structure of artistic imagination as Warton saw it. Firstly, as noted above, he insisted that authors use bright, vivid, picturesque metaphors, seen by him as "one of the greatest efforts of the creative power of a warm and lively imagination" [The Adventurer 2: 174], and, consequently, he revealed his commitment to the traditional neo-classical understanding of imagination as a capacity for the visualization of images.
Secondly, as R. Wellek justly observed, in the 18th century this conception was gradually ousted by the equation of imagination with the associational activity of the mind, the ability of a writer to evoke sympathy and compassion [Wellek 1981: 111], and such an understanding of imagination, as we have already seen, was also shared by J. Warton. Therefore, relying on the traditional system of artistic methods, we can state that the neo-classical in his aesthetics is associated with the sentimental.
One more component of the category of imagination, as it is understood by J. Warton, goes back to J. Addison, who wrote in Spectator 419 about the "fairy way of writing" connected with the use of fantastic images in poetic works (fairies, witches, ghosts, etc.). Poetry, according to Addison, cannot limit itself to imitating the sensually perceived world; it must create its own worlds. Warton relied on this idea in developing his own conception of imagination in his essays devoted to Shakespeare's dramatic works.
The first advantage of the Elizabethan, praised in Adventurer 93 (1753), is his great fantasy, which especially distinguishes The Tempest, in which Shakespeare "has carried the romantic, the wonderful, and the wild, to the most pleasing extravagance" [The Adventurer 3: 196]. The irrational, based on folklore, serves an "expansion of imagination" and does not need any justification from the positions of the traditional mimetic doctrine. But there is a personage in Shakespeare's drama that cannot be found in folk tales. Caliban is "the creature of his own imagination, in the formation of which he could derive no assistance from observation or experience" [The Adventurer 3: 225]. Characterizing this personage, the critic uses highly emotional adjectives: "brutal barbarity, unfeeling savageness, horrible delight", "fierce and implacable spirit". "The poet is a more powerful magician than his own Prospero: we are transported into fairy land; we are wrapt in a delicious dream, […] all around is enchantment", writes Warton, for whom an author's ability to strike the reader's imagination is more important than following the neo-classical principle of probability [The Adventurer 3: 203]. In many respects, Warton is an innovator: he enriches critical discourse with new emotive lexis, he supports the subjectivization of criticism and he makes a special accent not on the formal traits of drama but on characters (in his Tempest essays, though, they are analysed with reliance on the traditional approach, which presupposes considering their consistency).
M. G. Abrams is disposed to associate the beginnings of the worship of Shakespeare with the essays on The Tempest [Abrams 1971: 275-276]. In any case, The Adventurer's Shakespearean essays are a certain result of the long development of English Shakespearean criticism started by J. Dryden, who respected the Elizabethan no less than J. Warton did.
Turning back to the sublime / the pathetic opposition, we can conclude that The Tempest belongs to the former category (although Warton finds in it many examples of "the moving" and "the natural", in particular in the character of Miranda), while King Lear belongs to the latter. Warton devoted several essays to King Lear, which we will consider briefly below.
What is crucially important in these essays is a subtle analysis of Shakespeare's psychologism in describing Lear's madness, surpassing, according to Warton, "Euripides himself" with his Orestes. The basis of this analysis is formed by the idea that the "absurd" standards of neo-classical criticism are inapplicable to Shakespeare's works. The critic notes that it is easy simply to declare Lear's mental disorder "very natural and pathetic". But in this case the readers or spectators will not see the protagonist's "secret workings and changes of mind" [The Adventurer 4: 80], which vary from one cue to another and, consequently, must be considered in detail, with reliance on the text. That is why Warton pays attention to minute incidents, quotes extensively and follows the manifestations of Lear's insane mind which explain to the reader the causes of his catastrophe. The critic reveals Shakespeare's intentions in the scenes portraying the pictures imagined by Lear (the trial of Goneril and Regan), and shows the playwright's vivid imagery and stylistic devices (unexpected metaphors, emotionally loaded repetitions). It was for the first time in English literary criticism that a Shakespearean character was analysed so profoundly, so we cannot agree with T. M. Raysor, who thinks that Warton's essays are written "in the manner of J. Hughes, pointing out beauties in the plays rather than analysing the motives of the characters" [Raysor 1927: 496].
Indeed, Warton's critical heritage is not free from errors caused by the authority of neo-classical standards. Among Shakespeare's "drawbacks" he lists violations of probability, decorum and unity of action. These errors cannot eclipse the strong sides of Warton as a critic, one of which is the subjective character of his literary analysis, breaking with the traditions of "impartial criticism" that aesthetically distances itself from a work of art. Warton is a "sensible" critic, ready to inform a reader that a literary piece caused him powerful excitement and floods of tears, and this undoubtedly testifies to a certain shift in critical standards which occurred under the influence of sentimentalism. H. Robinson called "sincerity of feeling" the most striking trait of Warton as a critic of Shakespeare [Robinson 1932: 91]. To sum up, Warton's essays witness a gradual shift from deductive to inductive critical approaches, from the mimetic to the psychological method of literary analysis.
Conclusion
The tenets of J. Warton's literary theory, in spite of their heterogeneous character, are inspired by the category of the sublime, which he associated directly with the pathetic and, relying on Pseudo-Longinus and Quintilian, found exemplified in the Bible. Influenced also by J. Addison's conception of imagination, the critic applied his views to Homer's works and supported the irrational in Shakespeare's plays. His Shakespearean criticism tended to break with genre doctrines, as it focused on dramatic characters, their motives and their realization in the texts, and the author's psychologism. This approach, as L. Damrosch put it, got "the criticism of drama off the dead center where it had rested since 16 c." [Damrosch 1972: 234]. Normative criticism was to give way to the "criticism of taste", which made a special accent on subjective analysis and the critic's intuition.
One cannot but be ambiguous when deciding conclusively on Warton's creed, as his views were contradictory enough. E. Gosse declared him a romanticist who anticipated Wordsworth's and Coleridge's aesthetic views. G. Saintsbury wrote that "the spirit of time caught Warton", but he followed it "half-consciously" [Saintsbury 1904: 260]. H. Trowbridge agreed with this opinion, noting that Neo-Classicism limited his views and "controlled his taste" [Trowbridge 1937: 76]. This conclusion seems only partially true, as Warton's aesthetic views are compromising: their origins are connected both with the Horatian and the Longinian traditions, which presuppose, respectively, the universal and the original in artistic representation. Warton as a critic, in spite of his praise of neo-classical drama and his accent on character types and the didactic functions of literature, defends the new values, "the pleasures of imagination", with much more enthusiasm. A close reading of his essays convinces us that he was a critic of Sense. Never before him were the sublime (with the exception of the Spectator) and the irrational and fantastic as its sources supported so passionately in English periodicals. In J. Warton's, as well as T. Warton's and E. Young's works there appears "an idea of genuine poetry as a source of pleasure and beauty helping to explain human experience and enriching sensibility", which was important for "comparing the regular taste with susceptibility to the beautiful and the pathetic" [Solov'eva 1988: 32]. In general, we can conclude that J. Warton's periodical criticism anticipated such manifestos of pre-Romanticism as E. Young's Conjectures on Original Composition (1759) and R. Hurd's Letters on Chivalry and Romance (1762).
"History",
"Philosophy"
] |
Studying molecular profiles above the Cherenkov Telescope Array sites
The Cherenkov Telescope Array (CTA) will bring a whole new insight into the gamma-ray Universe. In order to fulfill its performance requirements, we need to understand and correct for the atmospheric effects that influence the acquired instrument data. One such systematic effect is due to the molecular density profile varying with time. We have studied such profiles for both CTA sites using publicly available historical data assimilation archives. Our study reveals that we can distinguish at least three distinct seasonal periods at the northern site and at least two at the southern site, which allows the molecular part of the atmosphere to be modelled using average profiles, as done in current Cherenkov telescope projects. Seasonal transitions are smoother at the southern site than at the northern one. Moreover, the latter shows a greater amplitude in density variations at an altitude of 15 km. We also explored deviations of the molecular profiles with respect to their mean values using a 5-year data set and concluded that they always remain within specifications.
Introduction
Imaging Atmospheric Cherenkov Telescopes (IACTs) observe very-high-energy gamma rays indirectly from the ground through the Cherenkov light emitted when the radiation interacts with the atmosphere. The characteristics of the produced Cherenkov light-cone depend on the state of the atmosphere, particularly on the molecular profile [1]. Furthermore, the transmission of Cherenkov light through the atmosphere changes with its molecular and aerosol content. Thus, this observational technique is prone to a series of systematic effects due to the uncertainties inherent to the local climate and weather conditions. In particular, changes of the local density profile [1][2][3] and of the aerosol content [4][5][6][7] are the main contributions.
Up to now, IACT collaborations have modeled the atmosphere using one average density profile, tailored to each particular site. The next generation of IACTs, embodied in the Cherenkov Telescope Array (CTA) [8], with more than one hundred IACTs distributed over two sites (one in the Northern Hemisphere at the Observatorio del Roque de los Muchachos, on the island of La Palma, Spain, and one in the Southern Hemisphere in the Atacama desert close to Paranal, in northern Chile), has much more stringent requirements on the allowed systematic uncertainties than previous installations: for instance, those affecting the assumed absolute energy scale shall be brought from currently 15-20% [2,4,5] down to a level of less than 10%, across a wider energy range.
Here, we study the molecular profiles from data provided by modern data assimilation systems.
The global data assimilation archives
There are many meteorological data assimilation systems (DAS) available, each with different characteristics in terms of availability, spatial and temporal resolution, physical parameters and height levels, among others. In order to approach the problem of characterizing the atmosphere above the two CTA sites, we wanted to take into account those which are easily accessible for our locations of interest, updated with good spatial and temporal resolution, and providing all the physical parameters needed to compute the density profiles. For this reason, after reviewing most of the available DAS, we chose to use two: the National Centers for Environmental Prediction (NCEP) Final Analysis obtained from the Global Data Assimilation System (GDAS) 1 and the European Centre for Medium-Range Weather Forecasts (ECMWF) ERA-Interim reanalysis 2 . The main characteristics of these two DAS are summarized in Table 1.
NCEP Final Analysis
This DAS consists of a continuous analysis of the data from the Global Data Assimilation System (GDAS from now on), collected by the Global Telecommunications System (GTS) and other sources. The dataset is structured in a grid of 1° × 1° resolution, with data provided every 6 h at 26 pressure levels, from the surface at 1000 hPa up to 10 hPa. For the purpose of our study, among the many available physical parameters, we selected geopotential height, temperature, relative humidity and the u- and v-components of the wind. The data can be downloaded for a specific grid point via a python script and are stored in grib2 format, readable also with a python script that we developed. For our work, and due to the limited spatial resolution, we selected the closest grid point to the locations of both CTA sites, namely 29.0°N 18.0°W for the northern site and 25.0°S 70.0°W for the southern site.
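As a sketch of how such grib2 files can be read in practice, the snippet below extracts one parameter profile at the grid node nearest to the northern site using the third-party pygrib package; the file name and the message keys used for filtering are our assumptions, not the scripts actually used in this study.

import pygrib  # third-party GRIB1/GRIB2 reader

SITE_LAT, SITE_LON = 29.0, 342.0  # 29.0 N, 18.0 W expressed as 0-360 east longitude

def profile_at_site(filename, parameter='Temperature'):
    # Return (pressure levels in hPa, values) of one parameter at the grid
    # node closest to the site, one entry per pressure-level message.
    levels, values = [], []
    grbs = pygrib.open(filename)
    for grb in grbs.select(name=parameter, typeOfLevel='isobaricInhPa'):
        # a half-degree box around the site contains exactly one 1x1 grid node
        data, lats, lons = grb.data(lat1=SITE_LAT - 0.5, lat2=SITE_LAT + 0.5,
                                    lon1=SITE_LON - 0.5, lon2=SITE_LON + 0.5)
        levels.append(grb.level)
        values.append(float(data.mean()))
    grbs.close()
    return levels, values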
ECMWF ERA-Interim
The ECMWF offers the re-analysis (ERA-Interim) of data based on the 2006 release of the Integrated Forecasting System (IFS). It consists of a dataset that spans from 1979 to the present, provided with a two-month delay with respect to the present date.
The data are structured in 37 pressure levels, from the surface level up to 1 hPa. Among all available parameters, for the purposes of this work we selected the geopotential, temperature, u- and v-components of the wind and relative humidity. The spatial resolution of the dataset is 0.75° and its temporal resolution is 6 h.
Data can be downloaded for a specific grid point through a python script. We selected the closest grid points available, namely 28.5°N 18.0°W for the Northern Hemisphere and 24.75°S 70.5°W for the Southern Hemisphere site.
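For illustration, a retrieval of this kind could look as follows with the ECMWF WebAPI client; the parameter codes (geopotential, temperature, u, v and relative humidity in GRIB table 128), the single-point area selection and the target file name are our assumptions and should be checked against the actual retrieval scripts.

from ecmwfapi import ECMWFDataServer  # ECMWF WebAPI client

server = ECMWFDataServer()
server.retrieve({
    'class': 'ei',            # ERA-Interim
    'dataset': 'interim',
    'stream': 'oper',
    'type': 'an',             # analysis fields
    'date': '2012-01-01/to/2016-12-31',
    'time': '00/06/12/18',    # 6 h temporal resolution
    'levtype': 'pl',          # pressure levels
    'levelist': 'all',        # all 37 levels
    'param': '129.128/130.128/131.128/132.128/157.128',  # z, T, u, v, RH
    'grid': '0.75/0.75',
    'area': '28.5/-18/28.5/-18',  # N/W/S/E: the single northern grid point
    'target': 'era_interim_north.grib',
})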
Results
We downloaded data from 2012 to 2016 (5 complete years) for both CTA sites and for each of the two selected DAS. Our aim was to compare these datasets and study the evolution of the molecular profile data at each site along these five years.
In all cases, we computed the density for each pressure level and for each moment in time. We then scaled the density by the standard atmosphere density (N_S) and by an exponential of the height divided by the standard scale height (H_S), in order to obtain plots with good visibility of the relative differences. After that, we computed the average density, standard deviation and peak-to-peak extreme values for each pressure level. We also compared the obtained results to those used in recent simulations [9], labelled PROD3 in the figures.
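A minimal sketch of this computation is given below, assuming the density is obtained from pressure and temperature via the ideal-gas law and that the plotted quantity is n/N_S * exp(h/H_S); the numerical values of N_S and H_S are representative standard-atmosphere numbers, not necessarily those used in the study.

import numpy as np

R = 8.31446        # gas constant, J mol-1 K-1
N_A = 6.02214e23   # Avogadro constant, mol-1
N_S = 2.547e25     # m-3, assumed standard number density (1013.25 hPa, 288 K)
H_S = 8434.5       # m, assumed standard scale height

def scaled_density(p_hpa, t_kelvin, h_m):
    # Number density from the ideal-gas law, scaled so that departures from
    # a purely exponential atmosphere become visible in the plots.
    n = N_A * (p_hpa * 100.0) / (R * t_kelvin)  # molecules per m3
    return n / N_S * np.exp(h_m / H_S)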
Density at 15 km
First we looked at the density as a function of time for a fixed value of height (or fixed pressure level). We chose to look at 15 km height, at the onset of the lower stratosphere, where deviations are expected to be largest. The comparison between both sites (see Figure 1) shows that the annual variations are smoother at the southern site and that the amplitude is larger at the northern site.
With this in mind, we can define different seasonal periods for each site, within which the density profile can be averaged (see Figure 1). We define three of these seasonal periods for the northern site (that we call winter, summer and intermediate) and two for the southern site (that we call winter and summer).
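In code, the seasonal averaging can be sketched as below; the month boundaries assigned to "winter", "intermediate" and "summer" are hypothetical placeholders to be read off Figure 1, not values quoted in the text.

import pandas as pd

def season_north(month):
    # hypothetical season boundaries for the northern site
    if month in (12, 1, 2, 3):
        return 'winter'
    if month in (6, 7, 8, 9):
        return 'summer'
    return 'intermediate'

def seasonal_profiles(df):
    # df: one row per (time, pressure level), with columns 'time' (datetime),
    # 'level' and 'density'; returns mean and std per season and level.
    df = df.assign(season=df['time'].dt.month.map(season_north))
    return df.groupby(['season', 'level'])['density'].agg(['mean', 'std'])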
The El Niño and La Niña phenomena must be taken into account for the southern site, since they strongly affect the climate in this particular region of the planet. Looking at the historical data, we observe that El Niño was active during 2014-2016 and was one of the strongest El Niño events ever registered. Despite that, Figure 1 shows no visible deviation for this particular period with respect to other years. This might indicate that El Niño does not strongly affect the state of the molecular profile of the atmosphere at the southern site.
Density over altitude
We compared the two data-sets by looking at the density over altitude. In Figure 2, we plot the mean density and its standard deviation as a function of altitude, for each DAS and seasonal period. The behaviour is very similar for ECMWF and GDAS, and the relative differences between them are small. However, they are larger at the southern site than at the northern one, probably due to the fact that the southern site has less coverage of close-by radiosonde data compared to the northern one. This fact is reflected in Figure 4, where the relative differences between the two DAS used are displayed as a function of altitude. In both cases the differences are centered around zero, but in the south the dispersion is higher than in the north. It is also visible that these differences are more important at altitudes above ∼ 16 km in the north and ∼ 12 km in the south, probably due to the fact that GDAS has less information at the tropopause level. Nonetheless, looking at Figure 2, the different seasonal periods can be clearly distinguished above about 12 km altitude. The bottom parts of Figure 2 show the behaviour of the density standard deviation divided by its mean, for each seasonal period. That relative deviation always lies below 2%, hence fulfilling the CTA requirements [10]. Moreover, we clearly see that there is a significant difference with respect to the PROD3 simulation profile, reaching up to 12% on average for one altitude bin (see Figure 3) at the northern CTA site. We find our respective summer DAS profiles closer to the profiles used for the PROD3 simulations than the winter ones. Normally, telescopes take more data during winter, when the nights are longer. However, the differences are small enough not to alter the sensitivity comparisons of [9]. Figure 3 also justifies the selection of our suggested three separate seasonal periods for the north, instead of using only one average model. In the case of the southern site, these differences are more subtle, but still visible by eye.
Finally we used these data to produce CORSIKA input card files that will be used for future simulations of air showers.
Conclusions
We investigated the long term variation of the molecular density profiles above the two selected sites for the CTA. In this work, we observe that in the southern hemisphere the transitions between seasons are smoother than in the northern hemisphere and that the amplitude in density variations is larger in the latter. Moreover, we identify two distinct seasonal periods at the southern and three at the northern site.
Comparing the density profiles, we can distinguish different behaviours depending on the seasonal period, and we find that the profiles used for the PROD3 simulations so far resemble a summer profile (for the northern site) and a winter profile (for the southern site). The differences between our profiles and those used for the simulations can be as large as 9% on average for a selected altitude bin.
The comparison between GDAS and ECMWF reveals that both datasets are very similar. ECMWF offers more pressure levels and is easier to download but, aside from that, we can conclude that for the purposes of this study they are almost equivalent.
"Physics",
"Environmental Science"
] |
Differences in the morphology and vibrational dynamics of crystalline, glassy and amorphous silica – commercial implications
Quartz, bentonite, sodium silicate, and precipitated and pyrogenic synthetic amorphous silica (SAS) were compared using high-resolution transmission electron microscopy and neutron vibrational spectroscopy. These materials span the full gamut of silica structures: crystalline, disordered crystalline, glassy and completely amorphous, respectively. Traces of water, together with the silanol groups, are of paramount importance for commercial applications, particularly for their catalytic influence on the reaction kinetics of surface silanisation of SAS in tyre technology. The reaction of a bifunctional silane, (bis(3-triethoxysilylpropyl)-tetrasulfide), with precipitated silica eliminates accessible surface-silanol groups to improve the SAS/polymer interaction and removes traces of water. The degree of reaction of the ethoxy functions was quantified. The removal of reactive protons by the silanisation reaction is even more effective than drying of SAS at 120 °C; however, residual traces of isolated silanols were detected in the interior structure of SAS, even after silanisation or high-temperature (750 °C) drying. Some minor residual translational periodicity present in the bulk piece of amorphous solid water glass is completely missing in amorphous SAS powders. The presence of residual water and of silanol groups in silicas is of crucial importance for their commercial applications.
Introduction
Synthetic amorphous silicas (SAS) are widely used on an industrial scale. They are produced via water glass by wet precipitation processes or by the controlled high-temperature hydrolysis of silicon chlorides in hydrogen/oxygen flames. 1,2 These finely divided materials are widely used in free-flow regulation, anti-caking of numerous products, as a thickening agent, for thermal insulation and as reinforcing filler materials for lacquers and paints, various polymers, silicone rubber (pyrogenic SAS) and many other applications. Precipitated silicas have become the most important white reinforcing fillers for the rubber industry. 1(b) A major application of precipitated SAS in the rubber industry is 'green tyre' technology, where the tread compound contains grades of silica that are surface-modified by sulphur-containing organosilanes to simultaneously improve the tyre tread abrasion resistance, lower the rolling resistance and improve the wet skid behaviour, all of which translates into better long-term performance, fuel economy and safety.
The major difference between these SAS and many of the silica minerals that are encountered in nature is that they are completely X-ray amorphous and show only short-range order effects in the range 0.8 - 1.2 nm. [3][4][5][6][7] In previous work, 8 we have used inelastic incoherent neutron scattering (IINS) to follow the variations of the vibrational density of states of amorphous and crystalline silicas, and changes in the spectra of pyrogenic SAS after different post-treatments including wetting, pelletisation and calcination, in the production of high-purity catalyst supports. High-resolution transmission electron microscopy (HR-TEM) and, especially, electron tomography, 9 together with 3D-TEM, [9][10][11] have been shown to be well suited to complement the information on amorphicity from X-ray diffraction data by direct imaging of the nanostructure. The short-range geometry, down to the molecular scale of two-dimensional mono-/bilayers, of amorphous and crystalline silica and of a crystalline/vitreous interface was studied by means of atomic force microscopy (AFM) and evaluated in terms of Si-O-Si ring sizes, [12][13][14][15] confirming the Zachariasen model. 5 The relative energies of hydroxylated double silica rings of different geometry have been reported. 13 Electron imaging at varying focus planes, in successive small steps in the z-direction, allows amorphicity/short-range order in SAS to be studied and compared qualitatively. 9 Owing to the surface area of hydrophilic and hydrophobic SAS being in the range 25 - 700 m²/g, the adsorption properties are of major relevance. An important parameter is the silanol group density and the specific interaction with water in precipitated SAS. For the application of SAS as a reinforcing agent in tyres, the dispersibility in the polymer matrix has to be adjusted. This can be chemically improved by conversion of the polar silanol groups to non-polar entities by reaction with sulfur-containing bifunctional silanes. These also contribute to the vulcanisation reaction by cross-linking the silica via the silane to the polymer chains through sulfur bridges. The ca. 4 - 7 % of residual water in dried SAS from the wet-production process is essential to, and has a beneficial catalytic influence on, the reaction kinetics of the silanisation processes. To better understand how the water and silanols interact and are distributed in the silica, a bulk, non-destructive technique with good sensitivity and selectivity to hydrogen is needed. Previous work on carbon blacks revealed that, because of the very high sensitivity to protons, IINS is particularly suitable for spectroscopic studies on the hydrogen bonding of strongly adsorbed water at the surface and even inside micropores of finely divided, post-oxidized, electrically conductive gas blacks. 16 Disordered water in the confined geometry of strongly oxidized gas blacks can be discriminated from ordered water structures at the outer surfaces of these materials by the position of the leading edge of the intermolecular librational modes of condensed water in the ice-like state (as measured by IINS at T < 20 K). 17 The sensitivity of IINS 18 to hydrogen arises from the dependence of the measured intensity on the incoherent cross section of the scattering atom. For 1H, this is almost an order of magnitude larger than for any other atom of interest (C, O, Si). This means that it is possible to observe the entire 0 - 4000 cm⁻¹ range and allows the low-energy silanol deformations and water librations to be detected.
This is not usually possible by infrared because of the intense absorption by the silica. Methyl torsions are rarely detected by infrared or Raman spectroscopy; with IINS they are often the strongest bands in the spectrum. IINS is also sensitive to all wavevectors (k, in Å⁻¹), thus allowing a complete vibrational density of states to be obtained. This is in contrast to infrared and Raman spectroscopy, which only detect modes with k ~ 0.
Fumed and precipitated SAS, as nanostructured materials, show different amounts of silanol groups. These may enhance the interactions with humidity picked up accidentally from ambient air or retained from the precipitation/drying process. It was anticipated that, as in the case of carbon blacks, IINS measurements would allow a direct study of the interactions between traces of water and the active sites of precipitated silica. This would include the outer surface and the intergranular volume of silica aggregates and agglomerates, as well as trace amounts of trapped water. According to R. K. Iler, 6,7 it is generally agreed that on the smooth, non-porous, heat-stabilised, fully hydroxylated amorphous precipitated SAS surface there are about 4 - 5 Si-OH groups / nm² which remain when the sample is dried at 120 - 150 °C. For pyrogenic SAS only about 1.5 SiOH / nm² are detected, mostly isolated SiOH, since high temperatures are involved in the production process in the hydrogen/oxygen flame. However, for the topmost atomic layers this value can change with time and storage conditions due to adsorption phenomena. 1 The silanols are also important in a different context: namely for gas storage. Methane hydrate is an enormous energy reserve 19 that is preferentially formed in sandy, i.e. siliceous, environments. 20 Experiments show that conversion to methane hydrate is minimal in solid silica and maximal in hollow silica. 21 This is presumably a surface-area effect: the hollow silica enables more significant hydrogen-bonding interactions with the water. Computational studies of a hydroxylated silica interacting with methane hydrate confirm the crucial role of hydroxyls. 22 Molecular dynamics simulations show pore-size-dependent behaviour of methane in montmorillonite, which is used as a model for shale. 23
Materials
Quartz sand (Nivelstein, Herzogenrath, Germany; sieve fraction; Fig. 1, images (a) and (b)) had a SiO2 content of 99.77 %, containing only traces of Al. The bentonite clay (Clariant, Germany) was a pure grade for catalyst support purposes. A bulk piece (100 g) of glass-like solid sodium silicate with a high alkaline element content was chosen for comparison (SiO2 > 75.0 mass%, Na2O > 22.4 mass%, measured by potentiometric titration). The total hydrogen content was 220 ppm (0.022 %). For the topmost atomic layers, X-ray photoelectron spectroscopy (XPS) revealed the presence of some surface-shielding of the silicate by enrichment with Na2O, surface carbonate (289.2 eV, C 1s) and humidity/surface hydroxide (ca. 17 % of the surface oxygen, ca. 535 eV, O 2p) by partial surface-hydrolysis of this hygroscopic alkaline material. Commercial quality SAS of Evonik Industries were used: dry pyrogenic Aerosil ® and precipitated ULTRASIL ® grades. Table 1 summarizes results from the surface-related characterisation of three different ULTRASIL ® grades. The values illustrate that typical rubber technology grades of SAS are used in this study. A COUPSIL ® powder sample, Silica I, surface-modified by silanisation treatment with the bifunctional silane Si 69 ® , was also studied. The pure silane reference was prepared as a liquid in a Viton-sealed square-shaped Al sample holder, shock-frozen and measured as a macroscopic plan-parallel sheet of solid matter (14.2 g sample). Two commercial hydrophobic SAS products based on Aerosil ® 200, silanised with dimethyldichlorosilane and octylsilane respectively, were also investigated. For comparison, pure polydimethylsiloxane (PDMS) was also measured.
Fig. 1 Light microscopy survey images of the sequence: quartz crystals, bulk sodium silicate piece, finely divided precipitated SAS powder, dried (105 °C). (a) quartz sand particles, crystalline as indicated by (b) polarization contrast imaging, (c) solid sodium silicate as obtained from crystalline quartz by alkaline treatment and (d) finely divided, amorphous silica powder (SAS) obtained by precipitation from liquid sodium silicate solution, after elution, drying via chamber filter press and finely ground.
2.2 Methods
2.2.1 High-resolution transmission electron microscopy (HR-TEM). Compact SiO2 particles were carefully fractioned and small pieces were transferred onto standard TEM sample holders (200 mesh copper grids, coated with Holey carbon foil). Using a precision tilt sample manipulator in the TEM column, electron-transparent areas of such entities were characterized. The aggregates of pyrogenic and precipitated SAS 9 were dispersed in isopropanol/water, treated in an ultrasonic bath together with energy input by an ultrasonic finger (sonotrode) for three minutes and afterwards transferred onto Holey carbon foil using an Eppendorf pipette. A Jeol 2010F HR-TEM was operated at 200 keV primary electron beam energy. The quality, stability and calibration of the field emission TEM system were maintained by the use of the Magical No. 641 standard (Norrox Scientific Ltd., Beaver Pond, Ontario, Canada). HR-TEM images were taken by measuring electron-transparent ultrathin regions of fragments of crystalline, partly amorphous and glass-like bulky silica and of very finely divided aggregates of SAS at the nanoscale on Holey carbon foil.
2.2.2 Inelastic incoherent neutron scattering (IINS), infrared and Raman spectroscopy. IINS spectra were recorded using the TOSCA, 24 MARI 25 and MAPS 26 spectrometers at the spallation neutron source ISIS of the STFC Rutherford Appleton Laboratory, Chilton, U.K. TOSCA is complementary to MARI and MAPS. As explained in detail elsewhere, 26 TOSCA is optimal for the 0 - 1600 cm⁻¹ region, while MARI and MAPS provide access to the region above 2000 cm⁻¹, which allows observation of the crucial O-H stretch modes. The samples were sealed into thin-walled aluminium cans (wall thickness < 0.5 mm) and evacuated by a turbomolecular pump at room temperature. Details of the cuvette design are given in Ref. [27]. The highly penetrating nature of neutrons enables information on silica-filled aluminium cans to be obtained non-destructively from macroscopic samples. Sample weights were: 170 g for the quartz sand, 60 g for the bentonite, 100 g for the single solid piece of sodium silicate, and ca. 20 - 30 g for the powder-type SAS samples. A sample was quenched with liquid nitrogen to 77 K followed by cooling to T < 20 K using a closed-cycle helium cryostat. The spectra have been normalised to 1 g SiO2, thus relative intensities are directly comparable. For the COUPSIL ® sample, the amount of bound Si 69 ® (estimated as 0.42 g by subtraction of a Si 69 ® reference spectrum) was first subtracted from the sample weight. Note that the normalization assumes that the samples dried at 750 °C are hydrogen-free. We will show later that this assumption is not completely correct for at least one of the samples.
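The normalisation step can be sketched as follows; the 25 g sample mass is a hypothetical placeholder within the quoted 20 - 30 g range, while the 0.42 g of bound silane is the value estimated in the text.

import numpy as np

def normalise_per_gram(counts, sample_mass_g, bound_silane_mass_g=0.0):
    # Scale an IINS spectrum to 1 g of SiO2. For the silanised sample the
    # mass of bound silane, estimated from a reference spectrum, is first
    # subtracted from the weighed sample mass.
    return np.asarray(counts, dtype=float) / (sample_mass_g - bound_silane_mass_g)

# e.g. for the COUPSIL sample (hypothetical 25 g weighed mass):
# spectrum_per_g = normalise_per_gram(counts, 25.0, bound_silane_mass_g=0.42)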
Infrared spectra of the glass-like solid sodium silicate were measured at room temperature using a Bruker Vertex70 FTIR spectrometer, over the range 150 to 4000 cm⁻¹ at 4 cm⁻¹ resolution with a DLaTGS detector, using 64 scans and the Bruker Diamond ATR. The use of the ultra-wide-range beamsplitter enabled the entire spectral range to be recorded without the need to change beamsplitters. Raman spectra were recorded at 7 K using an upgraded version of a Renishaw InVia spectrometer, which has been described previously. 28 The upgrade consists of an additional laser operating at 532 nm and improved laser rejection filters with a low-energy cut-off of ~ 40 cm⁻¹.
Results and Discussion
The samples, see Fig. 1, were chosen so as to cover the full range of silica: crystalline, glassy and amorphous. Quartz sand was used as an example of a pure, crystalline silica material (the IINS spectra of this material have been shown previously 8 ). Bentonite, as a natural clay-type silicate mineral with a layered structure containing mostly crystalline but also some disordered structures, was selected to probe the influence of moderate bulk mismatch. The glass-like solid sodium silicate was used as an example of a glassy, predominantly amorphous silica with only trace amounts of silanols and water. Commercial quality SAS were measured as examples of very finely divided, completely amorphous powders consisting of aggregates, 1,2,6-10 to compare dry pyrogenic (Aerosil ® 200) and precipitated SAS. Thus Fig. 1 visualizes the evolution of the macromorphology from quartz crystals (digestion by alkaline oxide) via bulky solid sodium silicate glass to a final amorphous precipitated dried pure SAS powder (by removal of sodium and water). TEM results on the characteristic micro- and nanostructure of SAS aggregates are available. 9 The water and hydroxyl content as a function of temperature is of particular interest. In Table 2 the weight losses are compared: a) at 120 °C (standard: 105 °C) and b) at 750 °C (which is still below an ignition loss test at T > 1000 °C). This includes the removal of water and, partly, of silanol groups by formation of siloxane bonds. 6,7 Some of the SAS samples were also measured after silanisation.
HR-TEM
In Fig. 2, examples of HR-TEM imaging of crystalline and amorphous silica at the nanoscale are compared. The average amplitudes between maximum and minimum brightness (peak/valley) show considerable differences with increasing disorder/amorphicity. Some results of grey-scale analysis by digital evaluation of TEM images via line scans (software tool "analysis Add-Inn", Fa. M.A.S, Freiburg, Germany) are included. Numerical values, obtained by accumulating the results of 10 line scans per sample from different areas of an image taken at a certain focus plane, are summarized in Table 3. The distance between two adjacent maxima in brightness is determined as a first rough approximation in the qualitative comparison of apparent electron-optical differences between crystalline and amorphous silica. Note that this is only a qualitative two-dimensional estimate derived from one single focus plane in imaging electron-optical sections of three-dimensional objects (crystallites or amorphous entities). A detailed numerical evaluation, as shown for the ideal case of a flat sheet, a two-dimensional bilayer of vitreous amorphous silica, as reported in Ref. [13-15] by analyzing AFM images, and for 2D models derived from aberration-corrected TEM, 29 is for the case of three dimensions beyond the scope of this study. This would require further developments in high-resolution electron tomography, aberration-corrected TEM and atomic coordinate analysis dedicated to amorphous matter.
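A simple way to reproduce this kind of estimate, assuming the line scans are available as 1-D intensity arrays, is sketched below; the prominence threshold and the pixel calibration are free parameters of the sketch, not values from the original software.

import numpy as np
from scipy.signal import find_peaks

def mean_peak_spacing(line_scan, pixel_size_nm, min_prominence=None):
    # Average distance between adjacent brightness maxima along one
    # grey-scale line scan taken from a TEM image.
    peaks, _ = find_peaks(np.asarray(line_scan, dtype=float),
                          prominence=min_prominence)
    if len(peaks) < 2:
        return float('nan')
    return float(np.diff(peaks).mean()) * pixel_size_nm

# accumulate the results of, e.g., 10 line scans from different image areas:
# spacings = [mean_peak_spacing(s, pixel_size_nm=0.05) for s in scans]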
In Fig. 2, an ordered crystalline lattice is apparent in the TEM images (a) (bentonite, (220)) and (b) (quartz, (211)), whereas (c)-(e) have an amorphous appearance with different fine features and a decrease in average distance (sodium silicate, precipitated and pyrogenic SAS) in the grey-scale analysis, Table 3. However, the numerical differences are very small. Considering the bulk chemical composition, a tentative interpretation could be that the higher average value for (c) compared to (d) or (e) can be explained by the high sodium oxide content in the network of the sodium silicate (Table 3).
A short-range order corresponding to the random network models of Evans and King 4 and to the Zachariasen 5 model appears; however, there are some differences in fine-structure and short-range order. A qualitative difference in the nanostructure of (d) and (e) can be explained by varying silanol group densities and traces of residual water on/in the highly dispersed powders from wet-precipitation and high-temperature hydrogen/oxygen flame processes, respectively. This is supported by the IINS results in the next section.
Residual water and silanols in precipitated SAS.
To probe the properties of traces of water and of silanol groups in 20 -30 g quantities of freshly prepared precipitated SAS and their behavior as a function of temperature (although still below the conversion temperatures of amorphous to crystalline silica) three grades were used: Silica I -III. Each SAS was run on TOSCA: as received, after drying at 120 °C and after drying at 750 °C, Table 1 and Fig. 3. The three Silica I samples were also run on MARI, Fig. 4. The different treatments were expected to provide a mixture of adsorbed water and silanols, just silanols and a hydrogen-free reference material, respectively.
For each stage of the drying process, the spectra show a remarkable similarity. The as received "original" samples all show bands characteristic of water (65, 540, 1650 cm⁻¹) and hydroxyls (1100 cm⁻¹). After drying at 120 °C, there is a considerable reduction in the integrated intensity (see Table 4 and the scale factors on the figures); however, most of the features still remain. The intensity of the hydroxyl bending mode at 1100 cm⁻¹ has increased relative to that of the water librational modes at 540 cm⁻¹, consistent with removal of water by drying, but it is clear that there is still residual water present. With increasing Sears number (indicating silanols, Table 1) the 1100 cm⁻¹ band increases as well. Drying at 750 °C apparently removes all the hydrogenous material and just leaves a spectrum typical of silica. The MARI spectra for the original and 120 °C samples are consistent with the TOSCA spectra. MARI allows access to the O-H stretch region, and a broad stretching band at 3400 cm⁻¹ and a (stretch + libration) combination at 4100 cm⁻¹ are apparent, consistent with the presence of water and hydroxyls. On drying there is a loss of intensity at 3400 cm⁻¹ and a band at 3730 cm⁻¹ typical of isolated hydroxyls is discernible. In Fig. 4 (top, (a) and (b)) the water scissors mode at 1650 cm⁻¹ is clearly seen in both spectra. This mode is characteristic of water, so provides unambiguous evidence for the presence of residual water. This band is not apparent in Fig. 3 (top) and this is explained in the ESI. The dry sample provides a surprise in that a small number of isolated hydroxyls are still present, Fig. 4 (top, (c)). An isolated terminal hydroxyl has three modes: an O-H stretch, and in-plane and out-of-plane Si-O-H bends (torsions). In Fig. 4 (bottom) there is a weak band at 820 cm⁻¹ which is assigned to the in-plane bend. In Fig. 4 (top), there is a very weak feature at 4031 cm⁻¹ which is assigned to the (stretch + out-of-plane bend) combination. This would predict that the out-of-plane Si-O-H bend fundamental is at 290 cm⁻¹, and a band at this energy is apparent in the TOSCA spectrum, Fig. 3 (top, (c)).
Comparison of the shape of the broad librational water band, extending from about 500 cm⁻¹ in Fig. 3 ((a) in top, middle and bottom), with literature data on high-density amorphous (HDA) ice 31 indicates the presence of a sloping leading edge with an asymmetric broadening and a shift to lower wavenumbers (ca. 430 cm⁻¹). It follows that the water traces on/in the SAS are disordered 32,33 and that increased oxygen-oxygen distances are weakening the hydrogen bonding between water molecules. This can be explained by the influence of silanol groups and distinct differences in the silanol/water and water/water interactions on the precipitated silica. The weakening of the water/water interactions due to the silanol groups explains the catalytic effect and diffusivity of residual traces of water in the kinetics of silanisation: the water molecules associated with the silanol group reaction centers are readily available for the chemical reactions and the formation of new covalent Si-O-Si bonds in hydrolysis of the ethoxy functions in Si 69 ® , see the following section. This is in agreement with observations on the interaction of single water molecules with silanols in mesoporous silica: it is reported that the hydrogen bond of the water proton with the oxygen of the silanol group is much stronger than the hydrogen bonds of bulk water. 34 Also, the impact of surface silanol density and, therefore, the average silanol spacing 35 on surface water diffusivity, which shows a sharp change at 2.0 - 2.9 nm⁻², is of relevance, together with differences in the pH values of pyrogenic and precipitated SAS. 1(b), 36 The result that after drying at 120 °C (standard 105 °C) residual water and hydroxyls are still present, and that even after high-temperature drying (750 °C) isolated hydroxyls are detected (Fig. 4), supports the interpretation of differences in the nano-morphology of precipitated and pyrogenic SAS (Fig. 2, TEM images (d), (e) and Table 3). The corresponding surface-related differences in drying characteristics, pH value and hygroscopic behaviour have been discussed in terms of the Sears number. 37
Effect of silanisation.
Si 69 ® (bis(3-triethoxysilylpropyl)tetrasulfide) is a bifunctional compound that is used to improve the dispersion of silica in rubber. 38 The thermoxidative stability, ageing properties and service life of the rubber matrix are considerably improved. 39 In Fig. 5a the spectrum of the pure silane (sample of 14.2 g) is dominated by the low-energy methyl torsional modes of the six ethoxy functions at 265 cm⁻¹ and the rocking mode of these CH3 groups at 809 cm⁻¹. In the range of 600 - 1600 cm⁻¹ the CH2 deformational modes of the ethoxy and propyl segments occur (with increasing wavenumber: rocking, wagging, twisting and scissor modes). All these signals are missing in the untreated SAS sample, Fig. 3 (top). Due to the strong methyl torsion of the ethoxy groups at 265 cm⁻¹, a semi-quantitative evaluation of the spectrum in Fig. 5b is possible in order to detect residues of non-hydrolysed ethoxy functions in the topmost atomic layers of the very finely divided SAS. We find that, for the given macroscopic sample, about 10 % of the ethoxy functions remain non-hydrolysed. Comparison of Fig. 5c and Fig. 3 confirms that the reaction with silica removes accessible surface silanol groups and traces of water. From Table 4 and Figure 5, it can be seen from the normalized integrated intensity that, in total, its effect is even larger than that of drying SAS at 120 °C. However, both water and hydroxyls are still present, and this supports the idea that there is a population of intranetwork traces of -OH and H2O that is largely inaccessible to chemical agents of significant molecular size. This will contribute to the small differences in appearance of the HR-TEM images, Fig. 2 (d) and (e), at the nanoscale for precipitated as compared to pyrogenic SAS.
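A hedged sketch of such a semi-quantitative evaluation is given below: the residual ethoxy fraction is estimated from the area of the 265 cm⁻¹ methyl torsion band of the silanised silica relative to the pure Si 69 ® reference, per gram of silane; the integration window and the linear baseline treatment are our assumptions, not the procedure of the original analysis.

import numpy as np

def band_area(energy, intensity, lo, hi):
    # Trapezoidal area of a band between lo and hi (cm-1), after removing a
    # crude linear baseline drawn between the interval end points.
    energy, intensity = np.asarray(energy), np.asarray(intensity)
    sel = (energy >= lo) & (energy <= hi)
    x, y = energy[sel], intensity[sel]
    baseline = np.interp(x, [x[0], x[-1]], [y[0], y[-1]])
    return np.trapz(y - baseline, x)

# residual fraction of non-hydrolysed ethoxy groups (both spectra already
# normalised per gram of silane; the 230 - 300 cm-1 window is an assumption):
# residual = band_area(E_sil, I_sil, 230, 300) / band_area(E_ref, I_ref, 230, 300)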
As shown elsewhere, 40 the methyl torsion transition energy is sensitive to both the intra- and intermolecular environment (Table 5). For the crystalline and partly amorphous bentonite sample and the highly alkaline silica glass, the dominating 77 cm⁻¹ signal is much lower in relative intensity than the 130 cm⁻¹ signal. An out-of-plane Si-O-H bend mode of Si-OH can be expected at this energy. 44 However, the hydrogen content of the bulk water glass is very low (220 ppm, section 2.1), making detection very difficult. Neutron scattering studies of vitreous silica led to the conclusion that bands at about 800 - 1200 cm⁻¹ are due to Si-O stretching modes and those at 300 - 400 cm⁻¹ to Si-O bending modes. 45
Fig. 6 From top to bottom: bentonite, sodium silicate, quartz, precipitated silica, pyrogenic silica; traces green, red and blue taken from Ref. [8].
With decreasing crystallinity (Figs. 1 and 2, Table 3) the low-energy modes are affected by the decrease in long-range and short-range order. In the IINS spectra of the SAS powders these sharp vibrational lines of the crystalline silica are either completely missing or strongly altered by distinct broadening and shifting towards very low energy (< 60 cm⁻¹) to form a broad band of low-energy translational motions. The IINS spectra now resemble the vibrational density of states previously seen for a vitreous, i.e. amorphous, silica sample. 45 From the comparison of the spectra in Figs. 6 and 7, it follows that the three-dimensional network of the SiO4 units of the fumed and of the precipitated silica is not rigid and ordered enough to accommodate the sharp vibrational modes observed for crystalline silica or as calculated for glass-like SiO2. According to calculations by Taraskin and Elliott, 46 [...]. The bulk sodium silicate, Fig. 7d, does not show these features, as the compact shape (Fig. 1, image (c)) ensures that mostly bulk properties of the silicate are detected by the highly penetrating neutrons. This suggests that the hydroxyls are located on the outer surfaces of the material, as indicated by XPS and consistent with the limited penetration depth of infrared radiation with ATR.
Figs. 6 and 7 indicate that some residual translational periodicity 47,48 in the bulk piece of amorphous solid water glass is completely missing in the amorphous SAS powders (Fig. 1, image (d)). It is still a challenge for future work to specify and evaluate a "degree of amorphicity" in three dimensions in bulk solid glasses, "vitreous silica" and highly dispersed, amorphous, high-purity silica powders of fluffy appearance from varying production technologies, following the work on 2D silica (e.g. Ref. [12-14]). HR-TEM at the nanoscale, together with spectroscopic evidence from INS on bulk quantities of silica, is needed, synergistically, to evaluate differences of nanostructure and low-frequency translational dynamics.
Comparison to computational studies.
Unsurprisingly, owing to the commercial importance of silicas, the silanol-water interaction has been extensively investigated by computational methods. Attempts to simulate silicas by using clusters are generally unsuccessful 49 because of the amorphous nature of the material; small parts of the material are not representative of the totality. This means that large-scale simulations are required. [50][51][52][53][54] Simulation of the infrared spectrum in the O-H stretch region 52 shows the isolated silanol O-H shifting to progressively higher energy as the surface is dehydroxylated. However, the calculated shift is small: 3735 to 3780 cm⁻¹. This is in good agreement with that found here, 3650 to 3730 cm⁻¹ (Figure 4 (top) and Table 4).
Conclusions
Crystalline, disordered crystalline, glassy and completely amorphous silica show differences in the macro- and nanostructure that are visible to optical and electron microscopy. However, they are also expressed in the low-frequency vibrational and translational dynamics, as measured by IINS.
The use of IINS enables a coherent picture of the properties and interactions between traces of water and silanol groups to be obtained. Due to the influence of surface silanols, a weakening of the hydrogen bonding between the adsorbed water molecules is indicated by enhanced oxygen-oxygen distances and disorder. More isolated water molecules are present, which have a positive catalytic influence on the reactivity of the associated silanol groups of SAS with organosilanes. Accessible surface silanol groups and traces of water are consumed by the reaction. In total, the effect of surface silanisation is even stronger than that of drying SAS at 120 °C. Residual intranetwork traces of isolated hydroxyl groups are left, largely inaccessible to chemical agents of significant molecular size.
As described in the Introduction, SAS have many and varied industrial applications, from pharmaceutical additives to reinforcing agents in composites. All of these applications depend on the interaction between the surface of the SAS and the product. The interactions are all mediated by the silanols in some way, either directly via hydrogen-bonding or by providing sites for chemical derivatisation. This work provides new insights into these interactions.
"Materials Science"
] |
Majorana excitations in the anisotropic Kitaev model with an ordered-flux structure
We investigate the anisotropic S = 1/2 Kitaev model on the honeycomb lattice with an ordered-flux structure. By diagonalizing the Majorana Hamiltonian for this flux configuration, we find two distinct gapped quantum spin liquids. One of them is the gapped state realized in the strongly anisotropic case, whose low-energy properties are described by the toric code. On the other hand, when the system has small anisotropy, the other gapped quantum spin liquid is stabilized by the ordered-flux configuration. Since these two gapped quantum spin liquids are separated by a gapless region, they are not adiabatically connected to each other.
Introduction
Quantum spin liquids in the Kitaev model [1,2] have been studied intensively in recent years because of their potential applications in topological quantum computation and spintronics. It is known that in the Kitaev model, spin degrees of freedom are split into itinerant Majorana fermions and localized fluxes due to spin fractionalization. The ground state lies in the flux-free sector, and the low energy properties are described by the itinerant Majorana fermions. In fact, the Majorana edge current has been observed as a half-quantized plateau in the thermal Hall effect [3,4]. Furthermore, Majorana-mediated spin transport has been suggested theoretically [5], which should stimulate further investigations toward quantum devices using Majorana fermions.
In our previous study [6], we found that a gapped quantum spin liquid is realized in a certain flux configuration. This indicates that the flux configuration plays a key role in controlling the Majorana excitations. On the other hand, it is known that a large anisotropy in the exchange couplings stabilizes a gapped quantum spin liquid, which should be described by the toric code [7]. Then a question arises: are these gapful states with distinct origins adiabatically connected to each other? To answer this question, we focus on a flux configuration and investigate how the anisotropy in the exchange couplings affects the Majorana excitations.
Model and Hamiltonian
We consider the Kitaev model on the honeycomb lattice, which is given by
$$\mathcal{H} = -\sum_{\alpha = x,y,z} J_\alpha \sum_{\langle i,j \rangle_\alpha} S_i^\alpha S_j^\alpha,$$
where $\langle i, j\rangle_\alpha$ denotes the nearest-neighbor pair on the $\alpha(=x,y,z)$-bonds, $S_i^\alpha (= \frac{1}{2}\sigma_i^\alpha)$ is the $\alpha$-component of the $S = 1/2$ spin at the $i$th site, and $\sigma^\alpha$ is the $\alpha$-component of the Pauli matrices. The operator $W_p$ is defined on a plaquette $p$ as
$$W_p = \sigma_{p_1}^x \sigma_{p_2}^y \sigma_{p_3}^z \sigma_{p_4}^x \sigma_{p_5}^y \sigma_{p_6}^z,$$
where $p_i$ ($i = 1, 2, \cdots, 6$) are the sites on the plaquette $p$ [see Fig. 1(a)]. The corresponding eigenvalue $w_p$ takes $\pm 1$. The eigenstates of the Hamiltonian can be specified by the set of eigenvalues $\{w_p\}$. It is known that the ground state is realized with $w_p = +1$ for all plaquettes. Therefore, a plaquette with $w_p = -1$ can be regarded as an excited flux.
In this study, we discuss Majorana excitations in the Kitaev system with certain flux configurations. To this end, we first use the Jordan-Wigner transformation, which maps the spins to spinless fermions whose creation and annihilation operators at the $i$th site are $c_i^\dagger$ and $c_i$. The Hamiltonian is rewritten as a quadratic form in these fermions, where $c_{rb}$ ($c_{rw}$) is the annihilation operator of the fermion at the black (white) site on the $r$th $z$-bond and $\langle rb, r'w\rangle_\alpha$ indicates the nearest-neighbor pair connected by the $\alpha$-bond. Here, we define Majorana fermion operators $\gamma, \bar\gamma$ [8,9,10] as
$$\gamma_{rw} = c_{rw} + c_{rw}^\dagger, \quad \bar\gamma_{rw} = \frac{c_{rw} - c_{rw}^\dagger}{i}, \quad \gamma_{rb} = \frac{c_{rb} - c_{rb}^\dagger}{i}, \quad \bar\gamma_{rb} = c_{rb} + c_{rb}^\dagger,$$
where the Majorana operators satisfy $\gamma_i^\dagger = \gamma_i$, $\{\gamma_i, \gamma_j\} = \{\bar\gamma_i, \bar\gamma_j\} = 2\delta_{ij}$, and $\{\gamma_i, \bar\gamma_j\} = 0$. The Hamiltonian is then given as
$$\mathcal{H} = \frac{i}{4}\Big[ J_x \sum_{\langle rb, r'w\rangle_x} \gamma_{rb}\gamma_{r'w} + J_y \sum_{\langle rb, r'w\rangle_y} \gamma_{rb}\gamma_{r'w} + J_z \sum_{r} \eta_r\, \gamma_{rb}\gamma_{rw} \Big],$$
where $\eta_r = i\bar\gamma_{rb}\bar\gamma_{rw}$. Since $\eta_r$ satisfies $[\mathcal{H}, \eta_r] = 0$, $[\eta_r, \eta_{r'}] = 0$, and $\eta_r^2 = 1$, $\eta_r$ is a local conserved quantity and the corresponding eigenvalue takes $\pm 1$. The local operator $W_p$ is represented as $W_p = \eta_{p_l}\eta_{p_r}$, where $\eta_{p_l}$ and $\eta_{p_r}$ are defined on the left and right $z$-bonds of the plaquette $p$ [see Fig. 1(b)].
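To make the diagonalization step concrete, the following is a minimal sketch (not the authors' code) of how a quadratic Majorana Hamiltonian of this type can be diagonalized numerically on a finite cluster. The function name, the bond-list layout and the overall prefactor convention are illustrative assumptions; a flux pattern enters only through the Z2 bond variables.

```python
import numpy as np

def majorana_spectrum(n_sites, bonds):
    """Excitation energies of a quadratic Majorana Hamiltonian
    H = (i/4) * sum_bonds J * u * gamma_j * gamma_k on a finite cluster.

    bonds: iterable of (j, k, J, u) with site indices j, k, exchange J,
    and a Z2 bond variable u = +/-1 encoding the flux configuration.
    """
    A = np.zeros((n_sites, n_sites))
    for j, k, J, u in bonds:
        A[j, k] += J * u / 4.0
        A[k, j] -= J * u / 4.0  # A is real and antisymmetric
    # 1j*A is Hermitian, so its eigenvalues are real and come in +/- pairs;
    # the positive ones give the one-particle excitation energies
    # (up to the overall prefactor convention chosen for H).
    evals = np.linalg.eigvalsh(1j * A)
    return evals[evals > 1e-12]

# Toy usage on a 4-site open chain with one z-like bond carrying u = -1;
# the couplings here are purely illustrative.
energies = majorana_spectrum(4, [(0, 1, 1.0, +1), (1, 2, 1.0, +1), (2, 3, 1.0, -1)])
print("Majorana gap:", energies.min())
```

For the ordered-flux problem of the paper, the same construction would be repeated for each momentum sector of the six-z-bond unit cell, but the gap extraction step is identical.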
Figure 2. Flux configuration in the Kitaev model studied here. Each plaquette with orange shading has $w_p = -1$ and each plaquette without shading has $w_p = +1$. Each rectangle including 6 $z$-bonds represents the unit cell, which is marked with thin yellow shading.
In the present study, we focus on the ordered-flux structure shown in Fig. 2 as a simple example. This flux configuration is represented by the set $\{\eta_{r_1}, \eta_{r_2}, \cdots, \eta_{r_6}\} = \{1, 1, 1, -1, -1, -1\}$ and its periodic arrangements. From the previous study [6], it is known that in the isotropic case with $J_x = J_y = J_z$, an excitation gap appears in the Majorana spectrum. Here, we consider the Majorana excitations in the anisotropic Kitaev model under the conditions $J_x + J_y + J_z = 3$ and $J_x = J_y$.
Result
By diagonalizing the Hamiltonian of the Kitaev model with this flux configuration, we obtain the Majorana excitations. The results for the Majorana gap are shown in Fig. 3. When $J_z = 0$, the system reduces to quantum spin chains composed of $x$- and $y$-bonds. In this limit, the flux configuration never affects the Majorana excitations, and $\Delta = 0$. This gapless state is stable against the introduction of the exchange $J_z$, as shown in Fig. 3. Beyond $(J_z)_{c1} \simeq 0.7129$, the Majorana gap appears, takes a maximum at $J_z = 1$, and finally closes at $(J_z)_{c2} \simeq 1.2834$. Thus, we can conclude that this gapped quantum spin liquid is stabilized by the flux configuration, since the Kitaev system in the flux-free state is gapless in this parameter regime, as shown in Fig. 3. When $(J_z)_{c2} < J_z < (J_z)_{c3} \simeq 1.2889$, a gapless quantum spin liquid state is realized, although the region is very narrow, as shown in the inset of Fig. 3. For $J_z > (J_z)_{c3}$, another gapped quantum spin liquid state emerges with $\Delta \simeq J_z$. In this case, the flux configuration plays no essential role and the ground state should be described by the toric code. Therefore, our results suggest that the origins of these two gapful quantum spin liquids are distinct from each other.
Summary
We have studied the anisotropic $S = 1/2$ Kitaev model on the honeycomb lattice with the ordered-flux structure. In this flux configuration, two distinct gapped quantum spin liquids appear as $J_z$ varies. With large anisotropy, the system is well described by the toric code. On the other hand, when the system has small anisotropy, another gapped quantum spin liquid is realized. These two gapped quantum spin liquids are separated by a gapless region. Therefore, they are not adiabatically connected to each other, and thus we conclude that they have different origins. In this sense, one can potentially control the motion of the Majorana excitations using both the flux configuration and the anisotropy in the exchanges [11]. A systematic study of other ordered-flux structures is left for future work. | 1,675.8 | 2022-03-01T00:00:00.000 | [
"Physics"
] |
Global well-posedness in Sobolev space implies global existence for weighted L^2 initial data for L^2-critical NLS
The L^2-critical defocusing nonlinear Schrödinger initial value problem on R^d is known to be locally well-posed for initial data in L^2. Hamiltonian conservation and the pseudoconformal transformation show that global well-posedness holds for initial data u_0 in the Sobolev space H^1 and for data with (1+|x|) u_0 in L^2. For the d=2 problem, it is known that global existence holds for data in H^s and also for data in the weighted space with (1+|x|)^{\sigma} u_0 in L^2 for certain s, \sigma < 1. We prove: if global well-posedness holds in H^s, then global existence and scattering hold for initial data in the weighted space with \sigma = s.
Introduction
Consider the initial value problem for the $L^2$-critical nonlinear Schrödinger equation for $u : \mathbb{R} \times \mathbb{R}^d \to \mathbb{C}$,
$$i\partial_t u + \Delta u = \lambda |u|^{4/d} u, \qquad u(t_0, x) = u_0(x). \tag{1.1}$$
This problem is called defocusing for λ > 0 and focusing for λ < 0. In dimension d = 2, equation (1.1) reduces to the cubic nonlinear Schrödinger equation, $i\partial_t u + \Delta u = \lambda|u|^2 u$, which appears widely as a model equation in physics [9]. This problem is locally well-posed in $L^2$ or in any $H^s$ with s ≥ 0. That is, given initial data $u_0 \in H^s(\mathbb{R}^d)$ with s ≥ 0, there is a local existence time, $T_{lwp}$, and a local in time solution $u : [t_0 - T_{lwp}, t_0 + T_{lwp}] \times \mathbb{R}^d \to \mathbb{C}$ such that u solves (1.1) and $u \in C_t H^s$. For s > 0, $T_{lwp}$ is a decreasing function of the $H^s$ norm of the initial data. An open problem is to prove global well-posedness in $L^2$ in the defocusing case and under an appropriate smallness condition in the focusing case. For the focusing case, it is believed that solutions with $L^2$ norm smaller than the ground state^1 mass $\|Q\|_{L^2}$ do not blow up and in fact scatter. Explicit blow-up solutions with Schwartz class initial data and with the mass of the ground state are known to exist in the focusing case. By finding the optimal constant in the Gagliardo-Nirenberg estimate, [11] proved that $H^1$ initial data with $L^2$ norm less than the ground state mass evolves globally in time. In the defocusing case, $L^2$ solutions are expected to exist globally in time and scatter. Although the $L^2$ norm of u(t) is constant on the local well-posedness time interval, this norm does not control the length of the local well-posedness time $T_{lwp}$ and cannot be used to prove global well-posedness in $L^2$. In $H^1$, this problem has an additional conserved quantity, the energy,
$$E[u] = \int \frac{1}{2}|\nabla u|^2 + \frac{\lambda}{4}|u|^4 \, dx.$$
In the defocusing case, the energy is positive and dominates the $H^1$ norm. Since the local well-posedness time is a function of the $H^1$ norm, at any time the solution persists for a uniformly long local well-posedness time and, hence, globally in time.

^1 The ground state is the unique (up to translations) positive solution of $-Q + \Delta Q = -Q^3$.
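For concreteness, the conservation of the $L^2$ norm invoked above can be checked directly from (1.1); this is a standard computation (sketched here for smooth, decaying solutions, not quoted from this paper):
$$\frac{d}{dt}\int |u|^2\, dx = 2\Re \int \bar u\, \partial_t u\, dx = 2\Re \int \bar u \big( i\Delta u - i\lambda |u|^{4/d} u \big)\, dx = -2\Im \int \bar u\, \Delta u\, dx + 2\lambda\, \Im \int |u|^{2+4/d}\, dx = 0,$$
since $\int \bar u \Delta u\, dx = -\int |\nabla u|^2\, dx$ is real and the last integrand is real.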
Bourgain was the first to prove [2], for the cubic problem in $\mathbb{R}^2$, global well-posedness below the energy threshold $H^1$ by proving global well-posedness for data in $H^s$ in the defocusing case for s > 3/5. The method in [2] involved a decomposition of the data into high and low frequencies with a sharp cut-off function in the Fourier variables. Later, the "I-method" [6] was used to improve this to s > 4/7 for the cubic problem on $\mathbb{R}^2$.
The nonlinear Schrödinger equation (1.1) has a discrete pseudoconformal symmetry. With the transformed variables $\tau = -t^{-1}$ and $y = x/t$, the pseudoconformal transform of u is
$$v(\tau, y) = |t|^{d/2}\, e^{-it|y|^2/4}\, u(t, ty).$$
This is a symmetry in the sense that, if u(t, x) is a solution to the nonlinear Schrödinger equation on $(t, x) \in [t_1, t_2] \times \mathbb{R}^d$, then v(τ, y) is a solution on $\tau \in [-t_1^{-1}, -t_2^{-1}]$, $y \in \mathbb{R}^d$. Throughout this paper, we shall use u, t, and x to refer to a solution, the time variable, and the spatial variable respectively, use v as the pseudoconformal transform of u, and use τ and y as the arguments of v, which will be called the transformed time and space variables. In particular, $\tau = -t^{-1}$.
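As a quick sanity check (a one-line verification under the form of the transform written above, and independent of the exact phase convention, since the phase factor is unimodular), the transform preserves the $L^2$ norm: the change of variables $x = ty$ gives
$$\int_{\mathbb{R}^d} |v(\tau, y)|^2\, dy = \int_{\mathbb{R}^d} |t|^{d}\, |u(t, ty)|^2\, dy = \int_{\mathbb{R}^d} |u(t, x)|^2\, dx,$$
which is the $L^2$ isometry property recorded in Subsection 1.3 below.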
We introduce the space $H^{0,1}$, the weighted space with norm $\|\psi\|_{H^{0,1}} = \|(1+|x|)\psi\|_{L^2}$. Under the pseudoconformal transform, the $H^{0,1}$ norm of u at time t controls the energy of v at the transformed time; this relation is equation (1.5). The energy on the left side of this equation is already known to be independent of τ. This property was used to prove global well-posedness in $L^2$ for initial data $u_0 \in H^{0,s}$ with s > 3/5 in the defocusing case [2]. The proof involves a spatial decomposition analogous to the Fourier decomposition used in proving the $H^s$ global existence result. As a further consequence, [2] also proves scattering, that is, the existence of functions $u_\pm \in L^2$ such that $\lim_{t\to\pm\infty} \|u(t) - e^{it\Delta} u_\pm\|_{L^2} = 0$. This paper establishes that global well-posedness in $H^s$ for (1.1) implies global existence and scattering in $L^2$ for initial data in $H^{0,s}$ for (1.1). Thus, the link between $H^s$ global well-posedness and the evolution properties of $H^{0,s}$ initial data found in the $\mathbb{R}^2$ case in [2] is in fact common to all pseudoconformal, or $L^2$-critical, nonlinear Schrödinger initial value problems (1.1). Proposition 1.1. Assume that the nonlinear Schrödinger equation (1.1) is globally well-posed in $H^s$ (with the additional hypothesis that the initial data has $L^2$ norm bounded by $\|Q\|_{L^2}$ in the focusing case).
If $u_0 \in H^{0,s}$ (and $\|u_0\|_{L^2} < \|Q\|_{L^2}$ in the focusing case), then there is a function $u : \mathbb{R} \times \mathbb{R}^d \to \mathbb{C}$ which solves the nonlinear Schrödinger equation (1.1) for all time. Furthermore, there are functions $u_\pm \in H^{0,s}$ such that $\lim_{t\to\pm\infty} \|u(t) - e^{it\Delta} u_\pm\|_{L^2} = 0$. Based on the $H^s$ global well-posedness result in [6], we obtain as a consequence of Proposition 1.1 that, for s > 4/7, initial data in $H^{0,s}$ evolve globally in time and scatter in $L^2$ under the cubic nonlinear Schrödinger flow on $\mathbb{R}^2$.
Global existence in $L^2$ for initial data in $H^{0,s}$ means that initial data $u_0 \in H^{0,s}$, which is also in $L^2$, evolves as a solution in $L^2$ and that this solution exists for all time. This is significantly weaker than global well-posedness in $H^s$, which means that initial data in $H^s$ continuously evolves in $H^s$, that this solution exists for all time, and that the time evolution map $S_{NLS}(t, 0) : H^s \to H^s$ is continuous. The asymmetry between $H^s$ and $H^{0,s}$ in Proposition 1.1 is a consequence of the local theory. The initial value problem (1.1) is locally well-posed in $H^s$; whereas, in $H^{0,s}$, (1.1) is ill-posed, so global existence in $L^2$ for initial data in $H^{0,s}$ cannot be extended to $H^{0,s}$ global well-posedness.
In the remainder of the introduction, we review $L^2$ and $H^s$ local well-posedness, $H^{0,s}$ ill-posedness, and some properties of the pseudoconformal transform. In Section 2, we prove Proposition 1.1 by showing that, for a solution with initial data in $H^{0,s}$, the pseudoconformal transform is in $H^s$. This is done by taking regularized approximators and showing that their transforms converge in $H^s$ at a particular transformed time $-T_{lwp}^{-1}$. In Section 3, we show that scattering is a consequence of the construction in Section 2.
We use the notation $S_{NLS}(t_2, t_1)$ to denote the nonlinear Schrödinger evolution map from time $t_1$ to time $t_2$, $\mathcal{F}[\cdot]$ for the Fourier transform, and $\Re\alpha$ and $\Im\alpha$ for the real and imaginary parts of α respectively. 1.1. Local well-posedness theory. The local well-posedness theory (see [3], [8] for a review) begins with the presentation of the nonlinear Schrödinger equation as an integral equation through Duhamel's principle,
$$u(t) = \Phi_{u_0}(u)(t) := e^{i(t-t_0)\Delta} u_0 - i\lambda \int_{t_0}^{t} e^{i(t-t')\Delta} |u(t')|^{4/d} u(t')\, dt'. \tag{1.6}$$
To prove that (1.6) has a unique solution, it is sufficient to show that $\Phi_{u_0}$ is a contraction in an appropriate space. This space will be the Strichartz space defined below.
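As a sketch of how the contraction is typically obtained (the shape of the estimate below is the standard one for $L^2$-critical problems; the constants and the precise admissible pair used are generic rather than quoted from this paper), the Strichartz estimates recorded below give, on a time interval I,
$$\|\Phi_{u_0}(u)\|_{S^0(I)} \le C\|u_0\|_{L^2} + C\|u\|_{S^0(I)}^{1+4/d},$$
so that, on a ball of radius $2C\|u_0\|_{L^2}$ in the Strichartz space, $\Phi_{u_0}$ maps the ball to itself and contracts once I is short enough (or the data small enough) that the nonlinear term is dominated by the linear one.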
The $L^2$ Strichartz norm, or simply the Strichartz norm, is
$$\|u\|_{S^0} = \sup_{(q,r)\ \text{admissible}} \|u\|_{L_t^q L_x^r}.$$
The homogeneous $H^s$ Strichartz norm is
$$\|u\|_{\dot S^s} = \|D^s u\|_{S^0},$$
where D is the Fourier multiplier defined by $\mathcal{F}[Df](\xi) = |\xi| \mathcal{F}[f](\xi)$. For s > 0, the $H^s$ Strichartz norm is
$$\|u\|_{S^s} = \|u\|_{S^0} + \|u\|_{\dot S^s}.$$
For an interval I, the spaces $S^0(I)$, $\dot S^s(I)$, and $S^s(I)$ are the spaces with the above Strichartz norms where the t integration is taken over the interval $t \in I$.
With this notation, we record the Strichartz estimates: $\|e^{it\Delta}u_0\|_{S^0} \lesssim \|u_0\|_{L^2}$, together with the dual estimate for the inhomogeneous Duhamel term, and, for s > 0, the analogous bounds in $S^s$ for data in $H^s$. The local well-posedness theorems produce a function u such that (1) u solves the nonlinear Schrödinger equation and (2) its Strichartz norm is finite on the existence interval. Since u can be extended to any interval on which the $L^2$ Strichartz norm is finite, the $L^2$ Strichartz norm must diverge on intervals approaching the maximal forward time of existence.
Let $r(a,t)$ denote a length scale associated with these solutions, and let $\chi_{|x|<r(a,t)}$ be the characteristic function with support on $|x| < r(a,t)$. Given A > 0 and a > 0, let $\psi[A, a]$ be solutions to the linear Schrödinger equation with Gaussian initial data of amplitude A and width parameter a, with the $a_j$ chosen sufficiently large so that conditions (1.11) and (1.12) hold. Given a fixed t, let $r_k = r(a_k, t)$ and $\chi_k = \chi_{|x|<r_k}$. The first condition, (1.11), ensures that, for sufficiently large k, on a length scale of $|x| < r_k$, the function $\psi_k$ dominates all the later $\psi_j$ with j > k. The second condition, (1.12), ensures that $A_j a_j^{s/2}$ grows at least exponentially. It also ensures that, for $a_j^{1/2} > t^{-2}$, in $H^{0,s}$, $\psi_j$ dominates all of the previous $\psi_k$ with k < j. Since Ψ is the sum of the $\psi_k$, and since, at a given time t, for sufficiently large j, on a length scale of $r_j$, $\psi_j$ dominates all the other $\psi_k$, the $H^{0,s}$ norm of Ψ(t) is bounded below by arbitrarily large numbers and must diverge. A similar sequence of Gaussian initial data shows that the nonlinear Schrödinger equation is also ill-posed in $H^{0,s}$.
This construction follows from the closeness of the nonlinear Schrödinger and linear Schrödinger evolutions for small initial data and from the existence of linear solutions with arbitrarily fast $H^{0,s}$ growth.

If the $L^2$ norm of the initial data is sufficiently small, the difference between the linear Schrödinger and nonlinear Schrödinger evolutions is small. From Duhamel's principle and an extension of the local well-posedness theory, it is known that there is a δ′ such that, if $\|u_0\|_{L^2} \le \delta'$, then u is defined for all time and remains close to the linear evolution. For the linear Schrödinger solutions, the notation from the previous lemma will be used. In addition, $u[k]$ will denote the nonlinear Schrödinger evolution of $u_0[A_k, a_k]$, with $A_k$ decreasing to zero and $a_k$ increasing to infinity, but with rates to be chosen. The index k will be chosen sufficiently large so that, for i > k, the corresponding smallness conditions hold. As in Lemma 1.6, the $H^{0,s}$ norm can be estimated by localizing on a length scale of $r_k$.
1.3. The pseudoconformal transform and Strichartz norms. The pseudoconformal transform is a symmetry of both the linear Schrödinger equation and the pseudoconformal nonlinear Schrödinger equation (1.1) and is also an isometry on $L^2_x$ and the Strichartz admissible $L^q L^r$ spaces.
$u_0$ is given in $H^{0,s} \subset L^2$, and $u_0'$ and $u_0''$ in $H^{0,s} \cap H^1$ are then chosen in an $H^{0,s}$ neighborhood of $u_0$.
(4) Up to a reflection, the pseudoconformal transform is its own inverse. These facts may be validated through explicit calculations.
Global existence for initial data in $H^{0,s}$
The goal is to prove global existence for initial data in $H^{0,s}$ from the assumption that there is global well-posedness in $H^s$.
Heuristically, initial data $u_0 \in H^{0,s}$ at $t_0 = 0$ can be transformed to initial data $v_0 \in H^s$ at $\tau_0 = -\infty$. Under the $H^s$ global well-posedness hypothesis, v can then be defined for all time, and u can be defined for all time by the inverse pseudoconformal transform. To make this heuristic rigorous, $u_0$ can be evolved to $u(T_{lwp})$ and then pseudoconformally transformed to v. Following this, it is sufficient to show that $v(-T_{lwp}^{-1})$ is in $H^s$ in order to apply the $H^s$ global well-posedness hypothesis.
In terms of the nonlinear Schrödinger evolution map, $S_{NLS}(t_2, t_1)$, which was introduced earlier, the map under consideration is $F_t : u_0 \mapsto v(-T_{lwp}^{-1})$, obtained by evolving $u_0$ with $S_{NLS}(T_{lwp}, 0)$ and then applying the pseudoconformal transform. Since the pseudoconformal transform commutes with the nonlinear Schrödinger evolution, this map can also be constructed in a different way, which is illustrated in Figure 1. By the $L^2$ local well-posedness Theorem 1.3 and the properties of the pseudoconformal transform, in an $L^2$ neighborhood of $u_0$, F is continuous with respect to the $L^2$ norm.
To prove Proposition 1.1, it is sufficient to show that F can be restricted to $F : H^{0,s} \to H^s$. This is done by initially restricting to regularized data in $H^{0,s} \cap H^1$, showing that each of the three steps of $F_t$ is continuous with respect to the regularized data, and then removing the regularization. $H^{0,s} \cap H^1$ is a useful auxiliary space because it is preserved by the nonlinear Schrödinger evolution and pseudoconformally transforms to $H^s$.
Because the $H^s$ local well-posedness Theorem 1.4 uses $L^2$ Strichartz norms to control the divergence of nearby solutions, if u′ and u″ start near u in $L^2$, their separation in $H^{0,s} \cap H^1$ cannot grow by more than a constant factor. Similarly, if v′ and v″ start near v in $L^2$, then their $H^s$ separation cannot increase by more than a constant factor. Thus the divergence of the approximators from each other is controlled if they start in a sufficiently small $L^2$ neighborhood of $u_0$.
This $L^2$ neighborhood of $u_0$ is illustrated by the oval in the left diagram in Figure 1. It is taken to be a ball of radius δ, and this is the δ which appears in the following subsections. The value of δ is dictated by the $H^s$ local well-posedness Theorem 1.4.
Since $F = F_t$ is independent of t, it is possible to take the infimum in t of the $H^s$ norm estimates for $v'(-T_{lwp}^{-1})$ and $v''(-T_{lwp}^{-1})$. This eliminates the dependence on the $H^1$ regularization and corresponds to the original heuristic of transforming $H^{0,s}$ data at time $t_0 = 0$ to $H^s$ data at $\tau_0 = -\infty$. As a result, $F : H^{0,s} \cap H^1 \to H^s$ is continuous with respect to the $H^{0,s}$ norm alone and extends uniquely to $F : H^{0,s} \to H^s$ in a neighborhood of $u_0$. As a result, and contrary to the image in Figure 1, $v(-T_{lwp}^{-1})$ is in $H^s$, not merely $L^2$. In Subsection 2.1, it will be shown that the nonlinear Schrödinger evolution is continuous in $H^{0,s} \cap H^1$. In Subsection 2.2, real interpolation will be used to show that the pseudoconformal transform takes $H^{0,s} \cap H^1$ to $H^s$ with a t-dependent coefficient on the $H^1$ part of the norm. In Subsection 2.3, the $H^s$ local well-posedness theorem will be used to show that $H^s$ data evolves continuously in $H^s$ from transformed time $-t^{-1}$ to transformed time $-T_{lwp}^{-1}$. In Subsection 2.4, the infimum in t will be taken to eliminate the $H^1$ dependence. The relevant norm is bounded by $2\delta + \frac{1}{2}\delta_3$. The function u′ can now be taken as the solution to estimate and from which u″ is a perturbation. If $2\delta + \frac{1}{2}\delta_3 \le \delta_3$, then the $H^s$ local well-posedness Theorem 1.4 provides the estimates on the growth and separation in $H^1$.
Differentiation in time and the Cauchy-Schwarz estimate give the growth of the weighted norms.
This proves the claim. In this section, it is shown that the pseudoconformal transform takes a function $u(t) \in H^{0,s} \cap H^1$ to $v(-t^{-1}) \in H^s$. This is done by interpolation between $L^2$ and $H^{0,1} \cap H^1$ using the K method of real interpolation. To begin, the arguments for $L^2$ and $H^{0,1} \cap H^1$ are presented. The $L^2$ result is part of Theorem 1.8. The $H^{0,1} \cap H^1$ result leads to equation (1.5), which was stated in the introduction.
where it is understood that the norm on the right is infinite (and the inequality trivial) if u(t) does not belong to the appropriate space.
Inequality (2.2) follows by direct computation with a change of variables. This computation is simplified by recalling the notation $\tau = -t^{-1}$ and $ty = x$; the resulting integrand is bounded by $|ty\, u(t,ty)|^2\, t^d + 2|\nabla_y u(t,ty)|^2\, t^d$ with respect to dy. The K method of real interpolation is now summarized from [1]. For $a \in A_0 + A_1$, let
$$K(\lambda, a) = \inf_{a = a_0 + a_1} \big( \|a_0\|_{A_0} + \lambda \|a_1\|_{A_1} \big). \tag{2.3}$$
The s interpolation norm of $a \in A_0 + A_1$ is defined by the following, if this norm is finite:
$$\|a\|_{s,(A_0,A_1);K} = \Big( \int_0^\infty \big( \lambda^{-s} K(\lambda, a) \big)^2\, \frac{d\lambda}{\lambda} \Big)^{1/2}. \tag{2.4}$$
Since only the K method of interpolation will be introduced, the K index in the norm will be omitted: $\|a\|_{s,(A_0,A_1)} = \|a\|_{s,(A_0,A_1);K}$. If $a \in A_0 \cap A_1$, then
$$\|a\|_{s,(A_0,A_1)} \le \|a\|_{A_0}^{1-s} \|a\|_{A_1}^{s}. \tag{2.5}$$
The interpolation space $(A_0, A_1)_s$ is defined as the set of $a \in A_0 + A_1$ for which $\|a\|_{s,(A_0,A_1)}$ is finite. There are some technical issues, but since only spaces $A_0$ and $A_1$ which are subsets of $L^2$ will be considered, $(A_0, A_1)_s$ will be well-defined, a Banach space, and the closure of $A_0 \cap A_1$. The K method of real interpolation is an exact interpolation method of exponent s. It is known that $(L^2, H^{0,1})_s = H^{0,s}$ and $(L^2, H^1)_s = H^s$.
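To illustrate how such bounds are produced (a standard one-line computation under the exponent-2 convention assumed in (2.4) above), the product estimate (2.5) follows by splitting the integral at $\lambda_* = \|a\|_{A_0}/\|a\|_{A_1}$: since $K(\lambda, a) \le \min(\|a\|_{A_0}, \lambda\|a\|_{A_1})$ for $a \in A_0 \cap A_1$,
$$\|a\|_{s,(A_0,A_1)}^2 \le \int_0^{\lambda_*} \lambda^{2(1-s)} \|a\|_{A_1}^2\, \frac{d\lambda}{\lambda} + \int_{\lambda_*}^\infty \lambda^{-2s} \|a\|_{A_0}^2\, \frac{d\lambda}{\lambda} = \Big( \frac{1}{2(1-s)} + \frac{1}{2s} \Big) \|a\|_{A_0}^{2(1-s)} \|a\|_{A_1}^{2s},$$
which recovers (2.5) up to a constant depending only on s (the constant can be absorbed by normalizing the norm).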
By interpolating the results of Lemma 2.3, it follows that the pseudoconformal transform is bounded from the interpolation space $(L^2, H^{0,1} \cap H^1)_s$ to $H^s$.
Unfortunately, because of the infimum in (2.3), it is not clear that $\|a\|_{s,(L^2, H^{0,1}\cap H^1)} = \|a\|_{s,(L^2,H^{0,1})} + \|a\|_{s,(L^2,H^1)} = \|a\|_{H^{0,s}} + \|a\|_{H^s}$, although we expect this is true. We will instead prove the simpler result that $\|a\|_{s,(L^2,H^{0,1}\cap H^1)} \lesssim \|a\|_{H^{0,s}} + \|a\|_{H^1}$.
To ensure that the t-dependent coefficients only appear on the $H^1$ norm, the t dependence is kept in the interpolation calculations rather than being estimated by (2.5). The resulting bound controls $\|v(-t^{-1})\|_{H^s}$ by $C_{1,s}\|u(t)\|_{H^{0,s}}$ plus a t-dependent multiple of $C_{2,s}\|u(t)\|_{H^1}$, where the constants $C_{1,s}$ and $C_{2,s}$ depend only on s and d.
Proof. Let $u : \{t\} \times \mathbb{R}^d \to \mathbb{C}$ with u(t) in Schwartz class. Using the K method, it will be shown that the $H^s$ norm is dominated by the $H^{0,s}$ and $H^1$ norms. The K method of interpolation involves taking an infimum over all possible decompositions of u. This infimum is dominated by any particular choice of decomposition. The decomposition which is optimal for balancing $L^2$ with $H^{0,1}$ will be used. This will give the $H^{0,s}$ part of the estimate. There is no reduction in the regularity required for the estimate, since this decomposition ignores the $H^1$ component. Using Lemma 2.3, $K(\lambda, C[u])$ can be estimated in terms of the $L^2$, $H^{0,1}$, and $H^1$ norms of u. (Note that in this proof, $u_0$ refers to part of the interpolation decomposition in (2.3), not the initial data.) In the proof that $(L^2, H^{0,1})_s = H^{0,s}$ [1], it is shown that there is an explicit optimal decomposition $u = u_0 + u_1$ for $\|u_0\|_{L^2}^2 + \lambda^2 \|u_1\|_{H^{0,1}}^2$.
This decomposition will be used for λ < 1 to bound K(λ, C[u]) from above.
This decomposition can be used to bound the $H^s$ norm of $v = C[u]$ for λ < 1. The decomposition $u_0 = u$ and $u_1 = 0$ will be used for λ ≥ 1.
At this stage, the first term is evaluated by the substitution $\lambda'^2 = (1 + x^2)\lambda^2$ and Fubini's theorem. The other two pieces are estimated by direct integration, using the assumption t < 1.
Since the pseudoconformal transform preserves Strichartz admissible norms, v is controlled by u. (2) If u′ and u″ are solutions to the nonlinear Schrödinger equation on $[0, T_{lwp}]$ with initial data $u_0' \in L^2$ and $u_0'' \in L^2$ respectively, with $\|u_0 - u_0'\|_{L^2} < \delta$ and $\|u_0 - u_0''\|_{L^2} < \delta$, and with a $t \in [0, T_{lwp}]$, then the corresponding transforms v′ and v″ remain close. Proof. From the $L^2$ local well-posedness Theorem 1.3, $T_{lwp}$ can be chosen small enough so that the Strichartz norm of u is less than half of the $\delta_3$ from the $H^s$ local well-posedness Theorem 1.4. In this case, by the $L^2$ local well-posedness Theorem 1.3, the corresponding norm of u′ is bounded by $2\delta + \frac{1}{2}\delta_3$. The function u′ or v′ can now be taken as the solution to estimate. Since the pseudoconformal transformation preserves the Strichartz admissible norms, v′ obeys the same bound. Thus, if $2\delta + \frac{1}{2}\delta_3 \le \delta_3$, then, by the $H^s$ local well-posedness Theorem 1.4, $F_t$ takes $H^{0,s} \cap H^1$ to $H^s$. Furthermore, an explicit t dependence of the $H^s$ norm will be found in Lemma 2.7 and then removed in Proposition 2.8 to show that F takes an $H^{0,s}$ neighborhood of $u_0$ into $H^s$.
We will first show that, for $u_0 \in H^{0,s}$, if we restrict attention to initial data which is both in an $H^{0,s}$ neighborhood, N, of $u_0$ and in $H^1$, then F maps this initial data in $N \cap H^1$ to $H^s$: (1) if u′ is a solution to the nonlinear Schrödinger equation with initial data $u_0' \in H^{0,s} \cap H^1$, then its transform at time $-T_{lwp}^{-1}$ lies in $H^s$; (2) if u′ and u″ are solutions to the nonlinear Schrödinger equation with initial data $u_0' \in H^{0,s} \cap H^1$ and $u_0'' \in H^{0,s} \cap H^1$ respectively and with $\|u_0 - u_0'\|_{L^2} < \delta$ and $\|u_0 - u_0''\|_{L^2} < \delta$, then, for all $t \in (0, T_{lwp}]$, their transforms remain comparably close. In other words, there is an open set $N \subset H^{0,s}$ containing $u_0$ for which $F_t(N \cap H^1) \subset H^s$ and $F_t$ is continuous with respect to the $H^{0,s} \cap H^1$ topology.
Proof. Conditions on δ and $T_{lwp}$ will be found. To begin, assume $T_{lwp} < 1$.
Since $u_0 \in H^{0,s}$ and $u_0'$ and $u_0''$ are $H^{0,s} \cap H^1$ approximations, by Lemma 2.1, if δ is less than the δ in Lemma 2.1, then, for $t \in [0, T_{lwp}]$, the approximating solutions remain close in $H^{0,s} \cap H^1$. By Lemma 2.4, the linearity of the pseudoconformal transform and the triangle inequality, the corresponding transforms remain close in $H^s$. Since $u_0$ gives a solution with initial data in $H^{0,s}$, $u_0'$ and $u_0''$ are $H^{0,s} \cap H^1$ approximations, and v′ and v″ are in $H^s$ at transformed time $-t^{-1}$, if $T_{lwp}$ and δ are less than the corresponding values in Lemma 2.5, then the conclusions of that lemma apply. Since $F_t : u_0 \mapsto v(-T_{lwp}^{-1})$ and the set $\|u_0 - u_0'\|_{L^2} < \delta$ is open in $H^{0,s}$, this set is the N given in the statement of the theorem. By (2.7), $F_t$ is continuous from $N \cap H^1$ to $H^s$. The infimum in t can be taken when estimating the $H^s$ norm of $v(-T_{lwp}^{-1})$; since $F_t$ is independent of t, this eliminates the $H^1$ dependence. Eliminating the $H^1$ dependence shows that F is continuous from $H^{0,s}$ to $H^s$. If $u_0$ is approximated in $H^{0,s}$ by a sequence of regularized initial data, then, since $H^{0,s} \cap H^1$ is dense in $H^{0,s}$ and F is continuous with respect to the $H^{0,s}$ norm, the images of the sequence converge in $H^s$. Since the nonlinear Schrödinger evolution and the pseudoconformal transform both preserve the $L^2$ norm, if $\|u_0\|_{L^2} < \|Q\|_{L^2}$, then $\|v(-T_{lwp}^{-1})\|_{L^2} < \|Q\|_{L^2}$. Therefore, in the defocusing case, from the assumption of global well-posedness in $H^s$, v extends to a function $v : \mathbb{R} \times \mathbb{R}^d \to \mathbb{C}$. For t > 0, u can be defined by $u = C^{-1}[v]$. By Theorem 1.8, this extension of u is a solution to the nonlinear Schrödinger equation on $[0, \infty)$. For t < 0, all the arguments of the paper can be reproduced to define u on $(-\infty, 0]$. Thus, u is a solution to the nonlinear Schrödinger equation, has initial data $u_0$, and is defined for all t. In the focusing case, since the nonlinear Schrödinger evolution and the pseudoconformal transform both preserve the $L^2$ norm, if $\|u_0\|_{L^2} < \|Q\|_{L^2}$, then $\|v(-T_{lwp}^{-1})\|_{L^2} < \|Q\|_{L^2}$ and the same argument can be applied with the additional $L^2$ norm hypothesis.
Remark 2.9. The process of taking u on $[0, T_{lwp}]$, applying the pseudoconformal transform to get v on $(-\infty, -T_{lwp}^{-1}]$, and then extending v globally in time provides a function v which is defined for positive time. We remark that there is no clear relation between v at positive time and u at negative time. In some sense, v at positive time corresponds to the evolution of u "beyond infinity", and, unless it is known a priori that the scattering states $u_+$ and $u_-$ satisfy a suitable compatibility relation, there is no reason to believe that v at positive transformed time corresponds to u at negative time. Proof. The construction in the proof of Proposition 2.8 shows that both u and v exist globally, and hence u scatters forward in time by Lemma 3.1. As noted in Remark 3.2, the same occurs backwards in time. This establishes the existence of $u_\pm \in L^2$. We now introduce linearly advanced and retarded versions of u and v. These have two time variables: one to record the time variable associated with the nonlinear Schrödinger evolution, and one for the advancement or retardation by the linear Schrödinger evolution.
A scattering lemma
The function φ(t, ·) is a linear solution with initial data u(t) at time t′ = t. The function ψ(τ, ·) is the analogous function with initial data v(τ) at time τ′ = τ. Since $v(-t^{-1})$ is the pseudoconformal transform of u(t) at time t, and the pseudoconformal transform preserves the linear Schrödinger evolution, the pseudoconformal transform of φ(t, ·) with respect to the spatial variable and the second time variable is $\psi(-t^{-1}, \cdot)$.
Denoting a solution to the linear Schrödinger equation by φ and its pseudoconformal transform by ψ, it is known that ψ(0) is the Fourier transform of φ(0). If $u_0 \in H^{0,s}$, then, by the construction in Section 2, $v(\tau) \in H^s$. Since the linear Schrödinger evolution preserves the $H^s$ norm, for all τ′, $\psi(\tau, \tau') \in H^s$ and, in particular, $\psi(\tau, 0) \in H^s$. This proves that the linearly retarded version of u evolves in $H^{0,s}$.
The same argument holds as t → −∞, with the usual remark on the difference between the pseudoconformal transforms for positive and negative times.
Errata
In this paper, we consider the initial value problem for the $L^2$-critical nonlinear Schrödinger equation for $u : \mathbb{R} \times \mathbb{R}^d \to \mathbb{C}$:
$$i\partial_t u + \Delta u = \lambda|u|^{4/d} u, \qquad u(t_0, x) = u_0(x), \tag{3.1}$$
with initial data $u_0$ in $H^{0,s}$, the weighted $L^2$ space with norm $\|\psi\|_{H^{0,s}} = \|(1+|x|)^s \psi\|_{L^2}$. | 6,812.4 | 2005-07-29T00:00:00.000 | [
"Mathematics"
] |
A path-dependent analysis of the effect of location on the development of new universities
This article examines the effect of location on the development of new universities. The study was conducted in seven new higher education institutions (HEIs) established in India during 1996–2008. I collected the data by conducting semi-structured interviews with 73 faculty members in the HEIs and from official documents, media reports and opinion pieces about the HEIs. Using the conceptual framework of path dependency, I investigated the tensions and challenges faced by the HEIs in their initial years. I find the placement of the HEIs in their respective locations to be a contingent event that can make the development of the HEIs path dependent. The initial conditions and decisions of the HEIs were influenced by their location and led to reactive sequential events in their initial years, with effects that were hard to shake off, making their development path dependent. I show that, having to develop their infrastructure while constrained by resources, the HEIs started their academic programmes first, followed by their research activities and their outreach and regional engagement.
Introduction
In the past few decades, several countries, such as China, India, Russia and Singapore, have established new universities at the top end of their stratified higher education systems. These universities are aspirational, entrepreneurial and innovative, and they have access to the essential resources and networks to become leading national or global universities (Altbach et al. 2018; Marginson 2016). The rise of such universities has gained so much attention that global rankings like the QS and THE World University Rankings have specialised rankings for universities that are younger than 50 years. This article aims to provide an understanding of the development of such new universities in their initial years.
The main question that I investigate in this article is: how does the location of new universities shape their development in the initial years? The initial years of new universities are critical for them to chart new paths, generate momentum and cement their position in the national higher education system. During this period, they take their most important decisions, which lay the foundations for entering into higher education competition and developing their reputation (Perkin 1969; Stensaker and Benner 2013). They can achieve a lot more during this period than their established counterparts can in the same period (Altbach et al. 2018). Their location is significant to this development since, in the absence of certain specific characteristics of the local environment, their development can become ritualistic, making them more ambiguous and eventually dislodging them from their starting position (Stensaker and Benner 2013). Thus, the eventual positioning of new universities can be shaped by their considerations of their location in the initial years (Brennan et al. 2018; Harris and Holley 2016).
A small number of studies on new universities claim that their location and considerations for the surrounding region influence their uniqueness (Huisman et al. 2002), trajectory (Stensaker and Benner 2013) and pace of development (Altbach et al. 2018). Other studies (Goddard and Puukka 2008; Weerts 2014), although not specific to new universities, provide credence to these claims by showing that the local context can help universities in image building, attracting faculty and increasing student enrolment, and by providing funding opportunities and social capital support. However, the nature of development of new universities in the initial years varies from that of established ones. Without the inertia and rigid academic boundaries of established universities, they experiment with new ideas of teaching and research, launch innovative academic programmes and develop distinctive features (Altbach et al. 2018). Their initial years are thus a period of change and of building up momentum, and are noticeably different from the period of sustaining the change in a steady state.
The development of new universities in the initial years is steered by the decisions and activities of a small number of individuals instead of a coordinated strategic approach (Clark 1998). Their development during this period can be described as "evolutionary", that is, emergent or unintentional (Mezias and Glynn 1993). Such development is summoned from within, not so much a response to external demands and expectations, and precedes the development of culture, reputation and management ideals. It has unclear boundaries and is individually motivated, opportunistic and interactive in development. Thus, I analyse the development of new universities as an evolutionary path, on which they traverse from one point, starting from scratch, to another. These decisions and activities get institutionalised over time as policies, structures and management practices. I investigate how the location of new universities influences their decisions and activities and shapes their evolutionary development path.
The context for this study: the Institutes of National Importance in India
The Institutes of National Importance (INIs) are a cadre of national-level universities in India that are established and funded by the government and specialise in a single disciplinary area of study. Started in the 1950s, the established INIs are amongst the few research-intensive universities in India and have grown to be highly reputed and selective. They attract the best faculty and students in their respective disciplines and are often the only universities from India to feature in global higher education rankings. The government rapidly increased the number of INIs from nearly 10 in 1995 to 101 in 2018. One of the key features of this expansion is that many of the new INIs were situated in rural and semi-urban locations. This study was conducted on the new INIs in India.
Single disciplinary universities, like the INIs, have gained popularity in recent times in Asian countries, such as Singapore, Hong Kong and South Korea, as a model for establishing new universities. However, the growth of the INIs differs from that of their Asian counterparts on a few dimensions. First, unlike their Asian counterparts that have grown with the combined support of private philanthropy and government (Altbach et al. 2018), the new INIs are entirely funded and governed by the government. Second, the INI model has been replicated across several disciplines, such as management, medicine and technology, accommodating the idiosyncrasies of each discipline. Third, there are multiple INIs in each discipline in various locations. They follow a common admission process and comply with several standardised norms and policies prescribed by the government. However, each INI is governed and managed independently and has its own reputation and positioning in Indian higher education. Thus, the new INIs are likely to experience the legacy of their established and more reputed counterparts. Given the above, this study can provide insights into the growth of new single disciplinary universities as a model of expansion in other higher education systems.
Using path dependency theory for analysis
Path dependency is characterised by nonergodicity, an inability to shake free of history (Martin and Sunley 2006). Its identification involves tracing back the current state to a set of historical events and decisions and showing how these events and decisions were themselves contingent occurrences (Garud et al. 2010; Martin and Sunley 2006). Contingent occurrences are those that were not expected to take place and cannot be explained based on theoretical conditions or what is already known about the institution. They can only be explained by accident or chance and have substantive lingering effects on the future paths of institutions (Mahoney 2000; Sydow et al. 2009).
One type of analysis of path dependency considers sequences of events as self-reinforcing, i.e. initial moves in a direction induce further movements in the same direction such that there are increasing returns and it becomes difficult to reverse the direction. These paths have critical junctures, that is, the adoption of a particular institutional arrangement from amongst two or more alternatives (Martin 2010). The other type of analysis of path dependency involves reactive sequences, which are chains of temporally ordered and causally connected events. In a reactive sequence, each event is both a reaction to antecedent events and a cause of subsequent events. Whereas self-reinforcing sequences reinforce early events, reactive sequences transform or may even reverse early events. Institutions that are path dependent can get locked in to a particular trajectory or situation that is sub-optimal or inefficient (Sydow et al. 2009).
Universities can be considered path dependent in the usual sense that their directions for future development are foreclosed or inhibited by decisions taken in the past. Studies on path dependency (Garud et al. 2010; Martin and Sunley 2006; Sydow et al. 2009) indicate that various effects, such as learning effects (i.e. accumulated skills and expertise of faculty members), complementary effects (i.e. synergies between the activities of the region and those of the universities), and coordination effects (i.e. coordination between the university and the region that develops specific routines and rules), may result in universities getting locked in with their region. However, for new universities with less academic standing, getting locked in with the region will lead to strategic inertia with few alternative development paths (Stensaker and Benner 2013). Krücken (2003) finds that the discourse on university reforms is not matched at the level of organisational practices and calls for using path dependency in higher education to explain the continuity of practices within universities. Other studies apply path dependency to explore specific aspects of higher education such as student representation (Chirikov and Gruzdev 2014), knowledge transfer to industries (Delfmann and Koster 2012) and policy formulation (Feeney and Hogan 2017).
I use path dependency as the conceptual framework to examine the decisions and activities of the new INIs and determine the influence of location on their development path. Instead of examining the effect of location on the various aspects of their development, I examine the cascading effect of location in their subsequent development in a path-dependent manner through the following questions. First, I analyse if the placement of the new INIs in their respective locations can act as a contingent event. Second, I identify the initial conditions of the INIs arising due to their placement in their respective locations. Third, I analyse if and how these initial conditions can set off a series of sequential or self-reinforcing events for the new INIs that are hard to undo.
Research design
I chose a research design that allowed me to focus on identifying and tracing back the key decision arenas of the new INIs pertaining to their three main functions: teaching, research, and outreach. I selected a qualitative design with multiple cases, which is suited to studies where contextual conditions are relevant to the phenomenon under study or where the boundaries between the phenomenon and context are not clear (Yin 2017). It has gained acceptance in many studies, such as this one, where universities in their entirety are analysed. In such studies, including four to ten case studies allows for exploring differences between cases and replicating the findings from one case across cases, and provides a good basis for generalisation (Eisenhardt 1989).
I used three criteria to shortlist the INIs to be approached for participation. The first criterion was the age of the INIs. Although the initial period of new universities depends on the people involved and the dynamic interplay of external environments with internal resources and challenges, most scholars have suggested this to be around 10 to 15 years (Perkin 1969; Clark 2003). Thus, I longlisted 63 INIs that were established between 1996 and 2008 and were therefore aged between 8 and 20 years at the time of this study. The second criterion was the disciplinary focus of the INIs. I further shortlisted the INIs in four disciplines (architecture and planning, management, science, and technology) since institutions in these disciplines are likely to be entrepreneurial, include a broad range of teaching and research activities, and engage with their external stakeholders (Clark 1998). The third criterion was to include INIs representing the various types of region (i.e. urban, semi-urban and rural) in which they were situated, so as to gather evidence from cross-regional and multi-site fieldwork. This allowed for data triangulation by collecting data from different contexts and ensured the validity and reliability of the findings (Pinheiro et al. 2012; Peck 2003; Hudson 2003). Based on the above criteria, I found 26 INIs to be eligible for inclusion in this study. I sent requests for participation to the Directors (the heads of the INIs) of these INIs, of which seven agreed to participate. I refer to them as HEIs in the remainder of this article. Each HEI represented a unique combination of age, disciplinary focus and location (see Table 1).
Data collection and method of analysis
I collected the required data from three sources. First, a total of 185 faculty members were approached to opt in to a semi-structured interview, of which 73 agreed to participate (Fig. 1). Given the higher likelihood of senior faculty members participating in institute development and external engagement activities (Demb and Wade 2012; Glass et al. 2011), I approached only those faculty members who held Associate Professor or above appointments and/or a senior administrative role (see Figs. 1 and 2). 1 I developed and followed an interview schedule to uncover the sequence of events in the HEIs and the conditions and decisions preceding and following them. I asked questions to understand the alternative options that were considered in their development path and the rationale for choosing one over another.
Second, I obtained the vision or mission documents, annual plans and reports of the HEIs either from their websites or by writing to their concerned authorities. Third, in order to understand the macro-environment related to the expansion of INIs, I obtained the following documents from publicly available sources corresponding to the overall expansion of the HE system in India during the same period: government legislations and planning documents for higher education in India, official documents pertaining to the expansion of INIs, and media articles and opinion pieces.
I used a descriptive coding technique to identify the major decision arenas of the HEIs since their establishment. In the second coding cycle, I adopted axial coding to describe the properties and dimensions of these major decision arenas, such as the people involved in these decisions, their motivations and the dilemmas faced in taking such decisions, and the institutional rationale and logic of such decisions. I analysed the aspects of contingency and sequential or self-reinforcing events, i.e. how each decision or event of the HEIs was linked to the subsequent one.
The placement of the HEIs as a contingent event
The first decision that I analysed was the location of the HEIs. The reports of various government committees on revamping higher education in India made no mention of deciding the location of the new INIs. I could not find any analysis or conceptual framework in the literature that could explain the location of the HEIs. Besides, none of the documents pertaining to the HEIs had any detail about the process or criteria used to decide their location. The parliamentary debates indicated that the location of the HEIs was subject to political negotiations. One of the MPs commented as below: After bifurcation of Bihar [one of the states in India that was divided into two states: Bihar and Jharkhand], almost all institutes of excellent education and research went into Jharkhand. Establishment of IISER in Bihar will be a step towards minimisation of prevailing regional imbalance in the distribution of educational institutes across the country. (Parliamentary Debate, MP, 27 November 2006) A member of the Committee set up for expanding the IITs also emphasised that the final decision about the location was based on "both the quality of the college and the political push and pull". Having an institution of the cadre of an INI in their constituency gave the MPs the opportunity to showcase it as their contribution to the location, as is reflected in the narrative of the Foundation Stone laying ceremony of one of the IITs: The Union Minister of State of External Affairs emphasised the efforts put in by him and other leaders in the sanctioning of the IIT. He also expressed hope that the IIT will open up opportunities for receiving quality technical education in the state.
These were reinforced by the interview participants of the HEIs, as is seen in the following comments: "Influential persons [from this location] in 1990s wanted that it should not go elsewhere. They persuaded the state, central government" (P 43, IIM-SO); "In the beginning, there was a proposal to have four IISERs. Then the state government came in an added one more [in this location]" (P 20, IISER-SN).
The above comments indicate that the decision about the location of the HEIs was not based on the higher education context or deliberations around the appropriateness of the location to host the HEIs. Instead, it was a consequence of the political negotiation between the government and the representative MPs of the states. Such negotiations are likely to depend on the relationship between the ruling parties in the states and at the centre and on the potential for the politicians and political parties to leverage the HEI for patronage and prestige (Lall and House 2005). Thus, the decision about the location of the HEIs was random and exogenous to the context of the HEIs. It was a contingent occurrence in the sense that it could not be explained based on theoretical conditions or what was already known about the HEIs.
Consequently, many of the new INIs were situated in rural or semi-urban regions, which varied in their appropriateness to host a new INI. The (urban) region of IIT-UN had a vibrant technology industry making it suitable for the HEI to forge industry collaborations. Similarly, the establishment of IISER-SN coincided with a regional development plan to establish several national-level HEIs that quickly turned the region into an educational hub. However, the other HEIs did not have any such inherent locational advantage. Such heterogeneity in the higher education environment of the region in which the HEIs were situated further accentuated the uncertain local conditions that they found themselves in.
Initial conditions due to contingent placements of the HEIs
The initial contingent event in path dependency may not be a single well-defined decision or event but can be a set of conditions that act as an impetus to stimulate further action. I discuss below three initial events or conditions of the HEIs that arose due to their contingent placements in their respective locations. These initial conditions are distinguishable from the subsequent events or actions and cannot be considered causal determinants (Sydow et al. 2009). However, the contingent placement of the HEIs and these initial conditions imply that if the HEIs had been located elsewhere, their current state would not have been reached, thus resulting in lasting and unique effects of the location on their future paths (Garud et al. 2010).
The first condition was the uncertain operational situation that the HEIs found themselves in. The HEIs were planned on campuses of nearly a hundred acres of land, which was difficult for the states to find within the main city locations. Thus, the land allocated to the HEIs was in peri-urban locations with poor connectivity to the main city. IIT-SN and SPA-SN went through a troublesome period to obtain legally hassle-free land. All the HEIs experienced significant delays in the construction of their campuses; of the seven HEIs in the study, only IIM-SO had completed its campus construction. In responding to criticism for such delays, the government cited various reasons, including delays in the handing over of land by the states, the preparation of master plans and the appointment of architects. As a result, the initial funding allocated to the HEIs was revised to meet cost escalation and unplanned expenditures.
Such uncertainty was further exacerbated due to the lack of preparedness of the local stakeholders (e.g. state government officials, and urban and local bodies) to host an institution of the cadre of INI. Many interview participants indicated that since the local stakeholders were not consulted before deciding the location, the HEIs had to negotiate with them for the required operational resources and support. The comments below describe the contrasting experiences of two HEIs with their local stakeholders: As far as State Government is concerned IIT is a very big deal. They really look up to IIT and bend over backwards to help us. The only thing they cannot do is give us money. (P 67, IIT-RN) Being a national institute, [state] ministry people were not giving any weightage to us. When you have a Director of the institute who is not answerable to the state government, it becomes a problem. …It was always a friendly relationship; whenever we invited them [the state government officials], they came. (P 53, SPA-SN) Hence, the HEIs faced several procedural hurdles in starting their operations, as described in the following comments: "There have been continuous requests going [to the local stakeholders for support] but these were out of control otherwise we would have taken much pace" (P32, IIT-SN); "The new institute, the new campus if you see, we are struggling with everything right now. There were debates on land, now there is no water supply; there is no electricity, for everything there is negotiation with the government and localities" (P7, IIM-SN). These experiences indicate that the location and surrounding region were caught off guard about hosting the HEIs, and their preparedness and willingness to support establishing them differed significantly.
The second event was the hiring of the initial group of faculty members in the HEIs. Several factors related to the location of the HEIs, such as proximity to parents, opportunities for spouses to work, education for children and other lifestyle-related preferences, along with the state of campus development, were the main considerations for faculty members joining the HEIs in the initial years. The Director of IIT-RN described the situation as "On an average, one or two faculty members leave us every year due to lack of work opportunities for their spouses". Thus, the location of the HEIs was instrumental in the initial group of faculty members joining the HEIs. This initial group was subsequently instrumental in shaping the future paths of the HEIs in various ways, including the hiring of subsequent faculty members and the shaping of various institutional developmental aspects. The people involved during the initial period can set the HEIs on a course to either academic excellence and international competitiveness or to mediocrity and oblivion (Morozov and Shchedrovitskiy 2018).
The third event was the involvement of the "Mentor Institute", one of the established INIs that was assigned by the government to mentor each HEI and to help it start its operations at the earliest. It was chosen to be an INI located close to the HEI to ensure convenience of commuting. However, its role went beyond just operational assistance to the HEIs. The faculty members from the Mentor Institutes helped in the development of the new HEIs in several ways, including designing policies, teaching, and advising in campus development. In some cases, a faculty member from the Mentor Institute was appointed as an interim Director until a full-time permanent Director was appointed. Emphasising the significance of the Mentor Institute, one of the participants mentioned: "I think that's been a struggle so to come out of the shadow of the established IIMs". In this way, the Mentor Institutes were involved in deciding which ideas and practices of the established INIs needed to be adopted by the HEIs and also trained the HEIs in adapting these ideas.
The initial conditions described above were characterised by the archetypical practices and policies that the HEIs had to comply with for being set up as INIs funded by the government. For instance, all the INIs in a given discipline admitted students through a common admission criterion; the number of students to admit and faculty to recruit were approved by the government; and the recruitment of faculty and staff members had to comply with standardised norms at the national level. I discuss below how these aspects, along with the initial conditions, determined the decision arenas and their significance for the development of the HEIs.
Reactive sequences for path development
Reactive sequences are temporally ordered and causally connected events, where each event in the path is both a reaction to antecedent events and a cause of subsequent events (Sydow et al. 2009). The connection between two sets of events is established due to the lack of alternatives and the constraints on moving forward in any other direction. I analyse below how the location and the resultant initial conditions set off reactive sequential events that moved the HEIs onto an emergent path.
Balancing teaching and research: teaching first, research later
One of the main objectives of the government in establishing the HEIs was to expand access to the quality higher education offered by the established INIs. The emphasis on teaching and academic programmes was seen at multiple levels of the HEIs. At the macro level, commitment to teaching was often the first item in the mission statements of the HEIs. This was reflected in the mission statement of SPA-SN: "committed to produce best Architects and Planners of the Nation to take up the challenges of physical and socio-environmental development of global standards". IISER-SN was the first INI of its kind focussing on undergraduate teaching in the sciences. Several of its faculty members mentioned that "education" precedes "research" in the name of the institution to reflect its focus on teaching. Education underpinning the rationale for establishing the new IITs was reflected in the speech by the Prime Minister of India (Prime Minister's Office, 2008): "This [the intake capacity constraint] is highly regrettable because it denies opportunity to thousands of deserving young men and women. … such talent must not go un-utilised. Many more such institutes are needed. Realising this, our government decided to increase the capacity by creating eight new IITs in the 11th Five Year Plan" (Para. 3).
At a normative level, many academics believe that teaching, rather than research, is their primary commitment (Hattie and Marsh 1996). The relationship between teaching and research activities is also shaped by the management of available resources at the institutional level (Coate et al. 2001). The teaching activities in the HEIs were managed and monitored through the collective efforts of the faculty members and institutional arrangements such as the academic council. Research activities, however, were managed by individual faculty members, with the HEIs acting as facilitators to support and monitor them and to promote a culture of research excellence. The faculty members were encouraged to search for research grants and collaborators on their own. Hence, they considered teaching and research as distinct and competing goals.
Notwithstanding the above, almost all the faculty members interviewed were involved in the institutional development efforts of the HEIs in the initial years. However, the HEIs were required to adhere to the standardised norms for evaluating faculty performance that were applicable to the INIs. These norms gave far more importance to research and teaching activities in performance evaluations and promotions. Hence, several faculty members regretted getting too involved in institutional development activities, as reflected in the following comment: "Yes 100 percent, there is a loss of research. … I know for sure, there are lots of people who also came from abroad would say looking at the facility I don't want to come here. This will take away five years of my life doing nothing, or I would struggle. ..So they decided not to come back. For those who can face this problem, five full years have gone by, nothing has happened." (P 45, IIT-UN) The conflict faced by the faculty members in balancing their administrative, research and teaching functions is represented by "the requirements of curricula versus scholarly interests of the faculty, the focus of graduate versus undergraduate programs, the disciplinary versus institutional identity of the faculty, and the publicly declared versus the actual operating functions of universities" (Hattie and Marsh 1996, p. 508). Although the research and teaching functions in universities can be complementary, the potential conflicts in time, energy and commitment faced by faculty members can lead to negative relationships between the two (Serow 2000). Hence, faculty members in the HEIs treated administration, research and teaching as distinct activities that were managed, assessed and funded separately.
The conflict between teaching and research was further accentuated by the government's requirement that the HEIs begin their academic sessions within a year of their establishment. Although the HEIs started their academic sessions within a year in their temporary campuses, their subsequent activities were constrained by inadequate infrastructure in the temporary campuses, delays in the construction of the permanent campuses and reliance on visiting or guest faculty members from the Mentor Institute. Under such circumstances, the HEIs continued to prioritise teaching over research, as reflected in the following comment: "It takes five to six years to get established and by that time you need to also get your own infrastructure in place. When you have both in place - the classes are running and infrastructure is in place - probably then you will think of doing research. I think this comes naturally also, initially teaching, putting the class in order, getting faculty, starting our own courses, getting our own executive programmes then the research excellence comes." (P 46) The analysis indicates that the location and the initial conditions mentioned earlier influenced the infrastructure and faculty available in the initial years, leading the HEIs to start academic programmes first and research later.
Developing teaching paths
The HEIs began their operations in a temporary campus in the same location, typically another government-owned facility, until their permanent campus was constructed. The size, location and quality of infrastructure of the temporary campus were key considerations for the HEIs in deciding the nature and scale of programmes that could be started in the initial years. Faculty members at IIM-SN and IIM-SO indicated that they could have offered more management development programmes had the permanent campus been completed. Due to space constraints in the temporary campus, IIT-SN, IIT-UN and IISER-SN began their academic sessions with programmes that needed less laboratory space. One of the participants at IIT-SN described: "We started with Computer Science and Electrical Engineering as the kind of infrastructure that is required for these are not very heavy... We didn't want to start with Civil, Mechanical and Metallurgy, all of these required very heavy labs. … So, once we established that we thought now we can expand to other disciplines." (P32, IIT-SN) Besides the above, the initial set of programmes and courses was also influenced by the interests of the faculty members recruited in the initial years and of those available from the Mentor Institutes to teach in the HEIs. In many cases, the HEIs treated the pedagogical practices of the Mentor Institutes as a starting point or benchmark for designing their own teaching practices.
The HEIs considered offering niche programmes only after a group of faculty members had joined in a given field of study. Once faculty members had joined and the infrastructure was completed, the HEIs considered developing programmes to suit local demands and expectations, which helped them gain local support and engage with the surrounding region. For instance, IIM-SN developed customised programmes for the government sector present in its location, whereas the IITs developed part-time programmes for local residents.
Developing research paths
The initial conditions above also impacted the sequencing of research activities in the HEIs. Government funding was restricted to establishing the infrastructure needed for teaching and for the students, with only a limited portion available to the faculty members for their research. As a result, the faculty members had to compete for resources, not only amongst themselves but also with the teaching and administrative priorities of the HEIs, to be able to progress their research. Many of them recounted their struggle in the initial years to establish their research laboratories due to limited space and funding, which affected the nature of the research projects taken up. A faculty member described how having external funding in the initial years helped him overcome the infrastructural limitations.
"I have been fortunate in funding that before I joined the Institute, the director of my previous institute was very keen to continue the collaboration through a bilateral partner research programme. It has been a very significant source of third-party funding. We didn't have any basic facilities. The primary contribution came from the institute, but it is supplemented about 50% or 75% through third-party funding." (P18) Due to the uncertain operational conditions, the faculty members whose research did not rely on laboratory infrastructure could start their research projects sooner than those whose research involved experimental work. The latter had to operate in makeshift arrangements, or they initiated new research projects with lower infrastructure requirements. Similarly, faculty members who could obtain external funding through grants could establish their laboratories sooner.
Likewise, with few institutional resources for research and uncertain operational conditions, the faculty members in the initial years preferred to continue collaborations with their pre-existing colleagues instead of starting new projects. Thus, not just the research activities but also the nature of the research projects started were shaped by the uncertain operational conditions and the faculty recruited in the initial years.
Outreach and regional engagement in development paths
Conducting outreach activities for the local community did not emerge as a major decision arena from the analysis. One of the participants mentioned, "none of these IIMs have anything to do with the local city or the local place, and the state". Many participants who were engaged in such activities described them as an individual pursuit to give back to the society, as seen in the following statements: "It is just individual. At the moment, I think no other IIT or at least I can guarantee my IIT or new IITs there is no such cell that promotes collaboration between the institute and region" and "This is a drive [referring to outreach activities] which is coming from inside".
In the absence of institutionalised outreach activities, the HEIs faced several challenges in engaging with the region in a mutually beneficial manner in the initial years, as reflected in the following comment: "I think we needed to have some kind of evangelist for each of these SIGs [an initiative to engage with the region]". However, once the research and teaching agendas were established in a given area, the HEIs were able to combine both to develop a comprehensive agenda for engaging with the region. Describing his approach towards setting up one such entity, one of the faculty members expressed the need to bring multiple colleagues together, as below: "So when we got the infrastructure, then it was like, when we have the expertise why not have the Centre of Excellence. …Then we pumped up and consolidated. Now that the centre is available, we are providing the industrial consultancy and industrial problem solving." (P 35, IIT-SN) Regional engagement activities are not structured as mutually exclusive to either teaching or research (Uyarra 2010; Weerts and Sandmann 2010). They require faculty members to go beyond their individual disciplinary research and teaching activities. Hence, the sequencing of teaching and research activities made it challenging for the HEIs to initiate regional engagement in the initial years. Such initiatives became feasible once a critical mass of faculty members had joined and were willing to explore broad thematic areas of research, which followed from the preceding research and teaching activities of the HEIs.
The development paths of the HEIs
The analysis shows the placement of the HEIs in their respective locations to be a contingent event that can make the development of HEIs path dependent. The HEIs developed through a set of reactive sequences that followed from this contingent event. The first decision for the HEIs was that of their location, followed by the appointment of the Director and the Mentor Institute. This was followed by an emphasis on constructing their permanent campus. Faculty recruitment followed, significantly influenced by the earlier steps. The faculty members, the location and the availability of infrastructure in the initial years together shaped the sequence of the initial academic programmes launched. This was followed by the HEIs initiating research activities, which required external funding and depended on the state of the infrastructure and the support from local stakeholders. Outreach and regional engagement activities were initially restricted to individual faculty members, aligned with their research and teaching interests. However, once a threshold number of faculty had been hired, academic programmes and research projects had been started and campus construction had been completed, the HEIs could initiate efforts to institutionalise regional engagement. I have depicted the development paths of the HEIs in Fig. 3.
The findings discussed are limited to single-discipline new universities and are hard to generalise to new universities with multiple disciplines. While the nature of the development paths of the HEIs was common across the disciplines, there were also variations by discipline in their pace of development and their specific trajectory of development. The HEIs in the science and technology disciplines had larger infrastructural and funding needs than the HEIs in other disciplines. Thus, their location and temporary campus in the initial years are likely to have been more significant in starting academic programmes first and research later. Similarly, due to the applied nature of their disciplines, the IITs and the SPAs were more engaged with the region for fresh research ideas and developing pilot research projects compared with the IISERs, which focussed on theory development and experimental studies. These aspects need to be fully explored to bring out disciplinary differences in the development paths of the HEIs.
Discussions
The findings discussed above indicate that various conditions, such as the nature of initial funding, the autonomy of the universities, and the infrastructure and support available at the local level, can influence this path. The government set the overall vision for the HEIs to expand high-quality education in India. It provided them significant funding, status and autonomy, while exercising significant authority over the HEIs in the initial years through means such as the appointment of the Director, the allocation of land, the release of funding and the sanctioning of faculty numbers. The HEIs thus focussed on education programmes and infrastructure development first, in response to this authority, but subsequently developed their research areas and regional engagement. However, as new universities grow, their development will be characterised by their reputation, identity and competitiveness instead of being opportunistic, indeterminate and evolutionary. This opens up discussion of various other factors or conditions that need to be considered to fully understand the development of new universities.
The HEIs in this study would be subjected to institutional forces (coercive, cognitive or normative) to adopt archetypical practices and ideas from the established INIs. These can push the HEIs onto a development path characterised by an identity determined by their belongingness to the INIs and their anticipated role in the national HE system as an INI. In such cases, their local context will play an important role in deciding the level of discretion that they can apply in adopting and modifying dominant practices and ideas. The HEIs on such development paths will need to balance the pressures to mimic their established counterparts, and so gain legitimacy, with their need to differentiate and form a unique identity (Huisman et al. 2002; Brennan et al. 2018). Further analysis is needed to examine the different levers available to the HEIs, such as the autonomy to deviate from taken-for-granted norms and practices, the diversity of the funding base that they have built, and their relationships with local stakeholders, to resist such institutional forces and avoid a path that can trap them in conformity.
A related processual question in the Indian context is how existing ideas from the established INIs circulated to the HEIs. Most of the faculty members of the HEIs were young, and working at the HEI was their first academic appointment. As a result, the Director of each HEI, along with its Mentor Institute, is likely to have had a significant influence in deciding which ideas are legitimate and in steering the HEIs onto an alternative development path. However, they would need organisational capacity and support at the local level to put these ideas into practice (Stensaker and Benner 2013). Therefore, further analysis is needed of the role of the Director and the governance norms of the HEIs (e.g. collegium, bureaucracy, corporation and enterprise). Such an analysis would need to go beyond the experiences of the senior faculty members and examine the motivations and goals of the young faculty members, who are likely to put existing ideas into practice and are also more likely to introduce new and innovative ideas (Coate et al. 2001; Weerts & Sandmann 2010).
The HEIs perceived the other INIs as competitors for attracting faculty and students, and for acquiring resources. Such competitiveness can move them onto an alternative development path, where they can find themselves in the same reputation race as the established INIs. Such a development path would depend on the globally referenced quality of their research and their research productivity, the image of their surrounding region, and the availability of potential resources and networks locally or elsewhere. These can help the HEIs offset the reputation lag behind their established counterparts that they experienced in the initial years (Glass et al. 2011). Hence, further analysis is needed to examine the motivations of students and young faculty members in selecting the HEIs, the constraints faced by faculty in their research projects, and the role of location in shaping graduate attributes and job market outcomes of students.
The network positioning of the HEIs at the local, national and global levels can also shape their development paths. The HEIs were established within the pre-existing network of the INIs. In such cases, the alignment between their national and global aspirations with the local expectations and demands will influence their embeddedness and positioning in the local network (Brennan et al. 2004). This alignment will be determined by their organisational strategy and capacity, and the proactiveness of their leadership to engage with the local leaders, and the challenges and motivations of the HEIs to institutionalise such engagement (Pinheiro et al. 2012). Analysing the above would need evidence from the local stakeholders about their openness, motivations and challenges to engage with the HEIs.
The conditions and factors indicated above will shape the identity, competitiveness and network positioning of the HEIs. These can move the HEIs onto an alternative developmental path that will either make their development agnostic of their location or reinforce the effect of location, leading to a situation of lock-in.
"Education",
"Economics"
] |
Histological and immunohistochemical characterization of the inflammatory and glial cells in the central nervous system of goat fetuses and adult male goats naturally infected with Neospora caninum
Background Neospora caninum is an apicomplexan protozoan that is considered one of the main agents responsible for abortion in ruminants. The lesions found in the central nervous system (CNS) of aborted fetuses show multifocal necrosis, gliosis, and perivascular cuffs of mononuclear cells, but the inflammatory and glial cells have not been immunophenotypically characterized. The lesions in the CNS of infected adult animals have rarely been described. Therefore, in this study, we characterized the lesions, the immunophenotypes of the inflammatory and glial cells, and the expression of MHC-II and PCNA in the CNS of goats infected with N. caninum. The CNS of eight aborted fetuses and six adult male goats naturally infected with N. caninum was analyzed with lectin histochemistry (RCA1) and immunohistochemistry (with anti-CD3, -CD79α, -GFAP, -MHC-II, and -PCNA antibodies). All animals were the offspring of dams naturally infected with N. caninum. Results The microscopic lesions in the CNS of the aborted fetuses consisted of perivascular cuffs composed mainly of macrophages (RCA1+), rare T lymphocytes (CD3+), and rare B lymphocytes (CD79α+). Multifocal necrosis surrounded by astrocytes (GFAP+), gliosis composed predominantly of monocytic-lineage cells (macrophages and microglia, RCA1+), and cysts of N. caninum, related (or not) to the lesions, were present. Similar lesions were found in four of the six male goats, and multinucleate giant cells related to focal gliosis were also found in three adult goats. Anti-GFAP immunostaining showed astrocytes characterizing areas of glial scarring. Cysts of N. caninum were found in three adult male goats. The presence of N. caninum was evaluated with histopathology, immunohistochemistry, and PCR. Immunohistochemistry demonstrated anti-PCNA labeling of macrophages and microglia in the perivascular cuffs and the expression of MHC-II by microglia and endothelial cells in the CNS of the aborted fetuses and adult male goats. Conclusions Macrophages and microglia were the predominant inflammatory cells in the CNS of aborted fetuses and healthy adult male goats infected with N. caninum. Activated astrocytes were mainly associated with inflamed areas, suggesting that astrocytes were involved in the resolution of the lesions.
The main lesions found in tissue sections of the central nervous system (CNS) of aborted fetuses are multifocal necroses, glioses, and perivascular mononuclear cell cuffs, together with N. caninum itself [11,13-15]. Lesions similar to those found in fetuses were observed in a sheep [16] and a cow [17] diagnosed with neosporosis by the isolation of the parasite and by PCR, respectively.
Although many cases of neosporosis have been reported in ruminants, the inflammatory and glial cells within the CNS lesions have not been characterized. Therefore, the aim of this study was to characterize the inflammatory response and the glial cells in the CNS lesions in fetuses aborted by N. caninum infection and in healthy male goats naturally infected with the protozoan. This is the first report of N. caninum cysts in the CNS of adult goats.
Methods
The experiment was conducted in the Laboratory of Veterinary Pathology at the Federal University of Lavras (UFLA) in the state of Minas Gerais, Brazil. The study was approved by the Ethics Committee for Animal Use at UFLA, under protocol number 081/13.
Animals
We selected 14 goats for this study from our institutional herd: six healthy adult males, aged from 6 months to 3 years, and eight aborted fetuses (90-150 days' gestation). The goats' dams were naturally infected with N. caninum, identified by the detection of specific antibodies with an indirect fluorescent antibody test (IFAT; initial serum dilution, 1:50), and seronegative for Toxoplasma gondii by IFAT (initial serum dilution, 1:64). The congenital infection of the adult male goats was confirmed by the detection of specific antibodies with IFAT (1:50) in sera obtained from blood samples collected before the ingestion of colostrum and by the detection in the dams' placentas of N. caninum DNA with PCR and DNA sequencing. The male goats were animals scheduled for disposal that had been kept in pens since birth to avoid exposure to sporulated N. caninum in the environment. All the male goats were seronegative for T. gondii by IFAT. Neospora caninum infection in the fetuses was confirmed with PCR and DNA sequencing of their placentas and CNS, with the methodology described by Mesquita et al. [12]. Four fetuses and one adult male that were seronegative for N. caninum and T. gondii according to PCR and IFAT were used as the negative controls.
Sample collection and processing
The fetuses were necropsied shortly after abortion, and the adult males after euthanasia under anesthesia with thiopental followed by an intravenous infusion of potassium chloride solution. Tissue samples from all the animals were collected in 10% neutral-buffered formalin. Samples of heart, lung, kidney, liver, skeletal muscle, brain (cerebral cortex, thalamus, hippocampus, rostral and caudal colliculi, cerebellar peduncle, cerebellum, and obex), and spinal cord (cervical, thoracic, and lumbar) were processed routinely for histopathology and immunohistochemistry. The lesions were classified as discrete, moderate, or severe. Samples of the cerebral cortex, thalamus, and cerebellum were also collected and stored at −20°C for PCR analysis.
Immunohistochemistry
To evaluate the lesions and cellular immunological response in the CNS, the following antibodies were used: anti-CD79α (Dako) for B lymphocytes; anti-CD3 (Dako) for T lymphocytes; anti-glial fibrillary acidic protein (GFAP; Dako) for astrocytes; anti-G-H42a (Washington State University) for major histocompatibility complex II (MHC-II) molecules; and anti-proliferating cell nuclear antigen (PCNA; Dako) for proliferating cell nuclear antigen, at dilutions of 1:50, 1:500, 1:1000, 1:500, and 1:1000, respectively. To confirm the presence of N. caninum in tissue slices, an anti-N. caninum antibody (VMRD, Inc., Pullman WA, USA) was used. Antigen retrieval for N. caninum and GFAP was performed in citrate buffer (pH 6.0), whereas Tris-EDTA buffer was used for the other antibodies; all slices were irradiated for 6 min at full power in a domestic microwave. Samples of normal CNS, lymph nodes, tonsils, and tissues that contained N. caninum were used as the positive controls. As a negative control, the antibody was substituted with phosphate-buffered saline. Additional brain sections from the infected animals were subjected to immunohistochemistry using an anti-T. gondii antibody (VMRD Inc.).
Immunolabeling was classified according to the number of stained cells in a single field at 400× magnification, as discrete (+), fewer than 10 stained cells per field; moderate (++), 10-30 stained cells per field; and severe (+++), more than 30 stained cells per field.
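The grading rule above can be made explicit with a small helper function; this is an illustrative sketch only (the function name and the handling of counts of exactly 10 or 30 cells follow the wording above and are our reading, not the authors' code):

```python
def grade_immunolabeling(stained_cells_per_field: int) -> str:
    """Grade immunolabeling intensity in one 400x field, per the scheme above."""
    if stained_cells_per_field < 10:
        return '+'    # discrete
    elif stained_cells_per_field <= 30:
        return '++'   # moderate
    else:
        return '+++'  # severe

# Example: 12 stained cells in a field is graded as moderate.
print(grade_immunolabeling(12))  # '++'
```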
The immunolabeled cells in the lesions were morphologically characterized. Immunolabeled astrocytes in the unaffected areas were not considered (Tables 1 and 2).
Lectin histochemistry
Biotinylated Ricinus communis agglutinin (RCA1; Vector), diluted 1:1000 (2 μg/ml), was incubated with the CNS samples overnight to identify the microglia and macrophages. The antigen was retrieved in citrate buffer (pH 6.0) after irradiation of the samples for 6 min at full power in a domestic microwave.
Molecular analysis
Samples of cerebral cortex, thalamus, and cerebellum were collected and stored at −20°C until analysis. DNA was extracted from them with a commercial kit (Wizard® SV Genomic Purification System, Promega, Madison, WI, USA) after lysis with proteinase K. To detect N. caninum, primers based on chromosome XII of N. caninum were used (forward 5′-CTGTTAGAAGGTGCGGCGAA-3′ and reverse 5′-TCTCTTGCTGCGGTGGAAAT-3′), as described by Orlando et al. [18], to amplify an expected fragment of 168 bp. The PCR products were resolved by electrophoresis in 1% agarose gel at 100 V for 1 h. The amplicons of the positive samples and the positive control were quantified spectrophotometrically and sequenced with the dideoxy chain termination technique [18].
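As a sanity check of the expected product size, the primer arithmetic can be reproduced in silico. The sketch below uses a toy template (the real chromosome XII context is not reproduced; the filler bases are placeholders), so only the 168 bp length calculation is meaningful:

```python
def revcomp(seq):
    """Reverse complement of a DNA sequence."""
    comp = {'A': 'T', 'T': 'A', 'G': 'C', 'C': 'G'}
    return ''.join(comp[b] for b in reversed(seq))

FWD = 'CTGTTAGAAGGTGCGGCGAA'  # forward primer, as described by Orlando et al. [18]
REV = 'TCTCTTGCTGCGGTGGAAAT'  # reverse primer

# Toy 168 bp template: forward primer site + filler + reverse-complemented
# reverse primer site (the reverse primer anneals to the opposite strand).
template = FWD + 'A' * (168 - len(FWD) - len(REV)) + revcomp(REV)

start = template.find(FWD)                    # 5' end of the amplicon
end = template.find(revcomp(REV)) + len(REV)  # 3' end of the amplicon
print(end - start)                            # 168
```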
Results
Tables 1 and 2 show the ages of the goats and the gestational stages at which abortion occurred, the occurrence and intensity of the histopathological lesions, the method of diagnosis of N. caninum infection, and the intensity of the cellular immunolabeling in the lesions. Fetuses 1-4 were the products of a single gestation (born to the same mother), as were fetuses 5 and 6, and fetuses 7 and 8. Fetuses 2, 3, 9, and 10, and the male goat 7, which were all negative for N. caninum, were used as negative controls. Goat 1 exhibited clinical neurological signs at birth, with moderate paresis, lack of coordination of the pelvic limbs, and difficulty in standing. These clinical signs had decreased a week after birth, and normal development proceeded until 12 months of age, when the animal was euthanized.
Only two of the aborted fetuses showed lesions in the myocardium and skeletal striated muscle. These consisted of varying degrees of mononuclear inflammatory infiltration, and in one fetus, some tachyzoites were observed with immunohistochemistry in samples of the heart and skeletal muscles.
Two adult goats (male goats 2 and 3) had focal lymphoplasmacytic myositis in their skeletal muscles (semitendinosus and semimembranosus), but these lesions could not be associated with the parasite.
Lectin histochemistry
The majority of cells within the areas of gliosis were positive for RCA1. Staining occurred in the thalamus (fetuses 4, 6-8 and goat 1) (Figure 2A).
Neospora caninum
Parasitic cysts and tachyzoites of N. caninum were immunolabeled in fetuses 1 and 4-7, and parasitic cysts in the adult male goats 2-4 (Figure 2B). The parasitic structures were negative for T. gondii.
GFAP
GFAP immunolabeling was observed in the cells within the glial foci in the cerebral cortex (fetuses 5 and 6), in the colliculi (fetus 4), and in an extensive area of gliosis in the cortex associated with a parasitic cyst (fetus 6), with characteristic astrocytosis (increased sizes and numbers of astrocytes) and astrogliosis (astrocyte hypertrophy: increased synthesis of intermediate filaments causing increased length and branching of the astrocytic processes). GFAP immunolabeling was also intense in the astrocytes adjacent to the glial foci in the cerebral cortex (fetus 1) and in the lumbar spinal cord (fetus 8).
In the adult goats, GFAP immunolabeling occurred in the glial foci in the cerebral cortex (goats 1-3) and the thalamus (goat 1), and goats 1 and 2 displayed numerous and extremely dense astrocytic processes (glial scarring) (Figure 2C).
MHC-II
MHC-II immunolabeling occurred in the adult goats: in the cytoplasm of the endothelial cells of the meningeal blood vessels (goats 1 and 6) and the vessels of the cerebral parenchyma; and in macrophages of the perivascular cuffs in the cerebral cortex (goats 1 and 3), obex (goat 4), pons, cervical spinal cord, cerebellum, thalamus, and caudal colliculus (goat 3). MHC-II immunolabeling was also seen in the glial foci in the cerebral cortex (goats 1 and 3) (Figure 2D), obex (goat 4), and the cervical spinal cord (goat 3). In fetuses 7 and 8, MHC-II labeling was observed in the glial foci, endothelia, and the perivascular cuffs.
CD3
Rare immunolabeled T lymphocytes were observed in the perivascular cuffs of the thalamic meninges in fetus 7, and in the perivascular cuffs and foci of gliosis in the thalamus of fetus 4. In the adult goats, CD3 immunolabeling occurred in the perivascular cuffs in the meninges close to the cerebral cortex (goat 6), in the thalamic parenchyma (goat 1), and in the cerebral cortex (goats 2, 3, and 5).
CD79α
Rare immunolabeled B lymphocytes were observed in the perivascular cuffs and glial foci in the pons, cervical spinal cord, and thalamus of goat 3.
PCR and sequencing
Neospora caninum DNA was detected with PCR in the CNS samples of the fetuses (1, and 4-8) and goats (1-6) (Tables 1 and 2) and sequenced. The nucleotide sequences showed 99.9% homology with the corresponding sequence in N. caninum.
Discussion
The CNS is an immunologically privileged tissue, and the control of the immune responses there depends on the relationships between various internal factors, because the blood-brain barrier restricts the migration of many cells and molecules of the immune system [19]. The gliosis, necrotic lesions, and mononuclear perivascular cuffs found in the aborted fetuses have been described previously in fetuses with neosporosis [11,13-15]. However, gliosis and perivascular cuffs associated with parasitic cysts of N. caninum in adult goats had not been described. Bishop et al. [16] described similar lesions in an adult sheep, with infection confirmed by PCR and the occurrence of protozoan tachyzoite-like structures in the vascular endothelium. However, this is the first report of cysts in the CNS of adult male goats. Sawada et al. [17] described gliosis and severe perivascular cuffs in a cow whose infection was confirmed by isolating the infective agent in cell culture.
Multinucleate giant cells were present in the CNS of the adult male goats in this study, probably associated with the phagocytosis of parasitic structures. Similar findings in an aborted goat fetus were described by Corbellini et al. [10]. Several studies of N. caninum infection have described perivascular cuffs but have not described the phenotypes of the cells in those lesions [11,13-15]. In this study, lectin histochemistry with RCA1 allowed us to identify the cells in the perivascular cuffs and the glial foci, which we characterized as of monocytic lineage [20]. Anti-PCNA labeling also suggested the activation of the resident microglia in the CNS, and the possible migration of blood monocytes, corresponding to the macrophages in the perivascular cuffs.
Although RCA1 also stains endothelial cells and reactive astrocytes, when the morphologies of the cells labeled with RCA1 and with GFAP were compared, there was no doubt as to their origin (monocytic cells) or their numbers.
GFAP is the most important marker of astrocytes [21]. Astrocytes were observed in the glial foci in the fetuses and adult goats, and on the borders of glioses located in the transition zone between the gray matter and white matter in two fetuses.
These findings are characteristic of astrogliosis, and demonstrate the participation of astrocytes in the lesions associated with N. caninum infection. This was reinforced by the observation of an agglomeration of astrocytes close to an N. caninum cyst. These lesions suggest glial scarring, in which astrocytes attempt to isolate a focal lesion to ensure local homeostasis in the CNS [22]. Drogemuller et al. [23] demonstrated the activation of astrocytes in T. gondii infections, together with the expression of a protein (gp130) that is important in the resolution of infection.
There were few labeled B or T lymphocytes in the lesions in either the fetuses or the adult animals, which could reflect the incomplete activation of lymphocytes in the CNS, probably culminating in their rapid destruction through apoptosis [19].
The expression of MHC-II molecules in the CNS was clearly established in the adult goats and in one fetus. The presence of the parasite in the CNS probably triggered the inflammatory response that stimulated the expression of MHC-II molecules by endothelial cells and activated the microglia in the CNS. This was probably mediated by interferon γ, in accordance with the theory proposed by Aloisi et al. [19]. These findings suggest that a predominantly Th1 immune response was induced against the parasite. Becher et al. [20] proposed that activated astrocytes in CNS lesions express MHC-II molecules, but this was not observed in the present study.
Another important finding was the occurrence of encephalitis, sometimes severe and with focal granulomatous inflammation, associated (or not) with the parasitic cysts in the CNS of clinically healthy adult goats.
Conclusion
Our results show that macrophages and microglia were the predominant inflammatory cells in the CNS of aborted fetuses and healthy adult male goats infected with N. caninum. Activated astrocytes were mainly associated with inflamed areas, suggesting that astrocytes were involved in the resolution of the lesions.
"Biology",
"Medicine"
] |
A Plug and Play Transparent Communication Layer for Cloud Robotics Architectures
The cloud robotics paradigm aims at enhancing the abilities of robots by using cloud services, but it still poses several challenges in the research community. Most of the current literature focuses on how to enrich specific robotic capabilities, overlooking how to effectively establish communication between the two fields. Our work proposes a "plug-and-play" solution to bridge the communication gap between cloud and robotic applications. The proposed solution is designed based on the mature WebSocket technology and it can be extended to any ROS-based robotic platform. The main contributions of this work are the definition of a reliable autoconnection/autoconfiguration mechanism and the outline of a scalable communication layer that allows the effective control of multiple robots by multiple users. The "plug-and-play" solution was evaluated in both simulated and real scenarios. In the first case, the presence of users and robots was simulated with Robot Operating System (ROS) nodes running on five machines. In the real scenario, three non-expert users simultaneously teleoperated three remote robots by using the proposed communication layer with different networking protocols. Results confirmed the reliability at different levels: at startup (success_rate = 100%); during high-rate communications (message_lost = 0%); in performing open-loop spiral trajectories, with an enhancement with respect to similar works; and in the quality of simultaneous teleoperations.
Introduction
Over the past decades, robotics solutions have been applied to several real-world problems in a broad list of contexts, like unmanned search and rescue [1], healthcare [2] and medical applications [3]. If, on one side, there is the requirement of improving the autonomous capabilities of robotic platforms, on the other side, there is the need to remain in control of the platforms by adopting a teleoperation approach. As described by [4], several robotic systems are teleoperated over the Internet, using appropriate communication architectures and network infrastructures. In recent years, constant improvements in telecommunication infrastructures and the recent growth of cloud technology have led to the birth of a new branch of research, namely cloud robotics, where cloud solutions are used to enhance the abilities of robots. The cloud robotics paradigm can be defined as "the combination of cloud computing and robotics" [5]. The concept is "not related to a new kind of robot but to the way in which robots access and store information". Cloud robots have recently been defined as "any robot or automation system that relies on either data or code from a network to support its operation, where not all sensing, computation, and memory are integrated into a single standalone system" [6]. Nowadays, cloud robotics solutions are applied in different applications [7-9].
Beyond these specific applications, several works focus on the definition and implementation of architectures for communication and interaction between physical robots and virtual resources hosted on cloud infrastructure [10-14]. Different solutions have been provided, but the problem of bi-directional communication among agents (i.e., from a user interface to a mobile robot platform and vice versa) on different networks is often unaddressed or underestimated. Virtual Private Networks (VPNs) are usually introduced as an operative solution to solve visibility issues, but this does not take into account the effort needed to configure each agent in the virtual network.
This paper aims to describe a "plug-and-play" communication layer which is untied from the hardware composing the system (no further setup work when new hardware is integrated) and which guarantees stability in any situation where a set of robots has to be remotely controlled by agents (e.g., user interfaces) outside the robots' network. As in the case of VPN configurations, the "plug-and-play" solution relies on configuration methods rather than on the implementation of new software. The rationale behind this choice is related to the goal of achieving a reliable system starting from mature technologies already available. This paper represents and describes a use case of bringing technologies from the world of telecommunication networks into the robotic world, which could represent an interesting point of view for the robotics community. The proposed approach is based on:
• The WebSocket protocol, which allows full-duplex client-server communication, solving the bi-directional visibility issue;
• Reverse tunneling, which is a popular technique to establish a connection with remote devices;
• Port remapping, to automatically request the connection on dedicated ports (to manage the presence of multiple involved agents, the information related to the association between devices and ports is stored on a central server).
Consequently, the challenge addressed in our work is to define a reliable autoconnection/autoconfiguration mechanism as well as to outline a scalable communication layer that allows the effective control of multiple robots by multiple users.
Related Works
Cloud robotics provides an efficient solution for migrating intensive computation from the robot side to the cloud computing infrastructure. Among the several cloud-based applications proposed in the literature, the challenges of bi-directional visibility and communication among agents (i.e., robots and cloud resources) are often underestimated or addressed with non-"plug-and-play" solutions such as VPNs. In [11], the authors state three challenges of computation offloading:
1. Traditional approaches do not consider the characteristics of networked cloud robotics (NCR) (e.g., heterogeneity and robotic cooperation);
2. They fail to capture the characteristics of tasks in a robotic streaming workflow (RSW) (e.g., strict latency requirements and varying task semantics);
3. They do not consider Quality-of-Service (QoS) issues for cloud robotics.
In the aforementioned paper, a QoS-aware robotic streaming workflow allocation algorithm for networked cloud robotics is proposed, with joint optimization of latency, energy efficiency, and cost, while considering the characteristics of both the robotic streaming workflow and networked cloud robotics. Since the focus is on networked robotics (i.e., agents are part of the same network), the problem of agents' visibility is not tackled as an issue. Similarly, in [12], the authors focus on the management of cloud resources, arguing that there are technical challenges for multi-robot systems in accessing the cloud and retrieving resources in near real-time. A general framework for setting up a cloud robotics system with a novel resource management strategy is presented; the problem is formally described as a Stackelberg game, and an optimal solution is proposed with proof. QoS criteria are then defined and evaluated, without describing the operational method for providing connections among cloud and robot resources. In [13], cloud robotics is claimed to be "one of the most promising applications in the robotics world", but its growth is still below expectations due to the risks associated with security and privacy. Therefore, the paper focuses on security for cloud-based robotic services and explains a framework that provides authentication and key agreement using Elliptic Curve Cryptography (ECC) for accessing the robotic services hosted in the cloud. Nevertheless, despite the interesting results on robustness against various security attacks, it does not detail the bi-directional communication infrastructure for agents in different networks.
In [14], the authors aim to integrate the cyber world and the physical world by bringing up the idea of a "Robot Cloud" to combine the power of robotics and cloud computing. To make this possible, they design a novel Robot Cloud stack and adopt a service-oriented architecture (SOA) to make the functional modules in the Robot Cloud more flexible, extensible and reusable. At a functional level, the last command is retrieved by periodically sending an HTTP request (i.e., polling) to the scheduling service and receiving the commands in XML format as the response, similarly to what most SOA applications do. However, it can be argued and demonstrated that polling solutions are not feasible for low-level remote control (such as teleoperation), due to the high rate of commands; furthermore, the continuous requests could overload the bandwidth. Focusing on the robot side, several frameworks have been presented to foster the development of robotic applications, but the Robot Operating System (ROS) [15] can be considered the de facto operating system for robots. As a consequence, several works focus on cloud-ROS case studies rather than on more generic cloud-robot ones.
In [16], a framework that helps users to work with ROS on a remote master was presented, based on the use of SpaceBrew [17]. SpaceBrew is defined as "an open, dynamically re-routable software toolkit for choreographing interactive spaces". A web-based visual switchboard can be used to connect or disconnect publishers and subscribers to each other. Unfortunately, the documentation provided on SpaceBrew is still under development, and it is not clear whether visibility among agents is required or managed by SpaceBrew. Moreover, in the architecture presented in [17], some limitations are mentioned but not discussed, such as "it is not possible to use SpaceBrew with two computers connected at the same network". The results presented are also limited to a qualitative analysis.
In the work presented in [18], ROS packages are encapsulated as web services in cloud virtual machines and a middleware based on web service technology is designed as the core of the whole cloud robotics system. This is responsible for parsing the cloud robotics task requests and scheduling ROS nodes in a distributed network. The communication is based on a proxy virtual machine and the presented results are limited to one robot, with claims that the situation with multiple robots will be considered in subsequent research.
In [19], the authors propose a software cloud architecture for cloud robotics based on three subsystems in the cloud environment: the middleware subsystem, the background tasks subsystem, and the control subsystem. The architecture invokes cloud technologies such as cloud computing, cloud storage, and other networking platforms, arranged with the assistance of congregated infrastructure and shared services for robotics, for instance, the Robot Operating System (ROS). Bi-directional communications are presented in the paper, but how they are achieved is not detailed. Furthermore, the proposed architecture is built upon ROS Multimaster FKIE [20], which requires bi-directional visibility among all the agents in the system.
RoboEarth [21] is one of the most important European funded projects on cloud robotics and focuses on the collection, storing, and sharing of data independent of specific robot hardware. Within RoboEarth, the Rapyuta Cloud Engine [22] has been developed; it is based on the WebSocket protocol to guarantee the bi-directionality of the data flow, but neither RoboEarth nor Rapyuta details the visibility requirements between the agents. Furthermore, the case studies did not involve the analysis of forwarding control from remote users to the robot. Nowadays, Rapyuta is a company with its headquarters in Tokyo [23] and provides a cloud robotics framework at an enterprise level. However, it is still limited to an early developer program.
At the beginning of the 2010s, the rosbridge protocol was introduced. It is a specification for sending JavaScript Object Notation (JSON) based commands to ROS (and, in theory, to any other robot middleware). The specification is programming-language and transport agnostic: any language or transport that can send JSON can speak the rosbridge protocol and interact with ROS. The protocol covers subscribing to and publishing on topics, service calls, getting and setting parameters, and even compressing messages. Upon the rosbridge protocol, several tutorials and applications have been developed [24-27]. Among these, Robot Web Tools [28] represents the most successful and widespread implementation. Since its official introduction in 2012, the Robot Web Tools project has grown tremendously as an open-source community, enabling new levels of interoperability and portability across heterogeneous robot systems, devices, and front-end user interfaces. At the heart of Robot Web Tools is the rosbridge protocol as a general means for messaging ROS topics in a client-server paradigm suitable for wide area networks, and for human-robot interaction at a global scale through modern web browsers. On the other hand, while the rosbridge library provides the JSON<->ROS conversion, it leaves the transport layer to others. This can be overcome through the use of the rosbridge server, which provides a WebSocket connection. In common use, a rosbridge server runs locally on the robot platform, allowing ROS topics and services to be reached through the rosbridge protocol from the outside. Nevertheless, this requires the visibility of the robot machine from the outside, a requirement that is hardly satisfied, especially for platforms that operate in a wireless local network.
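To make the message format concrete, the sketch below exchanges rosbridge v2 JSON operations over a raw WebSocket. It is illustrative only: it assumes a rosbridge server listening on ws://localhost:9090, the third-party websocket-client Python package, and example topic names.

```python
import json
import websocket  # pip install websocket-client

ws = websocket.create_connection('ws://localhost:9090')

# Advertise and publish a velocity command as plain JSON.
ws.send(json.dumps({'op': 'advertise', 'topic': '/cmd_vel',
                    'type': 'geometry_msgs/Twist'}))
ws.send(json.dumps({'op': 'publish', 'topic': '/cmd_vel',
                    'msg': {'linear': {'x': 0.1, 'y': 0.0, 'z': 0.0},
                            'angular': {'x': 0.0, 'y': 0.0, 'z': 0.2}}}))

# Subscribe to odometry and print the first message that arrives.
ws.send(json.dumps({'op': 'subscribe', 'topic': '/odom',
                    'type': 'nav_msgs/Odometry'}))
print(json.loads(ws.recv()))
ws.close()
```

Any client able to open a WebSocket and serialize JSON can speak the protocol in this way, which is what makes it language and transport agnostic.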
In 2017, a new protocol was introduced in [10] to integrate Robot Operating System (ROS) enabled robots with the Internet of Things, arguing that ROS lacks functionality for monitoring and controlling robots through the Internet. The proposed protocol is actually very similar to the rosbridge protocol; the tests are limited to a few simulated robots, and the results obtained in performing a spiral trajectory with open-loop control using commands from the remote server are not promising.
The solution developed in our work is proposed to provide remote-control capability for a set of generic robots controlled using the ROS framework. As introduced, the rosbridge server provides a layer that can run on the local machine upon the ROS framework, providing an interface for communication outside the local machine using the rosbridge protocol. Therefore, the problem addressed in this paper can be described as the definition of a method to remotely control a desired robot (chosen from an arbitrary set) from a device (smartphone, tablet, laptop, or desktop PC). Both the controlled robots and the controlling devices are in different private networks; in other words, there is no visibility between robot and user device.
The current state of the art, described in this section, is summarized in Table 1.

Table 1. Summary of other available solutions in the current state of the art (features of the system used / limitation of the presented work).

[11] QoS-aware robotic streaming workflow allocation algorithm / Problem of agents' visibility not included.
[12] Problem formally described as a Stackelberg game / No description of the operational method for providing connections among cloud and robot resources.
[13] Security for cloud-based robotic services / Bi-directional communication infrastructure for agents in different networks not detailed.
[14] Last commands retrieved by sending an HTTP request periodically (i.e., polling) / Polling solutions are not feasible for low-level remote control (such as teleoperation).
[16] Based on the use of SpaceBrew / It is not clear whether visibility among agents is required or managed by SpaceBrew; some limitations are presented in the paper but not discussed.
[18] Communication based on a proxy virtual machine / Listed results are limited to one robot.
[19] Based on three subsystems in the cloud environment: middleware, background tasks, and control / Built upon ROS Multimaster FKIE [20], which requires bi-directional visibility among all the agents in the system.
[21] Based on the WebSocket protocol to guarantee the bi-directionality of the data flow / Visibility requirements between the agents not detailed; still limited to an early developer program.
[10] New protocol, similar to the rosbridge protocol / Tests limited to a few simulated robots.
Presented system: approach based on WebSocket communication, reverse tunnelling and port remapping according to information stored on a central server; able to manage the lack of visibility among agents.
System Description
The proposed system implements a communication layer for cloud-based robotic scenarios in which multiple users and multiple robots are involved. Due to the variety of technologies, interoperability between heterogeneous devices (e.g., smartphones, tablets, computers) and different kinds of robots is required. In addition, the communication mechanism should guarantee real-time performance to enhance the user's experience with the robot, even remotely.
The configuration of the system is depicted in Figure 1. It is composed of a central server, namely a virtual machine hosted on a cloud service infrastructure, characterized by a static public IP that allows gateway porting through the Secure Shell (SSH) configuration 1; we explicitly set the GatewayPorts parameter in the SSH configuration file. Following the architecture described in [29], the cloud resource is a Linux virtual machine running on FIWARE [30], which executes a LAMP (Linux Apache MySQL PHP) server. The robotic platforms introduced in the system are based on the ROS middleware for local control. On each robot, both the rosbridge server [31] and the ROS web video server [32] are running to allow incoming connections on default ports, one handled by the rosbridge protocol [33] and one dedicated to video streaming through the video server. Since multiple agents are involved, a database is implemented on the LAMP server in which each robot is mapped to a port number. Each robot is identified by the MAC (Media Access Control) address of its network hardware, since this is a unique piece of information that characterizes it. This strategy guarantees the flexibility of the system, since any ROS-based robot can be integrated into the scenario by saving its information in the database. Human users can remotely control the robotic platforms through a web page running on the central server, accessible from any device.
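A minimal sketch of the MAC-to-port bookkeeping described above is given below, using an in-memory SQLite table in place of the MySQL database of the LAMP server; the table and column names, MAC address and port values are illustrative assumptions, not the paper's actual schema.

```python
import sqlite3

db = sqlite3.connect(':memory:')
db.execute('CREATE TABLE robot_ports ('
           'mac TEXT PRIMARY KEY, rosbridge_port INTEGER, video_port INTEGER)')

# Registering a robot: its unique MAC address is mapped to two dedicated
# ports, one for the rosbridge protocol and one for the video stream.
db.execute('INSERT INTO robot_ports VALUES (?, ?, ?)',
           ('00:11:22:33:44:55', 9101, 9102))

# Lookup performed at startup by the robot (or by the web page for a user).
row = db.execute('SELECT rosbridge_port, video_port FROM robot_ports '
                 'WHERE mac = ?', ('00:11:22:33:44:55',)).fetchone()
print(row)  # (9101, 9102)
```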
Based on this configuration, each agent (user or robot) can be in a different private network, while only the server, characterized by a static public IP address, is reachable from the outside. The bi-directional visibility between the elements of the system is obtained as described in the following paragraphs. 1 In the file /etc/ssh/sshd_config, GatewayPorts yes must be explicitly set.
User Remote Control
User remote control is performed through a web page hosted on the public server. The user can select a specific robot, and this choice is used to instantiate a rosbridge client and a video client pointing to the server IP and the ports specified in the database. The communication is therefore forwarded through the open reverse tunnel to the selected robot. This sequence is summarized in the UML diagram shown in Figure 3.
Once the communication between the user and the robot is established, the user can send commands to the robot by using web pages stored on the server. Through dedicated web pages, the user receives feedback on the requested commands (e.g., video streaming, the velocity being executed).
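As an illustration of the user side, the sketch below uses roslibpy (a Python rosbridge client) in place of the JavaScript client embedded in the web page; the host name and port are assumptions, with the port being the robot's dedicated tunnel endpoint on the public server.

```python
import roslibpy

# Connect to the robot's dedicated port on the public server; the reverse
# tunnel forwards this connection to the rosbridge server on the robot.
ros = roslibpy.Ros(host='server.example.org', port=9101)
ros.run()

# Send a teleoperation command on /cmd_vel through the tunnel.
cmd_vel = roslibpy.Topic(ros, '/cmd_vel', 'geometry_msgs/Twist')
cmd_vel.publish(roslibpy.Message({
    'linear': {'x': 0.2, 'y': 0.0, 'z': 0.0},
    'angular': {'x': 0.0, 'y': 0.0, 'z': 0.0},
}))

ros.terminate()
```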
Robot Port Forwarding Configuration
At the startup phase, the robot can reach the server by executing "reverse SSH tunneling". This technique is an alternative to Virtual Private Networks (VPNs) for securely accessing a remote server that is behind a firewall. In this work, the robot automatically queries the database for the dedicated ports (for rosbridge control and video streaming) associated with its MAC address. If a record with the robot's MAC address exists in the database, reverse tunneling is performed from the robot towards the server. The sequence of commands is summarized in the Unified Modeling Language (UML) diagram shown in Figure 2. The use of SSH reverse tunneling overcomes the lack of bi-directional visibility: the communication is opened by the robot (instead of by the server), creating a tunnel that allows bi-directional communication.
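The startup step can be sketched as follows. The port-lookup endpoint (server.example.org/ports) is a hypothetical stand-in for the database query, and the forwarded local ports assume the rosbridge server and web video server defaults (9090 and 8080); only the ssh invocation itself is the standard reverse-tunnel form.

```python
import json
import subprocess
import urllib.request
import uuid

# Read the MAC address of this machine as six colon-separated bytes.
mac = ':'.join(f'{(uuid.getnode() >> s) & 0xff:02x}' for s in range(40, -1, -8))

# Ask the central server which ports are dedicated to this MAC address
# (hypothetical endpoint standing in for the database query).
with urllib.request.urlopen(f'https://server.example.org/ports?mac={mac}') as r:
    ports = json.load(r)  # e.g. {"rosbridge_port": 9101, "video_port": 9102}

# Open the reverse tunnels: with GatewayPorts enabled on the server, its
# ports are forwarded back to the local rosbridge and video servers.
subprocess.Popen(['ssh', '-N',
                  '-R', f"{ports['rosbridge_port']}:localhost:9090",
                  '-R', f"{ports['video_port']}:localhost:8080",
                  'robot@server.example.org'])
```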
Experimentation
In this paper, five experimental setups were developed to investigate and demonstrate the reliability and effectiveness of the proposed approach. In detail, tailored experiments were performed to verify the following requirements:
• Requirement 1 (RE1): reliability of autoconnection/autoconfiguration at startup, without any specific action on the single machine;
• Requirement 2 (RE2): scalability to several robots;
• Requirement 3 (RE3): effectiveness of the remote control (e.g., teleoperation) from multiple users to multiple robots.
The experimental setups differ from each other based on the number of agents (users and robots), network infrastructures and scenarios. For the works described in [10,18], the number of user-robot pairs is always equal to or higher than three.
The first four proposed experimental setups evaluate the system in simulated scenarios, in which the agents involved (i.e., robots and users) have been approximated with a computer running several ROS cores. These tests reflect a practical scenario where the number of agents is high (e.g., more than three robots are involved). The final experimental setup recalls a real situation, where multiple users interact with multiple robots. The robots used in this experimental phase are two Astro robotic platforms [34] and one Coro robotic platform [35]. Both robots are based on the SCITOS G5 mobile platform (Metralabs GmbH, Germany). They are equipped with a front and rear laser scanner to safely navigate the environment. Cameras for video streaming are also mounted. Astro and Coro platforms implement a teleoperation service, which allows a remote user to send velocity commands to the robot and to receive images of the environment where the robot is moving. Both robotic systems are developed based on the ROS framework. This section aims to describe the different setups, while the results are detailed in Section 5.
Test 1: Reliability at Startup
The first test aims to demonstrate the reliability at startup (RE1) by analyzing the possible occurrence of problems in the sequence depicted in Figure 2.
A set of five local machines was used, each locally instantiating 10 ROS cores, for a total of 50 ROS cores. Multiple ROS cores can work on the same machine by changing the local port through the dedicated command. A rosbridge server was instantiated for each ROS core; for this specific purpose, the default port of the rosbridge server was changed to allow multiple rosbridge servers on the same machine. All programs were started automatically at Ubuntu system startup. One simple std_msgs::String message was sent from a remote machine to each ROS core through the described architecture to test the opening of the communication, while a subscriber was already running and waiting to receive the message. The success rate of communication opening was computed by checking the reception of the sent messages. Test 1 was repeated 1000 times, each time using 50 ROS cores at the same time.
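The per-core check can be sketched as follows; this is our own minimal reconstruction, not the authors' test code, and the topic name is a placeholder.

```python
#!/usr/bin/env python
import rospy
from std_msgs.msg import String

received = {'count': 0}

def on_message(msg):
    # Each reception marks a successfully opened communication channel.
    received['count'] += 1

rospy.init_node('test1_checker')
rospy.Subscriber('/test_topic', String, on_message)

# Remote side (conceptually): publish one probe message on the same topic.
pub = rospy.Publisher('/test_topic', String, queue_size=1)
rospy.sleep(1.0)              # give the connection time to settle
pub.publish(String(data='ping'))

rospy.sleep(2.0)
success_rate = received['count'] / 1.0   # one expected message per trial
print('success rate:', success_rate)
```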
Test 2: High Rates Communication
The second test aimed to demonstrate the reliability of communications that involve messages at high frequency (RE2). A set of 100,000 std_msgs::String messages was sent at 100 Hz (i.e., one message every 0.01 s for 1000 s) from five machines (users) to five different machines (robots), through the described architecture, using the server on FIWARE as in Test 1. A subscriber was already running, waiting to receive the messages. The metric used to evaluate the reliability of communication was data loss, computed by counting the received messages against the total messages sent. Furthermore, we evaluated the communication bandwidth using iperf [36] and limited the server bandwidth through Wondershaper [37]. Test 2 was repeated 10 times, each time using five user-robot pairs.
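A sketch of the sending loop and of the data-loss metric (again our reconstruction, with a placeholder topic name):

```python
import rospy
from std_msgs.msg import String

rospy.init_node('test2_sender')
pub = rospy.Publisher('/stress_topic', String, queue_size=100)
rospy.sleep(1.0)                      # let subscribers connect first

N = 100000
rate = rospy.Rate(100)                # 100 Hz -> one message every 0.01 s
for i in range(N):
    pub.publish(String(data=str(i)))  # sequence number makes losses detectable
    rate.sleep()

# Receiver side: data_loss = 1 - len(set(received_ids)) / N
```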
Test 3: Different Network Hardware
In Tests 1 and 2, all the machines were connected to the same network infrastructure. To unbind the results from the hardware used, Tests 1 and 2 were repeated subdividing the groups of five machines into three sub-groups:
1. Machines connected to the building network infrastructure (fixed line);
2. Machines connected to a router with a 4G connection;
3. Machines connected to another router with a 4G connection (same phone company).
The evaluation metrics adopted in Tests 1 and 2 were used to evaluate the reliability of the system. This allows the comparison of the performance in the different scenarios.
Test 4: Open-Loop Spiral Trajectories
The effectiveness of the communication layer (RE3) was evaluated based on the experiment described in [10]. The experiment assesses real-time control of the motion of the Turtlesim robot (the default simulator in ROS) following an open-loop spiral trajectory. A spiral trajectory is defined as the combination of a linear velocity that increases over time and a constant angular velocity. Speed commands were remotely sent from five machines (users) to five different machines, each one running a Turtlesim robot, through the described architecture, using the server on FIWARE as in Test 1. Since a spiral trajectory is sensitive to delays and jitter, the final pose of the simulated platform was used as a qualitative metric to evaluate real-time performance, namely the variability of the received commands along the path. Test 4 was repeated 10 times, each time using five user-robot pairs. Unfortunately, details of the network configuration are not provided in [10].
Test 5: Qualitative Evaluation of Simultaneous Teleoperation
While the previous setups were performed in simulated scenarios, Test 5 involved the presence of real robots. In detail, remote teleoperation was evaluated for three users simultaneously controlling three different real robots (RE3). This experiment extends the results already obtained in [29], where only one user and one robot were considered. Since received speed commands were evaluated in Test 4, here the analysis focused on performance under varying image resolution: the image quality rate was tuned from 90% to 50%. The teleoperation experimental setup was as follows. During the test, all the robotic platforms and one operator's laptop were connected to the WiFi network of the building, except for one operator who was connected with an Ethernet cable and another operator who was connected to a personal 4G router. The network speed specifications were measured using the Ookla speed test website and are reported in Table 2; they are useful to assess the quality of the connection at each robot side. The ping response time can be used to assess the latency, while the download and upload speeds provide additional information for evaluating the performance at the robot side. Given this experimental design, the metrics used to evaluate the performance were the throughput in terms of packets per second (pps) and Mb/s; besides, the average delay between consecutive received packets at each teleoperation center was computed. The Wireshark tool was used to analyze the HTTP packets received at each teleoperation center on the dedicated port.
Results
The results obtained in all the performed tests demonstrate both the reliability and the feasibility of the "plug-and-play" system. Specifically, communication was successfully established in every trial of Test 1 (success_rate = 100%), and no message was lost through the network in Test 2 (message_lost = 0%). An additional analysis was performed to evaluate the influence of bandwidth. The bandwidth between the local machine and the FIWARE server was 39.4 Mbit/s during Test 2, and the obtained results highlight that the proposed architecture works as long as the sum of all communications through the server (i.e., the communications to every agent) is smaller than the server bandwidth. The communication bandwidths were measured through the analysis of ROS topic bandwidths; this confirmed that port forwarding through reverse tunneling does not increase the amount of bandwidth required for communication. Test 3 confirmed the results obtained in Tests 1 and 2, demonstrating that the "plug-and-play" system does not depend on the network hardware. In Test 4, the parameters used to evaluate the open-loop spiral trajectories were a rated frequency equal to 2 Hz and an initial velocity as in [10]; the trajectories were correctly accomplished, following the spiral paths. The performance was not influenced by the delays and jitter at the communication layer which caused misbehaviours in [10], as shown in Figure 5. The results of Test 5 show that three operators can simultaneously control three robots without experiencing any significant delay able to corrupt their experience. Quantitative results on their experience were collected by measuring the video streaming parameters during the teleoperation. In detail, five teleoperation modes were tested, each characterized by a different encoding quality of the image. A comparison of the packet sizes received by each operator at the teleoperation center is shown in Figure 6. As expected, the packet size decreased as the resolution of the image sent by the robot camera decreased. One misbehaviour was recorded for the video streaming at 70% quality sent by the Astro 1 robot, in which the packet size was bigger than at 80% resolution. The average (µ) pps exchanged by FIWARE and the operator was almost stable (µ_th = 29 pps and σ_th = 0 pps for the wireless connection, µ_th = 20.60 pps and σ_th = 2.41 pps for the 4G connection), except for the experiment involving the Coro robot (and the wired connection), in which the pps varied according to the image resolution (µ_th = 10.80 pps and σ_th = 4.55 pps). The throughput reported in the graph has been calculated as the amount of HTTP data sent over a certain period of time; it was thus influenced by the network speed at both the robot's and the operator's side. For this reason, the throughput value of images recorded by the Coro's camera (with the wired connection) was extremely low. As shown in the bottom-right graph in Figure 6, the most significant delay between consecutive packets was perceived in the case of the 4G connection (up to 190 ms in the case of maximum resolution of the video streaming). The significant delay for the 4G connection derived both from the high ping value of the 4G connection (see Table 2) and from the low upload speed at the robot's side, which was located at San Giovanni Rotondo, as shown in Figure 4.
Discussion
The rationale behind the development of the proposed architecture for cloud robotics relies on the need for a reliable solution that allows the inclusion of new agents in the system without any specific configuration of the local machines. The term "plug and play" refers to the two main features of the system. First, a new robotic platform can be easily integrated into the system by adding a new record to the database. This strategy allows non-technical users to easily change the system configuration, and it can also be performed remotely; consequently, it is possible to increase or decrease the number of agents in the system in a flexible way. The second feature regards the types of technical elements involved in the architecture. The development relies on mature and dependable technologies, such as LAMP servers, WebSockets, SSH reverse tunneling, and the rosbridge protocol. With respect to other approaches in the state of the art, the bi-directional visibility issue has been moved from an "implementation problem" to a "configuration problem". Due to the introduction of the rosbridge technology, no new code or communication protocol is introduced to deal with the presented issue.
The experimentation and results reported in Sections 4 and 5 confirm the reliability and effectiveness of the proposed solution. On one side, the simulated scenarios validate the stability of the communication layer with a large number of agents involved (50 ROS cores in Tests 1 and 3) and with a high quantity of messages exchanged (100,000 messages in Tests 2 and 3). On the other side, the experimentation in the real scenario shows the efficiency of the approach with real agents, given the higher number of challenges, which are absent in the simulated scenario. It is often the case that the performance of cloud-based applications is affected by network glitches and bandwidth fluctuations, which provoke irregular robot mobility [11]. Testing the communication layer in both kinds of scenarios provides precise evidence of its efficiency. Although our system satisfies the requirements detailed in Section 4, a few limitations have to be highlighted.
The first concerns the network architecture topology. Since every communication travels through a central server, the resulting configuration recalls a typical star topology. This leads to the limitation that the sum of the bandwidths of all communications has to be smaller than the maximum server bandwidth. Besides, in the presence of robot reboots, the SSH tunneling is interrupted at robot shutdown and restarted after robot startup; this requires that, on the server, the port be released in this interval of time. By releasing the port within a certain interval of time, it may be possible to handle multiple incoming requests. In the context of a high number of agents connected through the cloud, the communication system should integrate a planner to allocate the available services in an efficient way. In the real scenario of the presented work, each agent directly accesses a dedicated robot, because the number of robots is limited. As the number of elements in the networked cloud robotics system increases, the presence of a service planner becomes essential.
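This constraint can be stated compactly (in our own notation, not the paper's): if $b_k$ is the bandwidth consumed by the k-th agent's communication and $B_{server}$ is the maximum server bandwidth, the star topology requires

$$\sum_{k=1}^{N_{agents}} b_k \le B_{server}$$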
As already introduced, untying the communication layer of the architecture from the network currently used by the agents becomes more important in the scenario of mobile platforms, where the agents can use different WiFi connections. At the operator side, the reliability of the proposed solution regardless of the hardware used is confirmed by the results achieved in Tests 4 and 5. In the conducted experimentation, the delay between consecutive packets at the teleoperation center strictly depends on the network latency and on the size of the information exchanged. The trade-off between the data rate of the communication channel and the quantity of data can be a limitation for cloud-based robotics applications. This leads to the challenge of defining the types of robotic tasks that should be kept on board. Cloud robotics provides a solution for intensive computation and for the storage of large quantities of data, but there is no clear evidence of which types of information can be exchanged so that the delay at the receiver side (i.e., the robot or operator side) is minimized.
As described in [37-39], ROS suffers from significant security issues. In our work, we address this issue by introducing an authentication system for the users who want to access the web pages hosted on the public server (as shown in Figure 3). This design choice can be improved by integrating secure HTTP and/or by enabling optional features of the rosbridge protocol. It is worth noticing that the improvements of the protocol include Transport Layer Security (TLS) support for WebSocket connections and an authorization mechanism to restrict Application Programming Interface (API) calls and to limit the available topics.
In the current approach, the actual problem of bi-directional visibility has been solved by applying mature networking strategies (e.g., WebSockets, reverse SSH, etc.) in a robotic domain. Indeed, to the best of the authors' knowledge, few robotics research works have used this approach ([4,40]), and still with a low number of robotic platforms. One possible direction of this work is to improve the proposed solution by adopting current trends in networking approaches, such as the named-data-networking approach [41].
Conclusions
This paper describes and evaluates an approach for cloud robotics systems that overcomes the issue of bi-directional visibility, which is often not dealt with in related works. Even if virtual private networks represent an effective solution, their implementation requires specific interventions and configurations for each agent involved in the system; this kind of activity requires the involvement of technicians or experts.
The proposed system offers a "plug and play" solution, meaning that the configuration is automatically retrieved from a public database and that reverse tunneling allows any kind of protocol for the local connection (local WiFi, public WiFi, mobile 3G/4G, etc.). The main contributions of the proposed work are:
• The implementation of a reliable autoconnection/autoconfiguration mechanism;
• The design of a scalable communication layer that allows the effective control of multiple robots by multiple users;
• The effective control of multiple heterogeneous robots by multiple users, since the communication layer is untied from the hardware components.
As a consequence, our work facilitates the setup operations needed to install a robotic system in a real scenario (outside the lab environment). Moreover, in the specific case of mobile platforms, a robot can travel among different WiFi networks in a large area. The proposed solution avoids managing particular configurations for each network, basing the communication on a public resource.
The developed system is based on mature technologies, which allowed achieving encouraging results on reliability and feasibility for remote control applications. A few limitations arose, mainly related to server performance in port management and to the star topology of the network architecture. For instance, aspects of communication bandwidth have to be taken into account in relation to the specific application and to the number of agents simultaneously involved.
In conclusion, a deeper introduction of already developed technologies in the context of cloud robotics can strongly enhance the readiness of the technology, in particular by providing solutions that can reduce the need for intervention by expert users or developers.
"Computer Science",
"Engineering"
] |
ST-D3DDARN: Urban traffic flow prediction based on spatio-temporal decoupled 3D DenseNet with attention ResNet
Urban traffic flow prediction plays a crucial role in intelligent transportation systems (ITS), which can enhance traffic efficiency and ensure public safety. However, predicting urban traffic flow faces numerous challenges, such as intricate temporal dependencies, spatial correlations, and the influence of external factors. Existing research methods cannot fully capture the complex spatio-temporal dependence of traffic flow. Inspired by video analysis in computer vision, we represent traffic flow as traffic frames and propose an end-to-end urban traffic flow prediction model named Spatio-temporal Decoupled 3D DenseNet with Attention ResNet (ST-D3DDARN). Specifically, this model extracts multi-source traffic flow features through closeness, period, trend, and external factor branches. Subsequently, it dynamically establishes global spatio-temporal correlations by integrating spatial self-attention and coordinate attention in a residual network, accurately predicting the inflow and outflow of traffic throughout the city. In order to evaluate the effectiveness of the ST-D3DDARN model, experiments are carried out on two publicly available real-world datasets. The results indicate that ST-D3DDARN outperforms existing models in terms of single-step prediction, multi-step prediction, and efficiency.
Introduction
With the progress of urbanization, the urban population and traffic flow are increasing rapidly. Accurate and efficient prediction of urban traffic flow holds significant importance in areas such as traffic management, public safety, and travel planning [1-3]. For instance, according to data released by the Chinese Ministry of Transport, the losses caused by traffic congestion account for 20% of the per capita disposable income in urban areas. On October 30, 2022, a Halloween party held in Itaewon, Seoul, South Korea, resulted in a massive stampede, leading to 159 casualties; this incident marked the ninth-largest stampede of the 21st century. Major cities like Beijing and New York face daily traffic congestion, causing severe economic losses, environmental pollution, and public safety issues. If the traffic management department obtains the results of traffic flow prediction [4] in advance and guides the traffic flow and crowds in time, it can reduce the occurrence of congestion, stampedes, and other events to a certain extent. Therefore, achieving accurate and efficient predictions of the city's traffic flow holds crucial practical significance.
This study aims to predict future urban traffic flow through the analysis of extensive offline GPS data, including data from bicycles, taxis, and other sources. However, these GPS data possess temporal and spatial attributes, and efficiently and comprehensively mining the spatio-temporal correlations within them poses a significant challenge in establishing high-performance traffic flow prediction models. In recent years, numerous researchers have employed data-driven methods [5] for traffic prediction modeling, which can be broadly categorized into two types: traditional machine learning methods and deep learning methods. Traditional machine learning methods place high demands on feature engineering and struggle to handle high-dimensional traffic flow data, leading to limited overall applicability.
Fortunately, with the development of deep learning, modeling high-dimensional traffic data has become achievable, enabling the capture of complex features through a hierarchical approach. The earliest deep learning method used for traffic flow prediction is the Recurrent Neural Network (RNN) [6]. However, when dealing with data featuring long-term dependencies, RNN encounters the issues of gradient explosion or vanishing gradients [7]. Consequently, researchers proposed its variants: Long Short-Term Memory (LSTM) [8] and Gated Recurrent Unit (GRU) [9]. Nevertheless, they require continuous time series as input and, when dealing with spatial data, necessitate data dimensionality reduction, thereby overlooking the spatial correlations in the data. Convolutional Neural Networks (CNN) can automatically and hierarchically capture the spatial features of traffic flow through convolution operations. Therefore, many researchers have built upon two-dimensional convolutional neural networks (2D CNN) to propose deep composite networks [10-14] for extracting the spatio-temporal correlations of traffic flow. However, regional traffic flow may be influenced by the traffic flow in different regions at adjacent time points, and methods based on 2D CNN have certain limitations in establishing such spatio-temporal correlations.
Due to the capability of 3D CNN to capture spatio-temporal features, it has been widely applied in video analysis [15-17]. We posit that video analysis and the analysis of urban traffic flow changes share similar spatio-temporal correlations, as illustrated in Fig 1. Therefore, applying 3D CNN to urban traffic flow prediction is a feasible approach. In this study, we process taxi GPS data and shared bicycle rental data into traffic frames, where each frame represents the traffic flow over a fixed time interval (shown in Fig 2). However, in our experiments, we observed that 3D CNN faces challenges in extracting spatio-temporal correlations, including high computational complexity and difficulties in convergence.
Additionally, in comparison to video analysis, traffic flow prediction exhibits the following distinctive characteristics:
• Multiple Temporal Correlations: the traffic flow in a given period is not only related to the traffic flow in nearby periods but also correlated with the traffic flow in the corresponding periods of previous days and weeks (i.e., closeness, period, trend);
• Complex Spatial Correlations: traffic flows across all regions of the city mutually influence each other. With the advancement of urbanization, transportation becomes more convenient and people can quickly reach their destinations; consequently, the traffic flow in one area is influenced by various regions within the city;
• Heterogeneity: different regions around a location have different influences on it, and the influence of the same location on the surrounding regions also differs at different times. For example, road segments around a school experience traffic congestion at students' dismissal time, while suburban areas are minimally affected by traffic flow from surrounding regions. Additionally, the importance of areas with tourist attractions is higher during holidays and diminishes during the tourism off-season;
• External Factors: urban traffic flow is also influenced by various external factors, such as weather conditions, holidays, and special events.
To address the challenges in urban traffic flow prediction mentioned above, ST-D3DDARN has been proposed. The predictive model built upon this network exhibits high forecasting accuracy and low computational complexity. The innovations and primary contributions are as follows:
• Four branches, namely closeness, period, trend, and external factors, are established to extract multi-source information on traffic flow. Additionally, a unique keyframe construction is designed specifically for the period and trend branches;
• A densely connected decoupled 3D CNN is introduced. Experimental results indicate that the decoupled 3D CNN effectively addresses the long training times and slow convergence associated with 3D CNN, while the dense connections capture multi-level features and establish multi-scale spatio-temporal dependencies. In essence, this network captures multi-scale spatio-temporal correlations with fewer parameters;
• A residual network incorporating both spatial self-attention and coordinate attention mechanisms is devised. The spatial self-attention mechanism dynamically captures global spatial correlations, while the coordinate attention quantifies the regional contributions between channels to address the heterogeneity of traffic flow;
• Through single-step prediction, direct multi-step prediction, recursive multi-step prediction, model efficiency evaluation, prediction error visualization, and other experiments, the superiority of the proposed method is verified.
In summary, the network design encompasses multiple innovative components, such as the unique keyframe construction, the densely connected decoupled 3D CNN, and the integration of spatial and coordinate attention mechanisms in a residual network. Comprehensive experiments demonstrate the effectiveness and superiority of the proposed methods in various aspects of traffic flow prediction.
Traffic prediction
In recent years, traffic congestion has gradually become a prominent societal issue, prompting in-depth research into traffic flow prediction. This section reviews the research achievements in traffic prediction over the past few years.
Traditional methods for traffic flow prediction primarily relied on Historical Averages (HA), Support Vector Machines (SVM), the AutoRegressive Integrated Moving Average (ARIMA) [18], and similar techniques. With the development of deep learning, Recurrent Neural Networks (RNN) [6] gradually replaced traditional methods. Some researchers explored variants of RNN, such as Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU), for traffic flow prediction [19-22]. While these methods effectively capture temporal correlations, the spatial correlations between regions have not been fully utilized.
In order to capture the complex spatial correlations in traffic flow, researchers employed different modeling strategies to simulate various spatial interactions. Zhang et al. [23] transformed historical trajectory data into image data, proposing a spatio-temporal data prediction model (DeepST) that predicts the population flow throughout the city by stacking convolutional layers. Subsequently, Zhang et al. [10] utilized multiple residual convolutional unit branches to model spatial properties from the closeness, period, and trend aspects, demonstrating good prediction performance. Lin et al. [11] introduced DeepSTN+, employing 2D CNN and ResPlus modules to effectively extract remote dependencies in traffic flow. Wang et al. [12] aggregated features like closeness, period, and trend into multi-channel features input to a single-branch residual network, showing high simplicity and superior predictive performance. Ding et al. [13] designed a deep interlaced training network (MS-ResCnet) to further improve traffic flow prediction performance. To capture richer spatio-temporal correlations, Dai et al. [14] proposed multi-perspective convolution, convolving spatio-temporal features from the front, side, and top perspectives. The above methods all use 2D CNN to model traffic flow prediction, which has certain limitations in establishing spatio-temporal correlations.
In order to better simulate the spatio-temporal correlations of traffic flow, researchers began exploring methods that combine CNN and RNN. Yao et al. [24] introduced a Spatio-temporal Dynamic Network (STDN), combining LSTM, 2D CNN, and periodic attention transfer to dynamically simulate the spatio-temporal correlations of traffic flow. 3D CNN has a stronger ability to capture spatio-temporal features than 2D CNN and is widely used in video analysis. Guo et al. [25] used 3D convolution to simultaneously capture the temporal and spatial correlations of traffic flow. Chen et al. [26] proposed a Multi-Gated Spatio-temporal CNN (MGSTC), extracting various spatio-temporal features from traffic data through multiple gated 3D CNN branches. Zhou et al. [27] established a residual network using 3D CNN and constructed a filtering space attention block to dynamically adjust spatial weights. He et al. [28] introduced a long short-term spatio-temporal feature extraction module, 3D-ConvLSTMNet. This method captures short-term spatio-temporal correlations through 3D CNN, utilizes ConvLSTM to capture long-term spatio-temporal correlations, and employs a residual structure to obtain long-distance spatial dependencies. However, the above methods did not consider multi-scale spatio-temporal dependencies. Additionally, using 3D convolution to establish spatio-temporal correlations incurs high computational costs and slow convergence. Our proposed method effectively addresses these shortcomings and efficiently learns the complex spatio-temporal dependencies in traffic flow.
Attention mechanism
The attention mechanism is an inherent ability of human vision that helps us quickly focus on key content, and it is now widely applied in the field of deep learning. SE-Net [29] captures inter-channel information through two-dimensional global pooling, significantly enhancing model performance at a low computational cost. However, SE-Net only focuses on inter-channel information, neglecting intra-channel spatial information. CBAM [30] integrates both channel and spatial information, demonstrating superior performance compared to SE-Net. Nevertheless, CBAM's attention to spatial information is based on the average and maximum values of each channel at each position, potentially resulting in the loss of inter-channel information. In the context of traffic frame-based traffic flow prediction tasks, it is essential to extract global dependencies in both time and space simultaneously. CA-Net [31], as an innovative and efficient attention mechanism, captures spatial information across channels, enabling the model to accurately identify the features of interest.
In the field of traffic prediction, attention mechanisms are employed to address the insufficient extraction of spatio-temporal features from traffic data. For example, Shi et al. [32] proposed an attention-based periodic time network, effectively capturing the spatial and periodic features of road networks through an encoder attention mechanism. Zheng et al. [33] introduced a self-attention graph convolutional network that dynamically captures spatial correlations on a global scale. The Transformer [34] is a highly parallel self-attention mechanism that efficiently handles sequence data. Pu et al. [35] introduced a multi-view spatio-temporal Transformer (MVSTT) network, which dynamically captures spatial correlations and long-term dependencies in traffic flow from multiple perspectives.
In summary, attention mechanisms enhance the model's nonlinear fitting capability, effectively and comprehensively establishing spatio-temporal correlations in traffic data.
Problem definition and analysis
In this section, we will introduce the core definition of urban area traffic flow prediction and analyze several key characteristics associated with it.
Definition 1 (Grid Area $M_{i,j}$): The city is divided into an i × j grid map based on latitude and longitude, where each grid represents a specific area of the city (see Fig 3(A)). $M_{i,j}$ denotes the grid located in the i-th row and j-th column of the grid map.
Definition 2 (Traffic Time Slice $M^t_{i,j}$): It records traffic data within a fixed time interval (1 hour or 30 minutes) and transforms it into image data $M^t \in \mathbb{R}^{P \times I \times J}$, where P, I, and J represent the channels, width, and height of the image, respectively. Since traffic flow consists of inflow and outflow (see Fig 3(B)), the number of channels is fixed at 2.
Definition 3 (Inflow and Outflow $M^t_{in,i,j}$, $M^t_{out,i,j}$): Inflow and outflow represent the number of units entering (leaving) a specific area within a fixed time interval. The inflow and outflow of the grid $M_{i,j}$ are defined as follows:

$$M^t_{in,i,j} = \sum_{Tr \in U} \left|\{k > 1 \mid g_{k-1} \notin M_{i,j} \wedge g_k \in M_{i,j}\}\right|$$

$$M^t_{out,i,j} = \sum_{Tr \in U} \left|\{k \ge 1 \mid g_k \in M_{i,j} \wedge g_{k+1} \notin M_{i,j}\}\right|$$

Where $Tr: g_1 \to g_2 \to g_3 \to \cdots \to g_{|Tr|}$ represents a trajectory in U, U is the set of trajectory data in a fixed time interval, and $g_{\alpha}$ represents the latitude and longitude coordinates of GPS or sensor locations.
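The definition can be made concrete with a short numpy sketch; cell_of() is a hypothetical helper mapping a GPS point to its grid cell, and the data layout is illustrative only.

```python
import numpy as np

def flows(trajectories, cell_of, I, J):
    """Count inflow/outflow per grid cell for one time slice.

    trajectories: list of trajectories, each a list of GPS points g_k
    cell_of(g) -> (i, j): hypothetical helper returning the grid cell of g
    """
    inflow = np.zeros((I, J), dtype=int)
    outflow = np.zeros((I, J), dtype=int)
    for tr in trajectories:
        cells = [cell_of(g) for g in tr]
        for prev, cur in zip(cells, cells[1:]):
            if prev != cur:
                outflow[prev] += 1   # the unit left cell `prev`
                inflow[cur] += 1     # and entered cell `cur`
    return inflow, outflow
```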
Definition 4 (Urban Area Traffic Flow Prediction): Urban area traffic flow prediction is divided into single-step prediction and multi-step prediction.
• Single-step prediction: predict the inflow and outflow of each area in the next time period.
• Multi-step prediction: predict the inflow and outflow of each area in the n-th future period, where $N = \{M_1, M_2, M_3, \cdots, M_t\}$ represents the historical observed values, $M_{t+n}$ (n > 1) represents the target prediction value, and n is the number of prediction steps. Multi-step prediction is further divided into direct multi-step prediction and recursive multi-step prediction.
1. Direct multi-step prediction: obtains the target prediction values by training a new model. However, this approach has significant drawbacks, such as the need to train multiple models for different time steps, resulting in a waste of computational resources;
2. Recursive multi-step prediction: uses the prediction value of the previous time step as the input for predicting the next time step. This method only requires training a single model, saving computational resources through recursion.
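A minimal PyTorch sketch of the recursive scheme, assuming a model that maps the last c frames (shape (1, c, 2, I, J)) to the next frame; the shapes are our assumption, not the paper's exact interface.

```python
import torch

@torch.no_grad()
def recursive_forecast(model, history, steps):
    """Recursive multi-step prediction: feed each prediction back as input.

    history: tensor of shape (1, c, 2, I, J) -- the last c traffic frames
    """
    frames = history.clone()
    preds = []
    for _ in range(steps):
        nxt = model(frames)                                # (1, 2, I, J)
        preds.append(nxt)
        # slide the window: drop the oldest frame, append the prediction
        frames = torch.cat([frames[:, 1:], nxt.unsqueeze(1)], dim=1)
    return torch.stack(preds, dim=1)                       # (1, steps, 2, I, J)
```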
Analysis 1 (Complex Spatial Correlations):
The traffic flow in a specific area is influenced not only by nearby regions but also by regions at a considerable distance.
Analysis 2 (Multiple Temporal Correlations):
The traffic flow at a specific moment is influenced by the traffic flow from past moments, exhibiting multiple dependency relationships, including closeness, period, and trend.
• Closeness: predicting traffic flow is influenced by the traffic flow from the most recent moments. For example, the traffic flow at 8 PM is affected by the traffic congestion at 7 PM.
• Period: traffic flow at the same time on consecutive workdays tends to be similar. For instance, there is a morning rush hour on every workday; as observed, consecutive workdays exhibit distinct periodicity.
• Trend: traffic flow exhibits a peak shift phenomenon every week. For instance, with the onset of winter, the earlier sunset leads to an advance in the evening peak hours.
Analysis 3 (External Factors):
External factors such as weather conditions, holidays, and special events have a significant impact on urban traffic flow. As depicted in Fig 4, there are different trends in traffic patterns between weekdays and weekends: weekdays exhibit morning and evening rush hours, while traffic flow on weekends tends to be relatively smooth. Additionally, adverse weather conditions, leading to slippery roads and reduced visibility, contribute to lower traffic flow. Processing external factor data: external factor data comprises holiday and weather data. Holiday data includes various categories such as weekends, National Day, Labor Day, etc.
Weather data encompasses conditions like weather status, temperature, wind speed, and atmospheric pressure. Categorical variables are digitized through one-hot encoding, while continuous variables are normalized within the [-1, 1] range.
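For illustration, the preprocessing could look like the following sketch; the category count and the (min, max) ranges are assumptions, not values from the paper.

```python
import numpy as np

def encode_external(holiday_id, num_holiday_types, weather):
    """One-hot encode categorical factors, min-max scale continuous ones to [-1, 1]."""
    onehot = np.zeros(num_holiday_types)
    onehot[holiday_id] = 1.0
    # weather: dict of raw continuous readings with assumed (min, max) ranges
    ranges = {'temperature': (-20.0, 45.0), 'wind_speed': (0.0, 30.0)}
    cont = np.array([
        2.0 * (weather[k] - lo) / (hi - lo) - 1.0   # scale to [-1, 1]
        for k, (lo, hi) in ranges.items()
    ])
    return np.concatenate([onehot, cont])
```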
The proposed method
In this section, we introduce the various modules of the ST-D3DDARN model; the overall structure of the model is illustrated in Fig 6. The ST-D3DDARN model initially employs four branches (Closeness Branch, Period Branch, Trend Branch, and External Factors Branch) to extract features related to the closeness, period, trend, and external factors of traffic flow, respectively. Subsequently, an Attention Residual Network is used to integrate these features and dynamically establish global spatial correlations. The closeness branch, period branch, and trend branch extract multi-scale spatio-temporal correlations through densely connected decoupled 3D convolutions, while the external factor branch extracts external information through LSTM and fully connected layers. The specific input features for the closeness, period, trend, and external factor branches are as follows. Closeness: the traffic frames for the preceding c time intervals of the target prediction period, represented as $X_C = [M_{t-c}, M_{t-(c-1)}, M_{t-(c-2)}, \cdots, M_{t-1}]$, where c represents the length of the closeness sequence. Period: due to the translation phenomenon in traffic flow (which refers to external events causing the trend of traffic flow to advance or lag), using only a single traffic frame makes it difficult to learn the periodicity of traffic flow. Therefore, we supplement the previous and subsequent frames, considering the three frames together as the traffic state at a specific moment; the input to the period branch is composed of these three-frame groups taken at period intervals. The input to the attention residual network comprises the features extracted from the four branches. It dynamically establishes global spatial correlations through spatial self-attention, quantifies the contribution of features between channels using CA-Net, and establishes residual connections. This network is effective in handling the heterogeneity of traffic flow.
3D convolutional neural network
Learning sufficient spatio-temporal information is crucial for traffic prediction. When using 2D CNN to convolve traffic frames, the operation is limited to the spatial dimension, making it challenging to establish spatio-temporal correlations across consecutive frames. In contrast, 3D CNN has proven effective in extracting spatio-temporal features in video analysis [15,16]. A CNN equipped with three-dimensional filters can capture spatio-temporal features across consecutive frames. Therefore, treating traffic flow as traffic frames and utilizing 3D CNN to capture spatio-temporal correlations is an effective approach. The computation of the j-th output channel of a 3D convolution is outlined as follows:

$$M^{out}_j(x, y, z) = f\left(b_j + \sum_{i}\sum_{l}\sum_{m}\sum_{n} W_j(l, m, n)\, M^{in}_i(x+l,\, y+m,\, z+n)\right)$$

Where l, m, n are the dimensions of the 3D convolutional kernel, $M^{in}_i$ is the three-dimensional feature volume of the i-th channel of the input feature, $W_j(l,m,n)$ represents the parameters of the j-th 3D convolutional kernel, $b_j$ is the bias, and f is the activation function.
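In PyTorch (the framework used later in the experiments), this corresponds directly to nn.Conv3d; the tensor sizes below are illustrative only.

```python
import torch
import torch.nn as nn

# A 3D convolution slides a (depth, height, width) kernel over stacked traffic
# frames, mixing temporal and spatial neighbourhoods in a single operation.
frames = torch.randn(1, 2, 6, 32, 32)   # (batch, channels=in/out flow, T, H, W)
conv3d = nn.Conv3d(in_channels=2, out_channels=16, kernel_size=(3, 3, 3), padding=1)
features = conv3d(frames)                # -> (1, 16, 6, 32, 32)
```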
Decoupled 3D DenseNet(D3DD)
The principle of DenseNet [36-39] is to use the outputs of all previous layers as inputs to the next layer, enhancing the data flow by obtaining features from multiple levels. Compared to ResNet, DenseNet can acquire more feature maps with fewer filters, which reduces the number of parameters to some extent. Therefore, in this paper, we adopt densely connected decoupled 3D CNNs to capture the multi-level features of traffic flow and extract multi-scale spatio-temporal dependencies, as illustrated in Fig 9. However, DenseNet has some drawbacks: its structure is not conducive to increasing the number of network layers, since as the number of layers increases, the parameter count rises sharply, leading to increased training difficulty. The computation process of the D3DD module is as follows:

$$X_L = F\left(X_0 \oplus X_1 \oplus \cdots \oplus X_{L-1}\right)$$

Where F represents the operation of the decoupled 3D CNN, $\oplus$ denotes the concatenation operation, and $X_L$ represents the output of the L-th layer of the D3DD module.
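The paper does not spell out its exact decoupling, so the sketch below uses the common factorization of a 3x3x3 kernel into a 1x3x3 spatial convolution followed by a 3x1x1 temporal convolution, wired with DenseNet-style concatenation; treat it as one plausible reading rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class DecoupledConv3d(nn.Module):
    """A 1x3x3 spatial conv followed by a 3x1x1 temporal conv (common decoupling)."""
    def __init__(self, in_ch, growth):
        super().__init__()
        self.spatial = nn.Conv3d(in_ch, growth, (1, 3, 3), padding=(0, 1, 1))
        self.temporal = nn.Conv3d(growth, growth, (3, 1, 1), padding=(1, 0, 0))
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.temporal(self.act(self.spatial(x))))

class DenseDecoupledBlock(nn.Module):
    """Each layer receives the concatenation of all previous outputs (DenseNet style)."""
    def __init__(self, in_ch, growth, layers):
        super().__init__()
        self.layers = nn.ModuleList(
            DecoupledConv3d(in_ch + i * growth, growth) for i in range(layers))

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return torch.cat(feats, dim=1)
```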
$$X^A_L = X^A_{L-1} + f\left(X^A_{L-1};\, \omega_{L-1}\right)$$

Where f represents all operations within the ARN unit, $X^A_L$ represents the output of the L-th layer of the ARN module, and $\omega_{L-1}$ denotes all learnable parameters of the L-th ARN unit.
Spatial self-attention mechanism. The spatial self-attention structure, as shown in Fig 11, takes $X_S \in \mathbb{R}^{C \times H \times W}$ as input, with C being the number of channels; the shape is initially transformed to $\mathbb{R}^{C \times N}$, where N = H×W. Next, three fully connected layers are used to map from N dimensions to N dimensions separately, yielding the query, key, and value matrices $Q_S$, $K_S$, $V_S \in \mathbb{R}^{C \times N}$, and the attention output is finally reshaped back to $\mathbb{R}^{C \times H \times W}$. The calculation process is as follows:

$$Q_S = X_S W_{QS},\quad K_S = X_S W_{KS},\quad V_S = X_S W_{VS}$$

$$\mathrm{Attention}(Q_S, K_S, V_S) = \mathrm{softmax}\left(\frac{Q_S K_S^{\top}}{\sqrt{d_K}}\right) V_S$$

Where $W_{QS}$, $W_{KS}$, $W_{VS}$ are the parameters of the fully connected layers and $d_K$ denotes the dimension of $K_S$.
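A PyTorch sketch of this block, following our reading of the description (N-to-N linear maps on the flattened feature map, then scaled dot-product attention); the exact arrangement in the paper may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialSelfAttention(nn.Module):
    """Scaled dot-product attention over flattened C x N feature maps (N = H*W)."""
    def __init__(self, n_positions):
        super().__init__()
        # three fully connected layers mapping N -> N, as described in the text
        self.wq = nn.Linear(n_positions, n_positions)
        self.wk = nn.Linear(n_positions, n_positions)
        self.wv = nn.Linear(n_positions, n_positions)

    def forward(self, x):                          # x: (B, C, H, W)
        b, c, h, w = x.shape
        xs = x.view(b, c, h * w)                   # flatten to (B, C, N)
        q, k, v = self.wq(xs), self.wk(xs), self.wv(xs)
        d_k = k.shape[-1]
        attn = F.softmax(q @ k.transpose(1, 2) / d_k ** 0.5, dim=-1)
        return (attn @ v).view(b, c, h, w)         # reshape back to (B, C, H, W)
```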
Coordinate attention mechanism. The structure of coordinate attention is shown in Fig 12. The input for coordinate attention is $X_{spatial} \in \mathbb{R}^{C \times H \times W}$, where $X_C \in \mathbb{R}^{H \times W}$ represents the feature of the c-th channel of $X_{spatial}$. The input features are pooled along both the horizontal and vertical directions. For the horizontal direction, average pooling is performed with a window size of (1, W), and the output at position h for the c-th channel is given by Eq (13); similarly, for the vertical direction, average pooling with a window size of (H, 1) is applied, and the output at position w for the c-th channel is expressed by Eq (14):

$$z^h_c(h) = \frac{1}{W}\sum_{i=1}^{W} M_C(h, i) \qquad (13)$$

$$z^w_c(w) = \frac{1}{H}\sum_{j=1}^{H} X_C(j, w) \qquad (14)$$

Where $M_C(h,i)$ represents the value at the coordinate (h,i) for the c-th channel, i = 1, 2, 3, ..., W, and $X_C(j,w)$ represents the value at the coordinate (j,w) for the c-th channel, j = 1, 2, 3, ..., H. The pooled features from both horizontal and vertical directions are combined, the channel dimension is reduced to C/R through a 1×1 convolution operation, and the ReLU activation function is applied to achieve a thorough fusion of cross-channel spatial information:

$$F = \mathrm{ReLU}\left(\mathrm{Conv}_{1\times 1}\left([z^h;\, z^w]\right)\right)$$

Next, the interacted feature F is separated into two independent features $F^h$ and $F^w$ based on their original sizes. Through a 1×1 convolution operation, the channel dimensionality of these features is restored, resulting in the attention weights $A^h$ and $A^w$:

$$A^h = \sigma\left(\mathrm{Conv}_{1\times 1}(F^h)\right),\quad A^w = \sigma\left(\mathrm{Conv}_{1\times 1}(F^w)\right)$$

Finally, the attention weights $A^h$, $A^w$ and the input feature $X_{spatial}$ are multiplied to obtain the output Y:

$$Y_c(h, w) = X_C(h, w) \times A^h_c(h) \times A^w_c(w)$$

External factors branch. The prediction of urban traffic flow is influenced by numerous external factors, so incorporating these factors into the model can enhance the accuracy of traffic flow prediction. However, most existing methods only consider the external factors at the predicted time, neglecting their continuous impact on traffic flow. For instance, after a rainstorm the traffic flow does not immediately return to normal levels, because road flooding caused by the heavy rain continues to affect traffic. To address this issue, we incorporate multiple consecutive periods of external factors and capture their temporal information using long short-term memory networks (LSTMs). Subsequently, we map these features onto the traffic flow matrix shape through fully connected layers. The structure of the external factors branch is illustrated in Fig 13.
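A compact PyTorch sketch of the coordinate attention block described above; the reduction ratio R is a free hyperparameter, and the value 8 below is an assumption.

```python
import torch
import torch.nn as nn

class CoordinateAttention(nn.Module):
    """Pool along H and W separately, fuse, then re-weight positions per channel."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        mid = max(channels // reduction, 4)
        self.fuse = nn.Sequential(nn.Conv2d(channels, mid, 1), nn.ReLU(inplace=True))
        self.to_h = nn.Conv2d(mid, channels, 1)
        self.to_w = nn.Conv2d(mid, channels, 1)

    def forward(self, x):                         # x: (B, C, H, W)
        b, c, h, w = x.shape
        pool_h = x.mean(dim=3, keepdim=True)      # (B, C, H, 1): average over W
        pool_w = x.mean(dim=2, keepdim=True)      # (B, C, 1, W): average over H
        # stack the two pooled maps along one axis so a single 1x1 conv fuses them
        y = torch.cat([pool_h, pool_w.permute(0, 1, 3, 2)], dim=2)  # (B, C, H+W, 1)
        y = self.fuse(y)
        f_h, f_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.to_h(f_h))                         # (B, C, H, 1)
        a_w = torch.sigmoid(self.to_w(f_w.permute(0, 1, 3, 2)))     # (B, C, 1, W)
        return x * a_h * a_w                      # broadcast the two weight maps
```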
Experiment results and discussion
In this section, we begin by introducing the experimental environment and the datasets used. Subsequently, we analyze the impact of the hyperparameters on the model and conduct comprehensive comparisons between the proposed model and various baseline models. Finally, to validate the effectiveness of each module, we perform ablation experiments. Additionally, to assess the model's effectiveness in multi-step prediction, we conduct performance tests for both direct multi-step prediction and recursive multi-step prediction.
Experimental environment and experimental data
The experiment utilized PyTorch (version 1.13.1) to construct the ST-D3DDARN model. The specific experimental environment is outlined in Table 1.
To achieve optimal performance for each model, grid search was employed to fine-tune the hyperparameters of the baseline models. Each method underwent 10 experiments, and the average results were considered as references for model performance.
To validate the effectiveness of the proposed model, two representative trajectory datasets were utilized: the Beijing Taxi Trajectory dataset (TaxiBJ) and the New York City Bike Sharing dataset (BikeNYC). Detailed descriptions of the datasets are provided in Table 2. The step lengths for closeness, period, trend, and external factors were set to 6, 1, 1, and 6, respectively. The train-validate-test split ratio was 8:1:1, with a batch size of 32, a learning rate of 0.005, and maximum training epochs of 200 and 100 for the two datasets. The loss function used for both datasets was the mean squared error (MSE) between the actual and predicted traffic flows, calculated as follows:

$$\mathrm{MSE} = \frac{1}{N}\sum_{t=1}^{N}\left(M_t - \hat{M}_t\right)^2$$

Where $M_t$ represents the actual traffic flow and $\hat{M}_t$ represents the predicted traffic flow.
Evaluation metric
To assess the accuracy of urban traffic flow predictions, two commonly used evaluation metrics, Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE), are employed. The calculation formulas are as follows:

$$\mathrm{MAE} = \frac{1}{N}\sum_{t=1}^{N}\left|M_t - \hat{M}_t\right|$$

$$\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{t=1}^{N}\left(M_t - \hat{M}_t\right)^2}$$

Where N is the total number of validation samples, $M_t$ represents the actual traffic flow, and $\hat{M}_t$ represents the predicted traffic flow.
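For reference, straightforward numpy implementations of the two metrics:

```python
import numpy as np

def mae(y_true, y_pred):
    # mean absolute error over all cells and time steps
    return np.mean(np.abs(y_true - y_pred))

def rmse(y_true, y_pred):
    # root mean squared error over all cells and time steps
    return np.sqrt(np.mean((y_true - y_pred) ** 2))
```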
Hyperparameter impact analysis
To further assess the effectiveness of the ST-D3DDARN model, this section analyzes the hyperparameters of two key modules: the number of layers in the D3DD module and the number of layers in the ARN module.
Analysis of the impact of the number of D3DD module layers. The dense connections in the D3DD module enable feature reuse, thus establishing multi-scale spatio-temporal dependencies. The number of layers is an important parameter reflecting the granularity of the spatio-temporal dependence: the more layers there are, the more spatio-temporal correlations are established. Through experiments on the ST-D3DDARN model with 2 layers of ARN modules, the relationship between the number of layers in the D3DD module and model performance was explored. The experimental results are shown in Fig 15. Fig 15 illustrates that RMSE and MAE exhibit a pattern of initial decline followed by an increase as the number of D3DD module layers increases. This phenomenon can be attributed to the existence of multi-scale spatio-temporal correlations in urban traffic flows, which are captured more comprehensively with additional D3DD module layers; however, excessive use of these modules may introduce remote-area effects that are irrelevant or only weakly related to the target area. Based on these results, it is recommended to set the number of D3DD module layers to 4 and 2 for the TaxiBJ and BikeNYC datasets, respectively. This result shows that the dataset covering a larger city extent, with a greater number of grids, contains spatio-temporal correlations at more scales.
Analysis of the impact of the number of ARN module layers. The number of layers in the Attention Residual Network (ARN) module is a crucial parameter that significantly impacts the performance of traffic flow prediction in urban areas. To thoroughly analyze the influence of ARN module layers on the performance of the ST-D3DDARN model, we conducted experiments by fixing the D3DD module layers at 4 and 2 for the TaxiBJ and BikeNYC datasets, respectively. We gradually increased the number of ARN module layers and observed the effects, as depicted in Fig 16. From the observed trend, it is evident that as the number of ARN module layers increases, the model's performance initially improves and then tends to stabilize. This suggests that increasing the number of ARN module layers can, to a certain extent, enhance the model's ability to learn spatio-temporal correlations in urban traffic flow. However, beyond a certain layer count the performance improvement becomes marginal, and there is a risk of overfitting. Therefore, in practical applications, a trade-off between model performance and computational efficiency is necessary, and an appropriate number of ARN module layers should be selected. We therefore set the number of ARN module layers to 2.
Comparison of baseline methods
This section compares the ST-D3DDARN model with baseline models to demonstrate the effectiveness of the proposed approach.
Introduction to baseline methods. Traditional time series forecasting models:
• HA: the Historical Average model calculates the historical average and combines it with the traffic flow at the current time to predict future traffic flow;
• ARIMA [17]: the AutoRegressive Integrated Moving Average model predicts future traffic flow trends based on the autocorrelation of historical traffic flow data.
Deep learning models:
• CNN+LSTM: a combined temporal-spatial model using CNN and LSTM to extract the spatial and temporal features of traffic flow asynchronously;
• GCN+LSTM: also a combined temporal-spatial model. In this method, a graph G = (V, E) is constructed, where V and E represent the sets of vertices and edges, respectively. Each vertex in the graph G represents a region, and an edge between two vertices indicates that the two regions are adjacent;
• ST-ResNet [9]: a deep neural network model for traffic prediction that captures spatio-temporal correlations separately from closeness, periodicity, and trend;
• DeepSTN+ [11]: a deep neural network model that integrates closeness, period, and trend, followed by ResPlus units to fuse the multi-scale features of traffic flow;
• ST-3DNet [25]: the first model to use 3D convolution to capture spatio-temporal correlations in traffic flow, showing superior performance on BikeNYC and TaxiBJ;
• 3D-ConvLSTMNet [28]: captures short-term spatio-temporal correlations with 3D CNN, followed by ConvLSTM to capture long-term spatio-temporal correlations, and utilizes residual connections for long-distance spatial dependencies;
• MS-ResCnet [13]: a multi-scale residual calibration network that fuses multi-scale spatio-temporal features through deep interleaved training;
• MPCNN [14]: a multi-perspective convolutional network that convolves features from the top, front, and side perspectives to capture richer features.
Table 3 provides a brief comparison of ST-D3DDARN with various baseline models, helping to illustrate the differences and enhance the interpretability of each model.
Comparison with baseline performance. The experimental results of each method are presented in Tables 4 and 5. Among the baseline models, the classical time-series forecasting methods (ARIMA and HA) exhibit limited effectiveness, as they rely solely on historical values for future predictions, disregarding spatio-temporal correlations and external factors. The deep learning methods encompass CNN+LSTM, GCN+LSTM, ST-ResNet, DeepSTN+, ST-3DNet, 3D-ConvLSTMNet, and MS-ResCnet. CNN+LSTM and GCN+LSTM capture the spatio-temporal correlation of urban traffic in a spatio-temporally asynchronous manner, but they overlook the periodicity of traffic flow, resulting in subpar prediction outcomes. ST-ResNet establishes closeness, period, and trend branches, leading to improved accuracy compared to traditional models. DeepSTN+ achieves remarkable results by establishing citywide spatial dependencies through ResPlus units; however, ResPlus units incur high computational costs when dealing with datasets with many grids. ST-3DNet simultaneously captures the spatio-temporal correlation of traffic flow via 3D convolution while proposing a recalibration module that explicitly quantifies the contribution differences of spatial correlations, effectively addressing the heterogeneity of traffic flow. 3D-ConvLSTMNet outperforms ST-3DNet by capturing long-term dependencies through the ConvLSTM architecture. Nevertheless, both methods fall short of expectations on the two datasets, indicating that ordinary 3D CNN is not well suited to capturing the spatio-temporal characteristics of traffic flow. MPCNN establishes the spatio-temporal correlation of traffic flow from multiple perspectives.
Prediction error visualization
To showcase the predictive performance of each method, we visualized the prediction errors for three time periods on April 10, 2016 (TaxiBJ) and September 30, 2014 (BikeNYC), as shown in Fig 17. The error plots reveal that the proposed method exhibits better error control during peak traffic hours and in specific regions compared to the baseline models.
Efficiency evaluation
To assess the prediction efficiency of ST-D3DDARN, we compared its complexity with that of various baseline models; the details of the comparisons are provided in Table 6 and Fig 18. Due to the complexity of the parameters in each model, it is challenging to display their configurations one by one; for detailed parameters, please refer to the original papers of each baseline model or to the source code provided in this study. Among the baseline models, DeepSTN+ exhibits the fastest convergence speed, but its embedding of fully connected layers within the ResPlus unit introduces a large number of parameters. ST-3DNet and 3D-ConvLSTMNet, which utilize standard 3D CNN, suffer from slow training speeds and poor convergence. Comparatively, MS-ResCnet and MPCNN have simpler structures, smaller parameter counts, and faster convergence speeds. The spatial self-attention mechanism in ST-D3DDARN introduces a certain number of parameters; fortunately, the computational load generated by these parameters is acceptable. Overall, whether considering prediction accuracy or model efficiency, ST-D3DDARN demonstrates certain advantages.
Ablation experiment
This section presents the ablation experiments of ST-D3DDARN, analyzing the impact of each module on model performance. Due to space constraints, not all variant combinations are included in this study; rather, the aim is to verify the effectiveness of each module. Table 7 shows various indicators for our proposed model and its variants on TaxiBJ. Our findings indicate that both coordinate attention and spatial self-attention affect prediction accuracy; however, spatial self-attention has the more significant influence, which highlights the importance of dynamically capturing global spatial correlation. The prediction accuracy decreases when an ordinary 3D CNN is used instead of the decoupled 3D CNN with dense connections, indicating that the former is unsuitable for capturing the spatio-temporal correlation in traffic flow. Results from experiments without external factors show that adding them improves traffic flow prediction, while additional traffic frames are necessary to deal with translation phenomena.
Multistep prediction
Multi-step prediction is of greater practical significance than single-step prediction. In this section, we analyze the multi-step prediction results of each method.
Direct multi-step prediction. As shown in Fig 19, our proposed method consistently outperforms the other baselines at all steps on both datasets. This is attributed to our method effectively capturing the spatio-temporal correlations in traffic flow. However, direct multi-step prediction involves building a separate model for each step, which can lead to a heavy computational and maintenance burden when predicting a large number of time steps.
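To make the direct strategy concrete, here is a minimal Python sketch; `make_model` is a hypothetical factory for any single-step regressor with an sklearn-style fit/predict interface, and the array shapes are assumptions for illustration.

```python
def fit_direct_multistep(X, Y, make_model, horizons):
    """Direct strategy: train an independent model for each prediction step.

    X: (samples, ...) input traffic frames; Y: (samples, max_horizon, ...)
    targets; make_model: hypothetical factory returning a fresh regressor.
    """
    models = {}
    for h in horizons:
        model = make_model()
        model.fit(X, Y[:, h])   # each model is specialized to one step ahead
        models[h] = model
    return models

def predict_direct_multistep(models, X):
    # One forward pass per horizon; cost grows linearly with the number of steps.
    return {h: m.predict(X) for h, m in models.items()}
```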
Conclusions
In order to comprehensively and efficiently extract the spatio-temporal characteristics of traffic flow, this paper proposes the ST-D3DDARN model for urban regional traffic flow prediction. In this model, the D3DD module extracts multi-scale spatio-temporal features at a lower level, while a residual network integrating spatial self-attention and coordinate attention is designed on top of it. The experiments show that the decoupled 3D CNN is more suitable for extracting the spatio-temporal characteristics of traffic flow than an ordinary 3D CNN. Additionally, the spatial self-attention mechanism employed in this study effectively establishes dynamic spatial correlations within city traffic flows, while coordinate attention accurately captures the contributions of different channel features. Furthermore, supplementary traffic frames are proven necessary through extensive experimentation. The results indicate that ST-D3DDARN outperforms the baseline models across all evaluated aspects. Notably, the model can be widely applied to predict spatio-temporal rasterized traffic data and provide reliable information for transportation management departments in intelligent transportation systems.
Future work may involve dividing cities into irregular regions based on their functional characteristics and modeling urban traffic flows with GCNs to better learn spatio-temporal information. Moreover, incorporating functional area attributes (such as commercial district/residential data and POI data) into the model could further enhance its predictive performance.
• Local Spatial Correlation: Traffic flow is affected by the flow in nearby areas. As depicted in Fig 3(B), the inflow to M1 is influenced by the outflow from M2 and M3; simultaneously, the outflow from M1 changes the inflow of M2 and M3.
• Remote Spatial Correlation: Traffic flows between remote regions also influence each other. As shown in Fig 3(B), residents commuting via highways or subway lines, such as from M5 to M1 or from M1 to M4, enable rapid interaction of long-distance traffic flow, so a region's traffic can be influenced by regions at a considerable distance.
Fig 5 illustrates a traffic flow comparison between rainy and clear days in the Sanyuan Bridge area of Beijing.
Fig 7 illustrates the comparison between 3D CNN and 2D CNN. In Fig 7(A), 2D convolution on the features can only establish connections in the spatial dimension, whereas in Fig 7(B), 3D convolution also establishes connections across the temporal dimension. Due to the high computational cost and the risk of overfitting associated with 3D CNN, we decompose it into two parts, referred to as decoupled 3D CNN. Experimental results indicate that the decoupled 3D CNN outperforms the ordinary 3D CNN in capturing the spatio-temporal correlations of traffic flow. The first part of the decoupled 3D CNN establishes spatial correlations similar to a 2D CNN, while the second part establishes temporal connections across frames. The combination of both parts achieves the same spatio-temporal receptive field as a 3D CNN but significantly reduces computational complexity. As illustrated in Fig 8(A), a 3D CNN uses a filter of size T×N×N, where T is the temporal dimension and N is the side length of the spatial dimensions. To reduce computational complexity, the decoupled 3D CNN performs two consecutive convolution operations with filters of size 1×N×N and T×1×1, as shown in Fig 8(B).
Fig 14 illustrates the prediction process for the overall urban traffic flow. Initially, trajectory data collected by GPS sensors are processed into traffic frames. Subsequently, these traffic frames are divided into closeness, period, and trend time slices, which, along with external factors, are organized into training, validation, and test sets. The ST-D3DDARN model is then constructed, and the training and validation sets are fed into the model for training. Finally, the test set is fed into the trained model to validate the experimental results.
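The slicing step can be sketched as follows in Python/NumPy; the slice lengths and the 30-minute interval (48 frames per day, as in TaxiBJ) are assumptions used for illustration.

```python
import numpy as np

def make_temporal_slices(frames, t, lc=3, lp=3, lt=3, period=48, trend=48 * 7):
    """Build closeness / period / trend inputs for predicting frame t.

    frames: (T, H, W, 2) array of in/out-flow frames; period=48 assumes
    30-minute intervals (48 frames per day); all lengths are illustrative.
    """
    closeness = frames[t - lc:t]                                    # most recent frames
    periodic = frames[[t - period * i for i in range(lp, 0, -1)]]   # same time, previous days
    trendwise = frames[[t - trend * i for i in range(lt, 0, -1)]]   # same time, previous weeks
    return closeness, periodic, trendwise

frames = np.random.rand(48 * 7 * 4, 32, 32, 2)   # four weeks of synthetic frames
c, p, q = make_temporal_slices(frames, t=48 * 7 * 3)
```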
Fig 17. Error comparison of different models. (a) TaxiBJ. (b) BikeNYC.
Specifically, we analyzed six variants:
• ST-D3DDARN-w/o-CA: removes coordinate attention from the model;
• ST-D3DDARN-w/o-SA: removes spatial self-attention from the model;
• ST-D3DDARN-w/o-D3DD: removes the decoupled 3D CNN with dense connections from the model;
• ST-D3DDARN-3D: replaces the decoupled 3D CNN in the model with an ordinary 3D CNN;
• ST-D3DDARN-w/o-EXT: removes the external factor branch from the model;
• ST-D3DDARN-w/o-ADD: omits the supplementary traffic flow time slices.
Table 4. Comparison of model prediction results.
Convolving features from multiple perspectives enhances prediction accuracy to some extent. MS-ResCnet uses a two-channel ResCnet method to extract benchmark features and calibration features of traffic flow, respectively, and combines multi-scale features to further improve prediction accuracy. This shows that extracting multi-scale features of traffic flow can effectively improve model prediction performance. The proposed ST-D3DDARN model integrates the strengths of each model, and its decoupled 3D CNN effectively addresses the training challenges of a conventional 3D CNN. Therefore, the model performs optimally on both datasets.
"Computer Science",
"Engineering"
] |
Permanent 125I-seed prostate brachytherapy: early prostate specific antigen value as a predictor of PSA bounce occurrence.
PURPOSE
To evaluate predictive factors for PSA bounce after 125I permanent seed prostate brachytherapy and identify criteria that distinguish between benign bounces and biochemical relapses.
MATERIALS AND METHODS
Men treated with exclusive permanent 125I seed brachytherapy from November 1999 onwards, with at least 36 months of follow-up, were included. Bounce was defined as an increase ≥ 0.2 ng/ml above the nadir, followed by a spontaneous return to the nadir. Biochemical failure (BF) was defined using the criteria of the Phoenix conference: nadir + 2 ng/ml.
RESULTS
198 men were included. After a median follow-up of 63.9 months, 21 patients experienced a BF, and 35.9% had at least one bounce, which occurred a median of 17 months after implantation (range: 4-50). Median bounce amplitude was 0.6 ng/ml (0.2-5.1) and median duration was 13.6 months (4.0-44.9). In 12.5% of bounces, the magnitude exceeded the threshold defining BF. Age at the time of treatment and a high PSA level assessed at 6 weeks were significantly correlated with bounce but not with BF. Bounce patients had a higher BF-free survival than the others (100% versus 92%, p = 0.007). In case of PSA increase, PSA doubling time and velocity were not significantly different between bounce and BF patients. Bounces occurred significantly earlier than relapses, and earlier than the time at which PSA reached nadir + 0.2 ng/ml in BF patients (17 vs. 27.8 months, p < 0.0001).
CONCLUSION
A high PSA value assessed 6 weeks after brachytherapy and young age were significantly associated with a higher risk of bounce but not with BF. Long delays between brachytherapy and PSA increase are more indicative of BF.
Introduction
Permanent seed prostate brachytherapy has become a standard treatment for localized prostate cancer [1,2]. The follow-up of patients treated with this technique is mainly based on PSA monitoring, with PSA levels decreasing slowly over years to a nadir. The value of the nadir has been correlated with patient clinical outcome, which has led some authors to propose a threshold for defining biochemical complete response (i.e., 0.5 ng/ml for patients with at least 6 years of follow-up), but no consensus was reached on this issue for a long time [3-5]. The American Society for Therapeutic Radiology and Oncology (ASTRO) consensus conference held in San Antonio in 1997 defined biochemical failure after exclusive adjuvant external beam radiotherapy (EBRT) of the prostate as three consecutive increases in PSA levels [6]. According to this definition, the date of relapse was calculated retrospectively as the midpoint between the date of the nadir and the date of the first PSA rise. However, many criticisms were made of this backdating system, and of the fact that this definition did not account for clinical progression or survival and was not applicable in case of associated hormonal therapy. In 2006, the RTOG-ASTRO Phoenix consensus conference defined biochemical failure as a rise of 2 ng/ml or more above the PSA nadir [7]. This definition was rapidly assessed and proposed in 2006 for use in prostate brachytherapy [8]. However, the decrease of PSA after prostate brachytherapy may be disrupted by the occurrence of PSA bounces, defined as a transient increase of the PSA value with spontaneous correction, which are frequent after prostate brachytherapy and may mimic or be mistaken for recurrences when biochemical failure definitions are strictly applied.
The main objectives of this study were to identify predictive factors of bounce occurrence and criteria distinguishing bounces from true biochemical relapses.
Methods and material
Prostate brachytherapy with permanent iodine seeds was first used at the Centre Léon Bérard in November 1999, and so far more than 500 patients have been treated. In order to investigate PSA bounces with a significant follow-up, our study population included all men treated for prostate carcinoma classified in the low- or intermediate-risk group according to the D'Amico et al. classification [9], with at least 36 months of follow-up. Patients who received hormonal therapy or additional EBRT were excluded.
Procedure
Two different brachytherapy techniques were used consecutively: a free-seeds technique with the Mick applicator during the first 3 years, then the "FIRST" technique from Nucletron (Veenendaal, The Netherlands), characterized by the use of a seed projector. During both periods, the prescribed dose was 160 Gy to the entire prostate (applying the TG43 guidelines), and intra-operative dosimetry was performed based on ultrasound delineation of the prostate and organs at risk. PSA testing was performed 6 weeks and 6 months after the brachytherapy, then every 6 months up to 5 years, and at least once a year thereafter. The frequency of PSA testing was usually increased to every 3 months for patients who experienced a PSA increase.
Definitions
As suggested by the majority of authors, PSA bounce was defined as an increase of at least 0.2 ng/ml above the nadir, followed by a spontaneous decrease to or below the pre-bounce level [10-22]. An isolated increase in PSA at the first time point, 6 weeks after implantation, was not considered a bounce, and PSA values from visits earlier than 3 months after brachytherapy were not taken into account in bounce screening. Alternative definitions of the bounce were applied for descriptive purposes (+0.1, +0.4 and +2 ng/ml), but the analyses were done using +0.2 ng/ml.
The duration of the bounce was defined as the time from the pre-bounce nadir to the first PSA level below this nadir. The magnitude of the bounce was defined as the difference between the nadir and the highest value of the peak. Time to onset was defined as the delay between brachytherapy and the first date at which PSA had increased by more than 0.2 ng/ml above the nadir.
Biochemical relapse was defined according to the Phoenix criteria (PSA nadir + 2 ng/ml). True biochemical relapse was defined as a PSA increase fulfilling the Phoenix criteria but not the bounce definition (spontaneous PSA decrease), or a post-brachytherapy positive biopsy, or the start of a salvage treatment. Patients with a bounce followed by a true biochemical relapse were included in both the bounce and relapse analyses. For patients having two or more bounces, only the first one was used for analysis.
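As a concrete reading of these definitions, the following Python sketch scans a PSA series for bounces and Phoenix-type failures; the thresholds mirror the definitions above, but the scanning logic is an illustrative assumption, not the study's actual software.

```python
def classify_psa_course(times, psa, bounce_thr=0.2, failure_thr=2.0):
    """Scan a post-implant PSA series for bounces and Phoenix-type failures.

    times: months since brachytherapy; psa: ng/ml, same length. A rise of
    at least bounce_thr above the running nadir that later returns to or
    below the pre-rise nadir is a bounce; an unresolved rise of at least
    failure_thr is flagged as biochemical failure. Not a clinical tool.
    """
    nadir = psa[0]
    events = []
    i = 1
    while i < len(psa):
        if psa[i] >= nadir + bounce_thr:
            peak, j = psa[i], i
            while j < len(psa) and psa[j] > nadir:   # follow the excursion
                peak = max(peak, psa[j])
                j += 1
            resolved = j < len(psa)                  # PSA came back to the nadir
            kind = ("bounce" if resolved else
                    "biochemical failure" if peak >= nadir + failure_thr
                    else "unresolved rise")
            events.append({"onset_month": times[i],
                           "magnitude": round(peak - nadir, 2),
                           "type": kind})
            i = j if resolved else len(psa)
        else:
            nadir = min(nadir, psa[i])
            i += 1
    return events

# Example: a 0.6 ng/ml bounce at month 17 that resolves by month 30
months = [3, 6, 12, 17, 24, 30, 36]
values = [1.4, 0.9, 0.5, 1.1, 0.8, 0.5, 0.4]
print(classify_psa_course(months, values))
```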
PSA velocity (PSAV) and PSA doubling time (PSADT) were calculated with the formulas PSAV = (PSAf − PSAn)/ΔT and PSADT = ln(2)·ΔT/(ln(PSAf) − ln(PSAn)), where ΔT is the delay between PSAn (the nadir) and PSAf (the PSA at the date of the peak for bounce patients, or at the date of true biochemical relapse). The date of true biochemical failure was defined as the first date at which PSA exceeded the nadir by 2 ng/ml, the date of a post-brachytherapy positive biopsy, or the date of the start of salvage therapy, expressed as a delay from brachytherapy.
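A direct transcription of the two formulas into Python, with toy values chosen near the medians reported in the Results:

```python
import math

def psa_velocity(psa_nadir, psa_final, delta_t_months):
    # PSAV = (PSAf - PSAn) / dT, in ng/ml/month
    return (psa_final - psa_nadir) / delta_t_months

def psa_doubling_time(psa_nadir, psa_final, delta_t_months):
    # PSADT = ln(2) * dT / (ln(PSAf) - ln(PSAn)), in months
    return math.log(2) * delta_t_months / (math.log(psa_final) - math.log(psa_nadir))

print(round(psa_velocity(0.5, 1.1, 6), 2))        # 0.1 ng/ml/month
print(round(psa_doubling_time(0.5, 1.1, 6), 1))   # about 5.3 months
```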
Statistical analysis
True biochemical relapse-free survival was calculated from the date of the brachytherapy to the date of relapse. The probability of true biochemical relapse-free survival was estimated using the Kaplan-Meier method.
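For illustration, a relapse-free survival curve of this kind can be produced with the (assumed) lifelines package; the durations and event flags below are toy values, not study data.

```python
from lifelines import KaplanMeierFitter

durations = [63, 48, 72, 24, 60]   # months from brachytherapy (toy values)
observed = [0, 0, 0, 1, 0]         # 1 = true biochemical relapse, 0 = censored

kmf = KaplanMeierFitter()
kmf.fit(durations, event_observed=observed, label="true relapse-free survival")
print(kmf.survival_function_)      # stepwise Kaplan-Meier estimate
```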
Clinical or dosimetric factors possibly predictive of bounces were assessed by logistic regression. Factors included in the univariate analysis were: patient age, tumour risk group according to D'Amico et al. classification (low vs. intermediate), pre-BT PSA, PSA value assessed 6 weeks after the brachytherapy, brachytherapy technique (free seeds vs. FIRST) and intra-operative dosimetric parameters (prostate volume, V144, D95, D90, total number of seeds and seeds density).
Potential predictive factors of bounces with a 0.1 significance level in univariate analysis were included in a multivariate logistic regression.
These factors were also tested as predictors of true biochemical relapse with a logistic regression model. Bounce patients were compared to patients with true biochemical relapse: PSAV, PSADT, nadir before the rise, and time to onset were tested with a Wilcoxon test. Patients with both a bounce and a relapse were considered only in the bounce group for this comparison.
All statistical analyses were done using SAS software v.9.1 for Microsoft Windows (SAS Institute, Cary, NC, USA).
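The original analyses were run in SAS; an equivalent univariate-screen-then-multivariate procedure can be sketched in Python with statsmodels as follows (the data frame and column names are hypothetical inputs, and only the 0.1 entry threshold comes from the text; this is an illustrative sketch, not the study code).

```python
import statsmodels.api as sm

def univariate_then_multivariate(df, outcome, candidates, alpha=0.1):
    """Screen candidate predictors one at a time by logistic regression,
    then refit the retained ones jointly (df: pandas DataFrame with one
    row per patient and numeric columns)."""
    kept = []
    for var in candidates:
        X = sm.add_constant(df[[var]])
        fit = sm.Logit(df[outcome], X).fit(disp=0)
        if fit.pvalues[var] < alpha:      # 0.1 entry threshold, as in the text
            kept.append(var)
    X = sm.add_constant(df[kept])
    return sm.Logit(df[outcome], X).fit(disp=0), kept
```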
Results
198 men fulfilled the inclusion criteria and were included in the study. Their characteristics are detailed in Table 1. A total of 2,219 PSA values were recorded, with a median number of 10.5 per patient (range: 3-24). Figure 1 shows the distribution of PSA values at each visit.
At the first visit, scheduled 6 weeks after the brachytherapy (median: 6.4 weeks, range: 4.1-9.7), 20% of the patients had a PSA increase from the initial value (median increase: 0.83 ng/ml, range: 0.1-5.2). This value was not taken into account in bounce identification as it may have been altered by implantation and prostate edema.
At the time of analysis, 80.3% of the patients had achieved a nadir < 0.5 ng/ml.
Bounces
Seventy-one patients (35.9%) experienced at least one bounce, defined by a PSA increase of at least 0.2 ng/ml followed by a spontaneous decrease to or below the pre-bounce level. Ten patients experienced 2 bounces and one patient experienced 3 bounces. The proportions of bounces defined as a PSA increase above thresholds of 0.1, 0.4 and 2 ng/ml followed by a decrease to or below the pre-bounce level were 48.5%, 25.8% and 4.5%, respectively. The characteristics of the bounces are presented in Table 2.
The median time to onset was 17 months (3.6-50.2), and 56.3% of the bounces occurred between 12 and 24 months after the brachytherapy. After 30 months, bounces were rare (8.5%).
The median bounce duration was 13.6 months (range: 4.0-44.9), and 75% were limited to 20 months; 18.3% lasted over 2 years. The median duration of the rising part of the bounces, which is the most distressing phase for patients, was 6.4 months, and in 75% of the bounces this phase was limited to 11 months. The median magnitude was 0.6 ng/ml (0.2-5.1). It was lower than 1 ng/ml in 72% of the cases, but higher than 2 ng/ml in 12.5% of the bounces (9 patients).
In 9 cases, the PSA rise exceeded the threshold of 2 ng/ml but was followed by a spontaneous decrease to or below the pre-bounce level, fulfilling the bounce definition. Thus, the "true recurrence" rate (true BF or salvage treatment) was 7.1% (21 BF + 2 salvage therapies − 9 bounces = 14 patients), with a 4-year RFS of 95% (95% CI: 91-97).
Predictive factors for biochemical failures
"True recurrences" (definition above) occurred respectively in 5 pts belonging to the intermediate risk group (8.8%) and in 9 pts to the low risk group (7.1%) (p = 0.686).
In univariate analysis, no factor was found being predictive of true relapse. Results are shown in Table 4.
There was a better true biochemical relapse free survival in patients who experienced a bounce (true relapse free survival rate at 4 years: 100% vs 92%, logrank test: p = 0.0066).
Of the 71 patients who experienced a bounce, only one subsequently had a true biochemical relapse; he belonged to the intermediate prognosis group. Hence, none of the patients in the low-risk group who had a bounce experienced a true biochemical relapse. Moreover, none of the patients who experienced a bounce > 2 ng/ml experienced a true biochemical recurrence.
Biochemical failures vs. bounce
One patient experienced a bounce followed by a true biochemical relapse. He was considered a bounce patient here. Table 5 presents the results of the comparison between the 71 bounces and the 14 true relapses.
Median PSADT was 11.5 months for the 71 patients who experienced a bounce (range: 1.0-70.2) and 12.0 months for the 14 patients with a true recurrence without bounce (range: 3.9-49.5). The median PSA velocity was estimated at 0.11 ng/ml/month for patients experiencing bounce (range: 0.01-0.85) and 0.12 ng/ml/month for patients with true relapse (range: 0.03-0.58). There was no difference between groups.
Bounces occurred significantly earlier than true BF (17 months vs. 44 months, p < 0.0001). Furthermore, the level of nadir + 0.2 ng/ml was reached significantly later in true BF cases than in bounces: 27.8 versus 17 months (p < 0.0001). In a functional imaging study [23], diffuse metabolic activity uncorrelated with residual malignancy or initial tumor mapping was observed, suggesting that bounce could be related to inflammation.
One standard deviation has been evaluated as 0.1 ng/ml. Depending on the definition applied, bounce occurrence is highly variable and ranges from 2.5 to 88% (Table 6) [10-22,24-34]. However, three studies focusing on bounces and applying similar inclusion criteria (no additional pelvic EBRT or hormonal therapy) and a similar bounce definition reported closer bounce rates, between 37 and 50% (details in Table 7) [14,21,34]. Among those studies, the highest bounce rate was reported by Zwahlen et al., who included early PSA assessments in their analysis, whereas we excluded the 6-week value from the bounce search because we consider that this value could be increased by the edema due to seed implantation. This could explain their high bounce rate. The incidence and the time to occurrence of bounces may also be closely related to the frequency of PSA assessments after the brachytherapy, explaining some fluctuations. Accordingly, Caloglu et al., looking for predictive factors of bounce occurrence, showed that the number of PSA assessments per year was significantly correlated with bounce in multivariate analysis (1.8 vs. 1.7, p = 0.014) [26].
Predictive factors for bounce occurrence
Age is the most commonly reported predictive factor for a bounce (Table 6). Young age appears to be correlated with bounce occurrence, and different thresholds have been reported, from 60 to 70 years old. Critz et al. showed that patients ≤ 60 years old had a twofold higher risk of bounce than patients ≥ 71 years old (57% vs. 26%, p < 0.0001) [27]. Similarly, in a recent report Thompson et al. showed that 60% of the observed PSA bounces occurred in young patients (≤ 59 years old), whereas these patients accounted for only 22% of true BF [31]. Age may, therefore, also have influenced the differences in reported bounce rates. However, patients seemed to be similarly aged in the study by Crook and in ours: 66 years old (50-80) versus 67 (49-80) [14]. In the series by Mitchell, patients were younger (median: 62.1, range: 43-75) and the bounce rate was slightly higher, 37% [14,21]. Prostate volume (> 35 ml) has been identified as predictive of bounce by Stock et al. (23 versus 11% at 5 years, p = 0.01), but this correlation has not been observed by other authors [25]. The ratio of transition zone volume to total prostate volume has been assessed in two studies. A low ratio was significantly associated with bounce in the study by Merrick et al. [20], but no correlation was reported by Crook et al. [14]. Das et al. tried to correlate PSA bounce with various events. They reported that 23% of the bounces may have been subsequent to ejaculation, cycling, invasive exams, or radiation proctitis [15], but these associations are not supported by scientific evidence.
Intra-operative and post-brachytherapy dosimetric factors have also been widely studied. Stock et al., for instance, reported an association between higher implant dose and bounce [17]. Some have speculated that high doses could lead to a greater likelihood of inflammation and thus to PSA bounce. This dose/bounce correlation was also convenient for explaining the correlation between bounce occurrence and biochemical control. Conversely, in Merrick et al.'s experience, bounces were more likely associated with a low V150 (< 55%) [20]. In most studies, as in ours, authors failed to demonstrate a link between dosimetric factors and PSA bounce. The pre-brachytherapy PSA value has not been correlated with the occurrence of a bounce, except in Makarewicz's HDR brachytherapy experience. In this report, the investigators showed that patients who experienced a bounce had a greater pre-treatment PSA value than the others (16.7 ng/ml vs. 14.7 ng/ml, p = 0.045) [19].
Nevertheless, Merrick et al. have reported a correlation between a high first post-treatment PSA value and the occurrence of a bounce (1.2 ng/ml vs. 0.7, p < 0.001) [35]; in that study, however, PSA was evaluated every 3-6 months and the date of the first evaluation appears to have been variable. In our series, the PSA level was systematically assessed 6 weeks after the implantation and was shown to be highly predictive of the occurrence of a bounce. Moreover, this parameter was not correlated with the occurrence of a true biochemical recurrence. Unfortunately, we did not identify any threshold.
Dose rate/Isotope
Most of the studies were based on 125I permanent implantation. Merrick stated that the use of 103Pd led to half the likelihood of bounce (17% versus 33%, p = 0.002) [35]. These results were confirmed in a randomized trial comparing 125I and 103Pd led by Bostancic et al., who showed by multivariate analysis that 125I is significantly associated with a higher frequency of bounce in hormone-naive patients (45.7% with 125I vs. 14% with 103Pd) and in patients receiving neoadjuvant hormonal deprivation (28.1% and 20.7%, respectively) [11]. The dose rate can also be modulated in brachytherapy by using HDR. McGrath et al. reported similar rates between LDR permanent seed brachytherapy (34%, n = 191) and exclusive HDR brachytherapy (36%, n = 93) [28]. Similarly, Makarewicz et al. reported an equivalent bounce rate when combining EBRT with HDR brachytherapy (31%) [19].
Hormonal therapy
The relationship between the PSA bounce phenomenon and hormonal therapy is more confusing, as the spike could be a consequence of the end of hormonal deprivation and of testosterone recovery. For Patel and Toledano, ADT had no influence on either the bounce rate or its magnitude [22,32]. Similarly, Ciezki et al. observed comparable bounce rates between ADT-treated patients (45%) and hormone-naïve patients (48.4%, p = 0.67) [13]. Conversely, Pickles reported higher bounce rates in the ADT group (89% versus 71%, p = 0.001) [30].
PSA bounce: a predictive factor for biochemical control?
It has been hypothesized that a PSA bounce after brachytherapy could be predictive of biochemical control, whereas after EBRT a bounce is known to be correlated with biochemical failure. In a series of 4,838 patients treated with EBRT, Horwitz et al. observed a bounce (defined as a PSA increase ≥ 0.4 ng/ml) in 20% of cases, and this bounce was independently correlated with biochemical failure [36]. These results have been confirmed by several other reports [37,38].
Patel et al. analyzed a series of 295 patients treated with brachytherapy (combined with hormonal therapy in 2/3 of the patients), with quite a short median follow-up of 38 months. They observed that BF-free survival assessed using the ASTRO consensus was 100% in the bounce group (28% of the population) vs. 92% in the other patients (p = 0.018) [22]. In another study by Ciezki et al. with a longer follow-up (73 months), biochemical failure-free survival rates for patients who did or did not experience a bounce were 96% and 79%, respectively (p = 0.015), using the ASTRO definition, and 100% and 92% (p = 0.004) using the Phoenix criteria [13]. Recently, Hinnen et al. published a large study including 975 patients and showed a strong link between bounce and outcomes. Ten-year freedom from BF, disease-free survival, and overall survival were respectively 90%, 99% and 88% in case of bounce, against 70%, 93% and 82% for "no bounce" patients. They also reported only one cancer death in the bounce group (0.32%), compared with 40 (6.05%) in the no-bounce group [16]. Furthermore, Caloglu et al. tested several bounce definitions (≥ 0.2, ≥ 0.4, ≥ 0.6, ≥ 0.8 ng/ml) and found only one definition for which there was a significant difference in BF-free survival between bounce and no-bounce patients [26]. In our series, we observed only one biochemical relapse after a PSA bounce, in one patient who belonged to the intermediate prognosis group. With 64 months of follow-up, the occurrence of a bounce was therefore statistically correlated with biochemical disease-free survival in the subgroup of patients with a favorable prognosis (p = 0.039).
Differentiate benign bounces from genuine biochemical relapses
Distinguishing a benign PSA bounce from a genuine biochemical recurrence is a major issue. On the one hand, it would reassure most patients with a PSA rise; on the other hand, it would permit detection of true relapses while avoiding expensive investigations such as 18F-choline PET/CT or invasive biopsies. To date, only follow-up permits the distinction of bounce from true BF. As illustrated in this series, 4 patients experienced an increase of PSA over +2 ng/ml and had begun a spontaneous decrease of their PSA at the time of analysis (of more than 1 ng/ml for 2 of them), but still could not be classified as bounces as the PSA had not returned to the nadir.
Despite these limitations, the Phoenix criteria have appeared to be more accurate than the ASTRO criteria in predicting clinical outcomes in prostate cancer patients treated with either EBRT or BT [7].
Kuban et al. performed a comparison of 12 different BF definitions on a large series of 2,693 men treated with permanent seed brachytherapy. They concluded that nadir + 2 ng/ml provides the best sensitivity/specificity balance (70% and 89%, respectively) [18]. Pickles et al. came to the same conclusion with a smaller cohort [30]. However, using this definition as a surrogate for relapse still leads to false positive results, especially because of the bounce phenomenon. Crook and Mitchell reported 15% and 7.5% false positives [14,21]. In our series, 9 cases (41% of the patients who experienced a nadir + 2 ng/ml increase) would have been considered biochemical failures by strictly applying the Phoenix criteria. For those patients, prostate biopsy cannot reliably distinguish between bounces and biochemical relapses during the first 3 years. Reed et al. reported 8 cases of patients who underwent biopsies as their PSA level increased to between 2.6 and 8.4 ng/ml above the nadir, 9 to 25 months after the brachytherapy. Biopsies showed residual cancer, but PSA spontaneously decreased to the previous level in all patients [39].
In order to take into account the lack of specificity of the Phoenix criteria, several definitions have been proposed. Patel et al. simply suggested that a bounce should never exceed the pretreatment PSA level [22]. This parameter could be appropriate for Mitchell's series, where only one bounce exceeded the pretreatment level, but it would have led to false positivity for recurrence in 7% of our bounce patients and 15% of those reported by Crook et al., and therefore does not seem reliable [14]. Thompson et al. applied an alternative PSA bounce definition: the Phoenix definition (+2 ng/ml) followed by a spontaneous decrease to ≤ 0.5 ng/ml, a threshold which had previously been used by some authors as a useful criterion; 44% of the BF were reclassified as bounces [31]. As described in most series, the large majority of bounces occur during the first 2 years. Based on this observation, Ghilezian et al. recently proposed an alternative BF definition: nadir + 5 ng/ml for the initial 24 months, and then nadir + 2 ng/ml. This definition might be a superior predictor of biochemical failure in patients treated with brachytherapy, particularly if aged < 60 years [40]. In our series, applying such a definition would have led to only one patient being misclassified as BF.
In an attempt to differentiate bounce from true biochemical relapse, several parameters have been tested. Mitchell et al. reported a series of 205 patients and observed 79 bounces, defined as an increase of ≥ 0.2 ng/ml from the nadir followed by a spontaneous decrease to the nadir value or below, and 6 true Phoenix biochemical relapses. They found that PSA velocity was 0.08 ng/ml/month for bounces versus 0.28 ng/ml/month for true Phoenix biochemical relapses (p = 0.0005). Using the former ASTRO criteria, they did not observe any significant differences. The authors failed to demonstrate a predictive threshold for PSADT [21]. PSA velocity and PSADT were not significant in our study, possibly because of the lengthy time interval between PSA assessments at the time of PSA rise (6 months) and the limited number of patients experiencing a nadir + 2 ng/ml increase.
Time to onset of the PSA increase has been shown to be useful for distinguishing bounce from relapse. Merrick et al. reported that 83% of the bounces occurred in the first 30 months following brachytherapy [20]. Ciezki et al. reported that failures occurred after a median of 22.3 months, using the Phoenix biochemical failure definition, whereas bounces occurred after a median of 15.1 months (p = 0.013) [13]. Similarly, Crook et al. observed bounces at 15.2 months and failures at 30.9 months (p = 0.02) [14]. Our study confirmed the validity of this criterion (17 versus 27.8 months, p < 0.0001). We also observed that all of the misleading bounces higher than 2 ng/ml (9 cases) occurred within the first 24 months of follow-up (median: 17.9 months), whereas true BF occurred from 15.6 months onwards, with a median delay of 44 months.
"Medicine",
"Physics"
] |
On the Vilkovisky unique effective action in quantum gravity
The divergent part of the one-loop unique effective action for quantum Einstein gravity is evaluated in a general parametrization of the quantum field, including a separated conformal factor. The output of the calculation explicitly verifies the independence of the field parametrization. The version of the effective action introduced by Vilkovisky is unique if the metric in the space of quantum fields is chosen in a "natural" way. The uniqueness of the effective action enables constructing well-defined, individual renormalization group equations for both the Newton and cosmological constants, which describe the running of these effective charges between the GUT scale in the UV and the extremely low energy scale in the IR.
Introduction
The off-shell effective action in gauge theories depends on the choice of the gauge fixing and the parametrization of quantum fields. One important consequence of this ambiguity is that, even in the framework of effective low-energy quantum gravity, one cannot have well-defined individual renormalization group equations for the Newton constant G and the cosmological constant Λ. There is only one unambiguous equation, for the dimensionless combination of these constants. On the other hand, in the modified versions of the effective action proposed by Vilkovisky [1] and DeWitt [2] there is no gauge or parametrization ambiguity. The purpose of the present work is to evaluate the divergent part of the one-loop Vilkovisky effective action for the quantum version of Einstein gravity in a general parametrization of the quantum field, and to explicitly verify the independence of this construction from the parametrization.
The classical action of the theory of our interest has the form given below, where G = κ²/(16π) is the (D-dimensional) Newton constant and Λ is the cosmological constant.
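In the conventions typical of this literature, the Einstein-Hilbert action with a cosmological constant presumably reads (the overall sign and the placement of Λ vary between references, so this reconstruction is an assumption):

\[ S(g_{\mu\nu}) \;=\; -\frac{1}{16\pi G} \int d^D x \, \sqrt{-g} \, \bigl( R + 2\Lambda \bigr). \]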
There is an extensive literature on the derivation and analysis of one-loop and two-loop divergences in the theory (1). The first calculations were performed in [3] for gravity coupled to a minimal scalar field and in [4] for gravity coupled to an electromagnetic field. The calculation in a nonminimal gauge was pioneered in [5]. The parametrization dependence was explored in [6-8] and, in a more general form, in the more recent Ref. [9]. In what follows we shall use some technical developments of the latter work, which can also be consulted for further references.
The unique effective action of Vilkovisky is independent of the parametrization of quantum fields by construction. On the other hand, this construction becomes complicated in gauge theories, where one has to combine corrections compensating gauge and parametrization ambiguities. In this regard, a special case is two-dimensional quantum gravity. It was noted in [1] that, in this particular example, the gauge and parametrization ambiguities mix in such a way that the unique effective action may turn out to depend on the gauge fixing. Later on, this feature was confirmed by a direct calculation in [10]. The origin of this contradictory result is that the unique effective action depends on the choice of the metric in the configuration space, i.e. the space of the quantum fields, in the background field formalism. In gravity, the configuration-space metric has one arbitrary parameter a, and it happens that in D = 2 this parameter depends on the gauge fixing, because of the reduced number of physical degrees of freedom. The D = 4 quantum gravity in the conformal parametrization has much technical similarity with the D = 2 case, so one can suspect that some gauge or parametrization dependence may persist in this case too. This possibility makes the explicit verification of the full parametrization independence in D = 4 quantum gravity a decent problem to solve.
Another aspect of the unique effective action, which was explored earlier in [11], is the possibility to construct the well-defined, unambiguous, separate renormalization group equations for both Newton and cosmological constants in the theory (1). In what follows we consider these equations in a slightly different manner, i.e. within the framework of effective quantum gravity.
The outline of the paper is as follows. Sec. 2 briefly reviews the formalism of Vilkovisky's effective action; the main objective of this section is to make the paper self-contained and to fix the notation. In Sec. 3 we formulate one-loop quantum gravity using the background field method in a general non-conformal parametrization of the quantum field and a special minimal gauge. The metric in the space of the fields, the Christoffel symbols and the improved bilinear form of the classical action are derived in Sec. 4. It is shown that the coefficients related to the parametrization nonlinearity are compensated by this correction. The corresponding one-loop divergences of the Vilkovisky effective action are computed, in the minimal DeWitt gauge, in Sec. 5. In Sec. 6 the result is generalized to the most general, conformal parametrization of the quantum metric. In Sec. 7 we construct, solve, and discuss the renormalization group equations for the Newton and cosmological constants. Using the framework of effective quantum gravity, it is shown that these equations are applicable in an extensive interval of energies, but do not provide a dramatically strong running. Finally, in Sec. 8 we draw our conclusions.
In this paper we adopt the condensed notations of Refs. [12] and [13].
Vilkovisky effective action: a short review
Vilkovisky's proposal for defining a parametrization-independent effective action [1] is based on the following observation: even though the classical action S(φ) is a scalar in the space M of fields φ^i, the generating functional of vertex functions (the effective action) is not a scalar functional of the corresponding mean fields. In the simplest, one-loop approximation the effective action depends on the Hessian of the action, S_{,ij} = δ²S/δφ^i δφ^j, which does not transform as a tensor under field redefinitions φ^i = φ^i(φ'^j).
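Indeed, under a field redefinition the chain rule gives

\[ S_{,i'j'} \;=\; \frac{\partial \varphi^i}{\partial \varphi'^{i'}} \frac{\partial \varphi^j}{\partial \varphi'^{j'}} \, S_{,ij} \;+\; \frac{\partial^2 \varphi^k}{\partial \varphi'^{i'} \partial \varphi'^{j'}} \, S_{,k}\,, \]

and the second, inhomogeneous term spoils the tensor transformation law off shell; it vanishes only when the equations of motion S_{,k} = 0 hold.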
To provide the scalar nature of the effective action, in Ref. [1] an affine structure compatible with the metric G_ij in the space M was introduced. Given two close points φ^i and φ'^i, there exists a unique geodesic curve x^i(λ) ⊂ M with affine parameter λ ∈ [0, 1] connecting them, x^i(0) = φ^i and x^i(1) = φ'^i. Then, defining the two-point quantity σ^i(φ, φ') (the tangent vector to the geodesic at φ'^i, see e.g. [12,14]), the modified definition of the effective action takes the form (2), where µ(φ') is an invariant functional measure and the comma denotes functional differentiation with respect to φ^i. Because σ^i(φ, φ') behaves as a vector with respect to φ^i and as a scalar with respect to φ'^i, the effective action Γ(φ) constructed in this way is a scalar under field reparametrizations.
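At one loop this prescription amounts to replacing the ordinary Hessian by the covariant one built with the Christoffel connection of G_ij,

\[ \nabla_i \nabla_j S \;=\; S_{,ij} \;-\; \Gamma^k_{\ ij} \, S_{,k}\,, \]

which transforms as a rank-2 tensor under field redefinitions and coincides with S_{,ij} on shell.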
A qualitatively similar construction can be made for gauge theories, to restore the off-shell gauge independence, given that the effective actions calculated in different gauges are connected by changes of variables (in general, in the form of a canonical transformation [15-17]). However, in this case the prescription (2) cannot be used directly, since it is necessary to factor out the gauge group G in the functional integral. Namely, one has to take into account the gauge orbits and define an affine connection on the configuration space M/G of physical fields. For the sake of simplicity, we assume that the generators R^i_α of gauge transformations are linearly independent and that their algebra is closed, with the structure functions F^γ_αβ being independent of the fields. Let the classical action be invariant under the gauge transformations. Given a metric G_ij on M, one can define the projection operator onto M/G [1,18], where N_αβ is the metric on G and N^αβ its inverse (5); the projected metric G^⊥⊥_ij then follows. The affine connection T^k_ij on the physical configuration space can be obtained by requiring its compatibility with the metric G^⊥⊥_ij, i.e. ∇_k G^⊥⊥_ij = 0 (see e.g. [19,20]). This yields a connection [1] which consists of the Christoffel symbol Γ^k_ij calculated with the metric G_ij, and a non-local part (9) related to the gauge constraints on the connection. Parentheses around indices denote symmetrization in the pair (i, j), and D_i denotes the covariant derivative calculated with the Christoffel connection Γ^k_ij. The non-locality of (9) is due to the fact that N_αβ is a differential operator and thus its inverse N^αβ is formally a Green's function. In addition, this procedure reproduces the measure µ(φ) of Faddeev-Popov quantization, see e.g. [21,22]. The effective action (2) constructed using the geodesic distance based on the full connection is, therefore, reparametrization invariant, gauge invariant and gauge independent. For this reason this object is often called the unique effective action¹.
Performing the loop expansion of the Vilkovisky effective action (2), one gets the expansion whose one-loop quantum contribution is given by (11) [1]. As usual, in pure quantum gravity we can use κ as the loop expansion parameter instead of ℏ.
Here χ^α is the gauge condition introduced by the gauge-fixing action, Y_αβ is a non-degenerate weight function (the metric in the space of the χ^α), and M^α_β = χ^α_{,i} R^i_β is the Faddeev-Popov ghost matrix. Comparing (11) with the loop expansion of the standard effective action, one notes that the second functional derivative of the classical action has been replaced by the second covariant variational derivative.
From the technical side, the computation of (11) is, in general, a very complicated task because of the non-localities of the term (9). For this reason, most of the evaluations found in the literature use some kind of DeWitt gauge [26], for which the condition (13) holds.
¹ Another gauge- and parametrization-invariant effective action was proposed by DeWitt [2] and subsequently discussed in Refs. [23-25]. Since both definitions coincide at the one-loop level, we do not present this construction.
The purpose of the present work is to evaluate the divergent part of (11) for the quantum gravity based on general relativity. In this calculation we follow the reduction method introduced in Ref. [13], which mainly consists in making a power series expansion in the equations of motion ε^i and applying the generalized Schwinger-DeWitt technique. By using the DeWitt gauge (13) and the Ward identities, it is possible to write (11) in the form (14), where N̄ = Y_αγ N^γβ and N_αβ was defined in (5); the term (15) takes into account the nontrivial geometry of the space of fields M, while (16) and (17) are two nonlocal operators responsible for restoring the off-shell gauge independence of the one-loop effective action. In (17), Ĥ⁻¹ is defined by the relation Ĥ · Ĥ⁻¹ = −1̂. In the case of our interest, the terms of orders higher than ε² do not contribute to the divergent part of the one-loop effective action and, therefore, are not considered here. It is worth noting that the latter feature does not hold for other models of quantum gravity. In fact, in higher-derivative fourth-order gravity only the terms linear in ε^i contribute to the divergences [27,28], while in quantum general relativity in higher dimensions other terms are necessary. For explicit expressions of the O(ε³) terms, see [29]. Calculations of the unique effective action in D = 4 gravity models can be found, e.g., in [20,29-33]. Even though we are mainly interested in D = 4 results, for the sake of generality we keep the spacetime dimension D arbitrary in our intermediate calculations.
Field parametrizations and bilinear form of the action
In the traditional background field method the original field g'_µν is split into a sum of a classical background g_µν and a quantum field h_µν, i.e., g'_µν = g_µν + κh_µν. Since in the present work we are interested in evaluating the one-loop divergences in a general parametrization of the quantum field, instead of performing the usual linear shift we consider g'_µν = f_µν(g_αβ, φ_αβ). Here the indices are lowered and raised with the background metric g_µν (and its inverse g^µν), and f depends on the quantum field φ_µν, possibly in a nonlinear way. Assuming that f has a series expansion, we can define the most general (at one-loop order) parametrization of the quantum metric in the form (18) [9], where the A_(n)µν are tensor structures depending only on the background metric, and κ is the loop-expansion parameter. Through covariance and symmetry arguments, the coefficient functions in (18) have the general tensor form (19)-(20). In these expressions, γ_i (i = 1, ..., 6) are six arbitrary coefficients parameterizing the choice of the quantum variable. The restrictions γ₁ ≠ 0 and γ₁ + Dγ₂ ≠ 0 have to be imposed to ensure that the change of variables from g'_µν to φ_µν is not degenerate. Terms of order O(κ³) in (18) contribute only at two- and higher-loop orders; hence they are irrelevant and will be omitted in what follows. The one-loop contribution requires the functional integration of a form quadratic in φ_µν; hence it is evaluated by taking κ → 0 in Eq. (14).
Inserting expressions (19) and (20) in Eq. (18) we get the parametrization (22), where g^µν φ_µν ≡ φ denotes the trace of the quantum metric. Eq. (22) represents a general parametrization of the quantum metric for one-loop calculations. Other choices of quantum variables, based on the expansions of |g'|^p g'_µν and |g'|^q g'^µν (see, e.g., Refs. [7,8,34]), can be reduced to particular cases of (22). The explicit values of γ_i for these parametrizations are displayed in Table 1. Let us note that it is possible to construct a parametrization of the more general type g'_µν = e^{2κrσ}(g_µν + ···), in which the conformal factor σ(x) of the metric is explicitly separated. Calculations using the conformal parametrization can be found, e.g., in [6,8,9]; we postpone the discussion of this choice to Sec. 6. The bilinear form of the action can be obtained by expanding (1) in powers of φ_µν by means of (22). This yields the expansion S(g'_µν) = S(g_µν) + ... given in (23) [9], where unnecessary surface terms have been omitted. It is worth noticing that all the dependence on the parameters γ_{3,...,6} of the nonlinear part of the field splitting (22) is encoded in the tensor M^{µν,αβ}_2. In the formulas given above, and in the following ones, we may present expressions in a compact form in which all algebraic symmetries are implicit (for more details, see [9]).
Finally, from Eq. (23) it follows that the equations of motion take the form (31). Now we have all the basic elements needed to perform the desired calculation.
The improved bilinear form of the action
General relativity and other metric theories of gravity are gauge theories based on the diffeomorphism group G. The configuration space M is the set of all spacetime metrics, and the coset M/G is known as the space of spacetime geometries. In quantum gravity the invariant configuration-space metric is defined, up to an arbitrary real parameter a, by Eq. (32) [35]. The non-degeneracy of G'^{µν,αβ} is ensured by the condition a ≠ −1/D. Explicit calculations have shown that the Vilkovisky effective action depends on the choice of a [20,36,37]. The ambiguity due to the parameter a can be fixed by an additional prescription.
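A common representation of this one-parameter family is (the overall normalization is an assumption, since conventions differ between references)

\[ G^{\mu\nu,\alpha\beta} \;=\; \frac{\sqrt{-g}}{2} \left( g^{\mu\alpha} g^{\nu\beta} + g^{\mu\beta} g^{\nu\alpha} + 2a \, g^{\mu\nu} g^{\alpha\beta} \right), \]

whose trace sector carries an overall factor of (1 + aD), which is why invertibility requires a ≠ −1/D.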
A differential operator is said to be minimal if its highest-derivative term is given by a power of the ✷ operator. In quantum gravity models, the minimal operator almost always has the form G_{µν,αβ} ✷^n, with the parameter a unambiguously fixed by the choice of the classical Lagrangian and the parametrization of the quantum field. In Ref. [1] it was proposed that a should be chosen accordingly, namely, the field-space metric should be read off from the highest-derivative term in the minimal version of the bilinear part of the classical action. For quantum general relativity n = 1 and, in the standard simplest parametrization, this "natural" condition for choosing the configuration-space metric fixes the value a = −1/2. However, even in the minimal gauge, the coefficient a may be changed by modifying the parametrization of the quantum metric, that is, by changing the coefficients γ_i in Eq. (22). The purpose of this work is to check whether this change produces a modification in the divergent part of the one-loop unique effective action. For the sake of generality, in most of the paper we regard a as an arbitrary parameter.
The field-space metric in terms of the variable φ_µν can be obtained by performing a change of variables in Eq. (32), which gives the metric (34) with the coefficients (35). Formula (35) can be rewritten using the definition of Eq. (27). One can see that for a = −1/2 the background configuration-space metric reduces to the coefficient of the d'Alembertian in Eq. (25). This agrees with Vilkovisky's prescription [1] for fixing the ambiguity in the one-parameter family of metrics, even for the general parametrization (22). The Christoffel symbol (8) associated with the metric (34) takes the form (39), where the inverse of the configuration-space metric (34) is (40) and K⁻¹_{µν,αβ} is the inverse of (27). A straightforward calculation of (39) yields

\[ \Gamma^{\mu\nu,\alpha\beta}_{\ \ \rho\sigma} = \kappa \left[ c_1 \, \delta^{\mu\alpha}_{\rho\sigma} g^{\nu\beta} + c_2 \left( \delta^{\mu\nu}_{\rho\sigma} g^{\alpha\beta} + \delta^{\alpha\beta}_{\rho\sigma} g^{\mu\nu} \right) + c_3 \, \delta^{\mu\nu,\alpha\beta} g_{\rho\sigma} + c_4 \, g^{\mu\nu} g^{\alpha\beta} g_{\rho\sigma} \right] + O(\kappa^2), \]

where the coefficients c_1, ..., c_4 involve factors of (1 + aD). Using Eqs. (31) and (43), the Christoffel correction term in the second covariant derivative can be expressed through M^{µν,αβ}_2 and x_{1,2}, which were defined in Eqs. (29) and (30), respectively. We remark that the parameters γ_{3,...,6}, which are related to the nonlinear terms in the parametrization (22), occur only in M^{µν,αβ}_2, just as in (25). Because of this, the second functional covariant derivative of the action (23) depends only on the parameters γ₁ and γ₂. It is clear that the Christoffel symbol derived from the metric (34) suffices to compensate the dependence of S_{,ij} on the nonlinearity of the field parametrization. In fact, for κ → 0 all the parameters γ_{3,...,6} contribute only to the last, inhomogeneous term on the r.h.s. of the transformation law of the Hessian, which represents the non-tensor nature of this transformation.
One-loop divergences of Vilkovisky effective action
Up to this point, we have considered the part of the Vilkovisky effective action based on the Christoffel symbols on the space M of field parametrizations. However, it is still necessary to introduce the gauge fixing for diffeomorphism invariance and to take into account the contribution of the Faddeev-Popov ghosts, as well as the terms (16) and (17) related to the gauge constraints on the affine connection.
The standard general form of the gauge-fixing action in quantum general relativity is (48), where χ_µ is the background gauge condition. The use of a linear gauge fixing is not a necessary condition to ensure the invariance of the Vilkovisky effective action [18,23]. Nonetheless, as explained in Sec. 2, the DeWitt gauge (13) is crucial for deriving the expanded formula (14). In our parametrization it assumes the form (49), where we used the explicit expression for the generators R_{µν,α} of the gauge transformations of the field φ_µν, presented in the Appendix. Comparing Eqs. (49) and (25), it is easy to see that the choice a = −1/2 provides the minimal form (50) of the operator (15). Let us remark that another possible way of making the operator H^{µν,αβ} minimal is through the use of a specific parametrization, namely γ₁ = −Dγ₂. However, as explained in Sec. 3, this is not acceptable since it makes the metric in the space of the quantum fields singular, see Eq. (40), leaving the operator Ĥ in (50) undefined. Thus, a = −1/2 is the sole reasonable choice. For this value of a, the operator reduces to the standard form (51), where 1̂ = δ^{µν}_{αβ} is the identity operator (21) on the space of symmetric rank-2 tensors. Furthermore, with the gauge condition (49), the ghost matrix takes the form (53). Notice that in the DeWitt gauge all the dependence on the parametrization cancels in the ghost operator, and that a = −1/2 also makes it minimal. Hereafter we choose this value of a, so that both Ĥ and N̄ assume minimal forms. The correction responsible for restoring the gauge invariance of the effective action is based on the nonlocal operators Û₁ and Û₂, defined in (16) and (17); these operators depend on two new vertices. Particularizing the formulas above for the gravity theory in the parametrization (22) and using the gauge generators (90) given in the Appendix, after some algebra we find that the dependence on the parameters γ_{3,...,6} corresponding to the nonlinear part of the field splitting (22) cancels in the vertex (V₁)^{µν}_γ, while the vertex (V₂)^{αβ} is automatically parametrization-independent.
The operators Û₁ and Û₂ can be obtained by substituting the two previous equations into formulas (16) and (17), together with the propagators. Here O([m]^k) denotes a series of inessential terms of higher background dimension k. Remember that, according to [13], for a universal functional trace the background dimension (in mass units) is defined as the dimension of the tensorial coefficient C^{µ₁···µₖ}, and its superficial degree of divergence is expressed by the relation ω = D − 2n + k. Thus, in four dimensions only the traces with background dimension 0, 1, 2, 3 and 4 contribute to the ultraviolet (UV) divergences. With all these ingredients in hand, it is possible to evaluate the contribution of each term in (14), up to background dimension O([m]⁴), to the effective action. In the case of the operators Ĥ and N̄ (given by Eqs. (51) and (53), respectively), this can be obtained from the functional trace of the coefficient â₂ of the Schwinger-DeWitt expansion [12]. On the other hand, the functional traces of the nonlocal operators Û₁, Û₁² and Û₂ can be evaluated using the table of universal functional traces within the generalized Schwinger-DeWitt technique [13]; an example of such a trace involves the quantities h_{1,2} defined in Eq. (42). Skipping the algebra, the contributions of the terms in (14) to the 1/(D−4) pole of the Vilkovisky unique effective action are presented in Table 2. It is important to recall that only in D → 4 do the displayed coefficients correspond to one-loop divergences; nonetheless, our calculation in arbitrary dimension shows that they do not depend on the field parametrization even for D ≠ 4. Moreover, one can see that the parametrization dependence which remained after the Christoffel correction was taken into account cancels in the functional trace of each operator in turn, as none of the coefficients depend on γ_{1,2}.
Since the object of our interest is the one-loop logarithmically divergent part of the Vilkovisky effective action, in the framework of dimensional regularization we can take the limit D → 4 in the coefficient of the pole term, obtaining the result (60). As usual, µ is the renormalization parameter. Formula (60) reproduces the results for the Vilkovisky effective action for general relativity calculated in the standard, particular parametrization of the quantum variables in [13] (the coefficients of the terms related to the cosmological constant were calculated for the first time in [18]). Moreover, it is straightforward to verify that, on the classical mass shell, the divergences of Eq. (60) correctly reduce to the coefficients of the usual on-shell effective action [3,39].
Table 2: Contribution of each operator in (14) to the coefficients of each curvature invariant in the divergent (at D → 4) part of the one-loop Vilkovisky effective action. Each invariant enters the effective action multiplied by the overall coefficient, as in Eq. (60). The final coefficients, which are the sums of the coefficients in columns 2-6, are presented in the last column.
This is an expected result since the Vilkovisky correction term is proportional to the equations of motion. On the other hand, this result is known to be gauge-fixing and parametrization independent [9].
It is interesting to compare the result for the unique effective action (60) with the one-loop divergences of the standard (usual) effective action in an arbitrary parametrization (22), derived in [9]. It turns out that the two expressions coincide if the parameters satisfy certain conditions; in this case, the one-loop divergences of the conventional effective action calculated in the minimal gauge coincide with those of the Vilkovisky effective action (60). Curiously, this result can be achieved only if the parametrization is nonlinear, as can readily be seen from Eq. (63), which implies γ₃ ≠ 0. Let us note that the observation formulated above can be seen as a parametrization-dependence counterpart of the result of [40], where a gauge was derived for which the one-loop divergences of the conventional effective action (in the particular simplest parametrization) reproduce those of the unique effective action. In this vein, it is also worth pointing out that the Λ-dependent terms in (60) can be obtained by means of the Landau-DeWitt gauge within the usual definition of the effective action [18]. Nevertheless, the simple use of this particular singular gauge in the standard effective action cannot reproduce the other divergent terms of the unique effective action for Einstein gravity, because the space of fields is not flat [18,23,25].
Conformal parametrization of the metric
Let us now consider a more general parametrization of the metric, which explicitly splits off its conformal factor, namely Eq. (65), where g_{μν} is the background metric, φ_{μν} and σ are the quantum fields, and γ₁,...,γ₆ and r are arbitrary parameters. The one-loop divergences of the standard effective action for Einstein gravity were evaluated in this parametrization in Ref. [9]. It turns out, however, that it is not possible to construct the Vilkovisky effective action directly in this parametrization. The reason is that the insertion of the conformal factor σ as a new field increases the total number of scalar modes and, as a consequence, the quantum theory acquires an artificial conformal symmetry, which introduces an extra degeneracy making the transformation singular. For example, in this case one can compute the metric in the space of the field configurations, where A, B, ... take the labels φ_{μν} and σ, and G_{μν,αβ(0)} coincides with Eq. (38). The determinant of the O(κ⁰) term of this metric can be evaluated explicitly, and it is straightforward to verify that the term inside the curly brackets is equal to zero, proving that the field-space metric is degenerate. Therefore, it is not possible to evaluate the Christoffel symbols.
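This degeneracy can be exhibited directly at lowest order in κ. The following minimal sketch assumes, consistently with the exponential appearing in (65), that σ enters through an overall factor e^{2rκσ}. Consider the infinitesimal shift
\[
\delta\sigma = \epsilon, \qquad \delta\phi_{\mu\nu} = -\,\frac{2r}{\gamma_1 + D\gamma_2}\,\epsilon\, g_{\mu\nu},
\]
under which
\[
\delta g'_{\mu\nu} = \kappa\,\epsilon \Big[\, 2r - \frac{2r}{\gamma_1 + D\gamma_2}\,\big(\gamma_1 + D\gamma_2\big) \Big]\, g_{\mu\nu} + O(\kappa^2) \,=\, O(\kappa^2).
\]
Since the parametrized metric is unchanged at leading order, the O(κ⁰) part of the induced configuration-space metric has a null eigenvector along this direction, and its determinant vanishes, in agreement with the explicit computation described above.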
In view of this observation it is necessary to impose, from the very beginning, the additional conformal gauge fixing of Eq. (68), with λ being the gauge-fixing parameter. Expanding the exponential in (65), one can see that, up to order κ², this parametrization reduces to (22) via simple substitutions of the parameters. Then, all the calculations that we carried out for (22) also apply to the conformal parametrization (65). An alternative approach is to split the field φ_{μν} into its trace and traceless parts, as in Eq. (70), such that g^{μν} φ̄_{μν} = 0. We now have a parametrization in terms of two independent quantum fields, φ̄_{μν} and φ. Applying (68) and (70) in (65) we get
\[
g'_{\alpha\beta} = g_{\alpha\beta} + \kappa\,\big(\bar{\gamma}_1 \bar{\phi}_{\alpha\beta} + \bar{\gamma}_2\, \phi\, g_{\alpha\beta}\big) + \kappa^2\big(\bar{\gamma}_3\, \bar{\phi}_{\alpha\rho}\bar{\phi}^{\rho}{}_{\beta} + \bar{\gamma}_4\, \bar{\phi}_{\rho\sigma}\bar{\phi}^{\rho\sigma} g_{\alpha\beta} + \bar{\gamma}_5\, \phi\,\bar{\phi}_{\alpha\beta} + \bar{\gamma}_6\, \phi^2 g_{\alpha\beta}\big),
\]
where the new coefficients γ̄₁, ..., γ̄₆ are combinations of the original parameters. Now it is possible to define a nonsingular metric in the space of the fields, where δ̄^{μν}_{αβ} = δ^{μν}_{αβ} − (1/D) g^{μν} g_{αβ} is the identity operator in the space of traceless symmetric rank-2 tensors. The inverse metric (G⁻¹)^{AB} (A, B, ... = φ̄_{μν}, φ) follows directly. With these ingredients, we can proceed with the evaluation of the nonzero components of the Christoffel symbols and of the second covariant derivative of the action. At this stage, it is clear that the dependence on the nonlinear quantum field parametrization is compensated by the Christoffel correction, just like in (45). In addition, the use of the parametrization in terms of the traceless and trace parts reveals that the improved bilinear operator can be written as a constant matrix times a differential operator independent of γ̄₁ and γ̄₂; thus this dependence is trivial.
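As a small self-contained check of the algebra of the projector δ̄^{μν}_{αβ} = δ^{μν}_{αβ} − (1/D) g^{μν} g_{αβ} used above, the following NumPy snippet (our own illustration, with the flat metric standing in for the background g_{μν}) verifies that δ̄ is idempotent and traceless:

```python
import numpy as np

D = 4
eta = np.diag([-1.0, 1.0, 1.0, 1.0])   # flat background metric g_{mu nu}
eta_inv = np.linalg.inv(eta)            # inverse metric g^{mu nu}
kron = np.eye(D)                        # Kronecker delta^mu_alpha

# Symmetrized identity delta^{mu nu}_{alpha beta} on rank-2 symmetric tensors
delta = 0.5 * (np.einsum('ma,nb->mnab', kron, kron)
               + np.einsum('mb,na->mnab', kron, kron))

# Traceless projector: delta-bar = delta - (1/D) g^{mu nu} g_{alpha beta}
delta_bar = delta - np.einsum('mn,ab->mnab', eta_inv, eta) / D

# Idempotence: (delta-bar o delta-bar) = delta-bar
composed = np.einsum('mnab,abcd->mncd', delta_bar, delta_bar)
assert np.allclose(composed, delta_bar)

# Tracelessness: g_{mu nu} delta-bar^{mu nu}_{alpha beta} = 0
trace = np.einsum('mn,mnab->ab', eta, delta_bar)
assert np.allclose(trace, 0.0)

print("delta-bar is an idempotent, traceless projector in D =", D)
```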
We point out that the conformal gauge fixing (68) does not require Faddeev-Popov ghosts, because the conformal transformation contains no derivatives [41]. Moreover, under the diffeomorphism (87) the field σ transforms as δσ = −ξ^μ ∇_μ σ, and the terms in the ghost operator associated with the generators R_μ = −∇_μ σ can be safely ignored at the one-loop level, since they produce third-order contributions in the quantum fields; as a consequence, we get (53). Therefore, even in the conformal parametrization, the final result matches the one presented in Eq. (60).
Renormalization group based on the unique effective action
One can use the result (60) and its generalization in Table 2 to analyze the renormalization group equations in the low-energy (infrared, IR) sector of the theory. Such a construction has a direct physical meaning. In the high-energy (UV) domain the theory (1) cannot be applied without restrictions, as it is non-renormalizable. As we explained above, at high energies the contributions of massive degrees of freedom, related to higher-derivative terms, are expected to modify the beta functions. However, since the quantum gravity based on general relativity is a massless theory, it makes sense to explore the renormalization group running in the IR. Differently from the fourth- and higher-derivative models, in the present case there is no IR decoupling of massive degrees of freedom [41] (see also the discussion of this issue in [42] and [43]).
Since the theory is massless, quantum gravity based on general relativity can be regarded as an effective theory of quantum gravity at energies between the Planck scale, where the massive degrees of freedom related to higher derivatives can become relevant, and the far IR. Thus, the Vilkovisky-DeWitt unique effective action enables one to explore the scale dependence in this vast region in a gauge-fixing and parametrization independent manner.
From the classical action (1) and the expression for the divergences (60), it is easy to obtain the renormalization relations (78) (we use dimensional regularization), where we introduced notations for the coefficients depending on D = 4 + ǫ and disregarded O(ǫ²) terms. The bare quantities κ₀² and Λ₀ are μ-independent, as is the case for the renormalized effective action. Applying the operator μ d/dμ to both sides of each of the relations (78), after a small algebra we arrive at the renormalization group equations (80) and (81). In the D → 4 limit these equations are equivalent to those obtained in [11,37]. One can explore the 4 + ǫ version of the renormalization group equations, similar to what was done in the two-dimensional case (see, e.g., [44,45]) and also in the four-dimensional fourth-derivative models [42]. However, in the present case the main results do not change, and we restrict ourselves to the strict D = 4 consideration.
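To make the passage from (78) to the renormalization group equations explicit, note that relations of this kind have the minimal-subtraction structure sketched below (in our own placeholder notation, with c₁ and c₂ standing, up to signs fixed by the subtraction convention, for the combinations of Table 2 coefficients that multiply RΛ and Λ²):
\[
\frac{1}{\kappa_0^2} = \mu^{\epsilon}\Big(\frac{1}{\kappa^2} + \frac{c_1\Lambda}{\epsilon}\Big), \qquad
\frac{\Lambda_0}{\kappa_0^2} = \mu^{\epsilon}\Big(\frac{\Lambda}{\kappa^2} + \frac{c_2\Lambda^2}{\epsilon}\Big), \qquad \epsilon = D-4.
\]
Applying μ d/dμ and taking ǫ → 0 gives
\[
\mu\frac{d}{d\mu}\Big(\frac{1}{\kappa^2}\Big) = -\,c_1\Lambda, \qquad
\mu\frac{d}{d\mu}\Big(\frac{\Lambda}{\kappa^2}\Big) = -\,c_2\Lambda^2
\quad\Longrightarrow\quad
\mu\frac{d\gamma}{d\mu} = (2c_1 - c_2)\,\gamma^2, \qquad \gamma = \kappa^2\Lambda,
\]
so the equation for the dimensionless combination γ closes on itself with a one-loop coefficient b = 2c₁ − c₂.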
To solve Eqs. (80) and (81), we define the dimensionless quantity γ = κ²Λ. Due to the uniqueness of this dimensionless combination of κ² and Λ, the equation for γ gets factorized, as in Eq. (82). The solution of this equation has the standard form (83), where γ₀ = γ(μ₀) and μ₀ marks a fiducial energy scale. We assume initial values of the renormalization group trajectories for the cosmological constant, Λ₀ = Λ(μ₀), and the gravitational constant, G₀ = G(μ₀), as it is useful to come back from κ² to G at this stage. Now, using (83) in (80), we obtain the solutions (84) and (85), which are certainly consistent with (83). These solutions are remarkable in several aspects. First of all, such independent solutions for the two effective charges are impossible in quantum gravity based on the usual effective action, both in quantum general relativity and in fourth-derivative gravity, as the individual equations for G(μ) and Λ(μ) are completely ambiguous there. In the latter model, only the solution for the dimensionless quantity in (83) is gauge-fixing and parametrization independent. Here we have a well-defined running for the two parameters only because of the use of the Vilkovisky unique effective action.
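The qualitative behavior of the solutions (83)-(85) can be illustrated numerically. The sketch below (an illustration of ours; the coefficient b is a placeholder whose actual value follows from the coefficients of Table 2, b = 2c₁ − c₂ in the notation used above) integrates μ dγ/dμ = b γ² and checks the result against the closed-form solution:

```python
import numpy as np
from scipy.integrate import solve_ivp

b = -0.1        # placeholder one-loop coefficient (sign chosen to yield UV asymptotic freedom)
gamma0 = 0.5    # initial value gamma(mu0) = kappa^2 * Lambda at the fiducial scale mu0
t_span = (0.0, 20.0)   # t = ln(mu / mu0)

# One-loop RG equation for the dimensionless combination gamma = kappa^2 * Lambda
rhs = lambda t, gamma: b * gamma**2
sol = solve_ivp(rhs, t_span, [gamma0], dense_output=True, rtol=1e-10, atol=1e-12)

# Closed-form solution of d(gamma)/dt = b * gamma^2: gamma(t) = gamma0 / (1 - b*gamma0*t)
t = np.linspace(*t_span, 5)
exact = gamma0 / (1.0 - b * gamma0 * t)

for ti, num, ex in zip(t, sol.sol(t)[0], exact):
    print(f"ln(mu/mu0) = {ti:5.1f}   numeric = {num:.6f}   exact = {ex:.6f}")
# For b*gamma0 < 0 the effective charge decreases monotonically towards the UV
# (asymptotic freedom); flipping the sign of b*gamma0 gives IR freedom instead.
```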
Let us note that unambiguous solutions for G(μ) and Λ(μ) do exist in the superrenormalizable gravity model of [43], but there are two relevant differences. The advantage of the equations and solutions of [43] is that they can be exact, in the sense of not depending on the order of the loop expansion. On the other hand, the higher-derivative models that lead to such an exact result imply functional integration over massive degrees of freedom, which can be ghosts or healthy modes. This means that the corresponding equations are valid only in the UV with respect to the quantum gravity energy scale, i.e., only in the trans-Planckian region. Below the Planck scale the massive degrees of freedom decouple and we are left with the quantum effects of effective quantum gravity, such as those of quantum general relativity (see, e.g., [46], the review [47] and the recent discussion of the decoupling in gravity in [48,49]).
On the contrary, the running described by (84) and (85) comes from the quantum effects of purely massless degrees of freedom. To some extent, the running should therefore be described by the same equations in both the UV and the IR. The equations (80) and (81) gain extra contributions at higher loops, but in the region of asymptotic freedom these contributions may not be very relevant.
It is clear that the physical interpretation of the solutions (84) and (85) depends on the sign of γ₀. Since the positive sign of G is fixed by the positive definiteness of the theory, the sign of γ₀ depends on that of Λ₀. From cosmological observations we know that the sign of the observed cosmological constant is positive in the present-day Universe [50,51]. For a positive γ₀ the solutions (84) and (85) indicate asymptotic freedom in the UV. In the case of a moderate cosmological constant (remember κ ∝ M_P⁻¹), the value of γ₀ is very small. This implies a very weak running, which is irrelevant from the physical viewpoint. In particular, the running (84) and (85) is not essential for the cosmological constant problem between the electroweak scale and the present-day, low-energy cosmic scale.
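To see how small γ₀ actually is, one can make a rough estimate (ours, using the observed values ρ_Λ ∼ 10⁻⁴⁷ GeV⁴ and κ² = 16πG ∼ 3 × 10⁻³⁷ GeV⁻², and the relation ρ_Λ^vac = 2Λ/κ² quoted below):
\[
\gamma_0 \,=\, \kappa^2\Lambda \,=\, \tfrac{1}{2}\,\kappa^4\,\rho_\Lambda \,\sim\, \tfrac{1}{2}\,\big(3\times 10^{-37}\big)^2 \times 10^{-47} \,\sim\, 10^{-120}.
\]
Hence the one-loop variation Δγ ∼ |b| γ₀² ln(μ/μ₀) is utterly negligible over any physically accessible range of scales.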
On the other hand, at the electroweak energy scale, the early Universe probably passed through the corresponding phase transition. At that epoch, the observable value of the cosmological constant could change dramatically because of the symmetry restoration. Does this change Λ in the action (1)? The answer to this question is negative. Let us remember that the observable cosmological constant is a sum of two parts: one is the vacuum parameter in the gravitational action (1), and the other is its induced counterpart, the main part of which comes from the symmetry breaking of the Higgs potential. The main relations are given in Eq. (86) (see, e.g., [52] or [53]), where λ is the self-coupling and v₀ the vacuum expectation value of the Higgs field. Since ρ_Λ^ind is negative and the magnitude of ρ_Λ^obs is negligible, the sign of ρ_Λ^vac = 2Λ/κ² is positive, independently of the electroweak phase transition.
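For orientation, one standard form of these relations reads as follows (a reconstruction consistent with the statements above, not necessarily the exact form of Eq. (86); conventions and factors differ between references, and we assume the tree-level potential V(φ) = −½ μ_H² φ² + ¼ λ φ⁴):
\[
\rho_\Lambda^{\rm obs} = \rho_\Lambda^{\rm vac} + \rho_\Lambda^{\rm ind}, \qquad
\rho_\Lambda^{\rm ind} = V(v_0) = -\,\frac{\lambda\, v_0^4}{4}, \qquad v_0^2 = \frac{\mu_H^2}{\lambda}.
\]
With v₀ ≈ 246 GeV and λ ≈ 0.13 this gives ρ_Λ^ind ∼ −10⁸ GeV⁴, dozens of orders of magnitude larger in modulus than ρ_Λ^obs; hence ρ_Λ^vac must be positive and cancel it almost exactly, as stated above.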
Thus, we conclude that the sign of γ₀ is always positive, at least between the present-day cosmic scale in the IR and the GUT scale in the UV, where considerations based on the Minimal Standard Model formulas, such as (86), may become invalid. In this whole interval, the value of γ₀ is numerically small, such that the running in (84) and (85) is not physically relevant.
One can imagine a situation in which another phase transition occurs at the GUT scale (meaning about 10^14-10^16 GeV), such that the new vacuum Λ between this scale and the Planck scale M_P ≈ 10^19 GeV is negative. Then the solutions (84) and (85) indicate asymptotic freedom in the IR. Furthermore, if the cosmological constant in this energy interval is of the Planck order of magnitude, these solutions describe a dramatically strong running of both constants G and Λ, which strongly decrease in the IR. It might happen that in this case one needs to use higher-loop approximations, which can change the form of the running. Further discussion of this possibility and the construction of the corresponding model of GUT are beyond the scope of this work, so we just note that our results indicate this possibility.
Conclusions
We performed the calculation of the one-loop divergences of the Vilkovisky unique effective action in quantum general relativity in an arbitrary, most general, parametrization of the quantum metric, including the conformal parametrization and the corresponding gauge fixing. Due to the similarity between the conformal parametrization and two-dimensional quantum gravity, one could suspect that the unique effective action might lose its invariance and universality. We have shown that this does not happen and that the one-loop divergences are universal.
The dependence of the unique effective action on the parameter a of the configuration-space metric is fixed by the additional requirement that this metric be chosen as the bilinear form of the action in the minimal gauge, in consonance with [1]. We have shown that this parameter changes under a modified parametrization of the quantum metric, but the one-loop unique effective action does not change. This confirms the consistency of the aforementioned additional requirement.
Using the unique effective action in quantum general relativity, we considered the renormalization group equations for the Newton and cosmological constants separately, as was done earlier in [11], but our analysis is carried out from a different perspective. The one-loop equations come from the quantum effects of the purely massless modes and, therefore, can be used in both the UV and the IR. In the UV the renormalization group trajectories can be used only up to the scale where the massive degrees of freedom coming from higher derivatives become active. However, in the IR there are no restrictions. In this respect the renormalization group equations under discussion strongly differ from the ones in renormalizable and superrenormalizable models of quantum gravity, which are valid only in the UV regime, usually with respect to the Planck scale. Finally, using these equations, we have shown that the running of both the Newton and cosmological constants, caused by quantum gravity, does not produce an essential numerical change in these effective charges, at least between the GUT scale in the UV and the present-day cosmic scale in the IR.
"Physics"
] |